Judging a field by its cover
There’s a lot of hope out there riding on the idea of climate-smart agriculture — that by tweaking how we grow food, we can store more carbon on farms and help avoid catastrophic climate change. In many ways, the enthusiasm makes sense. We need to turn the corner on global emissions as soon as possible, and many studies have shown that soil carbon storage can play a significant role, often at a cost that is very competitive with other strategies. Add to that the chance that it could bring real revenue to farmers, many of whom are struggling to make a profit and would love to help solve climate change, and you’ve got a recipe for excitement.
Already, the USDA has many programs to help farmers adopt new practices thought to be climate-friendly. For example, the pandemic response funds included money specifically for cover crops, one of the most widely touted climate-smart practices. And the recently signed Inflation Reduction Act (IRA) has $20 billion earmarked for practices that “directly improve soil carbon, reduce nitrogen losses, or reduce, capture, avoid, or sequester carbon dioxide, methane, or nitrous oxide emissions, associated with agricultural production.”
But before we go too far down this path, it would be nice to know the answers to some key questions. How much carbon can soils actually store? How easily can we measure any carbon gains (measurement will be critical for any payment scheme)? And how much do practices that build carbon affect other things we care about?
This last one has been particularly interesting to our group, since we have a great interest in (some might say an obsession with) the yields of the main crops that feed the world, like corn and soybean. If climate-smart practices help yields or make them more resilient to drought and other climate risks, that would be an important additional reason to adopt them. On the other hand, if they hurt yields or make them more susceptible to climate risks, then there could be unintended consequences, including large indirect emissions from the land expansion needed to make up for the lower yields.
A couple of years ago, we looked at reduced tillage, a key climate-smart practice that is already used by roughly half of farmers in the United States (we found that no-till generally has a small benefit for yields). In a paper out this week, we turned our attention to cover crops.
Some quick background: a cover crop is defined as anything planted in the off-season that is primarily grown “for seasonal cover and other conservation purposes”. These purposes typically include erosion control, soil improvement, and water quality improvement. For a great and amusing background on different types of cover crops, see this blog post. By far the most common cover crop used now is rye, shown below.
While there are some clear benefits of having a cover crop, the costs of sowing and terminating the crop often make it a money-loser for farmers. But government subsidies can make it profitable, and a lot of funding has been going to cover crops. For example, the figure below from a recent USDA report shows how one of the main conservation payment programs has increasingly focused on cover cropping.
Cover crops are not yet nearly as widespread as no-till. They are currently used on about 20 million acres, roughly 10% of the ~180 million acres of total corn and soy acreage in the U.S. But 20 million acres is still a lot if you’re interested in judging how they are doing. That’s 20 million acres’ worth of collective experience. And with satellites, we can see every single one of those acres.
That brings us to the study. Briefly, what we did was 1) find every field that had grown cover crops for at least 3 years, 2) find other fields that are very similar to those fields in every way except that they didn’t grow any cover crops, 3) measure the yields for these two groups and see if they differ, and 4) identify factors that explain where the yield effects of cover crops are biggest. In more technical terms, we used a causal forest analysis to estimate the average treatment effect and the heterogeneity of treatment effects, as sketched below.
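For readers curious what steps 3 and 4 can look like in code, here is a minimal sketch using Python’s econml library. The file name, column names, and feature choices are hypothetical stand-ins, not our actual pipeline (which also involves the careful field-matching of step 2).

```python
# Minimal causal-forest sketch with econml. All data and column names
# below (fields.csv, yield, cover_crop, ...) are hypothetical examples.
import pandas as pd
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

field_data = pd.read_csv("fields.csv")  # one row per field

Y = field_data["yield"]          # satellite-derived yield (outcome)
T = field_data["cover_crop"]     # 1 if cover-cropped for >= 3 years, else 0
X = field_data[["soil_quality", "rainfall", "gdd", "slope"]]  # candidate drivers of heterogeneity

est = CausalForestDML(
    model_y=RandomForestRegressor(),   # nuisance model: yield given X
    model_t=RandomForestClassifier(),  # nuisance model: adoption given X
    discrete_treatment=True,
    random_state=0,
)
est.fit(Y, T, X=X)

print("Average treatment effect:", est.ate(X))  # step 3: overall yield effect
cate = est.effect(X)  # step 4: per-field effects, to relate back to X
```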
The most striking result was that cover crops, on average, are having a very clear effect on yields for both corn and soy, and that effect is negative. The figure below shows the result for corn: an average loss of 5.5%. For the econometricians out there, we also ran placebo tests to ensure that these results aren’t driven by the treatment being confounded with the outcome. In other words, we find no effect of cover crops on yields in the years before they were adopted (which is a good thing, since there would be no plausible causal story for an effect backwards in time).
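To picture how such a placebo test works, here is a continuation of the hypothetical sketch above: rerun the identical estimator, but swap in yields from before any field adopted cover crops as the outcome. If adoption were merely correlated with fields that yield poorly anyway, this “effect” would also come out negative, so an estimate near zero is reassuring.

```python
# Placebo test sketch (same hypothetical data as above): estimate the
# "effect" of cover crops on yields from years BEFORE adoption.
Y_pre = field_data["yield_pre_adoption"]  # hypothetical pre-period outcome

placebo = CausalForestDML(
    model_y=RandomForestRegressor(),
    model_t=RandomForestClassifier(),
    discrete_treatment=True,
    random_state=0,
)
placebo.fit(Y_pre, T, X=X)

# No causal channel can run backwards in time, so if the design is sound
# this estimate should be statistically indistinguishable from zero.
print("Placebo ATE (should be ~0):", placebo.ate(X))
```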
The paper has more details on the methods, and also on the heterogeneity. But two things seem worth emphasizing here. First, just because effects are negative now doesn’t mean they will always be negative. It could be that the benefits take a while to kick in, and it’s likely that farmers will get better at implementation. Hopefully studies like this can help guide that implementation, for example by showing that rye might not be the best choice for many regions. As a clever student remarked, cover crops seemed like a good idea, but things in the Corn Belt have gone a-rye.
Second, it shows how important it is to keep scrutinizing new agricultural practices, even ones that are politically and scientifically popular. Agriculture is a very tricky business to get right, and things typically don’t work out as planned. Learning by doing is really important, and adjustments are almost always needed, both in farmer practice and in government policy. The combination of satellite data and powerful machine learning methods like causal forests can help us be more nimble in making these adjustments.