Opinion: Six Lessons from the History of Natural Climate Solutions
When popular media get natural climate solutions wrong, it’s usually because they’re struggling to understand complex mechanisms that have evolved over more than 45 years. Here is a brief look back on that evolution. Second of a three-part series.
Steve Zwick has been covering climate issues for more than 30 years, including 15 as managing editor of Ecosystem Marketplace. He currently produces the Bionic Planet podcast, and the views expressed here are his and his alone.
7 February 2022 | Once upon a time, not that long ago, environmental NGOs were utterly dependent on the generosity of others to finance their programs. It’s an arrangement that worked for some, but not for most and rarely for those engaged in activities requiring decades to reach fruition, such as ecological restoration and rural development. Some got funded, but most did not, and many of those that got off the ground ended up treading water when their short-term funding dried up.
Meanwhile, in the corporate world, a handful of companies were taking climate change seriously, and they found themselves in a related quandary.
Carpet maker Interface, for example, joined the climate challenge decades ago, and at some point, it decided to develop a line of “carbon negative” carpets.
The idea was simple: by replacing vinyl backing with something made of plants, they’d flip their production process from being a source of greenhouse gasses to one that pulls more carbon from the atmosphere than it emits. That simple idea, however, proved difficult to realize, and the company wasn’t able to bring its carbon-negative carpets to market until 2020.
Fortunately, however, they’d been able to offer carbon-neutral carpets long before then – in part by helping NGOs finance activities that reduced greenhouse gas emissions from deforestation and forest degradation while enhancing the ability of land systems to store carbon, activities that would eventually come under the rubric of “REDD+.”
When Interface started offsetting its emissions, it financed activities directly, with no clear consensus on how to measure its impacts. Those early days were a wild west of sorts, with companies taking disparate approaches and sharing little in the way of lessons learned.
Over time, however, environmental standards emerged.
Drawing on the latest scientific literature and input from a broad array of stakeholders, these standards encouraged a process of scientific review, stakeholder engagement, and public consultation. This yielded detailed methodologies for everything from identifying endangered forests and quantifying human impacts to generating verified reductions and removals. Different standards had different philosophical approaches, but they built on the same science and evolved in surprisingly similar ways. Furthermore, they were designed to update over time as science and experience provided new insights.
The methodologies made it possible for companies to drive emissions down deeper and faster than they otherwise could, and many did just that. A 2016 analysis by Ecosystem Marketplace found that corporations using voluntary carbon markets tended to do so as part of a broader strategy to reduce overall emissions – for example, by creating an internal price on carbon.
Then came the Great Awakening of 2019, when the world finally started taking the climate challenge seriously. Suddenly, companies that never thought of climate before started taking action – albeit with varying degrees of sincerity – and reporters found themselves struggling to understand complex mechanisms that hadn’t been taught in school.
It’s a steep learning curve, especially if you come to it fresh, but here are six things I’ve learned over the years that may at least get you pointed in the right direction.
Carbon markets started with natural climate solutions more than 45 years ago.
Natural Climate Solutions (NCS) date back to a seminal 1976 paper entitled “Can We Control the Carbon Dioxide in the Atmosphere?”
“The long-term response [to climate change] must be to stop burning fossil fuels and convert our industry to renewable photosynthetic fuels, nuclear fuels, geothermal heat and direct solar-energy conversion,” it said. “But a worldwide shift from fossil to non-fossil fuels could not be carried out in a few years, [and an] emergency plant-growing program would provide the necessary short-term response to hold the CO2 at bay while the shift away from fossil fuels is being implemented.”
The author of the paper was Freeman Dyson, a visionary physicist and mathematician known as much for rock-solid breakthroughs in his core disciplines as for his more speculative ideas outside them. He also had some wacky ideas, which I should acknowledge here. He believed, for example, that we could turn comets into space-borne farms, and later in life he drastically overstated the potential of natural climate solutions to fix the mess.
Visionaries are like that, and there’s a bonus lesson here. Even geniuses have crackpot ideas, especially when venturing outside their core disciplines, and individual papers mean nothing if they’re not substantiated by later exploration.
Fortunately, decades of exploration have substantiated his tree-planting idea, and nature has been central to the climate challenge since 1979. That’s when the UN held its first World Climate Conference (WCC). There, delegates unanimously declared that “deforestation and changes of land use” were two of the three leading sources of anthropogenic carbon dioxide. The third was “the burning of fossil fuels.”
That unanimous declaration also stated “that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century” and called for continued research. This spawned the Intergovernmental Panel on Climate Change (IPCC), which provides a compendium of the latest climate science in context. The IPCC’s most recent report tells us that deforestation alone still generates about 13 percent of anthropogenic emissions, while land use more broadly accounts for about 30 percent.
Around the time of the first WCC, William Nordhaus and others started to model the interplay between climate change and economic growth. Nordhaus focused on the ability of high carbon prices to drive change, while others focused on the ability of carbon markets to drive finance into the most cost-effective reductions – two approaches that critics often contend are contradictory but which I’d argue are complementary if properly implemented.
Meanwhile, in 1992, the Rio de Janeiro Earth Summit gave us the three Rio Conventions: the Convention on Biological Diversity (CBD), the United Nations Framework Convention on Climate Change (UNFCCC), and the United Nations Convention to Combat Desertification (UNCCD). All of them impact forests, and the UNFCCC called for the “conservation and enhancement, as appropriate, of sinks and reservoirs of all greenhouse gases not controlled by the Montreal Protocol, including biomass, forests and oceans as well as other terrestrial, coastal and marine ecosystems.”
The UNFCCC also brought the possibility of creating a global currency to finance change: namely, carbon credits, which work globally because greenhouse gasses have the same impact on the atmosphere no matter where they’re emitted or where they’re removed.
Interest in carbon markets has always fluctuated with climate awareness, which has in turn fluctuated with media attention. As media finally give the climate challenge the coverage it requires, markets are responding. As a result, we now see that the “high carbon price” and “most efficient abatement” approaches are not mutually exclusive. Prices for compliance allowances in the European Union’s Emissions Trading System are north of €90 per ton, which forces companies to reduce emissions internally, while prices for voluntary offsets are all over the place. This gives companies the ability to finance reductions and removals above and beyond what’s required by law, and in ways that suit their own goals and aspirations.
To reiterate: a high compliance price is designed to force internal reductions, while low prices in the voluntary markets funnel money into the most cost-effective mitigation opportunities or next-generation technologies. The low-hanging fruit, however, is rapidly being depleted as finance moves to ever-greater challenges. Voluntary prices, in other words, are going up.
Every hectare of forest we save now means 50 fewer hectares we have to plant down the road, but conservation is expensive.
In 2019, an analysis in the journal Nature concluded that deforesting a hectare pumps 355 metric tons of carbon dioxide into the air, while reforesting the same area absorbs just 6.7 metric tons per year. That means we’d have to reforest roughly 50 hectares for every hectare of forest we lose in a given year (or wait roughly 50 years for that one hectare to recapture what was lost).
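The arithmetic behind that 50-to-1 figure is easy to check. A minimal sketch, using the two per-hectare figures from the Nature analysis cited above (everything else is my own framing):

```python
# Figures from the 2019 Nature analysis cited above (metric tons of CO2):
emitted_per_ha = 355.0      # one-time emissions from clearing one hectare
absorbed_per_ha_yr = 6.7    # annual uptake of one reforested hectare

# One hectare lost must be matched by this many hectare-years of regrowth,
# read either as hectares replanted for one year or as years of recovery
# for a single replanted hectare.
ratio = emitted_per_ha / absorbed_per_ha_yr
print(f"{ratio:.0f}")  # 53, i.e. roughly 50
```

The same ratio reads both ways because the units are hectare-years: 50 hectares growing for one year, or one hectare growing for 50 years.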
The insight is hardly new, and it’s woven into the history of Natural Climate Solutions. Take, for example, an agroforestry project called Mi Cuenca (My Basin). It was launched in 1974 by the NGO CARE to help Guatemalan farmers improve their water. It delivered clear social and environmental benefits, but it struggled to attract long-term funding.
Then, in the 1980s, an electric company called Applied Energy Services (AES) tried to offer its customers renewable energy and realized that large-scale renewables weren’t commercially viable. Undaunted, AES asked the World Resources Institute (WRI) for help.
WRI pointed AES to Mi Cuenca and helped CARE calculate the amount of carbon its activities could sequester if properly funded. AES then decided to finance Mi Cuenca based on that sequestered carbon, and the project was rechristened “Mi Bosque” (My Forest).
The real epiphany, however, came two years later, when WRI decided “to put some hard numbers into what was then a very soft debate.” It began modeling Mi Bosque’s impacts on surrounding forests and concluded that the project’s biggest climate impact came not from the new trees it had planted or the soil it had revived, but from the forests it had saved. By helping farmers increase yields, it turned out, they’d reduced their need to chop additional trees.
By all accounts, those early calculations overstated the project’s impact, but they also sparked decades of experimentation and adjustment that brought us to where we are now.
Carbon methodologies have evolved through 45 years of trial, error, and adjustment.
NGOs, governments, companies, and UN agencies started building on WRI’s work almost from the start, with pilot projects across Latin America, Eastern Europe, Asia, and Africa. The goals were twofold. To promote scalability, they wanted to create standardized methodologies. To prevent greenwashing, they had to make sure those methodologies were solid.
If they succeeded, they’d be able to create a tangible asset – a verified claim on emission reductions that emitters could use to reduce their carbon footprints in the present while reducing internally for the future.
Proponents knew from the start that the process of developing such methodologies would be tedious. It would require peer review from scientific experts in disciplines as disparate as forestry and rural development, and it would require public consultation among stakeholders in forest communities.
As a result, it wasn’t until 1997 that we saw the first standardized, verified credits associated with tree planting. That’s when the Scolel’te community forestry project was certified under the Plan Vivo Standard in Chiapas, Mexico.
It took even longer for the first REDD+ units to be generated and sold. That happened in 2011, when the Kasigau Corridor REDD+ Project was certified under the Verified Carbon Standard (VCS).
The pilot projects that ran throughout the 1990s were as diverse as the terrains and types of deforestation they addressed – from “frontier deforestation” in areas where large-scale agriculture was devouring forests to “mosaic deforestation” where smallholders were being forced to cut trees for survival. The methodologies were equally diverse and complex, as reflected in scores of scientific papers published throughout the 1990s and early 2000s.
It was clear from the start that there would never be universal agreement on any methodology and that every project would have outcomes that were partly subjective. Rather than let the perfect become the enemy of the good, proponents set out to identify procedures and activities that could be verified, and that would deliver environmental benefits on the whole. Some projects would underperform, but others would overperform, and it’s what they did in aggregate that mattered.
It was also clear that stand-alone projects, which are designed to address locally-specific drivers of deforestation in areas with weak governance, would face a different dynamic than would the next generation of projects embedded in jurisdictional programs, which would be designed to drive systemic change. More on this in a bit.
In 2007, Winrock International asked some of the world’s leading deforestation experts to summarize the current thinking on stand-alone projects in a paper published in the journal Mitigation and Adaptation Strategies for Global Change. Since then, the process of creating methodologies and related projects has tended to unfold along these lines:
To create a carbon methodology, some entity will identify a challenge that isn’t currently being addressed, such as the inability of smallholders to defend their land from illegal loggers. They will then develop a stepwise approach to addressing it and present it to a standard-setting body. The proposal will then go through a series of expert reviews, where it is amended before being put out for public consultation. At that point, stakeholders will have a month or two to make comments, and then the procedure may be amended further before finally being adopted as a recognized methodology.
Once a methodology exists, a project developer can follow it to create a project design document (PDD), which is a detailed explanation of the specific challenge and how the project hopes to address it. A recognized third-party auditor must then validate the project design to ensure its legitimacy. If the design is approved, the proponent will develop the project, which another third-party auditor must then verify to ensure it has been properly implemented. Only then can credits be issued.
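The two paragraphs above describe a sequence of gates, each of which must be passed before credits exist. Here is a hypothetical sketch of that pipeline as a simple state machine; the stage names are condensed from the text, and this is my own simplification, not any standard body’s actual workflow:

```python
from enum import Enum, auto
from typing import Optional

class Stage(Enum):
    """Hypothetical lifecycle stages, condensed from the process in the text."""
    METHODOLOGY_PROPOSED = auto()     # entity identifies an unaddressed challenge
    EXPERT_REVIEW = auto()            # series of expert reviews, with amendments
    PUBLIC_CONSULTATION = auto()      # stakeholders comment for a month or two
    METHODOLOGY_ADOPTED = auto()      # recognized methodology exists
    PDD_DRAFTED = auto()              # project design document written
    DESIGN_VALIDATED = auto()         # first third-party auditor checks design
    PROJECT_IMPLEMENTED = auto()      # proponent develops the project
    IMPLEMENTATION_VERIFIED = auto()  # second auditor verifies implementation
    CREDITS_ISSUED = auto()           # only now can credits be issued

# Enum members iterate in definition order, so the pipeline is just the list.
PIPELINE = list(Stage)

def next_stage(current: Stage) -> Optional[Stage]:
    """Return the next gate in the pipeline, or None once credits are issued."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None

print(next_stage(Stage.DESIGN_VALIDATED).name)  # PROJECT_IMPLEMENTED
```

The point of the sketch is the ordering: validation precedes implementation, verification precedes issuance, and nothing skips ahead.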
It’s obviously more complicated than this, and I’ve explained the process in more detail here, while Tanya Dimitrova has done so here. Several leading NGOs did so here, and the public-private BioCarbon Fund did so here.
As a result of all this, large-scale conservation efforts that previously struggled to make ends meet have managed to scale up, and not by relying on goodwill alone. They’re scaling up by delivering verified ecological benefits which are, in turn, adjusted to reflect scientific uncertainty.
Over the past decade, research has pointed to simpler ways of projecting short-term deforestation using fewer variables, and the process is evolving rapidly as lessons from the last decade are incorporated into new approaches and the world shifts from stand-alone projects to jurisdictional programs. As more people and resources enter the space, it should become possible to construct simpler reference levels with genuine third-party input rather than mere oversight.
REDD+ is just one component in a global mosaic of interconnected solutions, none of which can meet the climate challenge on their own.
REDD+ won’t magically fix the climate mess alone, but nothing will. Instead, it dovetails with government policies, private-sector initiatives, and NGO-driven solutions. The Forest Stewardship Council (FSC), for example, emerged in the wake of the 1992 Rio Earth Summit – not to replace government policy, but to compensate for its failure. The FSC develops standards for certifying sustainably sourced timber, and it works by stimulating demand for sustainably harvested timber products. Getting certified is expensive, and REDD+ is emerging as a tool for jump-starting that process.
Another initiative emerged in 2004, when NGOs and corporates launched the Roundtable on Sustainable Palm Oil (RSPO) to promote the use of sustainably produced palm oil. This was followed in 2006 by the Round Table on Responsible Soy (RTRS). In each case, the criteria were squishy at first, and consumer demand was slow to materialize.
Things are improving on both fronts, but for our purposes the important thing is that the extra cost of producing certified product outweighs the market premium, and REDD+ is emerging as a tool for bridging this financing gap.
Meanwhile, as consumer awareness grows, so do corporate initiatives. In 2010, for example, 400 private companies passed a resolution via the Consumer Goods Forum (CGF) to purge deforestation from the supply chains of cattle, soy, oil palm, and pulp & paper. Then, in 2014, UN Secretary-General Ban Ki-moon launched the New York Declaration on Forests, a cluster of ten pledges designed to end deforestation by 2030 while restoring hundreds of millions of acres of degraded land.
These efforts work on the demand side of supply chains, and while they haven’t ended deforestation, they have led to a major restructuring of corporate supply chains. REDD+ provides targeted funding to scale these initiatives up, and the transparency they bring can provide leverage points for government regulation.
The move from projects to jurisdictional programs was planned from the start.
Many countries are implementing jurisdictional REDD+ programs, or programs that cover an entire country or state, instead of the smaller areas that stand-alone projects focus on. This is not, as the Guardian and others claim, a response to news coverage of project shortcomings. Instead, it’s baked into the evolutionary process and has been for well over a decade.
The UNFCCC focuses on jurisdictional REDD+, but negotiators recognized early on that it would take years for countries to agree on international protocols and still longer to develop the monitoring and management systems needed to implement them. To promote early action, the Paraguayan delegation proposed a “nested approach” in 2008. This made it possible for countries to start with isolated projects that would eventually be absorbed into jurisdictional systems within countries that chose to pursue them.
From the start, it was clear that stand-alone projects are different from projects that nest in jurisdictional programs – largely because stand-alone projects address site-specific drivers of deforestation in areas where weak jurisdictional oversight is treated as a separate issue, while projects that nest in jurisdictional programs are designed to encourage systemic improvement by sharing risk between projects and jurisdictions. For this reason, the baselines of nested projects will probably be lower than for stand-alone projects, as Simeon Tegel pointed out in 2010.
“The risk is that these projects could be punished for their early success through discrepancies with an eventual national REDD methodology,” he wrote in Ecosystem Marketplace.
This reality is finally upon us, albeit a decade later than expected, and it’s sure to provide a jolt. Projects will complain about more conservative baselines, and critics will claim the whole system needs to be abandoned. This, however, is how evolution works.

Stand-alone baselines come from a time when projects were doing all the heavy lifting on their own and data was scarce, while nested baselines come from a time when jurisdictions are supposed to be helping and more data is available. At the same time, the methodologies for establishing baselines in stand-alone projects are cumbersome and complex, while those for nested projects are streamlined and simple.

As I see it, we’re stepping sideways in order to get a better footing for the future. It’s something we have to do, but that sideways step could be disastrous if we land on mushy ground. That’s why it’s important to get a critical mass of minds engaged in doing it right. For the sake of encouraging future development, it will also be necessary to recognize existing projects that were created in good faith, and there will certainly be a contentious period of adjustment.
But I digress.
When negotiators failed to reach a global agreement at the 2009 Copenhagen climate talks, governments and NGOs encouraged a proliferation of stand-alone projects and subnational initiatives to promote early action.
At the same time, donor countries such as Norway, Germany, and the United States continued offering “results-based” payments to jurisdictions that were able to document reductions in emissions from deforestation. These payments are different from offsets in that they do not involve the creation of a carbon asset. As a result, the accounting is less rigorous than that required for offset-quality payments.
Many projects began as cash-strapped conservation efforts that were struggling to cover costs, while some indigenous projects began as unfunded Life Plans, which are indigenous management strategies designed to revive traditional agricultural practices. Over time, the projects and methodologies became more sophisticated while global climate talks ground on.
At the 2013 climate talks in Warsaw, negotiators signed off on the Warsaw Framework for REDD+. Under the framework, countries that want to receive jurisdictional REDD+ funding must first calculate a Forest Reference Emissions Level (FREL), which is an estimate of future emissions from deforestation. Countries then submit their FREL to the UNFCCC, which provides a technical assessment but neither accepts nor rejects it.
As the Paris Agreement takes effect, a country’s FREL will serve as a pie from which forestry-based emission reductions, including those associated with carbon projects, are sliced.
Although a FREL isn’t a baseline in itself, it can serve as the basis of one.
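To make the pie metaphor concrete, here is a hypothetical illustration of how a jurisdictional reduction might be computed against a FREL and shared with a nested project. All figures are invented, and real FREL accounting involves uncertainty deductions, buffer pools, and leakage adjustments that this sketch omits:

```python
# All figures below are invented for illustration only.
frel = 10_000_000      # projected emissions from deforestation, tCO2e per year
observed = 8_500_000   # monitored actual emissions, tCO2e per year

# The jurisdiction's gross reduction is the slice below the reference level.
jurisdictional_reduction = frel - observed  # 1,500,000 tCO2e

# Under nesting, a project's creditable volume may be capped by its allocated
# share of the jurisdictional pie rather than by its own stand-alone baseline.
project_allocation_share = 0.04  # hypothetical 4% slice of the pie
project_cap = jurisdictional_reduction * project_allocation_share

print(f"Jurisdictional reduction: {jurisdictional_reduction:,} tCO2e")
print(f"Nested project cap: {project_cap:,.0f} tCO2e")  # 60,000
```

The contrast with stand-alone accounting is the direction of the math: instead of each project projecting its own counterfactual upward, the jurisdiction’s reference level is divided downward among the projects inside it.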
The upshot is that the methods of identifying priority areas within a jurisdictional system are different from those used to develop baselines for stand-alone projects. They focus, for example, on geospatial indicators such as proximity to recent past deforestation instead of intricate, site-specific modeling.
The challenge now is aligning these two systems without losing projects that have already delivered ecological results.
Ideologues of all stripes believe their TRUTH trumps anyone else’s facts.
In the early 1990s, the science underpinning carbon finance was untested, while the uncertainties were unknown, and the methodologies were non-existent. But the science improved, the uncertainties came into focus, and the methodologies came into being. As evidence mounted, opposition to market-based NCS waned, but a small yet vocal contingent of NGOs remained opposed.
In 2001, biologist Philip Fearnside, then with Brazil’s National Institute of Amazonian Research, analyzed the disparate views and found that opposition to including forests in the Kyoto Protocol’s Clean Development Mechanism (CDM) came almost exclusively from a small cluster of European NGOs or their affiliates in developing countries, while support came from NGOs based in the United States or in countries facing heavy deforestation – primarily Brazil.
“The reasons for these differences are not scientific, despite the debate frequently being couched in scientific terms,” he wrote. “It is very important to distinguish between what is a scientific conclusion and what is a moral judgment.”
This doesn’t mean forest-carbon methodologies have evolved to a state of perfection, and Fearnside hasn’t been shy about criticizing shortcomings when he sees them, but it shows how long this ideological rift has existed.
In the next installment, I’ll provide a framework for differentiating legitimate critiques from ideological bias and science denial.
Please see our Reprint Guidelines for details on republishing our articles.