International
Green economy: A path to another possible world?
The “green economy” proposal now touted by
businesses, governments, universities and NGOs
has been fed by the vogue and creed of global warming.
The role of new technologies, with their unpredictable risks,
is central to the green economy agenda,
which still sees the environment as a mere provider of resources
and people as a labor force to work and a mass to consume.
The green economy is neither clean nor green
and it won’t get our planet off its collision path
with crisis and collapse.
Célio Bermann
Twenty years after the United Nations Conference on Environment and Development, also known as the Earth Summit or Río Summit and in Portuguese as Eco 92, Río de Janeiro was again the site of the follow-up conference, Río+20. Its two main agenda points were the eradication of poverty and what is being touted as the green economy. Critical voices have argued that the green economy is guilty of an excessive “scientistic” positivism, as it relies on applying science to resolve the problems generated by climate change at the expense of political debate. According to its detractors, the green economy wagers on technologies whose risks are unpredictable, such as nanotechnology, synthetic biology and geoengineering, areas in which States and businesses have already invested billions of dollars.
An imprudent and erroneous term

The United Nations Environment Programme (UNEP) launched the Green Economy Initiative in October 2008 to mobilize and redirect savings and investments in green technologies and natural infrastructure. One of the results was the publication in March the following year of a 28-page “Global Green New Deal” policy brief, which in turn served as the basis for a 631-page document published in 2011 under the title “Pathways to sustainable development and poverty eradication,” with a 38-page “Synthesis for Policy Makers.” In this latter document UNEP defines a green economy as one “that results in improved human well-being and social equity, while significantly reducing environmental risks and ecological scarcities. In its simplest expression, a green economy can be thought of as one which is low carbon, resource efficient and socially inclusive.”
In the Río+20 context of sustainable development and poverty eradication, as Carlos Walter Porto-Gonçalves recalls in Sustentando a insustentabilidade (Sustaining unsustainability), the issue of the green economy was built on “a notion full of ambiguities, with no scientific or philosophical consistency, which would only serve to legitimize the opening of markets that, based on mercantile logic and a value system that is quantitatively measured and therefore limitless, tends to feed its tension with the ecological and cultural diversity of both the planet and humanity. For that reason, consecrating that term is not only imprudent but also a scientific and philosophical mistake.”
The underpinning principle of UNEP’s quite general definition of “green economy” is an economy that replaces fossil fuels with renewable energies and low carbon-emitting technologies. To reflect critically on the foundations of that definition, we need to look at the historical evolution of the environmental debate, starting with the results of the 1992 Earth Summit.
The “decarbonization” strategy

The main results of that conference were the drafting of conventions on Biodiversity, Desertification and Climate Change and the production of a number of documents including the Earth Charter, the Statement of Principles for the Sustainable Management of Forests, Agenda 21 and the Río Declaration on Environment and Development.
The energy question acquired more relevance in the UN Framework Convention on Climate Change, which addressed the problem of the increased emission and concentration of greenhouse gases. In 1997 it resulted in the drafting of the Kyoto Protocol, which established that greenhouse gas emissions had to be reduced by 5.2% by 2012, taking 1990 as the reference year.
From the scientific-institutional viewpoint, the work of the Intergovernmental Panel on Climate Change (IPCC) provided the technical-scientific basis for evaluating the human contribution to the increased emission and concentration of greenhouse gases. It identified the burning of fossil fuels (coal, petroleum and natural gas) and changes in soil use resulting from the loss of vegetation coverage due to burning as the main focal points for implementing emission-reduction measures.
The Kyoto Protocol and its flexible instruments or mechanisms (Clean Development Mechanism, International Emissions Trading, Joint Implementation) encountered resistance in various countries known to be major emitters; the Protocol was only ratified and put into effect on February 16, 2005, after Russia had ratified it in November of the previous year. For its part, the Convention on Climate Change established a calendar of annual “Conference of the Parties” meetings, the first of which was held in Berlin in 1995. Since then 16 more have been held, the latest one in Durban, South Africa, in 2011.
During that entire period, renewable energy established itself as a strategy for replacing the burning of fossil fuels and thereby reducing greenhouse gas emissions. The strategy was called “decarbonization” and promoted in countless congresses, seminars and meetings in which academics, governments, politicians, businesses and nongovernmental organizations (NGOs) participated. The decarbonization strategy is precisely one of the underpinnings of the green economy.
The global warming creed: Believers and skeptics

Since its creation in 1988, the IPCC has produced four evaluative reports on climate change: in 1990, 1995, 2001 and 2007. The latest presented the issue of global warming and climate change as the result of human action in the framework of a scientific consensus. That report, using reliable data gathered since 1850, showed that 11 of the 12 previous years were the warmest ever recorded, and that the majority of the warming produced since the mid-20th century is attributable to human activity, with more than 90% probability.
In a joint article published in 2007, scientists William Collins, Robert Colman, Philip Mote, James Haywood and Martin R. Manning defended the IPCC’s viewpoint. For them, the certainty that human beings are responsible for the increase in the atmospheric concentration of greenhouse gases is related to the fact that some of those gases—for example the majority of the halocarbons—have no natural source. They also argued that two important observations demonstrate the human influence with respect to other gases—basically carbon dioxide, methane and nitrous oxide. The first is that the geographic differences in the concentrations reveal that the sources are predominantly in the areas with the greatest demographic density in the Northern Hemisphere. The second is that analyses of isotopes, which can identify the emitting sources, indicate that the increase of carbon dioxide comes largely from the burning of fossil fuels (coal, petroleum and natural gas). The increase in methane and nitrous oxide levels is a product of agricultural practices as well as the burning of fossil fuels.
A large number of skeptical scientists do not share this certainty with respect to the contribution of anthropic (human) emissions to the global warming process. Some are even quite reticent about the very nature of the work developed by the IPCC. For example, the late French scientist Marcel Leroux, a former professor of climatology at the Jean Moulin-Lyon III University and director of the National Scientific Research Center’s Climatology Laboratory, argued with respect to the IPCC reports that “the announced number of scientists who participated in those reports may inspire confidence and hide the monolithic nature of the message. In reality, a small dominant team is imposing its points of view on a majority without climatological competencies. The ‘I’ of IPCC means, in effect, intergovernmental. This expresses the fact that the scientists are, above all, governmental representatives.”
For its part, the IPCC points out that it does not conduct new investigations, monitor data related to climate change or recommend climate policies. Nonetheless, its role in feeding “climatic alarmism” is undeniable. For Leroux, “global warming is an issue that has become fashionable, in particular after the summer of 1988. After that, ‘dust bowl’ anguish moved to center stage in the United States, followed by greenhouse panic. Initially a climatology issue, it was treated emotionally and irrationally, which then turned into alarmism. That is when it lost its scientific content.”
Scientific preoccupation or economic and political interests?

Other internationally renowned scientists have also criticized the loss of scientific content in the IPCC’s works, among them Richard Lindzen, professor of meteorology at the Massachusetts Institute of Technology; Robert Balling, professor of geography at Arizona State University; and Patrick Michaels, professor of environmental sciences at the University of Virginia; as well as Bjørn Lomborg, Fred Singer, John Christy and Stephen McIntyre.
It should also be pointed out that the nature of the activities and aims of several of these scientists has been called into question due to the financial support they received from the oil and coal industries, sectors that were and still are interested in using these academic works as a scientific basis for denying their alleged responsibility for the increased concentration of greenhouse gases. Making the task of disassociating science from economic and political-ideological interests even more difficult, some of these scientists became linked to the Heartland Institute, which acquired relevance for sheltering the “free market environmentalism” current, which bases its work on the vision that market principles suffice to ensure environmental protection and resource conservation.
The scientific credibility of the IPCC’s works was definitively put to the test with the publication, in its fourth report in 2007, of the prediction that the likelihood of the Himalayan glaciers having melted and therefore disappeared by around 2035 was “very high,” without citing solid evidence. That is the same expression it used in classifying the over 90% probability that global warming is caused by human beings.
That fourth report states that “…the likelihood of [the Himalayan glaciers] disappearing by the year 2035 and perhaps sooner is very high if the Earth keeps warming at the current rate.” Although the IPCC alleged in its defense that this prediction did not make it into the final summary for governments, the strongest repercussion of the error was felt at the end of the 15th Conference of the Parties, held in Copenhagen, which was marked by the failure of negotiations for a second stage of the Kyoto Protocol.
Are sustainable energies possible? And if so, at what scale?

The fact is that there is no consensus about whether carbon dioxide emissions of anthropic origin have a significant effect on global warming. Many scientists consider the human contribution to the verified global carbon dioxide emissions on the planet to be absolutely negligible. Luiz Carlos Molion, professor of meteorology at the Federal University of Alagoas and representative of the South American countries in the Climatology Commission of the World Meteorological Organization (WMO), said of the debates that took place in December 2009 around the 15th Conference of the Parties in Copenhagen that “the natural flows of the oceans, poles, volcanoes and vegetation add up to 200 billion tons of emissions per year. Our uncertainty about that number is more or less 40 billion. Human beings produce barely 6 billion tons. Therefore, humans represent 3%. If in that conference they had agreed to reduce emissions by half, what would 3 billion tons represent compared to 200 billion? It’s not going to change absolutely anything in the climate.”
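Molion’s back-of-the-envelope arithmetic can be checked directly. The following is a minimal sketch using only the figures he cites in the quote above (billions of tons per year); the variable names are illustrative, not his:

```python
# Figures as quoted by Molion (billions of tons of emissions per year).
natural_flows = 200.0       # oceans, poles, volcanoes and vegetation combined
natural_uncertainty = 40.0  # his stated margin of error on the natural flows
human_emissions = 6.0       # anthropogenic emissions

human_share = human_emissions / natural_flows          # 0.03, i.e. 3%
halved_cut = human_emissions / 2                       # a 50% cut removes 3 Gt
cut_vs_uncertainty = halved_cut / natural_uncertainty  # 0.075, i.e. 7.5%

print(f"Human share of total flows: {human_share:.0%}")
print(f"A 50% cut ({halved_cut:.0f} Gt/yr) is only {cut_vs_uncertainty:.1%} "
      f"of the 40 Gt/yr uncertainty in the natural flows alone")
```

On his figures, the proposed halving of human emissions (3 billion tons) is smaller than the stated uncertainty (40 billion tons) in the natural flows themselves, which is the thrust of his argument.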
Furthermore, according to 2011 data from the International Energy Agency (IEA), the total world supply of primary energy for 2009 was 12.15 billion tons of oil equivalent. Of that, 86.7% originated from non-renewable sources, that is, fossil fuels plus uranium. In other words, the so-called “renewable energies,” including hydropower, represented barely 13.3% of the world’s primary energy supply.
The debate needs a change of focus

The result of these facts is that humanity will remain inexorably and extremely dependent on fossil fuels for the coming decades. The efforts to replace them with “sustainable energy resources” are not only fragile in terms of the required scale, but also physically impossible.
A 2010 investigation by the UN Food and Agriculture Organization (FAO) on replacing vehicle fossil fuels with agrofuels, considering the current technological status, came up with figures such as these: replacing the automobile gasoline consumed in the world in 2009 would require dedicating an area of 482.2 million hectares to the production of ethanol, which is equivalent to 41% of the total surface used that year for the production of cereals, legumes, sugar, oilseeds and vegetables, or 35% of the world’s total available cultivable land. And replacing the total mineral diesel consumed in the world in the same year would require devoting an area of over 1.729 billion hectares to the production of biodiesel, an area equivalent to 1.25 times the total area of cultivable land available in the world.
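As a consistency check, each percentage the FAO study cites implies a total world area, and the implied totals can be compared. A minimal sketch follows; the hectare figures and percentages are those cited above, while the implied totals are back-calculations, not figures from the FAO report:

```python
# Areas in millions of hectares (Mha), as cited from the 2010 FAO study.
ethanol_area = 482.2        # to replace world automobile gasoline with ethanol
share_of_crop_area = 0.41   # 41% of land used for cereals, legumes, sugar, etc.
share_of_cultivable = 0.35  # 35% of the world's available cultivable land
biodiesel_area = 1729.0     # to replace world mineral diesel with biodiesel
cultivable_multiple = 1.25  # 1.25 times the world's cultivable land

# Totals implied by the percentages (back-calculated, not in the text).
implied_crop_area = ethanol_area / share_of_crop_area          # ~1,176 Mha
cultivable_from_ethanol = ethanol_area / share_of_cultivable   # ~1,378 Mha
cultivable_from_diesel = biodiesel_area / cultivable_multiple  # ~1,383 Mha

print(f"Implied world crop area:         {implied_crop_area:,.0f} Mha")
print(f"Cultivable land (ethanol route): {cultivable_from_ethanol:,.0f} Mha")
print(f"Cultivable land (diesel route):  {cultivable_from_diesel:,.0f} Mha")
```

The two independently implied totals for the world’s cultivable land agree at roughly 1.38 billion hectares, so the cited figures are at least internally consistent.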
Given these realities, the green economy, by taking the strategy of reducing carbon emissions as an instrument to achieve sustainable development, is appropriating the issue of climate change to promote the expansion of agrofuels, subordinated to agribusiness interests. The current state of technological knowledge, including the production of “second generation” ethanol or the use of other sources for the advanced production of biodiesel (microalgae, cyanobacteria and genetic manipulation), does not permit the alteration of the current international picture, marked by the food vs. agrofuels conflict.
As is well known, we are going through a period in which the process of climate change is becoming more acute, whether due to human action or to causes of a natural origin. Extreme events abound and can be identified by the increased frequency of hotter days in some regions of the planet and of colder days in others. The increased frequency and intensity of rains, which at times cause catastrophic floods, as well as the recording of ever more intense and frequent tropical cyclones, tornados and hurricanes are evidence of the need for a change of focus in the current international debate. This debate must no longer be limited to the creed of warming, which only muddies thinking and points toward false energy solutions.
The concern about contamination caused by greenhouse gas emissions due to the use of fossil fuels must give way to a scientifically more solid perception based on the dramatic increase in air contamination (for example, emissions of dust, smoke, hydrocarbons and even highly pathogenic aromatic compounds, as well as nitrogen and sulfur gases, which are precursors of acid rain and of ground-level tropospheric ozone) and on the degradation of the population’s living conditions in coal-mining areas, those with an iron and steel industry and those dedicated to agribusiness. These are the issues crucial to the health and survival of the environment and of the human species that have been obscured, neglected and omitted by companies, governments, universities and NGOs that adopted the vogue and creed of warming.
Proposed technological solutions

Technology has a central role in the green economy. A half century after the birth of the modern environmental movement, this line of thinking seems to favor technological solutions over political ones for all social problems. The positivist technical perspective finds affirmation in the green economy based on the idea that it is possible to end the dependence on natural resources and solve the climate problem through the development of technologies. The main technologies discussed in the preparations for the Río+20 Summit are nanotechnology, synthetic biology and geoengineering.
Nanotechnology. Nanotechnology permits the manipulation of material on a nanometric scale, in other words on the order of a billionth of a meter. On this scale, the characteristics of the chemical elements—electric conductivity, color, the form in which they react to atmospheric pressure, etc.—are altered. For that reason, nanotechnology offers the possibility of using a much smaller amount of raw material to produce certain products, and it is believed this would make it possible to replace already overexploited commodities or raw materials with new ones produced through this technology.
The governments of the United States, Japan, the United Kingdom and China in particular have already made huge investments in nanotechnology. Together these countries have spent $50 billion in basic research in this field since 2001. Comparatively speaking, this is more money than was invested in the Manhattan Project, which created the first atomic bomb.
At the outset, the governments made the majority of these investments, but in 2007 the private sector began to overtake them. The investments come from corporations working in the areas of energy, mining, chemistry and information technology, such as Nestlé, Monsanto and Syngenta. According to data from the ETC Group, a Canadian NGO that monitors new technologies, private sector investment in basic nanotechnology research has now reached some US$7 billion annually.
Synthetic biology. Synthetic biology can be described as the biological part of nanotechnology, as it allows the manipulation of the elements that make up the DNA of living organisms. The investors say that the development of synthetic biology will make it possible to create any kind of organism, thus permitting the creation of new forms of life. Based on that, it is believed that it will be possible to synthesize microbes capable of turning biomass into electrical energy, fuel and food. In theory, it would be possible to synthesize a microbe capable of producing plastic, for example, based on the cellulose present in vegetables.
The difference between that technology and genetic engineering—which is used to create genetically modified organisms—is that in theory synthetic biology makes it possible to synthesize DNA from zero, while genetic engineering “only” transfers one or more genes from one organism to another. The level of investments in synthetic biology is also impressive. The major oil companies, such as Exxon and Shell, have invested heavily in this area. Exxon alone spent US$600 million last year on a synthetic biology company. Meanwhile, the US government invested US$1 billion in small businesses in that sector in 2010.
Geoengineering. This is basically a strategy that encompasses various technologies, including synthetic biology and nanotechnology, to intervene on a large scale in oceans and in the atmosphere to deal with climate change.
The scientists working on projects in this field allege that it is impossible to reverse climate change unless we are willing to consider using geoengineering. This is proposed in two different ways, the first of which is to reduce the amount of sunlight that reaches the Earth through a strategy called “solar radiation management.” The idea is to block the sun’s light by bombarding the stratosphere with sulfates to simulate what happens when a volcano erupts. Some researchers allege that it is possible to construct enormous tubes—25 kilometers in height—distributed around the world that would bombard the atmosphere with sulfates to stabilize the temperature.
The second strategy, oceanic fertilization, proposes choosing a part of the ocean poor in nutrients such as iron and urea, and pouring nanoparticles of these nutrients into it to create a proliferation of phytoplankton (a set of microscopic aquatic plant organisms, mainly algae). The phytoplankton would absorb carbon dioxide from the atmosphere and upon dying would sink and end up deposited on the sea floor. Thirteen experiments of this type have been conducted around the world since 1993, financed mainly by the governments of the United States, the United Kingdom and Germany. All were failures, but the governments are still trying, at an ever greater cost.
The fact that geoengineering investment can still be considered modest may be explained by the moratorium established in 2010 by the UN Convention on Biological Diversity on geoengineering experiments that could have consequences reaching beyond the borders of the countries conducting them or that could have long-term effects. Only small experiments are therefore permitted. It was a decision supported by 193 countries. In reality, there are two moratoriums against geoengineering. The first, approved by the UN in 2008, is directed against oceanic fertilization experiments. The following year, Germany conducted tests that violated the moratorium, provoking an enormous wave of protests, including in Germany itself, that led to the interruption of the experiments. In 2010 that moratorium was extended to cover solar radiation management. But synthetic biology and nanotechnology are not subject to any kind of regulation.
The risks of technologies at the service of the green economy

The use of these techniques as a solution to environmental problems also has a lot of credibility in the academic world. The majority of the most recent Nobel Prize winners in physics and chemistry work in nanotechnology and synthetic biology, while the world’s most important universities—Oxford, Cambridge, Harvard, MIT and Stanford—are involved in research in these areas. There is, however, no debate about the risks involved in these technologies because there is consensus in academia regarding their enormous potential. No one today is discussing the environmental and health risks that could result from the indiscriminate use of these innovations.
There is also a risk related to the potential of transforming the global economy because no one knows who would control the changes or own the technologies. There is no global-scale capacity—even in the framework of the UN—to monitor and evaluate new technologies.
Canadian Pat Mooney, director of the ETC Group, said in an interview earlier this year in the Brazilian journal Poli that “a special regulation is needed in the case of nanotechnology, given the small size of the particulates and the fact that the characteristics of the materials change a lot. The regulatory agencies of the United States and Europe have no way to exercise greater regulation over nanotechnology and synthetic biology until there is a major accident that affects one of the two. The governments have already invested too much in these technologies to abandon them now. The regulators know that their hands are tied, because it is a political issue.”
This violates the precautionary principle, one of the main achievements of the Río 92 Summit, which established that if it is not known with certainty that a technology is safe, prudence suggests it not be used until more is known. Mooney reminds us that “the two UN bodies that had some technical capacity to evaluate new technologies were partially or completely dissolved in 1993. The Commission on Science and Technology for Development, which occupied an entire building in New York, lost so much financing that it is currently limited to two people in one room of the United Nations building in Geneva. The UN Commission on Transnational Corporations, which was the only UN body that monitored the private sector at a global scale and technology transfers among private companies, was also dissolved, in this case because the US government cut its budget.”
With respect to the impacts involved, a serious concern related to nanotechnology is the granting of patents. Thousands of products that use nanotechnology to some degree can now be found in the market: sunscreen lotions, cosmetics, clothes and other items are already using nanoparticles. But only in the past few years has research started analyzing what happens when nanoparticles enter the human organism or the environment. These studies all indicate that risks are involved and that further research is required.
Another risk is the release into the environment of organisms that did not previously exist in nature. Most of them will probably be unable to survive outside the laboratory, but some might. It is impossible to predict how quickly such organisms could mutate or develop the capacity to reproduce and give rise to something new. We also know that no matter how secure they may be, laboratories cannot guarantee that these organisms will remain confined.
As for geoengineering, the simple act of launching sulfates into the stratosphere could be extraordinarily dangerous. It is not even known how geoengineering might affect wind systems, ocean currents or rainfall amounts, and this could have an enormous impact on determining what could or could not be cultivated in certain places and who could or could not inhabit certain regions. The National Academy of Sciences of the United States, the Royal Society of the United Kingdom and several German institutions have already produced reports on geoengineering that all say the same thing: this technology is extremely dangerous and must only be considered as a last resort.
Neither clean nor green

The environmental discourse is being used as an opportunity to create new markets, which include the commercialization of nature. Some European governments feel that, with the crisis, they don’t have the money to preserve nature. They argue that if there were a way to make money by conserving ecosystems, it would become an attractive option. One example would be using nature in the carbon emissions compensation market.
Such “financialization” is seen as a solution, but is in fact part of the origin of the crisis we are facing. The problem is that the countries of the North began pressuring for its adoption at the Río+20 Summit, with the idea that the green economy is the best way out of the crisis, with synthetic biology and nanotechnology playing a central role. Europe and North America want recognition that a new economy based on these technologies is “clean” and “green” so they can convince the rest of the world that this new economy is the solution to the environmental, economic and social problems.
The knot lies in global inequity

As a strategy for surmounting the crisis, the green economy does not seem geared to a radically different way of life from the current one, but rather to an intensification of today’s dominant forms of production and consumption in the world, which generate inequalities between countries and peoples as well as multiple crises such as the environmental one. It is not an instrument for the needed construction of a new base of social relations of production and consumption in the context of “another possible world,” a theme so present in the debates of social movements ever since the first World Social Forum in 2001 in Porto Alegre, Brazil. Speaking of an “inclusive economy” without frontally attacking the knot of inequality is pure illusion. Including is not enough. The global disparity has to be reduced: the wealthy have to reduce their consumption of energy and natural resources so that the poor can benefit from an increase in this consumption.
The proposal found in green economy documents to calculate “natural capital,” on the argument that this measure is necessary to interest corporations in preservation, is an error because its real purpose is merely to find other means of capital accumulation and thus overcome the current financial crisis, changing nothing in the existing system. It is worth noting that there is now a methodology to measure the market value of what were previously considered common goods: air, water, biodiversity, etc. It appears in The Economics of Ecosystems and Biodiversity (TEEB), a study hosted by UNEP with financial support from the European Commission, Germany, the United Kingdom, the Netherlands, Norway, Sweden and Japan, that was presented at the last conference of the Convention on Biological Diversity in 2010.
To get out of the crisis and the collapse

Another illusion is that the wealthy countries will be able to open up this ecological space through efficiency and the development of a low carbon emission economy alone. While desirable and necessary, efficiency is insufficient. The savings obtained by increasing efficiency end up being used in other kinds of consumption, which cancels out or can even outstrip the resources saved, a phenomenon known as the “rebound effect.”
The economy continues to be perceived as the main system, one that treats the environment as a mere resource provider and society as a labor force and a mass of buyers so that the wheel of production and consumption can keep turning via processes that lead to an accumulation of goods and to unequal and unjust access to opportunities.
The course of international policy, and thus of the world economy, has to be redefined. The new correlation of forces being consolidated in the world has to be translated into new guidelines and a new agenda has to be established that pulls the planet off the route to crisis and collapse.
Célio Bermann is a professor and researcher at the Electrotechnology and Energy Institute of the University of São Paulo and an adviser to social and environmental movements. This text appeared in the May-June 2012 edition (No. 239) of Nueva Sociedad. Translated and subtitled by envío.