GEO-ENGINEERING CLIMATE CHANGE
Environmental Necessity or Pandora’s Box?
If anthropogenic carbon emissions continue unabated and global temperatures continue to rise, large-scale geo-engineering projects may prove our last hope for controlling the Earth’s climate. This book is the first to present a detailed and critical appraisal of the geo-scale engineering interventions that have been proposed as potential measures to avert the devastation of runaway global warming. Early chapters set the scene by presenting a historical and philosophical overview of global warming and by discussing projections of future CO2 emissions and techniques for predicting climate tipping points. Subsequent chapters then review proposals to limit CO2 concentrations through improved energy technologies, direct removal of CO2 from the atmosphere and stimulation of enhanced CO2 uptake by the oceans. Schemes for solar radiation management, involving the reflection of sunlight back into space using artificially brightened, low-level marine stratus clouds or stratospheric aerosols, are also described and assessed. With some technologies already at the prototype-testing stage, the pros and cons of the various schemes are thoroughly examined, throwing light on the passionate public debate about their safety. Written by a group of the world’s leading authorities on the subject, this comprehensive reference is essential reading for researchers and government policy makers at Copenhagen and beyond.

Brian Launder graduated in mechanical engineering from Imperial College in 1961 and obtained Masters and Doctoral degrees from the Massachusetts Institute of Technology for research in gas turbines. He rejoined his former department at Imperial as a lecturer and was promoted to Reader in Fluid Mechanics in 1971. In 1976 he began a four-year spell as professor at the University of California before accepting the position of head of the thermo-fluids division at UMIST. He was admitted as a Fellow of the Royal Society and of the Royal Academy of Engineering in 1994. Thereafter he chaired the Institute’s Environmental Strategy Group, and from 2000 to 2006 he served as regional director of the Tyndall Centre for Climate Change Research; he currently sits on the Royal Society’s geo-engineering panel. Professor Launder has received honorary doctorates from three European universities and numerous prizes in recognition of his research contributions.

Michael Thompson graduated from the University of Cambridge with first-class honours in mechanical sciences in 1958. Further degrees include an ScD (Cambridge) and an honorary DSc (University of Aberdeen). He spent a year as a Fulbright researcher in aeronautics and astronautics at Stanford before joining University College London (UCL) in 1964. He was appointed a professor in 1977 and subsequently Director of the Centre for Nonlinear Dynamics. Professor Thompson was elected a Fellow of the Royal Society in 1985 and has served on the Council of the Society. He has been honoured with the OMAE Award (American Society of Mechanical Engineers, 1985), the Alfred Ewing Medal (Institution of Civil Engineers, 1992) and a Gold Medal for lifetime contributions to mathematics (IMA, 2004). He is currently Emeritus Professor at UCL, Honorary Fellow at the Department of Applied Mathematics and Theoretical Physics (DAMTP) in Cambridge, and Sixth Century Professor (part-time) in Theoretical and Applied Dynamics at Aberdeen.
GEO-ENGINEERING CLIMATE CHANGE
Environmental Necessity or Pandora’s Box?

Edited by
BRIAN LAUNDER
University of Manchester
J. MICHAEL T. THOMPSON
University of Cambridge
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo

Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
http://www.cambridge.org
Information on this title: www.cambridge.org/9780521198035

© Cambridge University Press 2010
This volume includes contributions written by authors for Philosophical Transactions of the Royal Society: Mathematical, Physical and Engineering Sciences, and copyright in these chapters rests with the Royal Society.

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2010
Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data

ISBN 978-0-521-19803-5 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
Contents
List of contributors
Preface

Part I  Scene setting
1  Geo-engineering: could we or should we make it work?  Stephen H. Schneider
2  Reframing the climate change challenge in light of post-2000 emission trends  Kevin Anderson and Alice Bows
3  Predicting climate tipping points  J. Michael T. Thompson and Jan Sieber
4  A geophysiologist’s thoughts on geo-engineering  James Lovelock
5  Coping with carbon: a near-term strategy to limit carbon dioxide emissions from power stations  Paul Breeze

Part II  Carbon dioxide reduction
6  Capturing CO2 from the atmosphere  David W. Keith, Kenton Heidel and Robert Cherry
7  Carbon neutral hydrocarbons  Frank S. Zeman and David W. Keith
8  Ocean fertilization: a potential means of geo-engineering?  R. S. Lampitt et al.
9  The next generation of iron fertilization experiments in the Southern Ocean  V. Smetacek and S. W. A. Naqvi

Part III  Solar radiation management
10  Global temperature stabilization via controlled albedo enhancement of low-level maritime clouds  John Latham et al.
11  Sea-going hardware for the cloud albedo method of reversing global warming  Stephen Salter, Graham Sortino and John Latham
12  An overview of geo-engineering of climate using stratospheric sulphate aerosols  Philip J. Rasch et al.
13  Global and Arctic climate engineering: numerical model studies  Ken Caldeira and Lowell Wood

Index
Contributors
E. P. Achterberg, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Kevin Anderson, Tyndall Centre for Climate Change Research, Mechanical, Civil and Aerospace Engineering, University of Manchester, PO Box 88, Manchester M60 1QD, UK
T. R. Anderson, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Keith Bower, SEAES, University of Manchester, PO Box 88, Manchester M60 1QD, UK
Alice Bows, Tyndall Centre for Climate Change Research, Mechanical, Civil and Aerospace Engineering, University of Manchester, PO Box 88, Manchester M60 1QD, UK
Paul Breeze, 62 Elm Park Court, Pinner HA5 3LL, UK
Ken Caldeira, Department of Global Ecology, Carnegie Institution, 260 Panama Street, Stanford CA 94305, USA
Chih-Chieh (Jack) Chen, CGD Division, NCAR, PO Box 3000, Boulder CO 80307, USA
Robert Cherry, Idaho National Laboratory, PO Box 1625, Idaho Falls ID 83415, USA
Tom Choularton, SEAES, University of Manchester, PO Box 88, Manchester M60 1QD, UK
Alan Gadian, ICAS, School of Earth and Environment, University of Leeds, Leeds LS2 9JT, UK
Rolando R. Garcia, NCAR, PO Box 3000, Boulder CO 80307, USA
Andrew Gettelman, CGD Division, NCAR, PO Box 3000, Boulder CO 80307, USA
Kenton Heidel, Energy and Environmental Systems Group, University of Calgary, Calgary, Alberta T2N 1N4, Canada
J. A. Hughes, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
M. D. Iglesias-Rodriguez, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
David W. Keith, Energy and Environmental Systems Group, University of Calgary, Calgary, Alberta T2N 1N4, Canada
B. A. Kelly-Gerreyn, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Laura Kettles, ICAS, School of Earth and Environment, University of Leeds, Leeds LS2 9JT, UK
R. S. Lampitt, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
John Latham, MMM Division, NCAR, PO Box 3000, Boulder CO 80307, USA
Brian Launder, School of Mechanical, Aerospace and Civil Engineering, University of Manchester, PO Box 88, Manchester M60 1QD, UK
James Lovelock, Green College, University of Oxford, Woodstock Road, Oxford OX2 6HG, UK
M. Lucas, Department of Oceanography, University of Cape Town, Rondebosch, Cape Town 7701, Republic of South Africa
Hugh Morrison, MMM Division, NCAR, PO Box 3000, Boulder CO 80307, USA
S. W. A. Naqvi, National Institute of Oceanography, Dona Paula, Goa 403 004, India
Luke Oman, Department of Earth and Planetary Sciences, Johns Hopkins University, Baltimore MD 21218, USA
E. E. Popova, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Philip J. Rasch, Pacific Northwest National Laboratory, 902 Battelle Boulevard, PO Box 999, MSIN K9-34, Richland WA 99352, USA
Alan Robock, Department of Environmental Sciences, Rutgers University, New Brunswick NJ 08901, USA
Stephen Salter, Institute for Energy Systems, School of Engineering, University of Edinburgh, Edinburgh EH9 3JL, UK
R. Sanders, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Stephen H. Schneider, Department of Biology, Woods Institute for the Environment, Stanford University, Stanford CA 94305, USA
J. G. Shepherd, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Jan Sieber, Department of Mathematics, University of Portsmouth, Portsmouth PO1 3HF, UK
V. Smetacek, Alfred Wegener Institute for Polar and Marine Research, 27570 Bremerhaven, Germany
D. Smythe-Wright, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Graham Sortino, School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK
Georgiy L. Stenchikov, King Abdullah University of Science & Technology, Thuwal, Saudi Arabia
J. Michael T. Thompson, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, UK
Simone Tilmes, NCAR, PO Box 3000, Boulder CO 80307, USA
Richard P. Turco, Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles CA 90095, USA
Lowell Wood, Hoover Institution, Stanford CA 94305, USA
A. Yool, National Oceanography Centre, European Way, Southampton SO14 3ZH, UK
Frank S. Zeman, Department of Earth and Environmental Engineering, Columbia University, New York NY 10027, USA
Preface
Today, the individually neutral words ‘global’ and ‘warming’ combine to provide an epithet whose consequences, already causing misery and premature death for millions, hold the prospect of unquantifiable change and potential disaster on a global scale for the decades to come. While the link between rising global temperatures and increasing atmospheric concentrations of CO2 has been known for more than a century (Arrhenius, 1896), there is increasingly the sense that governments are failing to come to grips with the urgency of setting in place measures that will assuredly lead to our planet reaching a safe equilibrium. Today, the developed world is struggling to meet its (arguably inadequate) carbon-reduction targets while emissions by China and India have soared. Meanwhile, signs suggest that the climate is even more sensitive to atmospheric CO2 levels than had hitherto been thought.

Alarmed by what are seen as inadequate responses by politicians, for a number of years some scientists and engineers have been proposing major ‘last-minute’ schemes that, if properly developed and assessed in advance, could be available for rapid deployment, should the present general concern about climate change be upgraded to a recognition of imminent, catastrophic and, possibly, irreversible increases in global temperatures with all their associated consequences. While such geo-scale interventions may be risky, the time may well come when they are accepted as less risky than doing nothing.

The above sets out the main elements that led the Philosophical Transactions of the Royal Society A to publish a theme issue on such geo-engineering approaches and to subject them to critical appraisal by acknowledged experts from around the world. In inviting contributors to that issue (which the present editors also edited), we turned especially to speakers at a workshop on Macro-Engineering Options for Climate Change Management and Mitigation organised by Professor John Shepherd FRS and Professor Harry Elderfield FRS in January 2004 on behalf of the Tyndall Centre for Climate Change Research. Published under the theme title
Geoscale Engineering to Avert Dangerous Climate Change (Launder & Thompson, 2008), the issue has been and continues to be highly cited in the academic media and elsewhere. Now, to extend and update this collection, and to bring it to a wider and more diverse audience, we are delighted to have worked with Cambridge University Press on the present book, which contains edited versions of all the Phil. Trans. papers plus two entirely new articles prepared expressly for this volume.

The collected papers are here presented in three parts, the first of which, Scene setting, comprising five articles, provides a historical and philosophical overview combined with projections of future CO2 emission levels and the foreseen capacity (or limitations) of conventional sequestration. A further topic examined, in a chapter written specifically for this book, is the prediction of climate tipping points, an approach that may be able to provide forewarning of impending abrupt climate change.

The section Carbon dioxide reduction (CDR) gives examples of three approaches to limiting CO2 growth or, possibly, in the longer term, even of reducing carbon dioxide concentrations. In a specially commissioned chapter for the present volume, Keith et al. assess the prospects for ‘air capture’, that is, the direct removal of CO2 from the atmosphere. This strategy, first proposed (in the context of climate change) a decade ago (Lackner, 1999), is now being intensively explored, with several pre-commercial prototypes undergoing testing in North America and Europe. Their eventual development and deployment will depend inter alia on the price paid for captured CO2, a subject on which there will surely be much discussion in the immediate future. Once captured, the carbon dioxide must be stored. While this could be by one of the routes considered for conventional carbon capture and storage (CCS), an alternative approach suggested in the chapter by Zeman and Keith is to recombine carbon dioxide with hydrogen to produce liquid hydrocarbon fuels. This section concludes with two chapters on stimulating the take-up of CO2 directly in the oceans. Over the years, such approaches have stimulated passionate debate for and against, and the two chapters included here reflect this ambivalence.

In the final section of the book, Solar radiation management (SRM), two schemes for raising the proportion of incoming sunlight reflected back into space are presented. With both approaches the reflecting agents are in the atmosphere: in one case brightened, low-level marine stratus clouds, and in the second stratospheric aerosols. While CDR schemes, because they attack the root cause of global warming, are commonly seen as the preferred geo-engineered route (albeit inferior to the unachieved goal of reducing the emissions of CO2 to a trickle), only SRM strategies can act swiftly following deployment should the risk of catastrophe appear imminent. While, with both SRM schemes, there are many legal, ethical,
research and technological issues still to be resolved, there is the prospect that these should be soluble within the relatively short time-frame of 15–20 years. A further potentially important group of SRM strategies – space-based approaches – is not represented in this section, largely because the development obstacles and the costs involved suggested to us a potential implementation date significantly further into the future. Mention of such alternative approaches is, however, included in the opening overview chapter by Stephen Schneider.

We wish to express our thanks to the authors for adhering to (or nearly adhering to) the various deadlines set, and to the referees – at least two of whom reviewed each chapter in this volume – for their prompt and often very detailed and helpful responses. Finally we thank Suzanne Abbott of the Royal Society for facilitating the transfer of materials to Cambridge University Press.

Brian Launder
J. Michael T. Thompson

References

Arrhenius, S., 1896, ‘On the influence of carbonic acid in the air upon the temperature of the ground’, Phil. Mag., 41, 237–275.
Lackner, K., 1999, ‘CO2 sequestration from the air – is it an option?’, Proc. 24th Int. Conf. on Coal Utilization and Fuel Systems, 885–896.
Launder, B. E. and Thompson, J. M. T. (editors), 2008, Phil. Trans. R. Soc. A, 366, No. 1882.
Part I Scene setting
1 Geo-engineering: could we or should we make it work?

Stephen H. Schneider
Schemes to modify large-scale environmental systems or to control climate have been proposed for over 150 years: to (i) increase temperatures in high latitudes, (ii) increase precipitation, (iii) decrease sea ice, (iv) create irrigation opportunities, or (v) offset potential global warming by injecting iron into the oceans or sea-salt aerosol into the marine boundary layer, or by spreading dust in the stratosphere to reflect away an amount of solar energy equivalent to the amount of heat trapped by increased greenhouse gases from human activities. These and other proposed geo-engineering schemes are briefly reviewed. Recent schemes to modify climate intentionally have been advanced either as cheaper ways to counteract inadvertent climatic modification than conventional mitigation techniques, such as carbon taxes or pollutant emissions regulations, or as a counter to rising emissions as governments delay policy action. Whereas proponents argue cost-effectiveness, or the need to be prepared if mitigation and adaptation policies are not strong enough or enacted quickly enough to avoid the worst widespread impacts, critics point to the uncertainty over whether (i) any geo-engineering scheme would work as planned or (ii) the many centuries of international political stability and cooperation needed for the continuous maintenance of such schemes, to offset century-long inadvertent effects, are socially feasible. Moreover, the potential exists for transboundary conflicts should negative climatic events occur during geo-engineering activities.
1.1 Historical perspective (modified after Schneider 1996)

In Homer’s Odyssey, Ulysses is the frequent beneficiary (or victim) of deliberate weather modification schemes perpetrated by various gods and goddesses. In Shakespeare’s The Tempest, a mortal (albeit a human with extraordinary magical powers) conjures up a tempest in order to strand on his mystical island a passing ship’s crew with his enemies aboard. In literature and myth, only gods and magicians had access to controls over the elements. But in the twentieth century, serious proposals for the deliberate modification of weather and/or climate have come from engineers, futurists or those concerned with counteracting inadvertent climate modification from human activities. Some argue that it would be better or cheaper to counteract inadvertent human impacts on the climate with deliberate schemes rather than to tax polluting activities or to switch to alternative means of economic sustenance. So, while control of the elements has been in the human imagination and literature for millennia, it is only in the waning of the last millennium that humans have proposed serious techniques that might just do the job – or at least could modify the climate, even if all the basic ramifications are not now known, or even knowable.

As early as the mid-1800s, US government meteorologist J. P. Espy put forth a rainmaking proposal to cut and burn vast tracts of forest in order to create columns of heated air to generate clouds and trigger precipitation. In the end, the results of his ‘artificial rain’ experiments couldn’t please everyone and were met with widespread suspicion (Fleming 2007). Such precipitation manipulation schemes continued to be posited and tested by countries over the ensuing century, in tandem with other climate modification proposals.

The US President’s Science Advisory Committee suggested in 1965 that if warming by CO2 due to the greenhouse effect ever became a problem, the government might take countervailing geo-engineering steps such as spreading a reflective substance across the ocean waters, or sowing particles high in the atmosphere to encourage the formation of reflective clouds. Calculations suggested such steps were feasible, and could cost less than many government programmes (Weart 2008).

In ca. 1960 (the exact date is unknown), authors N. Rusin and L. Flit from the former Soviet Union published a long essay entitled ‘Man versus climate’. In this essay the authors, displaying a traditional Russian geographical perspective, claim that ‘the Arctic ice is a great disadvantage, as are the permanently frozen soil (permafrost), dust storms, dry winds, water shortages in the deserts, etc.’. And, they go on, ‘if we want to improve our planet and make it more suitable for life, we must alter its climate’. But this must not be for hostile purposes, they caution, as ‘almost all the huge programmes for changing nature, e.g. the reversal of the flow of northern
rivers and the irrigation of Central Asian deserts, envisage improvements in the climate’ (Rusin & Flit 1960, p. 17).

They recount earlier proposals for dazzling projects such as injecting tiny white particles suspended in space in the path of the solar radiation, to light up the night sky. M. Gorodsky and later V. Cherenkov put forward ‘proposals to surround the Earth with a ring of such particles, similar to the ring around Saturn’ (in Rusin and Flit 1960). The plan was to create a 12 per cent increase in solar radiation, such that high latitudes would ‘become considerably warmer and the seasons would scarcely differ from one another’.

And so it goes in this essay, detailing plans to divert rivers from the Arctic to the Russian wheat fields, or from the Mediterranean to irrigate areas in Asian USSR. One ambitious project is to create a ‘Siberian Sea’ with water taken from the Caspian Sea and Aral Sea areas. Of course, flowery rhetoric with images of blooming now-arid zones stands in stark contrast to the ecological disaster that surrounds the Aral Sea today; environmental degradation is associated with much less ambitious engineering projects (Glazovsky 1990). But the upbeat little pamphlet, written at the height of human technological hubris in the mid-twentieth century, certainly is filled with, if nothing else, entertaining geo-engineering schemes.

Other sets of such schemes have also been part of geo-engineering folklore and include damming the Straits of Gibraltar, the Gulf Stream, the Bering Straits or the Nile, or creating a Mediterranean drain back into Central Africa where a ‘second Nile’ would refill Lake Chad, turning it into the ‘Chad Sea’ after the Straits of Gibraltar were dammed. However, the authors of such schemes do not emphasise the fact that the Mediterranean currently produces a significant fraction of the salty water that sinks and becomes intermediate-depth water in the North Atlantic, only to rise again in the high North Atlantic, in the Iceland–Norwegian Sea areas, making that part of the world’s oceans sufficiently salty that surface water sinks to the bottom at several degrees Celsius above freezing. In that process of sinking, approximately half the bottom waters of the world’s oceans are formed. The Gulf Stream’s surface water that flows into the higher latitudes of the northeastern North Atlantic and into Scandinavia allows northwestern Europe to enjoy a more moderate climate relative to that of, say, Hudson Bay. The latter, at the same latitude, does not have the benefit of the salty waters and the Gulf Stream’s penetration high into the North Atlantic, which inhibits sea ice formation and contributes to a milder climate.

Other examples of attempts to modify the atmosphere, but for a different purpose, followed from the first use of the word ‘geo-engineering’ of which I am aware. This term was informally coined in the early 1970s by Cesare Marchetti (and formally published at the invitation of the editor of Climatic Change in its inaugural issue as Marchetti 1977). Marchetti outlined his thesis:
The problem of CO2 control in the atmosphere is tackled by proposing a kind of ‘fuel cycle’ for fossil fuels where CO2 is partially or totally collected at certain transformation points and properly disposed of. CO2 is disposed of by injection into suitable sinking thermohaline currents that carry and spread it into the deep ocean that has a very large equilibrium capacity. The Mediterranean undercurrent entering the Atlantic at Gibraltar has been identified as one such current; it would have sufficient capacity to deal with all CO2 produced in Europe even in the year 2100. (Marchetti 1977)
About the same time, Russian climatologist Mikhail Budyko expanded on this theme of geo-engineering, also for the purpose of counteracting inadvertent climate modification:

If we agree that it is theoretically possible to produce a noticeable change in the global climate by using a comparatively simple and economical method, it becomes incumbent on us to develop a plan for climate modification that will maintain existing climatic conditions, in spite of the tendency toward a temperature increase due to man’s economic activity. The possibility of using such a method for preventing natural climatic fluctuations leading to a decrease in the rate of the hydrological cycle in regions characterized by insufficient moisture is also of some interest. (Budyko 1977, p. 244)
Fortunately, Budyko does go on to apply the appropriate caveat: ‘The perfection of theories of climate is of great importance for solving these problems, since current simplified theories are inadequate to determine all the possible changes in weather conditions in different regions of the globe that may result from modifications of the aerosol layer of the stratosphere.’ What Budyko proposed is a stratospheric particle layer to reflect away enough sunlight to counteract heat trapping from anthropogenic greenhouse warming. Obviously, he believed that deliberate climate modification would be premature before the consequences could be confidently precalculated.

Anticipating the increasing calls for deliberate climate modification as a geo-engineering countermeasure for the advent or prospect of inadvertent climate modification, William Kellogg and I raised a number of aspects of this issue that had only been hinted at by previous authors. After summarising a whole host of such schemes, we concluded:

One could go on with such suggestions, some to cool and some to warm vast regions of the earth, some to change the patterns of rainfall, some to protect from damaging storms, and so forth. They could be used to improve the current climate (for some) or to offset a predicted deterioration of climate (for some), whether the deterioration was natural or man-induced . . . We believe that it would be dangerous to pursue any large-scale operational climate control schemes until we can predict their long-term effects on the weather patterns
and the climate with some acceptable assurance. We cannot do so now, and it will be some time – if ever – before we can. To tamper with a system that determines the livelihood and life-styles of people the world over would be the height of irresponsibility if we could not adequately foresee the outcome. However, we recognize that this may not be the opinion of some, especially those who live in the affected regions where a prediction of climatic change could be a forecast of local disaster if the predicted change were not offset. (Kellogg & Schneider 1974)
We went on to argue that some people could even consider the use of climate modification as an overt or clandestine weapon against economic or political rivals, and that this prospect might require an international treaty. We noted that the potential for disputes would be very high, since any natural weather disaster occurring during the time that some group was conducting deliberate climate modification experiments could lead those affected by that disaster to accuse the climate modifiers of responsibility for that event. Courts could be clogged with expert witnesses testifying on the one hand how the deliberate intervention could not possibly have caused some unusual hurricane or ‘300-year flood’, followed by other witnesses (perhaps the same ones collecting double fees?) turning around and testifying for the other side that current knowledge is insufficient to rule out the possibility that a geo-engineering scheme in one part of the world might very well have affected some extreme event on the other side of the world. We concluded, only partially tongue in cheek, that:

We have raised many more questions than we are even remotely capable of answering, but we do wish to offer one ‘modest’ proposal for ‘no-fault climate disaster insurance.’ If a large segment of the world thinks the benefits of a proposed climate modification scheme outweigh the risks, they should be willing to compensate those (possibly even a few of themselves) who lose their favored climate (as defined by past statistics), without much debate as to whether the losers were negatively affected by the scheme or by the natural course of the climate. After all, experts could argue both sides of cause and effect questions and would probably leave reasonable doubts in the public’s mind . . . (Kellogg & Schneider 1974)
A number of people picked up the geo-engineering issue on and off in the 20 years since. For example, Schelling (1983) pointed out that world economic development was not going to pay much attention to global warming prospects, given the Realpolitik of population and economic growth advocates within the political establishments of nearly all countries. Schelling concluded that, should global warming prove to be as significant as some climatologists or ecologists feared, then perhaps we should consider geo-engineering as a cost-effective and politically acceptable alternative to energy taxes or fuel switching (which could spell the politically unpalatable demise of the coal industry, for example).
1.2 Geo-engineering as part of global change cost/benefit analysis?

However, 20 years ago, with the significant increase in publicity attached to the problem of global warming, and in the wake of the intense heat waves, drought, fires and large hurricanes in North America in 1988 and 1989, concerns reached the halls of the US Congress and many parliaments around the world. Legislators began examining serious proposals for energy taxes, fuel switching, demand-side management, lifestyle changes and other policies that directly affect some economic interests (e.g. Lashof & Tirpak 1990; Gaskins & Weyant 1993; IPCC 1995a). Then, as foreshadowed some decades earlier, calls for geo-engineering inevitably resurfaced after a relatively quiescent two decades.

The most ambitious attempt to justify and classify a range of geo-engineering options was associated with a US National Research Council panel on the policy implications of global warming (NAS 1992). In particular, Robert Frosch, a member of that panel, worked assiduously to gather information on many proposed schemes, and then did a careful job of engineering analysis. Not only did he write down a range of geo-engineering schemes and try to calculate their potential effectiveness for climate control, but he also did order-of-magnitude calculations of the relative costs of changing the Earth’s temperature by geo-engineering versus conventional means such as energy taxes (estimating how many dollars per ton of carbon dioxide reduction the climate control scheme might be equivalent to).

As a member of that panel, I can report that the very idea of including a chapter on geo-engineering led to serious internal and external debates. Many participants (including myself) were worried that even the very thought that we could offset some aspects of inadvertent climate modification by deliberate climate modification schemes could be used as an excuse to maintain the status quo by those who would be negatively affected by controls on the human appetite to continue polluting and using the atmosphere as an unpriced sewer. Their view was that geo-engineering as a possibility would create a moral hazard. Those who worried about climatic impacts often favoured market incentives to reduce emissions or regulations for cleaner alternative technologies.

However, Frosch effectively countered that argument. Supposing, it was said, a currently envisioned low-probability but high-consequence outcome really started to unfold in the decades ahead (e.g. 5 °C warming in the next century – which had already been characterised as having potentially catastrophic implications for ecosystems; see Peters & Lovejoy 1992; Root & Schneider 1993). Let us also assume, it was argued, that by the second decade of the twenty-first century the next generation of scientific assessments such as IPCC (1995b, 2007a) converged on confidently forecasting that the Earth had become committed to climate change (and its consequences – IPCC 2007b) serious enough either to require a dramatic retrenchment from our fossil fuel-based economy (which would
be politically difficult to accept in a world increasingly dependent on carbon fuels, especially coal) or to endure potentially catastrophic climatic changes. Under such a scenario, we would simply have to practice geo-engineering as the ‘least evil’, it was argued.
1.3 Geo-engineering revisited in the twenty-first century

Recently, Crutzen (2006) reignited the geo-engineering debate in a special topic issue of Climatic Change. His arguments were based on exasperation at society’s failure to mitigate the ‘right way’, and thus to prevent ‘dangerous anthropogenic interference with the climate system’ (as it was put by the United Nations Framework Convention on Climate Change, signed by over 190 countries). Thus, geo-engineering may be the only remaining solution, Crutzen lamented. A lively debate followed in that special issue, mirrored in many media accounts, since Crutzen, a Nobel Laureate in chemistry, generated a great deal of public interest in this debate, given his credentials as one of the pioneers in discovering ozone depletion and as an advocate for mitigation rules to protect the ozone layer.

The present volume is the most recent incarnation of the geo-engineering debate – and of some other novel solutions. I will briefly summarise the logic for the geo-engineering and alternative energy schemes outlined in the various contributions and discuss some general aspects after that.

Anderson & Bows (Chapter 2) begin by emphasising the difficult current situation with respect to mitigation action relative to mitigation need:

It is increasingly unlikely that an early and explicit global climate change agreement or collective ad hoc national mitigation policies will deliver the urgent and dramatic reversal in emission trends necessary for stabilization at 450 ppmv CO2e. Similarly, the mainstream climate change agenda is far removed from the rates of mitigation necessary to stabilize at 550 ppmv CO2e. Given the reluctance, at virtually all levels, to openly engage with the unprecedented scale of both current emissions and their associated growth rates, even an optimistic interpretation of the current framing of climate change implies that stabilization much below 650 ppmv CO2e is improbable. The analysis presented within this paper suggests that the rhetoric of 2 °C is subverting a meaningful, open and empirically informed dialogue on climate change. While it may be argued that 2 °C provides a reasonable guide to the appropriate scale of mitigation, it is a dangerously misleading basis for informing the adaptation agenda. In the absence of an almost immediate step change in mitigation (away from the current trend of 3% annual emission growth), adaptation would be much better guided by stabilization at 650 ppmv CO2e (i.e. approx. 4 °C). (Anderson & Bows, Chapter 2, p. 46)
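The arithmetic behind that 3 per cent figure is worth a moment. A minimal sketch (only the growth rate is taken from the passage above; everything else is illustrative) shows why a trend of 3 per cent annual growth is so unforgiving:

```python
import math

# Compound growth at 3%/yr, the trend quoted by Anderson & Bows above.
# The rest of this snippet is illustrative arithmetic, not from the chapter.
growth_rate = 0.03

# Years for annual emissions to double at this steady compound rate.
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time at {growth_rate:.0%} per year: {doubling_time:.0f} years")
# -> about 23 years: on the current trend, annual emissions would double
#    within a generation, which is why the authors call for an almost
#    immediate step change in mitigation.
```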
To help accelerate the reduction of CO2 emissions, given the reluctance to switch fuels, Breeze (Chapter 5) joins those who have extended the Marchetti suggestion of deep oceanic sequestration of CO2 to deep terrestrial sequestration underground:

To achieve a reduction in carbon emissions from coal-fired plants, however, it will be necessary to develop and introduce carbon capture and sequestration technologies. Given adequate investment, these technologies should be capable of commercial development by ca 2020 . . . Two things are required. The first is investment, primarily from Western governments, to develop the technologies for carbon capture and storage to a state where they can be deployed economically on a wide scale. The second is the introduction of global carbon emission limits, with cost penalties for emitting carbon that are sufficiently stringent to persuade generators across the globe to build plants based on these technologies. (Breeze, Chapter 5, pp. 93 and 104)
But carbon capture and sequestration for coal-fuelled power plants would not do much for transportation systems dependent on liquid fuels. To address this knotty problem, Zeman & Keith (Chapter 7) call for a switch from conventional fuels to ‘carbon neutral hydrocarbons’ (CNHCs), as the ‘viable alternative to hydrogen or conventional biofuels which would warrant a comparable level of research effort and support’. They call for both direct and indirect methods of producing such CNHCs, using biomass and air capture with chemical plants at massive scales. After an extensive set of preliminary engineering–economic analyses of possibilities, they conclude somewhat optimistically that:

. . . CNHCs may be a cost-effective way to introduce hydrogen into the transportation infrastructure in a gradual manner. The continued use of liquid hydrocarbon fuels will minimize the disruptions and delays caused by requiring a different transportation infrastructure. It is far from evident, however, that any of these solutions, including electric vehicles, will be the method of choice. The lack of a clear technological ‘winner’ warrants equal attention and funding on all potential solutions. (Zeman & Keith, Chapter 7, p. 145)
This chapter is a ringing endorsement of inducing technological change through investment in unconventional alternatives to jump-start the mitigation process in the transportation sector.

A further scheme for reducing CO2 levels that also draws heavily on chemical engineering technology and insight is the strategy that has become known as air capture. This approach is aimed at countering the emissions from such distributed sources of carbon dioxide as moving vehicles. The current status of such approaches is well summed up in this volume by Keith et al. (Chapter 6), though the first publications on this topic appeared more than 60 years ago (e.g. Spector & Dodge 1946). In the context of alleviating climate change, the early proposals sprang from Klaus Lackner and co-workers (Lackner 1999; Lackner et al. 2001), who proposed
using wind-scrubbing devices. Comparing the process to the way trees remove CO2 from the air, he described a tower design on the scale of a small-town water tower that could take up the CO2 emissions of 15 000 cars. Lackner posited that it would take 250 000 such towers worldwide to take out as much carbon dioxide as the world is currently putting into the atmosphere. He argued: ‘The way to get carbon dioxide out of the atmosphere is akin to how a tree does it. It puts surfaces up, the leaves, over which the CO2 flows and as the air flows the CO2 is being absorbed. Once you have absorbed that on a surface which is, let us say, wetted with a liquid, you can collect that liquid and then remove the CO2 from that liquid.’ This could then be disposed of underground or used in manufacturing processes (Public Radio International 2007).

Massachusetts Institute of Technology engineer Howard Herzog has questioned whether Lackner’s artificial tree design would hold together on the proposed scale and cautioned that more research was needed on the technology. Herzog also said that more energy could be expended in keeping the slats coated in absorbent and disposing of it than would be saved, possibly producing more CO2 than would be removed (Beal 2007).

As with other technological innovations, companies are forming and producing prototypes. Indeed, Lackner joined forces with Allen and Burt Wright and the late Gary Comer in Global Research Technologies (GRT). Other companies involved with prototype development include that of David Keith, whose concepts are outlined later in this volume (Keith et al., Chapter 6), and the ETH group led by Aldo Steinfeld (Nikulshina et al. 2009), which employs solar energy as the thermal source needed to drive its process.

GRT created a working model far removed in appearance from the original artificial tree but retaining many of the essential chemical-engineering principles: a machine in which air moves through panels of hanging fabric coated with a proprietary material that captures the CO2. Then the doors close, and the fabric strips are sprayed with a sodium carbonate solution that binds the CO2 and becomes sodium bicarbonate. The water is drained off and an electrodialysis process strips CO2 from the sodium bicarbonate. With the firm’s electricity supplied by the coal-fired plants of Tucson Electric Power, the process produces as much carbon dioxide as it strips from the air (as Herzog cautioned), but the Wrights and Lackner are encouraged, foreseeing more efficient machines powered with renewable energy. Once these are in place, proponents firmly believe that air-capture technologies can significantly start reducing the amount of carbon dioxide in the atmosphere.

Lackner has said that the process could not only offset but also reduce the amount of CO2 in the global atmosphere, with an investment of $1.6 trillion. ‘It’s a big scale, so big it’s unimaginable’, said Wright. ‘But we as a species constantly build big things. If the Great Wall of China were an air collector, it would remove 8 billion
tons a year’ (Beal 2007). Wright estimated that 50 million shipping-container-sized devices would handle the problem. In addition to questions about whether such technologies are sufficiently scalable and whether non-carbon-producing energy sources are available for the scrubbers, any real deployment also depends on proof that CO2 can be successfully stored and on the development of a carbon offset market that will pay for the machines, along with public and private investment in the portfolio of solutions that includes CO2 capture and sequestration.
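As a rough plausibility check on the tower numbers quoted above, consider the following back-of-envelope sketch. The tower count and the cars-per-tower figure come from the text; the per-car emission rate and the global emission total are outside assumptions (round numbers commonly cited around the time of writing), so the result is indicative only:

```python
# Order-of-magnitude check on Lackner's figures quoted above.
towers = 250_000            # from the text
cars_per_tower = 15_000     # from the text
t_co2_per_car = 5.0         # assumed: ~5 t CO2 per car per year
global_emissions_t = 30e9   # assumed: ~30 Gt CO2 per year, circa 2008

captured = towers * cars_per_tower * t_co2_per_car  # tonnes per year
print(f"Implied capture: {captured / 1e9:.0f} Gt CO2/yr, "
      f"i.e. {captured / global_emissions_t:.0%} of assumed global emissions")
# -> ~19 Gt/yr, roughly two-thirds of the assumed global total: the quoted
#    tower count is at least the right order of magnitude.
```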
Moreover, they conclude: There is at present a clear and urgent need for tightly focused research into the effects of ocean fertilization. The critical areas of research will involve large-scale field experiments (100 × 100 km) tightly coupled to high-resolution three-dimensional computational models with embedded biogeochemistry. This is required for each of the four classes of fertilization schemes that have been proposed. Until completed satisfactorily, it is impossible to provide a rational judgment about whether the schemes proposed are (i) likely to be effective and (ii) likely to cause unacceptable side effects. Once this research has been carried out, it will be the responsibility of the science community to perform appropriate cost–benefit–risk analyses in order to inform policy. At the same time, discussions between the commercial, regulatory and scientific communities must take place so that the principles and practices of verification can be established. (Lampitt et al. Chapter 8, p. 174)
The next generation of experiments on iron fertilisation of the oceans is dealt with in greater depth by Smetacek & Naqvi (Chapter 9), who explain the process: Iron fertilization of the open ocean, by both natural and artificial means, has been in the limelight ever since John Martin formulated the ‘iron hypothesis’ (Martin 1990). It postulates that adding iron to nutrient-rich but low productive ocean regions, by dust in the past and artificial fertilization in the future, would stimulate phytoplankton blooms, which would drawdown significant amounts of atmospheric carbon dioxide (CO2 ) and, by mass sinking, sequester the carbon for long time scales in the deep ocean and sediments. The hypothesis was welcomed by biogeochemists and palaeoceanographers as a plausible mechanism to explain the lower glacial atmospheric CO2 levels that coincided with higher dust deposition rates compared with the interglacials. Plankton ecologists on the other hand were sceptical that the trace nutrient iron could limit phytoplankton growth to the same extent as light and the macronutrients nitrate and phosphorus. Unfortunately, the spectre of wanton commercialization of OIF put the scientific community as a whole on its guard. . . . We argue that such experiments, if carried out at appropriate scales and localities, will not only show whether the technique is feasible, but will also lead to a better understanding of the structure and functioning of pelagic ecosystems in general and the krill-based Southern Ocean ecosystem, in particular. (Smetacek & Naqvi, Chapter 9, p. 181 and 182)
The comment about putting the community ‘on its guard’ proved prescient, as current international negotiations concerning governance of the oceanic commons by the Intergovernmental Oceanographic Commission are seriously considering banning all commercial oceanic manipulations that look like geo-engineering, and allowing only research experiments (see: http://ioc-unesco.org/index.php?option= com_oe&task=viewEventAgenda&eventID=187). Another geo-engineering scheme to manipulate the atmosphere–ocean interface is proposed by Salter et al. (Chapter 11): Wind-driven spray vessels will sail back and forth perpendicular to the local prevailing wind and release micron-sized drops of seawater into the turbulent boundary layer beneath marine stratocumulus clouds. The combination of wind and vessel movements will treat a large area of sky. When residues left after drop evaporation reach cloud level they will provide many new cloud condensation nuclei giving more but smaller drops and so will increase the cloud albedo to reflect solar energy back out to space. If the possible power increase of 3.7 Wm−2 from double pre-industrial CO2 is divided by the 24-hour solar input of 340 Wm−2 , a global albedo increase of only 1.1 per cent will produce a sufficient offset. The method is not intended to make new clouds. It will just make existing clouds whiter. (Salter et al. Chapter 11, p. 229)
The latter proposals for marine-cloud albedo enhancement are elaborated on by Latham et al. (Chapter 10), who report computations of the effects of raising cloud condensation nucleus concentrations:
Analytical calculations, cloud modelling and (particularly) GCM computations suggest that, if outstanding questions are satisfactorily resolved, the controllable, globally averaged negative forcing resulting from deployment of this scheme might be sufficient to balance the positive forcing associated with a doubling of CO2 concentration. This statement is supported quantitatively by recent observational evidence from three disparate sources. We conclude that this technique could thus be adequate to hold the Earth’s temperature constant for many decades. More work – especially assessments of possible meteorological and climatological ramifications – is required on several components of the scheme, which possesses the advantages that (i) it is ecologically benign – the only raw materials being wind and seawater, (ii) the degree of cooling could be controlled, and (iii) if unforeseen adverse effects occur, the system could be immediately switched off, with the forcing returning to normal within a few days (although the response would take a much longer time). (Latham et al. Chapter 10, p. 207)
The authors go on to say:

In addition to requiring further work on technological issues concerning the cloud albedo enhancement scheme (Salter et al., Chapter 11), we need to address some limitations in our understanding of important meteorological aspects, and also make a detailed assessment of possibly adverse ramifications of the deployment of the technique, for which there would be no justification unless these effects were found to be acceptable. (Latham et al., Chapter 10, p. 222)
The most ambitious and large-scale set of schemes is very similar to the Budyko stratospheric aerosol suggestions, but with much more precision and regional focus. Caldeira & Wood (Chapter 13) propose reducing incoming solar radiation (insolation) with injected aerosols, an approach they have tested with current generations of climate models:

We perform numerical simulations of the atmosphere, sea ice, and upper ocean to examine possible effects of diminishing incoming solar radiation, insolation, on the climate system. We simulate both global and Arctic climate engineering in idealized scenarios in which insolation is diminished above the top of the atmosphere. We consider the Arctic scenarios because climate change is manifesting most strongly there. Our results indicate that, while such simple insolation modulation is unlikely to perfectly reverse the effects of greenhouse gas warming, over a broad range of measures considering both temperature and water, an engineered high CO2 climate can be made much more similar to the low CO2 climate than would be a high CO2 climate in the absence of such engineering. (Caldeira & Wood, Chapter 13, p. 286)
Why pursue this grand-scale environmental manipulation scheme? They offer the standard answer:

As desirable and affordable as reductions in emissions of greenhouse gases may be, they are not yet being achieved at the scale required. Emissions of CO2 into the atmosphere
are increasing more rapidly than foreseen in any of the IPCC marker scenarios (Raupach et al. 2007), with each release of CO2 producing a warming that persists for many centuries (Matthews & Caldeira 2008). (Caldeira & Wood, Chapter 13, p. 287)

After proposing very specific suggestions for insolation reductions, Caldeira and Wood suggest that carefully designed stratospheric dust injections, while not perfect offsets to CO2 increases, could eliminate much of the warming that would otherwise occur with business-as-usual emissions. But, as with the other authors, caveats abound:

Nobody claims that such climate engineering would be perfect or is devoid of risks. Furthermore, it is clear that such climate engineering will not reverse all adverse effects of carbon dioxide emission; for example, climate engineering will not reverse the acidifying effect of carbon dioxide on the oceans (Caldeira & Wickett 2003).
After proposing very specific suggestions for insolation reductions, Caldeira and Wood suggest that, while not perfect offsets to CO2 increases, stratospheric dust injections carefully designed could eliminate much of the warming that would otherwise occur with business-as-usual emissions. But, as with the other authors, caveats abound: Nobody claims that such climate engineering would be perfect or is devoid of risks. Furthermore, it is clear that such climate engineering will not reverse all adverse effects of carbon dioxide emission; for example, climate engineering will not reverse the acidifying effect of carbon dioxide on the oceans (Caldeira & Wickett 2003). (Caldeira & Wood, Chapter 13, pp. 302–3)
Reflecting Kellogg and Schneider’s tongue-in-cheek ‘no-fault climate disaster insurance’, they wisely warn that: ‘Of course, it would be strongly preferable to obtain international consensus and cooperation before deployment and operation of any climate engineering system.’ Then, inexplicably in my view, they go on to suggest that geo-engineering could, nevertheless, be easily done unilaterally – precisely what Kellogg and I warned was very risky to international security, owing to the likely perception of negative effects in some places at some times. This is the way Caldeira and Wood put it:

However, unlike CO2 emissions reduction, the success of climate engineering does not depend fundamentally on such consensus and cooperation. Putting aside the question of whether or not such a course of action would be wise, a climate engineering scheme could be deployed and operated unilaterally by a single actor, perhaps at remarkably low economic expense. (Caldeira & Wood, Chapter 13, p. 303)
Fortunately, their ultimate paragraph returns to more circumspect reasoning:

Modelling of climate engineering is in its infancy. However, continued growth in CO2 emissions and atmospheric CO2 concentrations, combined with preliminary numerical simulations such as those presented here, constitute a prima facie case for exploring climate engineering options – and associated costs, risks and benefits – in greater detail. (Caldeira & Wood, Chapter 13, p. 304)
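How much sunlight would a scheme of this kind have to remove? A rough estimate (a sketch only: the 3.7 W m−2 forcing and 340 W m−2 mean insolation appear earlier in the chapter, while the planetary albedo of about 0.3 is an outside assumption) suggests a dimming of 1.5–2 per cent, of the same order as the sunshade proposal discussed next:

```python
# Rough estimate of the top-of-atmosphere dimming needed to offset a 2xCO2
# forcing. Forcing and mean insolation are quoted earlier in the chapter;
# the planetary albedo of ~0.3 is an outside assumption.
forcing = 3.7        # W m-2
insolation = 340     # W m-2, global-mean incoming solar at top of atmosphere
albedo = 0.30        # assumed planetary albedo

# Only absorbed sunlight heats the planet, so dimmed sunlight that would
# have been reflected anyway does not count; divide by (1 - albedo).
dimming = forcing / (insolation * (1 - albedo))
print(f"Required insolation reduction: {dimming:.1%}")
# -> ~1.6%, comparable to the 1.8% flux blockage in Angel's space sunshade
#    proposal described below.
```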
Others who have not contributed to this book have put forth further broad-scale geo-engineering approaches. In an article published in 2006 in Proceedings of the National Academy of Sciences of the United States of America, Roger Angel, reflecting the kinds of schemes proposed decades earlier in Rusin and Flit, described
the concept of blocking 1.8 per cent of the solar flux with a space sunshade orbited near the inner Lagrange point (L1), in line between the Earth and the Sun. Building on the work of Early (1989), Angel’s transparent sunshade would deflect the sunlight rather than absorb it, to minimise the shift in balance out from L1 caused by radiation pressure (Angel 2006). The plan involves launching a constellation of trillions of small free-flying spacecraft far above Earth into an orbit aligned with the Sun, called the L1 orbit. This constellation of spacecraft would form a long, cylindrical cloud approximately half the Earth’s diameter, and approximately 10 times longer. Approximately 10 per cent of the sunlight passing through the 60 000-mile length of the cloud, pointing lengthwise between the Earth and the Sun, would be diverted away from the Earth, which would uniformly reduce sunlight over the planet by approximately 2 per cent. According to Angel, this would be enough to balance the heating of a doubling of atmospheric carbon dioxide in Earth’s atmosphere.

Angel says, ‘The concept builds on existing technologies. It seems feasible that it could be developed and deployed in about 25 years at a cost of a few trillion dollars. With care, the solar shade should last about 50 years. So the average cost is about $100 billion a year, or about two-tenths of 1 per cent of the global domestic product.’ However, he does add the standard caveats for all such Buck Rogers-like schemes: ‘The sunshade is no substitute for developing renewable energy, the only permanent solution. A similar massive level of technological innovation and financial investment could ensure that’ (EurekaAlert 2006).

Another set of extremely large-scale proposals that has been championed by Martin Hoffert and others and has gained interest is the construction of a space-based solar power system. Hoffert et al. (2002) posited this idea among others as a way not only to reduce energy consumption but also to provide non-carbon-emitting forms of energy and thus help stop the increase in the carbon-dioxide-induced component of climate change, which is an energy problem in their view. They describe space solar power as follows:

Space solar power (SSP) exploits the unique attributes of space to power Earth (Glaser 1968; Glaser et al. 1997). Solar flux is ∼8 times higher in space than the long-term surface average on spinning, cloudy Earth. If theoretical microwave transmission efficiencies (50 to 60%) can be realized, 75 to 100 We could be available at Earth’s surface per m2 of PV array in space, ≤1/4 the area of surface PV arrays of comparable power. In the 1970s, the National Aeronautics and Space Administration (NASA) and the U.S. Department of Energy (DOE) studied an SSP design with a PV array the size of Manhattan in geostationary orbit [(GEO): 35 800 km above the equator] that beamed power to a 10-km by 13-km surface rectenna with 5 GWe output. [10 TW equivalent (3.3 TWe) requires 660 SSP units.] Other architectures, smaller satellites, and newer technologies were explored in the NASA ‘Fresh Look Study’ (Mankins 1997). Alternative locations are 200- to 10,000-km altitude satellite constellations (Hoffert & Potter 1997), the Moon (Criswell 2002a; Criswell 2002b), and
the Earth–Sun L2 Lagrange exterior point [one of five libration points corotating with the Earth–Sun system (Landis 1997)]. (Hoffert et al. 2002)
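Both sets of headline figures can be checked with back-of-envelope arithmetic. In the minimal sketch below, the gross-world-product value and the space PV conversion efficiency are illustrative assumptions, not figures from either source:

```python
# Back-of-envelope checks on the numbers quoted for the two space schemes.
# Assumed (not from the sources): gross world product ~$55 trillion/yr and
# a space PV conversion efficiency of ~12%.

capital_cost = 5e12        # dollars; 'a few trillion' (the value implied
                           # by Angel's $100 billion/year average)
lifetime_years = 50        # Angel's quoted sunshade lifetime
gwp = 55e12                # assumed gross world product, dollars/year

annual_cost = capital_cost / lifetime_years
print(f"sunshade: ${annual_cost/1e9:.0f} billion/year, "
      f"{100 * annual_cost / gwp:.2f}% of world product")
# -> $100 billion/year, ~0.18%: 'about two-tenths of 1 per cent'.

solar_constant = 1366.0    # W/m^2, solar flux in space
pv_efficiency = 0.12       # assumed space PV conversion efficiency
for transmission in (0.50, 0.60):      # quoted microwave efficiencies
    watts_at_surface = solar_constant * pv_efficiency * transmission
    print(f"SSP, {transmission:.0%} transmission: "
          f"{watts_at_surface:.0f} W delivered per m^2 of space array")
# -> ~82-98 W/m^2, bracketing the quoted 75-100 We per m^2. Since space
#    flux is ~8x the surface average, a 50-60% link leaves the space
#    array at <=1/4 the area of a surface array of comparable output.
```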
Interviewed by the American Public Broadcasting System in 2000, Hoffert said:

In my opinion, such a business could eventually evolve into one in which you could sell either bits of information or kilowatt-hours, using essentially compatible components: the same kinds of antennas in orbit and similar kinds of antennas on the ground. Of course, it has to be studied, but I think that the levels of microwave power, in terms of the objective health effects, will be below those that occupational health and safety regulations say are dangerous to humans. Until now, it’s been very interesting and exciting for us to explore the solar system and to find out how the universe works with the Hubble telescope, but for the enormous investments we’ve put into space, we haven’t gotten very much return as a society. There are energy resources available in space, and it’s possible to exploit them. So that could become a frontier that could also play a very big role in human energy consumption on the Earth.
He goes on to say that ‘a 50- to 100-year time scale is not too long for that to happen in, when you consider what happened in the last 100 years’.

Finally in this volume, geophysiologist Lovelock (Chapter 4), the co-inventor of the Gaia hypothesis about planetary homeostasis, weighs in on the subject with a perspective from his long years of experience:

The gaps that exist in our knowledge about the state of the ocean, the cryosphere and even the clouds and aerosols of the atmosphere make prediction unreal. The response of the biosphere to climate and compositional change is even less well understood; most of all, we are ignorant about the Earth as a self-regulating system and only just beginning to recognize that many separate but connected subsystems exist that can exert positive and negative feedback on a global scale. (Lovelock, Chapter 4, page 85)
Nevertheless, even Lovelock cannot resist entering a dog of his own in this enticing geo-engineering show:

Lovelock & Rapley (2007) suggested the use of a system of large pipes held vertically in the ocean surface to draw up cooler nutrient-rich water from just below the thermocline. The intention was to cool the surface directly, to encourage algal blooms that would serve to pump down CO2 and also to emit gases such as DMS, volatile amines and isoprene (Nightingale & Liss 2003), which encourage cloud and aerosol formation. The pipes envisaged would be about 100 m in length and 10 m in diameter and held vertically in the surface waters and equipped with a one-way valve. Surface waves of average height 1 m would mix in 4.2 tons of cooler water per second. Our intention was to stimulate interest and discussion in physiological techniques that would use the Earth system’s energy and nutrient resources to reverse global heating. We do not
know whether the proposed scheme would help restore the climate, but the idea of improving surface waters by mixing cooler nutrient-rich water from below has a long history; indeed, it is at present used by the US firm Atmocean Inc. to improve the quality of ocean pastures. (Lovelock, Chapter 4, p. 86)
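The quoted pumping rate is easy to sanity-check kinematically. A minimal sketch, assuming a seawater density of about 1025 kg per cubic metre (an assumed value, not one given in the chapter):

```python
import math

# Sanity check on the Lovelock & Rapley pipe numbers quoted above.
pipe_diameter = 10.0       # m, as quoted
mass_flow = 4.2e3          # kg/s ('4.2 tons of cooler water per second')
rho_seawater = 1025.0      # kg/m^3, assumed seawater density

area = math.pi * (pipe_diameter / 2) ** 2     # ~78.5 m^2 cross-section
volume_flow = mass_flow / rho_seawater        # ~4.1 m^3/s
mean_speed = volume_flow / area               # ~0.05 m/s

print(f"cross-section: {area:.1f} m^2")
print(f"volume flow:   {volume_flow:.1f} m^3/s")
print(f"mean upwelling speed: {mean_speed * 100:.1f} cm/s")
# A mean upwelling speed of ~5 cm/s is modest compared with the orbital
# velocities beneath 1 m surface waves, so the quoted rate is at least
# not obviously inconsistent with a simple one-way-valve design.
```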
Nevertheless, his early cautionary emphasis soon returns with an apt metaphor in a section labelled ‘Planetary medicine’:

What are the planetary health risks of geo-engineering intervention? Nothing we do is likely to sterilise the Earth but the consequences of planetary scale intervention could hugely affect humans. Putative geoengineers are in a similar position to that of physicians before the 1940s. (Lovelock, Chapter 4, p. 90)
In a later section entitled ‘Ethics’ Lovelock reiterates his opening theme:

Before we start geoengineering we have to raise the following question: are we sufficiently talented to take on what might become the onerous permanent task of keeping the Earth in homeostasis? Consider what might happen if we start by using a stratospheric aerosol to ameliorate global heating; even if it succeeds it would not be long before we faced the additional problem of ocean acidification. This would need another medicine, and so on. We could find ourselves enslaved in a Kafka-like world from which there is no escape. (Lovelock, Chapter 4, pp. 90–91)
1.4 Personal reflections

I am sympathetic to those who expressed the concern about a moral hazard – that the very knowledge of the potential of geo-engineering to offset some inadvertent global change disturbances could provide ammunition to those who wished to ‘solve’ the side effects of indefinite expansionary consumption and population trends with a dubious technological fix rather than a fundamental change in the political acceptability of the fossil-fuel-based growth paradigm. Nevertheless, I, too, share the concern of others that, should the middle to high end of the current range of estimates of possible inadvertent climatic alterations actually begin to occur, we would indeed face a very serious need for drastic, unpopular action to change consumption patterns in a short time – a politically dubious prospect. Thus I, somewhat reluctantly, voted with the majority of the NAS panellists in 1991 who agreed to allow a carefully worded chapter on the geo-engineering options to remain in the report – provided it had enough explicit caveats that the committee could not possibly be interpreted as advocating near-term use of such schemes, only a study of their climatic, ecological, social and economic implications. For example, in the report of the mitigation panel of the NAS committee, it was noted that:
it is important to recognize that we are at present involved in a large project of inadvertent ‘geoengineering’ by altering atmospheric chemistry, and it does not seem inappropriate to inquire if there are countermeasures that might be implemented to address the adverse impacts . . . Our current inadvertent project in ‘geoengineering’ involves great uncertainty and great risk. Engineered countermeasures need to be evaluated but should not be implemented without broad understanding of the direct effects and the potential side effects, the ethical issues, and the risks. (NAS 1992, p. 433)
It seems that this volume, appearing a decade and a half after the NAS report, has, in virtually every chapter, echoed the warnings that have been implicit in all responsible essays on geo-engineering since Budyko in the mid-1970s. In the context of one geo-engineering proposal – the deliberate use of augmented stratospheric aerosols as a seemingly inexpensive advertent attempt to offset a few watts per square metre of inadvertent global-scale heating from anthropogenic greenhouse gases – stratospheric dust cooling could not possibly be a perfect regional balance to greenhouse warming, owing to the very patchy nature of the greenhouse forcing itself (see Schneider 1996). That is, aerosols injected in the stratosphere would, owing to high winds and a lifetime measured in years, become relatively uniformly distributed in zonal bands over the hemisphere. This means that they would reflect sunlight back to space relatively uniformly around latitude zones. Their reflection of sunlight would also vary with latitude, because the amount of incoming sunlight and its relative angle to the Earth changes with latitude and season. This fairly uniform, zonally averaged rejection of sunlight would be balancing a relatively non-uniform trapping of heat associated with greenhouse gases, owing to the patchy nature of cloudiness (Schneider 1994) in the lower atmosphere (to say nothing of the very patchy nature of soot and sulphate aerosols in the lower atmosphere). Thus, even if we somehow could manage to engineer our stratospheric aerosol injections to exactly balance on a hemispheric (or global) basis the amount of hemispherically (or globally) averaged heat trapped by human-contributed greenhouse gases such as CO2 and methane, we would still be left with some regions heated to excess and others to deficit. This would inevitably give rise to regional temperature anomalies and induce other regional climatic anomalies that could very well lead to greater than zero net global climate change, even if the hemispheric average of radiative forcing for all human activities were somehow reduced to zero. This also suggests the need to create a hierarchy of anthropogenic climate forcing indicators, of which global average radiative forcing is but the simplest aggregate modulus.

Govindasamy & Caldeira (2000) tested this concern with a climate model and argued that, although perfect offsets are not possible at regional scales, zonal injections of aerosols make a CO2-doubled world that has aerosol geo-engineering
compensations look much more like an undisturbed world than one with an unmitigated enhanced greenhouse effect. However, let me remind the reader that it is well established that climate models do not represent regional climatic patterns as well as they simulate continental- to hemispheric-scale projections (IPCC 2007a).

I will not argue that regional climatic anomalies arising from some aerosol geo-engineering offset scheme would necessarily be worse than an unabated 3–6 °C warming before 2100 – a range that implies a very high likelihood of dangerous anthropogenic interference with the climate system (IPCC 2007b). Rather, I simply wish to reiterate why the strong caveats, which suggest that it is premature to contemplate implementing any geo-engineering schemes in the near future, are stated by all responsible people who have addressed the geo-engineering question. Such caveats must be repeated at the front, middle and conclusion of all discussions of this topic, as they indeed are in this volume.

1.5 Who would reliably manage geo-engineering projects for the world community over a century or two?

Indeed, as noted by all responsible authors who have addressed this problem, much is technically uncertain and geo-engineering could be a ‘cure worse than the disease’ (Schneider & Mesirow 1976), given our current level of ignorance of both advertent and inadvertent climate modifications. But there is also the potential for human conflict associated with the fact that deliberate intervention in the climate system could, as noted more than 30 years ago, coincide with seriously damaging climatic events that may or may not have been connected to the modification scheme and likely could not conclusively be shown to be connected to or disconnected from that modification. This potential for conflicts poses serious social and political obstacles to would-be climate controllers, regardless of how technically or cost effective the engineering schemes may eventually turn out to be. Of course, this has to be traded off against the potential for conflicts arising from the uneven distribution of climate impacts from the unabated emissions that will drive global warming.

Fortunately, the seemingly staggering costs – trillions of dollars – of mitigation that substitutes non-carbon-emitting sources for conventional fossil-fuel-burning devices represent a mere year or so delay in becoming some 500 per cent richer a century from now: 450 ppm CO2 with stringent climate mitigation versus a potentially dangerous 900 ppm concentration if no significant mitigation policies are deployed (see Azar & Schneider 2002). Thus, repeated assertions that society will not invest in mitigation – and thus that geo-engineering will be needed – seem as premature as arguing for near-term deployment of still-untested geo-engineering schemes. Moreover, the potential for climate policy to be implemented will probably intensify as severe climate impacts occur and people become more
aware of the short delay, in terms of the time needed to be equally well off, associated with conventional mitigation.

Institutions currently do not exist with the firm authority to assess or enforce responsible use of the global commons (Nanda & Moore 1983; Choucri 1994). There are some partially successful examples (e.g. the Montreal Protocol and its extensions to control ozone-depleting substances, the nuclear non-proliferation treaty or the atmospheric nuclear test ban treaty) of nation states willing to cede some national sovereignty to international authorities for the global good. However, it would require a significant increase in ‘global-mindedness’ on the part of all nations to set up institutions to attempt to control climate and to compensate any potential losers should the interventions backfire – or even be perceived to have gone awry. Moreover, such an institution would need the resources, skills and authority to inject continuously, and monitor over a century or two, measured amounts of dust in the stratosphere, iron in the oceans or sea-salt aerosols into clouds in order to counteract the inadvertent enhanced heat-trapping effects of long-lived constituents such as CO2. I, for one, am highly dubious about the likelihood of the sufficient and sustainable degree of global-scale international cooperation needed to assure a high probability that world climate control and compensation authorities (e.g. see Schneider & Mesirow 1976) could be maintained without interruption by wars or ideological disputes for the next two centuries. Just imagine if we had needed to do all this in 1900 and then the rest of twentieth-century history unfolded as it actually did! Would climate control have been rationally maintained, or would gaps and rapid transient reactions have been the experience?

In Schneider (1996), I proposed the following health metaphor as apt: it is better to cure heroin addiction by paced medical care that weans the victim slowly and surely from drug addiction than by massive substitution of methadone or some other ‘more benign’ or lower-cost narcotic. For me, a more rapid implementation of energy-efficient technologies; alternative, less polluting agricultural or energy production systems (e.g. Johansson et al. 1993); better population planning; wildlife habitat protection (particularly for threatened ecosystems); and commodity pricing that reflects not simply the traditional costs of labour, production, marketing and distribution but also the potential ‘external’ costs from a modified environment (e.g. NAS 1992; IPCC 2007c) are the kinds of lasting measures that can cure ‘addiction’ to polluting practices without risking the potential side effects of geo-engineering – planetary methadone in my metaphor. Rather than pin our hopes on the gamble that geo-engineering will prove to be inexpensive, benign and administratively sustainable over centuries – none of which can remotely be assured now – in my value system I, and most of the authors of this volume as well, would prefer to start to lower the human impact on the Earth through more conventional means.
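The ‘mere year or so delay’ claim is simple compound-growth arithmetic. A minimal sketch, assuming a 2 per cent annual growth rate and a one-off mitigation cost of a few per cent of world product (illustrative values only; Azar & Schneider’s modelling is far more detailed):

```python
import math

# How long does compound growth take to recover a one-off loss of a few
# per cent of world product? (Illustrative values assumed throughout.)
growth_rate = 0.02          # assumed annual growth of per-capita income

# At 2%/yr, incomes grow ~7-fold in a century - roughly the scale of
# being 'some 500 per cent richer' (i.e. 6-fold) a century from now.
print(f"growth over 100 years: {(1 + growth_rate) ** 100:.1f}x")

for mitigation_cost in (0.02, 0.05):   # assumed cost, fraction of GWP
    # Delay d such that growth over d years offsets the fractional loss:
    delay = math.log(1 / (1 - mitigation_cost)) / math.log(1 + growth_rate)
    print(f"cost {mitigation_cost:.0%} of world product -> "
          f"delay of about {delay:.1f} years")
# -> a 2-5% one-off cost corresponds to a one- to three-year delay in
#    reaching any given income level: the sense of 'a year or so'.
```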
However, critics have asked, does not a reluctance to embrace manipulations of nature at a large scale ignore the potential consequences of the ‘geo-social engineering’ implicit in changing the culture away from its fossil-fuel-based growth and development habits? Do we not know comparably little about the social consequences of carbon taxes, such critics suggest, and are not the potential human consequences of manipulating the world’s economy worse than the political or environmental implications of geo-engineering?

In Schneider (1996), I offered several responses to these legitimate concerns. First, although in principle these are empirically determinable comparisons between the relative consequences of ‘geo’ versus ‘social’ engineering, in practice both are sufficiently unprecedented on the scales being considered here that estimates of impacts will remain highly uncertain and subjective for some time to come. Moreover, values will dominate the trade-off: for example, risk aversion versus risk proneness, or the precautionary principle for protecting nature versus the unfettered capacity of enterprising individuals, firms or nations to act to improve their economic conditions.

Second, I do not plead guilty to the charge of nature-centric bias – of ignoring the cultural consequences of emissions policies. One who worries more about potential side effects of geo-engineering countermeasures to inadvertent modification of nature than about side effects of manipulating the world’s energy economy via, say, carbon taxes is simply recognising that it was humans, not nature, that began the spiral of difficulties in the first place by indulging in inadvertent modifications of the environment. Rather, the bias lies with the anthropocentrists, who ignore the fact that searching for solutions to human disturbances of nature that do not raise yet additional risks for coupled human–natural systems (Liu et al. 2007) is a way of balancing anthropocentric and nature-centric values. Carbon taxes are simply one possible way to internalise into human economics the potential external damages (or ‘externalities’) of our activities. So too are the mild versions of geo-engineering – carbon removal from the climate system. To be sure, flexibility is essential for any policy to be both effective and fair, and it needs to be capable of being reversed if unforeseen negative outcomes materialise. But, since human systems have already disturbed nature in the first place, it seems to me that the risks of countering inadvertent human impacts on nature should next be borne by humans, not an already besieged nature.

Nevertheless, I do (somewhat reluctantly) agree that study of geo-engineering potential is clearly needed, given our growing inadvertent impacts on the planet (negative impacts that are already being borne unfairly in vulnerable places such as Bangladesh and sub-Saharan Africa – and even the Gulf Coast of the USA). But I must conclude with a caveat: I would prefer to get slowly unhooked from our economic dependence on massive increases in carbon fuels than to try to cover the potential side effects of business-as-usual development with decades of sulphuric
acid injections into the atmosphere, or iron into the oceans, or aerosols into the marine boundary layer, to say nothing of Buck Rogers schemes in space. But deep-Earth sequestration, CO2 removal by biological processes and alternative non-carbon-emitting fuels – also discussed in this volume – clearly do not suffer from the side effects of the more aggressive radiative-forcing-offset geo-engineering schemes, or from the who-controls-the-climate problems, and are obvious candidates for rapid learning-by-doing efforts.

In short, my personal prescription for climate policies can be summarized in five sequenced steps.

(i) Adaptation is essential now to assist those who will likely be harmed by ‘in the pipeline’ climate change. Actions that simultaneously enhance ‘sustainable development’ would seem the most attractive options.
(ii) Performance standards required of buildings, appliances, machines and vehicles to wring out the maximum potential for cost-effective engineering energy efficiency need to be mandatory and widespread.
(iii) A ‘learning-by-doing feeding frenzy’ needs to emerge, in which we set up public–private partnerships to fashion incentives to help us ‘invent our way out’ of the problem of high-emitting technological and social systems.
(iv) A shadow price on carbon has to be established to ensure that the full costs of any energy production or end-use system are part of the price of doing business. Cap-and-trade and carbon taxes are the prime examples of such schemes to internalise external risks from business-as-usual emissions, but these schemes must recognise the special problems they may pose for certain groups: poor people and coal miners or SUV workers. So, in addition to internalising these externalities to protect the environmental commons, we need to consider side payments or other compensation schemes to be fair to the losers from the mitigation policies and to provide a transition for them to a softer economic landing, so to speak.
(v) Finally, my last policy category in the sequence is to consider deploying geo-engineering schemes. However, as has been said by all in this volume, and as I fully agree, R&D is needed and should be an early part of the climate policy investment sequencing, even if deployment of the more aggressive schemes to offset radiative forcing is the last resort.
References

American Public Broadcasting System 2000 Beyond fossil fuels: an interview with Professor Martin Hoffert by Jon Palfreman in the series What’s Up with the Weather, a production of Nova and Frontline. See http://www.pbs.org/wgbh/warming/beyond/.
Angel, R. 2006 Feasibility of cooling the Earth with a cloud of small spacecraft near the inner Lagrange point (L1). Proc. Natl Acad. Sci. USA 103, 17184–17189. (doi:10.1073/pnas.0608163103)
Azar, C. & Schneider, S. H. 2002 Are the economic costs of stabilizing the atmosphere prohibitive? Ecol. Econ. 42, 73–80. (doi:10.1016/S0921-8009(02)00042-3)
Beal, T. 2007 Tucson firm has plan for towers to suck up CO2. Arizona Daily Star, 25 November 2007. See http://www.azstarnet.com/metro/213272.
Budyko, M. I. 1977 Climatic Changes. (Transl. Izmeniia Klimata, Gidrometeoizdat, Leningrad, 1974, p. 244.) Washington, DC: American Geophysical Union.
Caldeira, K. & Wickett, M. E. 2003 Anthropogenic carbon and ocean pH. Nature 425, 365. (doi:10.1038/425365a)
Choucri, N. (ed.) 1994 Global Accord: Environmental Challenges and International Responses. Cambridge, MA: MIT Press.
Criswell, D. R. 2002a In Innovative Solutions to CO2 Stabilization (ed. R. Watts), pp. 345–410. New York: Cambridge University Press.
Criswell, D. R. 2002b Solar power via the moon. Ind. Physicist 8, 12–15.
Crutzen, P. J. 2006 Albedo enhancement by stratospheric sulfur injections: a contribution to resolve a policy dilemma? Climatic Change 77, 211–220. (doi:10.1007/s10584-006-9101-y)
Early, J. T. 1989 Space-based solar screen to offset the greenhouse effect. J. Br. Interplanet. Soc. 42, 567–569.
EurekaAlert 2006 Space sunshade might be feasible in global warming emergency. AAAS, 3 November 2006. See http://www.eurekalert.org/pub_releases/2006-11/uoa-ssm110306.php.
Fleming, J. R. 2007 The climate engineers. Wilson Quart. Rev. Spring 2007, 46–60.
Gaskins, D. & Weyant, J. 1993 EMF-12: model comparisons of the costs of reducing CO2 emissions. Am. Econ. Rev. 83, 318–323.
Glaser, P. E. 1968 Power from the sun: its future. Science 162, 857–861. (doi:10.1126/science.162.3856.857)
Glaser, P. E., Davidson, F. P. & Csigi, K. I. (eds.) 1997 Solar Power Satellites. New York: Wiley–Praxis.
Glazovsky, N. F. 1990 The Aral Crisis: The Origin and Possible Way Out. Moscow: Nauka.
Govindasamy, B. & Caldeira, K. 2000 Geoengineering Earth’s radiation balance to mitigate CO2-induced climate change. Geophys. Res. Lett. 27, 2141–2144. (doi:10.1029/1999GL006086)
Hoffert, M. I. & Potter, S. E. 1997 Energy supply. In Engineering Response to Global Climate Change (ed. R. G. Watts), pp. 205–259. Boca Raton, FL: Lewis.
Hoffert, M. I. et al. 2002 Advanced technology paths to global climate stability: energy for a greenhouse planet. Science 298, 981–987. (doi:10.1126/science.1072357)
Intergovernmental Panel on Climate Change 1995a Economic and Social Dimensions of Climate Change: Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change (eds. J. P. Bruce, H. Lee & E. F. Haites). Cambridge, UK: Cambridge University Press.
Intergovernmental Panel on Climate Change 1995b The Science of Climate Change: Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change (eds. J. T. Houghton, L. G. Meira Filho, B. A. Callender, N. Harris, A. Kattenberg & K. Maskell). Cambridge, UK: Cambridge University Press.
Intergovernmental Panel on Climate Change 2007a The Physical Science Basis: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (eds. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor & H. L. Miller). Cambridge, UK: Cambridge University Press.
Intergovernmental Panel on Climate Change 2007b Impacts, Adaptation and Vulnerability: Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (eds. M. L. Parry, O. F. Canziani, J. P. Palutikof, P. J. van der Linden & C. E. Hanson). Cambridge, UK: Cambridge University Press.
Intergovernmental Panel on Climate Change 2007c Mitigation of Climate Change: Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (eds. B. Metz, O. Davidson, P. Bosch, R. Dave & L. Meyer). Cambridge, UK: Cambridge University Press.
Johansson, T. B., Kelly, H., Reddy, A. K. N. & Williams, R. H. 1993 Renewable Energy: Sources for Fuels and Electricity. Washington, DC: Island Press.
Kellogg, W. W. & Schneider, S. H. 1974 Climate stabilization: for better or for worse? Science 186, 1163–1172. (doi:10.1126/science.186.4170.1163)
Lackner, K. 1999 CO2 sequestration from the air: is it an option? Proc. 24th Int. Conf. on Coal Utilization and Fuel Systems, Clearwater, FL, 8–11 March 1999.
Lackner, K., Grimes, P. & Ziock, H. J. 2001 Capturing CO2 from the air. Proc. 1st Natl Conf. on Carbon Sequestration, Washington, DC, 14–17 May 2001.
Landis, G. A. 1997 Proc. of Solar Power Satellite ’97 Conf., 24–28 August 1997, Montreal, Canada, pp. 327–328. Ottawa, Canada: Canadian Aeronautics and Space Institute.
Lashof, D. A. & Tirpak, D. A. (eds.) 1990 Policy Options for Stabilizing Global Climate, Report to Congress: Executive Summary. Washington, DC: US Environmental Protection Agency, Office of Policy, Planning and Evaluation.
Liu, J. et al. 2007 Complexity of coupled human and natural systems. Science 317, 1513–1516. (doi:10.1126/science.1144004)
Lovelock, J. & Rapley, C. R. 2007 Ocean pipes could help the Earth to cure itself. Nature 449, 403. (doi:10.1038/449403a)
Mankins, J. 1997 The space solar power option. Aerosp. Am. 35, 30–36.
Marchetti, C. 1977 On geoengineering and the CO2 problem. Climatic Change 1, 59–68. (doi:10.1007/BF00162777)
Martin, J. H. 1990 Glacial–interglacial CO2 change: the iron hypothesis. Paleoceanography 5, 1–13. (doi:10.1029/PA005i001p00001)
Matthews, H. D. & Caldeira, K. 2008 Stabilizing climate requires near-zero emissions. Geophys. Res. Lett. 35, L04705. (doi:10.1029/2007GL032388)
Nanda, V. P. & Moore, P. T. 1983 Global management of the environment: regional and multilateral initiatives. In World Climate Change: The Role of International Law and Institutions (ed. V. P. Nanda), pp. 93–123. Boulder, CO: Westview Press.
National Academy of Sciences 1992 Policy Implications of Greenhouse Warming: Mitigation, Adaptation, and the Science Base, Panel on Policy Implications of Greenhouse Warming, Committee on Science, Engineering, and Public Policy, pp. 433–464. Washington, DC: National Academy Press.
Nightingale, P. D. & Liss, P. S. 2003 Gases in seawater. Treatise Geochem. 6, 49–81.
Nikulshina, V., Gebald, C. & Steinfeld, A. 2009 CO2 capture from atmospheric air via consecutive CaO-carbonation and CaCO3-calcination cycles in a fluidized bed solar reactor. Chem. Eng. J. 146, 244–248.
Peters, R. L. & Lovejoy, T. E. 1992 Global Warming and Biological Diversity. New Haven, CT: Yale University Press.
President’s Science Advisory Committee 1965 Restoring the Quality of Our Environment, Report of the Environmental Pollution Panel. Washington, DC: The White House.
Public Radio International 2007 Climate contest. In Living on Earth series with Steve Curwood: Klaus Lackner interview by Emily Taylor, 16 February 2007. See http://www.loe.org/shows/shows.htm?programID=07-P13-00007#feature2.
Raupach, M. R., Marland, G., Ciais, P., Le Quéré, C., Canadell, J. G., Klepper, G. & Field, C. B. 2007 Global and regional drivers of accelerating CO2 emissions. Proc. Natl Acad. Sci. USA 104, 10288–10293. (doi:10.1073/pnas.0700609104)
Root, T. L. & Schneider, S. H. 1993 Can large-scale climatic models be linked with multiscale ecological studies? Conserv. Biol. 7, 256–270. (doi:10.1046/j.1523-1739.1993.07020256.x)
Rusin, N. & Flit, L. 1960 Man versus Climate. (Transl. Dorian Rottenberg.) Moscow: Peace Publishers.
Schelling, T. C. 1983 Climate change: implications for welfare and policy. In Changing Climate, pp. 442–482. Washington, DC: National Research Council.
Schneider, S. H. 1994 Detecting climatic change signals: are there any ‘fingerprints’? Science 263, 341–347. (doi:10.1126/science.263.5145.341)
Schneider, S. H. 1996 Geoengineering: could – or should – we do it? Climatic Change 33, 291–302. (doi:10.1007/BF00142577)
Schneider, S. H. & Mesirow, L. E. 1976 The Genesis Strategy: Climate and Global Survival. New York: Plenum.
Spector, N. A. & Dodge, B. F. 1946 Removal of CO2 from atmospheric air. Trans. Am. Inst. Chem. Eng. 42, 827–848.
Weart, S. 2008 Climate modification schemes. On The Discovery of Global Warming website: www.aip.org/history/climate/RainMake.htm
2 Reframing the climate change challenge in light of post-2000 emission trends

Kevin Anderson and Alice Bows
The 2007 Bali conference heard repeated calls for reductions in global greenhouse gas emissions of 50 per cent by 2050 to avoid exceeding the 2 °C threshold. While such endpoint targets dominate the policy agenda, they do not, in isolation, have a scientific basis and are likely to lead to dangerously misguided policies. To be scientifically credible, policy must be informed by an understanding of cumulative emissions and associated emission pathways. This analysis considers the implications of the 2 °C threshold and a range of post-peak emission reduction rates for global emission pathways and cumulative emission budgets. The chapter examines whether empirical estimates of greenhouse gas emissions between 2000 and 2008, a period typically modelled within scenario studies, combined with short-term extrapolations of current emissions trends, significantly constrain the 2000–2100 emission pathways. The chapter concludes that it is increasingly unlikely that any global agreement will deliver the radical reversal in emission trends required for stabilization at 450 ppmv carbon dioxide equivalent (CO2e). Similarly, the current framing of climate change cannot be reconciled with the rates of mitigation necessary to stabilize at 550 ppmv CO2e, and even an optimistic interpretation suggests that stabilization much below 650 ppmv CO2e is improbable.
2.1 Introduction

In the absence of global agreement on a metric for delineating dangerous from acceptable climate change, 2 °C has, almost by default, emerged as the principal focus of international and national policy.1 Moreover, within the scientific community, 2 °C has come to provide a benchmark temperature against which to consider atmospheric concentrations of greenhouse gases and emission reduction profiles. While it is legitimate to question whether temperature is an appropriate metric for representing climate change and, if it is, whether 2 °C is the appropriate temperature (Tol 2007), this is not the purpose of this chapter. Instead, the chapter begins by considering the implications of the 2 °C threshold for global emission pathways, before proceeding to consider the implications of different emission pathways for stabilization concentrations and associated temperatures.

Although the policy realm generally focuses on the emissions profiles between 2000 and 2050, the scientific community tends to consider longer periods, typically up to and beyond 2100. By using a range of cumulative carbon budgets with differing degrees of carbon-cycle feedbacks, this chapter assesses whether global emissions of greenhouse gases between 2000 and 2008, combined with short-term extrapolations of emission trends, significantly impact the 2008–2100 cumulative emission budget available, and hence emission pathways.

In brief, the chapter combines current greenhouse gas emissions data (including deforestation) with up-to-date emission trends and the latest scientific understanding of the relationships between emissions and concentrations to consider three questions; a brief numerical illustration of the cumulative-budget point follows the list below.

(i) Given a small set of emissions pathways from 2000 to a date where global emissions are assumed to peak (2015, 2020 and 2025), what emission reduction rates would be necessary to remain within the 2000–2100 cumulative emission budgets associated with atmospheric stabilization of carbon dioxide equivalent (CO2e) at 450 ppmv? The accompanying scenario set is hereafter referred to as ‘Anderson Bows 1’ (AB1).
(ii) Given the same pathways from 2000 to the 2020 emissions peak, what concentrations of CO2e are associated with subsequent annual emission reduction rates of 3, 5 and 7 per cent? The accompanying scenario set is hereafter referred to as ‘Anderson Bows 2’ (AB2).
(iii) What are the implications of the findings from (i) and (ii) for the current framing of the climate agenda more generally, and the appropriateness of the 2 °C threshold as the driver of mitigation and adaptation policy more specifically?
1 For example, in March 2007, European leaders reaffirmed their commitment to the 2 °C threshold (European Commission 2007).
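To see why cumulative budgets, rather than endpoint targets, are the operative constraint, consider a minimal numerical sketch. The pathway shapes and the starting value of roughly 40 GtCO2e are illustrative assumptions, not the chapter’s data:

```python
# Two stylized pathways with the same 2050 endpoint but different
# cumulative emissions. Illustrative assumptions: ~40 GtCO2e in 2000
# and a '50 per cent by 2050' endpoint of 20 GtCO2e.

TARGET_2050 = 20.0   # GtCO2e

def early_peak(year):
    """Drift up to 44 GtCO2e by 2015, then decline linearly to target."""
    if year <= 2015:
        return 40.0 + (year - 2000) * (44.0 - 40.0) / 15
    return 44.0 + (year - 2015) * (TARGET_2050 - 44.0) / 35

def late_peak(year):
    """Grow to 56 GtCO2e by 2030, then decline steeply to the target."""
    if year <= 2030:
        return 40.0 + (year - 2000) * (56.0 - 40.0) / 30
    return 56.0 + (year - 2030) * (TARGET_2050 - 56.0) / 20

for name, path in (("early peak", early_peak), ("late peak", late_peak)):
    cumulative = sum(path(y) for y in range(2000, 2051))
    print(f"{name}: 2050 emissions {path(2050):.0f} GtCO2e, "
          f"cumulative 2000-2050 {cumulative:.0f} GtCO2e")
# Both pathways hit the same endpoint, yet the late peak adds roughly
# 450 GtCO2e more to the atmosphere - and it is the cumulative total
# that determines the concentration and temperature outcome.
```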
2.2 Analysis framing

2.2.1 Correlating 2 °C with greenhouse gas concentration and carbon budgets

What constitutes an acceptable temperature increase is a political rather than a scientific decision, though the former may be informed by science. By contrast, the correlation between temperature, atmospheric concentration of CO2e and anthropogenic cumulative emission budgets emerges, primarily, from our scientific understanding of how the climate functions. According to a recent synthesis of global climate models (Meinshausen 2006, table 28.1), the 550 ppmv CO2e concentration, around which much policy discussion revolves, suggests an 82 per cent mid-value probability of exceeding 2 °C. By contrast, to provide a 93 per cent mid-value probability of not exceeding 2 °C, the concentration would need to be stabilized at, or below, 350 ppmv CO2e, i.e. below current levels.

While Meinshausen’s analysis demonstrates the gulf between the science and the policy of approximately 2 °C, the analysis within the IPCC’s Fourth Assessment Report (IPCC 2007a), hereafter AR4, suggests that the scale of the challenge is even more demanding. Not only has the ‘best estimate’ of climate sensitivity risen from 2.5 °C in the 1996 report (IPCC 1996, p. 39) to 3 °C in AR4, but also the inclusion of carbon-cycle feedbacks has significantly reduced the cumulative anthropogenic emissions (carbon budget) associated with particular concentrations of CO2e (IPCC 2007a, topic 5, p. 6).

Understanding current emission trends in particular, and the links between global temperature changes and national emission budgets more generally (sometimes referred to as the ‘correlation trail’: Anderson & Bows 2007), is essential if policy is to be evidence based. Currently, national and international policies are dominated by long-term reduction targets with little regard for the cumulative carbon budget described by particular emission pathways. Within the UK, for example, while the government has acknowledged the link between temperature and concentration, the principal focus of its 2006 policies was on reducing emissions by 60 per cent by 2050 (excluding international aviation and shipping: Bows & Anderson 2007). Closer examination of the UK’s relatively ‘mature’ climate change policy reveals a further inconsistency. Within many official documents 550 ppmv CO2e and 550 ppmv CO2 are used interchangeably,2 with the latter equating to approximately 615 ppmv CO2e (extrapolated from IPCC 2007a, topic guide 5, table 5.1); the policy repercussions of this scale of ambiguity are substantial.

Whether considering climate change from an international, national or regional perspective, it is essential that the associated policy debate be informed by the latest
science on the ‘correlation trail’ from temperature and atmospheric concentrations of CO2e through to global carbon budgets and national emission pathways. Without such an informed debate, the scientific and policy uncertainties that unavoidably arise are exacerbated unnecessarily and significantly.

2 For example, the RCEP uses CO2e in RCEP (2000), whereas the Energy White Paper (DTI 2006) and Climate Change Programme (DEFRA 2006) both refer to CO2 alone.
2.2.2 Recent emissions data and science: impact on carbon budgets

Carbon-cycle feedbacks

The atmospheric concentration of CO2 depends not only on the quantity of emissions released into the atmosphere (natural and anthropogenic), but also on land use changes and the capacity of carbon sinks within the biosphere. As the atmospheric concentration of CO2 increases (at least within reasonable bounds), so there is a net increase in its take-up rate from the atmosphere by vegetation and the ocean. However, changes in rainfall and temperature in response to increased atmospheric greenhouse gas concentrations affect the absorptive capacity of natural sinks (Jones et al. 2006; Canadell et al. 2007; Le Quéré et al. 2007). While the complex and interactive nature of these effects leads to uncertainties with regard to the size of the carbon-cycle feedbacks (Cox et al. 2006), all models studied agree that a global mean temperature increase will reduce the biosphere’s ability to store carbon emissions over the timescales considered here (Friedlingstein et al. 2006). Consequently, pathways to stabilizing CO2 concentrations that include feedbacks have lower permissible emissions than those pathways that exclude such feedbacks. According to AR4, for example, with feedbacks included, stabilizing at 450 ppmv CO2e correlates with cumulative emissions some 27 per cent lower than without feedbacks, over a 100-year period (IPCC 2007a, topic guide 5, p. 6). The impact of this latest science on the link between emissions and temperature is of sufficient scale to require that the emission-reduction pathways associated with particular concentrations, and hence temperatures, be revisited.

Latest empirical emissions data

The current suites of emission scenarios informing the international and national climate change agenda seldom include empirical emissions data post-2000, choosing instead to model recent emissions; both the 2006 Stern Review (Stern 2006, p. 231) and the UK’s 2007 draft climate change bill (DEFRA 2007) illustrate this tendency. However, recent empirical data have shown global emissions to have risen at rates well in excess of those contained within these and many other emissions scenarios (Raupach et al. 2007). For example, while Stern assumes a mean annual CO2e emission growth between 2000 and 2006 of approximately
0.95 per cent, the growth rate calculated from the latest empirical data is closer to 2.4 per cent.3 Similarly, the UK’s draft climate change bill (DEFRA 2007) contains an emission pathway between 2000 and 2006 in which emissions fall, while over the same period the UK government’s emission inventory suggests, at best, that emissions have been stable.

A further and important revision to recent emissions data relates to deforestation. Within many scenarios, including Stern, emissions resulting from deforestation are estimated to be in the region of 7.3 GtCO2 in 2000. However, recent data have suggested this to be an overestimate, with R. A. Houghton (2006, personal communication) having recently revised his earlier figure downward to 5.5 GtCO2.4 The impact of this reduction, allied to the latest emission data, reinforces the need to revisit emission pathways.

3 CO2 data from the Carbon Dioxide Information Analysis Centre (CDIAC) including recent data from G. Marland (2006, personal communication); non-CO2 greenhouse gas data from the USA Environmental Protection Agency (EPA 2006) including the projection for 2005, and assuming deforestation emissions in 2005 to be 5.5 GtCO2 (1.5 GtC), with a 0.4 per cent growth in the preceding 5 years in line with data within the Global Forest Resources Assessment (FAO 2005).
4 FAO (2005) contains rates of tropical deforestation for the 1990s revised downward from those in the 2000 Global Forest Resources Assessment (FAO 2000; R. A. Houghton 2006, personal communication). An earlier estimate, based on high-resolution satellite data over areas identified as ‘hot spots’ of deforestation, put the figure at nearer 3.7 GtCO2 (1 GtC) for 2000 (Achard et al. 2004). It is Houghton’s more recent estimate that is used in this chapter.

2.3 Scenario analysis

2.3.1 Overview

The scenario analysis presented within this paper is for the basket of six greenhouse gases only and relies, principally, on the scientific understanding contained within AR4. The analysis does not take account of the following:

• the radiative forcing impacts of aerosols and non-CO2 aviation emissions (e.g. emissions of NOx in the upper troposphere, vapour trails and cirrus formation);5
• the most recent findings with respect to carbon sinks;6
• previously underestimated emission sources;7 and
• the implications of early emission peaks for ‘overshooting’ stabilization concentrations and the attendant risks of additional feedbacks.

5 There remains considerable uncertainty as to the actual level of radiative forcing associated with aerosols, exacerbated by their relatively short residence times in the atmosphere and uncertainty as to future aerosol emission pathways (Cranmer et al. 2001; Andreae et al. 2005). Similarly, there remain significant uncertainties as to the radiative forcing impact of non-CO2 emissions from aviation, particularly contrails and linear cirrus (e.g. Stordal et al. 2004; Mannstein & Schumann 2005).
6 For example, and in particular, the reduced uptake of CO2 in the Southern Ocean (Raupach et al. 2007) and the potential impact of low-level ozone on the uptake of CO2 in vegetation (Cranmer et al. 2001).
7 For example, significant uncertainties in the emissions estimates for international shipping (Corbett & Kohler 2003; Eyring et al. 2005).
While aerosols are most commonly associated with net global (or at least regional) cooling, the other factors outlined above are either net positive feedbacks or, as is the case for high peak-level emissions, increase the likelihood of net positive feedbacks. Consequently, the correlations between concentration and mitigation outlined in this analysis are, in time, liable to prove conservative.

The scenarios are for CO2e emission pathways during the twenty-first century, with empirical data used for the opening years of the century (in contrast to modelled or ‘what if’ data). The full scenario sets (AB1 and AB2) comprise different combinations of the following: (i) emissions of CO2 from deforestation, (ii) emissions of non-CO2 greenhouse gases and (iii) emissions of CO2 from energy and industrial processes.

For AB1:
• Deforestation. Two low-emission scenarios for the twenty-first century.
• Non-CO2 greenhouse gases. Three scenarios peaking in 2015, 2020 and 2025 and subsequently reducing to 7.5 GtCO2e per year.
• Energy and process CO2. Three scenarios peaking in 2015, 2020 and 2025 and subsequently reducing to maintain the total cumulative emissions for the twenty-first century within the AR4 450 ppmv CO2e range (with carbon-cycle feedbacks).
For AB2:
• Deforestation. Two low-emission scenarios for the twenty-first century.
• Non-CO2 greenhouse gases. One scenario peaking in 2020 and subsequently reducing to 7.5 GtCO2e per year (as per AB1 with a 2020 peak).
• Energy and process CO2. Three scenarios, each following the same pathway to a 2020 peak, but subsequently reducing at different rates to maintain total annual CO2e reductions of 3, 5 and 7 per cent.
The following sections detail the deforestation and non-CO2 greenhouse gas emission scenarios used to derive the post-peak energy and process CO2 emission scenarios and ultimately the total global CO2 e scenarios for the twenty-first century.
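As a rough illustration of how such pathways translate into cumulative budgets, the sketch below builds a stylized total-CO2e pathway: 3 per cent annual growth with the growth tapering to zero over the 5 years before the peak, followed by constant exponential decline. The starting value of roughly 40 GtCO2e and the pure-exponential decline are simplifying assumptions; the chapter’s own scenarios use empirical data to 2007 and component-specific pathways, so the numbers here are indicative only:

```python
# Stylized total-CO2e pathway: 3%/yr growth from 2000, growth tapering
# to zero over the 5 years before the peak, then constant exponential
# decline. Assumptions: ~40 GtCO2e in 2000; pure-exponential decline.

def pathway(start=40.0, peak_year=2020, growth=0.03, decline=0.03):
    emissions = start
    for year in range(2000, 2101):
        yield year, emissions
        if year >= peak_year:
            rate = -decline
        elif year >= peak_year - 5:
            rate = growth * (peak_year - year) / 5   # taper towards zero
        else:
            rate = growth
        emissions *= 1 + rate

for decline in (0.03, 0.05, 0.07):
    values = list(pathway(decline=decline))
    total = sum(e for _, e in values)
    peak = max(e for _, e in values)
    print(f"2020 peak at ~{peak:.0f} GtCO2e, {decline:.0%}/yr decline: "
          f"cumulative 2000-2100 ~{total:.0f} GtCO2e")
# Moving the post-peak decline from 3 to 7%/yr cuts the century budget
# by roughly a third - the sensitivity the AB2 scenario set quantifies.
```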
2.3.2 Deforestation emissions

A significant portion of the current global annual anthropogenic CO2 emissions is attributable to deforestation (in the region of 12–25 per cent). However, carbon mitigation policy, particularly in OECD nations, tends to focus on those emissions from energy and industrial processes (hereafter referred to as energy and process emissions), with less direct regard for emissions arising from deforestation.
Table 2.1 Deforestation emission scenario summary for two scenarios used to build the subsequent full CO2e scenarios (deforestation low, DL; deforestation high, DH) and one for illustrative purposes only (deforestation very high, DVH)

Name                               2000 emissions/year        Peak date   2100 carbon stock remaining   Emissions 2000–2100
                                   (carbon stock) (GtCO2)                 % (carbon stock) (GtCO2)      (GtCO2)
DL (developed for this analysis)   5.5 (1060)                 2015        80 (847)                      213
DH (Moutinho & Schwartzman)        5.5 (1060)                 2020        70 (741)                      319
DVH (Moutinho & Schwartzman)       5.5 (1060)                 2036        55 (583)                      477
While the relatively high levels of uncertainty associated with deforestation emissions make their inclusion in global mitigation scenarios problematic, the scale of emissions is such that they must be included. Within this chapter two deforestation scenarios are developed; both assume climate change to be high on the political agenda and represent relatively optimistic reductions in the rate of, and hence the total emissions released from, deforestation.8 They both have a year 2000 baseline of 5.5 GtCO2, but post-2015 have different deforestation rates and hence different stocks of carbon remaining in 2100 (i.e. the amount of carbon stored in the remaining forest). The scenarios are illustrated numerically in Table 2.1 and graphically in Figure 2.1.

The scenarios are dependent not only on the baseline but also on estimates of the change in forestry carbon stocks between 2000 and 2100. The stock values used in the scenarios are taken from Moutinho & Schwartzman (2005) and based on their estimate of total forest carbon stock in 2000 of 1060 GtCO2. According to their assumptions, the carbon stock continues to be eroded at current rates until either 2012 or 2025, following which emissions from deforestation decline to zero by either 2100 or until they equate to 15 per cent of a particular nation’s forest stock (compared with 2000). They estimate two values for the carbon stocks released as CO2 emissions by 2100: 319 and 477 GtCO2. This implies that within their scenarios either 70 or 55 per cent of total carbon stocks remain globally.

8 While the scenarios are at least as optimistic as those underpinning, for example, the 2005 Forest Resource Assessment (FAO 2005) and the 2006 Stern Report, it could be argued they are broadly in keeping with the high profile that deforestation gained during the 2007 United Nations Climate Change Conference in Bali.

Figure 2.1 Deforestation emission scenarios showing three CO2 emissions pathways based on varying levels of carbon stocks remaining in 2100.
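The cumulative totals in Table 2.1 follow directly from the stock accounting: what is emitted over the century is simply the released fraction of the 2000 carbon stock. A minimal consistency check, using only values from Table 2.1:

```python
# Consistency check on Table 2.1: cumulative deforestation emissions
# equal the released share of the 2000 forest carbon stock.
stock_2000 = 1060.0   # GtCO2, Moutinho & Schwartzman (2005)

scenarios = {         # name: fraction of stock remaining in 2100
    "DL": 0.80,
    "DH": 0.70,
    "DVH": 0.55,
}
for name, remaining in scenarios.items():
    released = stock_2000 * (1 - remaining)
    print(f"{name}: {released:.0f} GtCO2 emitted 2000-2100, "
          f"{stock_2000 * remaining:.0f} GtCO2 stock remaining")
# -> 212, 318 and 477 GtCO2, matching Table 2.1's 213, 319 and 477
#    to rounding.
```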
Given that this chapter and its accompanying AB1 and AB2 scenarios are premised on climate change being high on the international agenda, Moutinho & Schwartzman’s 55 per cent of total carbon stock value is considered too pessimistic within the context of this analysis and, although presented in Figure 2.1, is not included in the analysis from this point onwards. Moreover, to allow for a more stringent curtailment of deforestation, the scenario developed for a 70 per cent stock remaining estimate is complemented by one with 80 per cent remaining. The DL and DH curves both assume no increase in deforestation rates from current levels, with DL beginning to drop from the peak level of 5.5 GtCO2 5 years prior to DH. This, combined with the higher level of forestry, and hence carbon stock remaining in 2100, gives the DL curve a faster rate of reduction in deforestation than is the case for the DH curve (typically, 7.4 and 4.8 per cent for DL and DH, respectively).9

9 DL per cent change value is the mean for the period between 2030 and 2050, and DH is the mean value for 2040–2060.

2.3.3 Non-CO2 greenhouse gas emissions

To estimate the percentage reductions required from energy and process CO2 emissions for both AB1 and AB2, it is necessary to consider a range of future emission scenarios for the non-CO2 greenhouse gases. Accordingly, three scenarios are developed assuming current US Environmental Protection Agency (EPA) estimates and projections of emissions from 2000 up to a range of peaking years, after which emissions are assumed to decline towards the same long-term stable level. All the scenarios represent a long-term halving in emission intensity, with the difference between them arising from the range of cumulative emissions associated with each of the peaking dates. The scenarios are illustrated numerically in Table 2.2 and graphically in Figure 2.2.
Table 2.2 Non-CO2 greenhouse gas emission scenario summary

Name           2000 emissions   Peak year   Mean growth    Peak annual         Total 2000–2100
               (GtCO2e)                     to peak (%)    emission (GtCO2e)   emissions (GtCO2e)
Early action   9.5              2015        1.31           11.4                858
Mid-action     9.5              2020        1.51           12.2                883
Late action    9.5              2025        1.53           13.3                916
Figure 2.2 Three non-CO2 greenhouse gas emission scenarios with emission pathways peaking at different years but all achieving the same residual level by 2050.
Anthropogenic non-CO2 greenhouse gas emissions are dominated by methane and nitrous oxide and, along with the other non-CO2 greenhouse gases, accounted for approximately 9.5 GtCO2e in 2000 (EPA 2006; similar figures are used within the Stern Review), equivalent to 23 per cent of global CO2e emissions. Understanding how this significant portion of emissions may change in the future
is key to exploring the scope for future emissions reduction from all the greenhouse gases. The three non-CO2 greenhouse gas scenarios presented here are broadly consistent with a global drive to alleviate climate change. The principal difference between the scenarios is the date at which emissions are assumed to peak, with the range chosen to match that for the total CO2e emissions, namely an early-action scenario where emissions peak in 2015, a mid-action peak of 2020 and finally a late-action peak in 2025. All three scenarios have a growth rate from the year 2000 up until a few years prior to the peak equivalent to that projected by the EPA (2006),10 and broadly in keeping with recent trend data. The scenarios all contain a smooth transition through the period of peak emissions and on to a pathway leading towards a post-2050 value of 7.5 GtCO2e. This value is again specifically chosen to reflect a genuine global commitment to tackle climate change. It is approximately 25 per cent lower than the current level and consistent with a number of other 450 ppmv scenarios.11 Given that the majority of the non-CO2 greenhouse gas emissions are associated with food production, it is not possible, with our current understanding of the issues, to envisage how emissions could tend to zero while there remains a significant human population. The 7.5 GtCO2e figure used in this paper, assuming a global population in 2050 of 9 billion (thereafter remaining stable), is equivalent to approximately halving the emission intensity of current food production. While a reduction of this magnitude may be considered ambitious in a sector with little overall emission elasticity, such improvements are necessary if global CO2e concentrations are to be maintained within any reasonable bounds.

The non-CO2 greenhouse gas scenarios have similar growth rates from 2000 to their respective peak values, and ultimately all have the same post-2050 emission level (7.5 GtCO2e). The rate of reduction in emissions from the respective peaks demonstrates the importance of timely action to curtail the current rise in annual emissions: the early-action scenario is required to reduce at 1.35 per cent per year, while the mid- and late-action scenario values are 2 and 3 per cent, respectively. Similarly, Table 2.2 and Figure 2.2 demonstrate the importance for cumulative values of non-CO2 greenhouse gas emissions not rising much higher than today, and of the post-peak reduction rate achieving the long-term residual emission level as soon as is possible (7.5 GtCO2e by 2050). If the year in which emissions reach the residual level had been 2100 rather than 2050, the modest differences in cumulative emissions between the early-, mid- and late-action scenarios would have been substantially increased. Given that the cumulative value of non-CO2 greenhouse gas emissions is a significant proportion of total cumulative CO2e emissions, any delay in achieving the residual value would have significant implications for the reduction rate of energy and process CO2 emissions necessary to meet the AB1 and AB2 criteria.

10 EPA values for global warming potential of the basket of six gases are slightly different from those used in IPCC. The difference, though noted here, does not significantly alter the analysis or results.
11 For example, in Stern (2006, p. 233), for both his 450 ppmv CO2e and 500–450 ppmv overshoot curve.
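The post-peak reduction rates quoted above (1.35, 2 and 3 per cent) can be roughly cross-checked as follows. The sketch computes the constant exponential rate needed to fall from each Table 2.2 peak to the 7.5 GtCO2e residual by 2050; because the chapter’s pathways transition smoothly through the peak rather than declining immediately, the rates it reports are, as expected, somewhat higher than these pure-exponential floors:

```python
# Pure-exponential lower bound on the post-peak reduction rates needed
# to fall from each Table 2.2 peak to the 7.5 GtCO2e residual by 2050.
RESIDUAL, RESIDUAL_YEAR = 7.5, 2050

scenarios = [("early action", 2015, 11.4, 1.35),
             ("mid-action",   2020, 12.2, 2.0),
             ("late action",  2025, 13.3, 3.0)]

for name, peak_year, peak_value, quoted in scenarios:
    years = RESIDUAL_YEAR - peak_year
    rate = 1 - (RESIDUAL / peak_value) ** (1 / years)
    print(f"{name}: {rate:.2%}/yr exponential floor over {years} years "
          f"(chapter quotes {quoted}%/yr)")
# -> ~1.2%, ~1.6% and ~2.3%/yr: the later the peak, the faster the
#    required decline, consistent with the chapter's figures.
```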
2.3.4 CO2e emission scenarios for the twenty-first century

Having developed the deforestation and non-CO2 greenhouse gas scenarios, this section presents the complete greenhouse gas emission scenarios, AB1 and AB2, for the twenty-first century. The emissions released from the year 2000 until the peak dates are discussed here in relation to both AB1 and AB2, before the post-peak scenarios for each of the scenario sets are presented.

AB1 and AB2: emissions from 2000 to the peak years

By combining the deforestation and non-CO2 greenhouse gas scenarios with assumptions about energy and process CO2, scenarios for all greenhouse gas emissions up until the three peaking dates are developed. Energy and process CO2 emissions for the years 2000–2005 are taken from the Carbon Dioxide Information Analysis Centre (CDIAC), with estimates for 2006–2007 based on BP inventories (BP 2007). From 2007 to the three peaking dates of 2015 (early action), 2020 (mid-action) and 2025 (late action), emissions of energy and process CO2 grow at 3 per cent per year until 5 years prior to peaking. Beyond this point, emission growth gradually slows to zero at the peak year before reversing thereafter. The 3 per cent emission growth rate chosen for CO2 is consistent with recent historical trends: between 2000 and 2005, CDIAC data show a mean annual growth in energy and process CO2 emissions of 3.2 per cent, and this includes the slow-growth years following the events of 11 September 2001.

AB1: emissions from peak years to 2100

From the peak years onwards, AB1 (summarized in Table 2.3) takes the approach that, to remain within the bounds of a 450 ppmv CO2e stabilization target, the cumulative emissions between 2000 and 2100 must not exceed the range presented within the latest IPCC report in which carbon-cycle feedbacks are included (IPCC 2007b).

AB1 final scenarios

The emission pathways for the full greenhouse gas AB1 scenarios from 2000 to 2100 are presented in Figure 2.3.
Table 2.3 Summary of the core components of scenario set AB1

Characteristic                              2015–2100      2020–2100      2025–2100
Deforestation(a)                            DH and DL      DH and DL      DH and DL
Non-CO2 greenhouse gases(a)                 early action   mid-action     late action
Approximate peaking value (GtCO2e)          54             60             64
Cumulative emissions (GtCO2e), IPCC AR4     low: 1376      low: 1376      low: 1376
                                            medium: 1798   medium: 1798   medium: 1798
                                            high: 2202     high: 2202     high: 2202
2100 residual emissions (GtCO2e)            7.5            7.5            7.5

(a) Deforestation and non-CO2 greenhouse gas scenarios as in Tables 2.1 and 2.2.
[Figure 2.3: three panels, (a)–(c), plotting emissions of greenhouse gases (GtCO2e), 0–80, against year, 2000–2100, for the Low, Medium and High DL and DH scenarios.]
Figure 2.3 Greenhouse gas emission scenarios for AB1 with emissions peaking in (a) 2015, (b) 2020 and (c) 2025 (see also colour plate).
Table 2.4 Scenarios assessed in relation to their practical feasibility. (X denotes a scenario rejected on the basis of being quantitatively impossible or having prolonged percentage annual reduction rates greater than 15%. The percentage reductions given illustrate typical sustained annual emission reductions required to remain within budget.)

             Deforestation DL              Deforestation DH
Peak date    Low     Medium    High        Low     Medium    High
2015         X       13%       4%          X       X         4%
2020         X       X         8%          X       X         11%
2025         X       X         X           X       X         X
emissions up to the peaking year, and all have total twenty-first-century cumulative values of CO2e matching the 450 ppmv figures within AR4. It is evident from the data underpinning Figure 2.3 that 10 of the 18 proposed pathways cannot be quantitatively reconciled with the cumulative CO2e emission budgets for 450 ppmv provided within AR4. Table 2.4 identifies the 'impossible' scenarios (including three with prolonged annual reduction rates greater than 15%) and illustrates the post-peak level of sustained emission reduction necessary to remain within budget.

AB1: implications for energy and process CO2

The constraints that achieving 450 ppmv CO2e places on the greenhouse gas emission pathways render most of the AB1 scenarios impossible. Having established which scenarios are at least quantitatively possible, and subtracting the respective non-CO2 greenhouse gas and deforestation emissions, the energy and process emissions associated with each of the scenarios that remain feasible (Figure 2.4) can be derived. Figure 2.4 illustrates that complete decarbonization of the energy and process system is necessary between 2027 and 2063 if the total greenhouse gas emissions are to remain within the IPCC's 450 ppmv CO2e budgets. Moreover, in combination with Table 2.5, it is evident that the only meaningful opportunity for stabilizing at 450 ppmv CO2e occurs if the highest of the IPCC's cumulative emission values is used and if emissions peak by 2015.

AB2: emissions from 2020 (peak year) to 2100

The AB2 scenario set complements the AB1 scenario set by exploring the implications for CO2e budgets of three post-peak annual emission reduction rates (3, 5 and 7 per cent). Only one peaking year is considered within this analysis, with
Table 2.5 Twenty-year sustained post-peak per cent reductions in energy and process CO2 emissions (from 5 years following the peak year). (X denotes a scenario rejected on the basis of being quantitatively impossible, having prolonged per cent annual reduction rates greater than 15% or requiring full decarbonization within 20 years.)

             Deforestation DL              Deforestation DH
Peak date    Low     Medium    High        Low     Medium    High
2015         X       X         ∼6%         X       X         ∼8%
2020         X       X         X           X       X         X
2025         X       X         X           X       X         X
Figure 2.4 Energy and process CO2 emissions derived by subtracting the non-CO2 emissions and deforestation emissions from the total greenhouse gas emissions over the period of 2000–2100, for the AB1 scenarios.
2020 chosen as arguably the most 'realistic' of the three dates, in terms both of the 'practicality' of being achieved and of the scope it leaves for remaining within 'reasonable' bounds of CO2e concentrations. Table 2.6 summarizes the data underpinning Figure 2.5.
Table 2.6 Summary of the core components of the AB2 scenarios

Characteristic                               2020–2100
Deforestation^a                              DH and DL
Non-CO2 greenhouse gases^a                   mid-action
Approximate peaking value (GtCO2e)           60
Post-2020 CO2e reductions (%)                3, 5 and 7
2100 residual emissions (GtCO2e)             7.5

^a Deforestation and non-CO2 greenhouse gas scenarios as in Tables 2.1 and 2.2.
[Figure 2.5: emissions of greenhouse gases (GtCO2e), 0–80, plotted against year, 2000–2100, for the 3, 5 and 7 per cent reduction DL and DH scenarios.]
Figure 2.5 Greenhouse gas emission scenarios peaking in 2020, with sustained percentage emission reductions of 3, 5 and 7 per cent. The 3 and 5 per cent DH scenarios are so similar to the 3 and 5 per cent DL scenarios that they are hidden behind those profiles.
The pathways within Figure 2.5 equate to a range in cumulative CO2e emissions for 2000–2100 of 2.4 TtCO2e, 2.6 TtCO2e and 3 TtCO2e for the 7, 5 and 3 per cent reductions, respectively. According to the cumulative emissions data contained within the Stern Review (Stern 2006: figure 8.1, p. 222), the first two values correspond to a CO2e concentration of approximately 550 ppmv, with the latter being closer to 650 ppmv.
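The cumulative figures quoted above can be approximated in a few lines of code. In this sketch the 1.2 TtCO2e assumed for the 2000–2020 period, the 60 GtCO2e peak and the constant 7.5 GtCO2e floor are round-number assumptions of ours; the ordering and rough magnitudes of the 2.4–3.0 TtCO2e range nonetheless emerge.

```python
def cumulative_ab2(decline, peak=60.0, floor=7.5, pre_peak=1200.0):
    """Rough cumulative 2000-2100 emissions (TtCO2e) for an AB2-style
    pathway: an assumed `pre_peak` GtCO2e emitted over 2000-2020, then
    a sustained annual `decline` from `peak`, floored at the residual."""
    e, total = peak, pre_peak
    for _ in range(80):          # the years 2021-2100
        e = max(e * (1 - decline), floor)
        total += e
    return total / 1000.0        # GtCO2e -> TtCO2e

for d in (0.07, 0.05, 0.03):
    print(f"{d:.0%} decline: ~{cumulative_ab2(d):.1f} TtCO2e")
```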
Table 2.7 Post-peak (2020) per cent reduction in energy and process CO2 emissions

Annual reduction            Deforestation DL (%)      Deforestation DH (%)
Total CO2e                  3       5       7         3       5       7
Energy and process CO2      3       6       9         4       7       12
[Figure 2.6: CO2 emissions from energy and industrial processes (GtCO2), 0–80, plotted against year, 2000–2100, for the 3, 5 and 7 per cent reduction DL and DH scenarios.]
Figure 2.6 CO2 emissions derived by removing the non-CO2 greenhouse gas emissions and deforestation emissions from the total greenhouse gas emissions over the period of 2000–2100 for the AB2 scenarios.
AB2: implications for energy and process CO2

Having developed the total CO2e pathways for AB2, and given the deforestation and non-CO2 greenhouse gas emission scenarios outlined earlier, the associated energy and process CO2 scenarios can be derived (Figure 2.6). Table 2.7 indicates typical post-peak annual reduction rates in energy and process CO2 emissions for the families of 3, 5 and 7 per cent CO2e scenarios. According to these results, the 3, 5 and 7 per cent CO2e annual reduction rates comprising the AB2 scenarios correspond with energy and process decarbonization rates of 3–4, 6–7 and 9–12 per cent, respectively. While the latter two ranges correlate broadly with stabilization at 550 ppmv CO2e, the former, although arguably demanding less unacceptable rates of reduction, correlates with stabilization nearer 650 ppmv CO2e.
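A toy first-year calculation (ours, assuming a constant 7.5 GtCO2e non-CO2 residual and ignoring deforestation) shows why the energy and process rates in Table 2.7 exceed the total CO2e rates: when the total falls at a fixed percentage while the residual stays near its floor, the energy and process component must fall faster, and the gap widens over time.

```python
# Illustrative assumptions only: 60 GtCO2e total at the 2020 peak and a
# fixed 7.5 GtCO2e non-CO2 residual; the chapter's scenarios are richer.
total, residual = 60.0, 7.5
for rate in (0.03, 0.05, 0.07):            # total CO2e decline rates
    energy_now = total - residual          # energy and process CO2e now
    energy_next = total * (1 - rate) - residual   # one year later
    print(f"total -{rate:.0%}/yr -> energy and process "
          f"-{1 - energy_next / energy_now:.1%}/yr (first year)")
```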
2.4 Discussion

2.4.1 AB1 scenarios

The AB1 scenarios presented here focus on 450 ppmv CO2e and can be broadly separated into three categories.
(i) Scenarios that quantitatively exceed the IPCC's 450 ppmv CO2e budget range: this equates to 10 of the 18 scenarios. Scenarios in this category are quantitatively impossible.
(ii) Scenarios with current emission growth continuing until 2015, emissions peaking by 2020 and thereafter undergoing dramatic annual reductions of between 8 and 33 per cent. Scenarios in this category are, for the purpose of this paper, considered politically unacceptable.
(iii) Scenarios that, as early as 2010, break with current trends in emissions growth, with emissions subsequently peaking by 2015 and declining rapidly thereafter (approx. 4 per cent per year). Scenarios in this category are discussed below.
For scenarios within category (iii) to be viable, the IPCC's upper value for 450 ppmv cumulative emissions between 2000 and 2100 must be correct. If, on the other hand, the IPCC's mid or low value turns out to be more appropriate, category (iii) scenarios will be either politically unacceptable (i.e. above 8 per cent per annum reduction) or quantitatively impossible. However, even should the IPCC's high ('optimistic') value be correct, the accompanying 4 per cent per year reductions in CO2e emissions beginning in under a decade (i.e. by 2018) are unlikely to be politically acceptable without a sea change in the economic orthodoxy.
The scale of this challenge is brought into sharp focus in relation to energy and process emissions. According to the analysis conducted in this paper, stabilizing at 450 ppmv requires, at the least, global energy-related emissions to peak by 2015, to decline rapidly at 6–8 per cent per year between 2020 and 2040, and for full decarbonization to follow sometime soon after 2050. The characteristics of the resulting 450 ppmv scenario are summarized in Table 2.8. This assumes that the most optimistic of the IPCC's range of cumulative emission values is broadly correct. While this analysis suggests stabilizing at 450 ppmv is theoretically possible, in the absence of an unprecedented step change in the global economic model and the rapid deployment of successful CO2-scrubbing technologies, 450 ppmv is no longer a viable stabilization concentration. The implications of this for climate change policy, particularly adaptation, are profound. The framing of climate change policy is typically informed by the 2 °C threshold; however, even stabilizing at 450 ppmv CO2e offers only a 46 per cent chance of not exceeding 2 °C (Meinshausen 2006). As a consequence, any further
Table 2.8 Summary of the core components of the 450 ppmv scenario considered theoretically possible within the constraints of the analysis and assuming the IPCC's most 'optimistic' 450 ppmv CO2e cumulative value

Characteristic                                                         Quantity
IPCC 450 ppmv upper-limit cumulative value for 2000–2100 (GtCO2e)      858
Peak in CO2e emissions                                                 2015
Post-peak annual CO2e decarbonization rate                             ∼4%
Total decarbonization date (including forestry and excluding
  the non-CO2e greenhouse gas residual)                                ∼2060–2075
Post-peak sustained annual energy and process decarbonization rate     ∼6–8%
Total energy and process decarbonization date                          ∼2050–2060
delay in global society beginning down a pathway towards 450 ppmv leaves 2 °C as an inappropriate and dangerously misleading mitigation and adaptation target.
2.4.2 AB2 scenarios

From the analysis underpinning the AB2 scenarios, it is evident that the rates of emission reduction informing much of the climate change debate, particularly in relation to energy, correlate with higher stabilization concentrations than is generally recognized. The principal reason for this divergence arises, in the first instance, from the difference between empirical and modelled emissions data for post-2000. For example, in describing '[T]he Scale of the Challenge', Stern's 'stabilization trajectories' assume a mean annual emissions growth almost 1.5 per cent lower than was evident from the empirical data between 2000 and 2006. While the subsequent impact on cumulative emissions for this period is, in itself, significant, the substantive difference arises from short-term extrapolations of current trends. Stern's range of peak emissions for 2015 is some 10 GtCO2e lower than would be the case if present trends continued out to 2010, with growth subsequently reducing to give a peak in emissions by 2015.12 This substantial divergence in emissions is exacerbated significantly as the peak date goes beyond 2015. If emissions were to peak by 2020 (as was assumed for the AB2 scenarios),
and again following a slowing in growth during the 5 years prior to the peaking date, emissions would, by 2020, be between 14 and 16 GtCO2e higher than Stern's 2020 range. This difference alone equates to over a third of current global annual emissions, with the knock-on implications for short- to medium-term cumulative emissions seriously constraining the viable range of long-term stabilization targets.
While climate change is claimed to be a central issue within many policy dialogues, rarely are absolute annual carbon mitigation rates greater than 3 per cent considered viable. In addition, where mitigation policies are more developed, seldom do they include emissions from international shipping and aviation (Bows & Anderson 2007). Stern (2006, p. 231) drew attention to historical precedents of reductions in carbon emissions, concluding that annual reductions of greater than 1 per cent have 'been associated only with economic recession or upheaval'. For example, the collapse of the former Soviet Union's economy brought about annual emission reductions of over 5 per cent for a decade. By contrast, France's 40-fold increase in nuclear capacity in just 25 years and the UK's 'dash for gas' in the 1990s corresponded, respectively, with annual CO2 and greenhouse gas emission reductions of only 1 per cent (not including increasing emissions from international shipping and aviation). Set against this historical experience, the reduction rates contained within the AB2 scenarios are without a structurally managed precedent.
In all but one of the AB2 scenarios, the challenge faced with regard to total CO2e reductions is increased substantially when considered in relation to decarbonizing the energy and process systems. Despite the optimistic deforestation and non-CO2 greenhouse gas emission scenarios developed for this chapter, the repercussions for energy and process emissions are extremely severe. Stabilization at 550 ppmv CO2e, around which much of Stern's analysis revolved, requires global energy and process emissions to peak by 2020 before beginning an annual decline of between 6 and 12 per cent: rates well in excess of those accompanying the economic collapse of the Soviet Union. Even for the 3 per cent CO2e reduction scenario (i.e. stabilization at 600–650 ppmv CO2e), the current rapid growth in energy and process CO2 emissions would need to cease by 2020 and begin reducing at between 3 and 4 per cent annually soon after.
It is important to note that, for both the AB1 and AB2 scenarios, there is a risk of a transient overshoot of the 'desired' atmospheric concentration of greenhouse gases as a consequence of the rate of change in the emission pathway. Given that overshoot scenarios remain characterized by considerable uncertainty and are the subject of substantive ongoing research (e.g. Schneider & Mastrandrea 2005; Nusbaumer & Matsumoto 2008), they have not been addressed within either AB1 or AB2.
12 Comparing values outlined in Stern (2006, p. 233) with those in AB1 and AB2 for 2015. In addition, Stern envisages a global CO2e emissions increase of approximately 5 GtCO2e between 2000 and 2015, compared with provisional estimates for China alone of between 4.2 and 5.5 GtCO2e, extending up to 12.2 GtCO2e (T. Wang & J. Watson of the Sussex Energy Group (SEG) 2008, personal communication). If the lower SEG estimate for China is correct, Stern's analysis implicitly assumes that global emissions (excluding China) remain virtually unchanged between 2000 and 2015.
2.5 Conclusions

Given the assumptions outlined within this chapter, and accepting that it considers the basket of six gases only, incorporating both carbon-cycle feedbacks and the latest empirical emissions data into the analysis raises serious questions about the current framing of climate change policy. In the absence of the widespread deployment and successful application of geo-engineering technologies (sometimes referred to as macro-engineering technologies) that remove and store atmospheric CO2, several headline conclusions arise from this analysis.
• If emissions peak in 2015, stabilization at 450 ppmv CO2e requires subsequent annual reductions of 4 per cent in CO2e and 6.5 per cent in energy and process emissions.
• If emissions peak in 2020, stabilization at 550 ppmv CO2e requires subsequent annual reductions of 6 per cent in CO2e and 9 per cent in energy and process emissions.
• If emissions peak in 2020, stabilization at 650 ppmv CO2e requires subsequent annual reductions of 3 per cent in CO2e and 3.5 per cent in energy and process emissions.
These headlines are based on the range of cumulative emissions within IPCC AR4 (for 450 ppmv) and the Stern Report (for 550 and 650 ppmv),13 with the accompanying rates of reduction representing the mid-values of the ranges discussed earlier. While for both the 550 and 650 ppmv pathways peak dates beyond 2020 would be possible, these would come at the expense of a significant increase in the already very high post-peak emission reduction rates.
These conclusions have stark repercussions for mitigation and adaptation policies. By association, they raise serious questions as to whether the current global economic orthodoxy is sufficiently resilient to absorb the scale of the challenge faced. It is increasingly unlikely that an early and explicit global climate change agreement or collective ad hoc national mitigation policies will deliver the urgent and dramatic reversal in emission trends necessary for stabilization at 450 ppmv CO2e. Similarly, the mainstream climate change agenda is far removed from the rates of mitigation necessary to stabilize at 550 ppmv CO2e. Given the reluctance, at virtually all levels, to engage openly with the unprecedented scale of both current emissions and their associated growth rates, even an optimistic interpretation of the current framing of climate change implies that stabilization much below 650 ppmv CO2e is improbable. The analysis presented within this paper suggests that the rhetoric of 2 °C is subverting a meaningful, open and empirically informed dialogue on climate change.
13 The 450 ppmv figure is from AR4 (IPCC 2007a), while the 550 and 650 ppmv figures are from Jones et al. (2006) and include carbon-cycle feedbacks (used in Stern's analysis). Although the Jones et al. figures are above the mid-estimates of the impact of feedbacks, there is growing evidence that some carbon-cycle feedbacks are occurring earlier than was thought would be the case, e.g. the reduced uptake of CO2 by the Southern Ocean (Raupach et al. 2007).
While it may be argued that 2 °C provides a reasonable guide to the appropriate scale of mitigation, it is a dangerously misleading basis for informing the adaptation agenda. In the absence of an almost immediate step change in mitigation (away from the current trend of 3 per cent annual emission growth), adaptation would be much better guided by stabilization at 650 ppmv CO2e (i.e. approx. 4 °C).14 However, even this level of stabilization assumes rapid success in curtailing deforestation, an early reversal of current trends in non-CO2 greenhouse gas emissions and urgent decarbonization of the global energy system.
Finally, the quantitative conclusions developed here are based on a global analysis. If, during the next two decades, transition economies, such as China, India and Brazil, and newly industrializing nations across Africa and elsewhere are not to have their economic growth stifled, their emissions of CO2e will inevitably rise. Given any meaningful global emission caps, the implications of this for the industrialized nations are bleak. Even atmospheric stabilization at 650 ppmv CO2e demands that the majority of OECD nations begin to make draconian emission reductions within a decade. Such a situation is unprecedented for economically prosperous nations. Unless economic growth can be reconciled with unprecedented rates of decarbonization (in excess of 6 per cent per year15), it is difficult to envisage anything other than a planned economic recession being compatible with stabilization at or below 650 ppmv CO2e.
Ultimately, the latest scientific understanding of climate change, allied with current emission trends and a commitment to 'limiting average global temperature increases to below 4 °C above pre-industrial levels', demands a radical reframing16 of both the climate change agenda and the economic characterization of contemporary society.
14 Meinshausen (2006) estimates the mid-range probability of exceeding 4 °C at approximately 34 per cent for 600 ppmv and 40 per cent for 650 ppmv. Given that this analysis has not factored in a range of other issues with likely net positive impacts, adapting for estimated impacts of at least 4 °C appears wise.
15 At 650 ppmv the range of global decarbonization rates is 3–4 per cent per year (Table 2.7, columns 1 and 4). As OECD nations represent approximately 50 per cent of global emissions, and assuming continued CO2 emission growth from non-OECD nations for the forthcoming two decades, the OECD nations will need to compensate with considerably higher rates of emission reduction.
16 This is not assumed desirable or otherwise, but is a conclusion of (i) the quantitative analysis developed within this chapter, (ii) the premise that stabilization in excess of 600–650 ppmv CO2e should be avoided and (iii) Stern's assertion that annual reductions of greater than 1 per cent have 'been associated only with economic recession or upheaval' (Stern 2006, p. 231).

References

Achard, F., Eva, H. D., Mayaux, P., Stibig, H.-J. & Belward, A. 2004 Improved estimates of net carbon emissions from land cover change in the tropics for the 1990s. Glob. Biogeochem. Cycles 18, GB2008. (doi:10.1029/2003GB002142)
Anderson, K. & Bows, A. 2007 A Response to the Draft Climate Change Bill's Carbon Reduction Targets, Tyndall Centre Briefing Note no. 17. Tyndall Centre for Climate
Change Research. See http://www.tyndall.ac.uk/publications/briefing_notes/bn17.pdf.
Andreae, M. O., Jones, C. D. & Cox, P. M. 2005 Strong present-day aerosol cooling implies a hot future. Nature 435, 1187–1190. (doi:10.1038/nature03671)
Bows, A. & Anderson, K. L. 2007 Policy clash: can projected aviation growth be reconciled with the UK Government's 60% carbon-reduction target? Trans. Policy 14, 103–110. (doi:10.1016/j.tranpol.2006.10.002)
BP 2007 Statistical Review of World Energy 2007. British Petroleum. See http://www.bp.com/liveassets/bp_internet/globalbp/globalbp_uk_english/reports_and_publications/statistical_energy_review_2007/STAGING/local_assets/downloads/spreadsheets/statistical_review_full_report_workbook_2007.xls.
Canadell, J. G. et al. 2007 From the cover: contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks. Proc. Natl Acad. Sci. USA 104, 18 866–18 870. (doi:10.1073/pnas.0702737104)
Corbett, J. J. & Kohler, H. W. 2003 Updated emissions from ocean shipping. J. Geophys. Res. 108, 4650. (doi:10.1029/2003JD003751)
Cox, P. M., Huntingford, C. & Jones, C. D. 2006 Conditions for sink-to-source transitions and runaway feedbacks from the land carbon-cycle. In Avoiding Dangerous Climate Change, eds. Schellnhuber, H. J., Cramer, W., Nakicenovic, N., Wigley, T. & Yohe, G. Cambridge, UK: Cambridge University Press, pp. 155–161.
Cramer, W. et al. 2001 Global response of terrestrial ecosystem structure and function to CO2 and climate change: results from six dynamic global vegetation models. Glob. Change Biol. 7, 357–373. (doi:10.1046/j.1365-2486.2001.00383.x)
DEFRA 2006 Climate Change: The UK Programme 2006. Norwich, UK: Department for Environment, Food and Rural Affairs.
DEFRA 2007 Draft Climate Change Bill. London: Department for Environment, Food and Rural Affairs.
DTI 2006 Our Energy Challenge: Securing Clean, Affordable Energy for the Long Term. London: Department of Trade and Industry.
EPA 2006 Global Anthropogenic Non-CO2 Greenhouse Gas Emissions: 1990–2020. Washington, DC: Office of Atmospheric Programs, Climate Change Division, US Environmental Protection Agency.
European Commission 2007 Limiting Global Climate Change to 2 degrees Celsius: The Way Ahead for 2020 and Beyond. Brussels: Commission of the European Communities.
Eyring, V., Köhler, H. W., van Aardenne, J. & Lauer, A. 2005 Emissions from international shipping: 1. The last 50 years. J. Geophys. Res. 110, D17305. (doi:10.1029/2004JD005619)
FAO 2000 Global Forest Resources Assessment 2000, FAO Forestry Paper no. 124. Rome: Food and Agriculture Organisation of the United Nations.
FAO 2005 Global Forest Resources Assessment 2005: Global Synthesis, FAO Forestry Paper no. 140. Rome: Food and Agriculture Organisation of the United Nations.
Friedlingstein, P. et al. 2006 Climate-carbon-cycle feedback analysis, results from the C4MIP model intercomparison. J. Clim. 19, 3337–3353. (doi:10.1175/JCLI3800.1)
IPCC 1996 Climate Change 1995: The Science of Climate Change. Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
IPCC 2007a Climate Change 2007: Synthesis Report. Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
IPCC 2007b Climate Change 2007: The Physical Science Basis. Report of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
Jones, C. D., Cox, P. M. & Huntingford, C. 2006 Impact of climate-carbon-cycle feedbacks on emissions scenarios to achieve stabilisation. In Avoiding Dangerous Climate Change, eds. Schellnhuber, H. J., Cramer, W., Nakicenovic, N., Wigley, T. & Yohe, G. Cambridge, UK: Cambridge University Press, pp. 323–331.
Le Quéré, C. et al. 2007 Saturation of the Southern Ocean CO2 sink due to recent climate change. Science 316, 1735–1738. (doi:10.1126/science.1136188)
Mannstein, H. & Schumann, U. 2005 Aircraft induced contrail cirrus over Europe. Meteorologische Z. 14, 549–554. (doi:10.1127/0941-2948/2005/0058)
Meinshausen, M. 2006 What does a 2 °C target mean for greenhouse gas concentrations? A brief analysis based on multi-gas emission pathways and several climate sensitivity uncertainty estimates. In Avoiding Dangerous Climate Change, eds. Schellnhuber, H. J., Cramer, W., Nakicenovic, N., Wigley, T. & Yohe, G. Cambridge, UK: Cambridge University Press, pp. 253–279.
Moutinho, P. & Schwartzman, S. (eds.) 2005 Tropical Deforestation and Climate Change. Belém, Brazil: Amazon Institute for Environmental Research.
Nusbaumer, J. & Matsumoto, K. 2008 Climate and carbon-cycle changes under the overshoot scenario. Glob. Planet. Change 62, 164–172. (doi:10.1016/j.gloplacha.2008.01.002)
Raupach, M. R., Marland, G., Ciais, P., Le Quéré, C., Canadell, J. G., Klepper, G. & Field, C. B. 2007 Global and regional drivers of accelerating CO2 emissions. Proc. Natl Acad. Sci. USA 104, 10 288–10 293. (doi:10.1073/pnas.0700609104)
RCEP 2000 Energy: The Changing Climate, 22nd report, CM 4749. London: The Stationery Office.
Schneider, S. H. & Mastrandrea, M. D. 2005 Inaugural article: probabilistic assessment of 'dangerous' climate change and emissions pathways. Proc. Natl Acad. Sci. USA 102, 15 728–15 735. (doi:10.1073/pnas.0506356102)
Stern, N. 2006 Stern Review on the Economics of Climate Change. Cambridge, UK: Cambridge University Press.
Stordal, F., Myhre, G., Arlander, W., Svendby, T., Stordal, E. J. G., Rossow, W. B. & Lee, D. S. 2004 Is there a trend in cirrus cloud cover due to aircraft traffic? Atmos. Chem. Phys. Discuss. 4, 6473–6501.
Tol, R. S. J. 2007 Europe's long-term climate target: a critical evaluation. Energy Policy 35, 424–432. (doi:10.1016/j.enpol.2005.12.003)
3 Predicting climate tipping points

J. Michael T. Thompson and Jan Sieber
There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term autocorrelation coefficient ARC(1) in a sliding window of the time-series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ reconstituted time-series provided by ice cores, sediments, etc., and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11 500 years ago, when the Arctic warmed by 7 °C in 50 years. A second gives an excellent prediction for the end of 'greenhouse' Earth about 34 million years ago, when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses, relevant to geo-engineering, will clearly need to embrace both real data from improved monitoring instruments and simulation data generated from increasingly sophisticated predictive models.
3.1 Introduction

The geo-engineering proposals assessed in this book aim to combat global warming by proactively manipulating the climate. All authors are agreed that these are indeed risky procedures. They would be actively pursued only if all else had failed, and if there were a well-researched consensus that to do nothing would lead rapidly to an environmental catastrophe of major proportions. We should note as well that, if the climate is thought to have already passed a bifurcation point, one has to consider carefully whether it is in fact too late to usefully apply geo-engineering techniques, because an irreversible transition may already be under way. Biggs et al. (2009) study an analogous problem in their fisheries model: after tipping, how fast does one have to implement measures to jump back to the previous state?
The emergence of a consensus would inevitably rely on scientific projections of future climatic events, central to which would be sudden, and usually irreversible, features that are now called tipping points. The Intergovernmental Panel on Climate Change (IPCC, 2007) made some brief remarks about abrupt and rapid climate change, but more recently Lenton et al. (2008) have sought to define these points rigorously. The physical mechanisms underlying these tipping points are typically internal positive feedback effects of the climate system. Since any geo-engineering measure will have to rely strongly on natural positive feedback mechanisms to amplify its effect, the proximity to a tipping point is of real significance to the engineers planning the intervention.
Table 3.1 shows a list of candidates proposed by Lenton et al. (2008), and the possible effects of their tipping on the global climate. All of these subsystems of the climate have strong internal positive feedback mechanisms. Thus, they have a certain propensity for tipping and are susceptible to input (human or otherwise). As column 2 shows, the primary deterministic mechanisms behind several of the listed tipping events are so-called bifurcations: special points in the control parameter space (see columns 4 and 5) at which the deterministic part of the dynamical system governing the climate changes qualitatively (for example, the currently attained steady state disappears). In Section 3.3 we review possible bifurcations and classify them into three types: safe, explosive and dangerous. Almost universally these bifurcations have a precursor: in at least one mode all feedback effects cancel at the linear level, which means that the system is slowing down, and the local (or linear) decay rate (LDR) to the steady state decreases to zero. This implies that, near tipping points, geo-engineering can be expected to be most effective (and most dangerous) because the climate system is most susceptible to disturbances.
Table 3.1 Summary of Lenton's Tipping Elements, namely climate subsystems that are likely to be candidates for future tipping with relevance to political decision making. In column 2, the risk of there being an underlying bifurcation is indicated as follows: H = high, M = medium, L = low. This list will be discussed in greater detail in Section 3.5.

Tipping element | Risk | Feature, F (change) | Control parameter, μ | μcrit | Global warming | Transition time, T | Key impacts
Arctic summer sea-ice | M | Areal extent (−) | Local Tair, ocean heat transport | ?? | +0.5 to +2 °C | ∼10 yr (rapid) | Amplified warming, ecosystem change
Greenland ice sheet (GIS) | H | Ice volume (−) | Local Tair | ∼+3 °C | +1 to +2 °C | >300 yr (slow) | Sea level +2 to +7 m
West Antarctic ice sheet (WAIS) | M | Ice volume (−) | Local Tair or, less, Tocean | +5 to +8 °C | +3 to +5 °C | >300 yr (slow) | Sea level +5 m
Atlantic thermohaline circulation | H | Overturning (−) | Freshwater input to North Atlantic | +0.1 to +0.5 Sv | +3 to +5 °C | ∼100 yr (gradual) | Regional cooling, sea level, ITCZ^a shift
El Niño Southern Oscillation | M | Amplitude (+) | Thermocline depth, sharpness in EEP^b | ?? | +3 to +6 °C | ∼100 yr (gradual) | Drought in SE Asia and elsewhere
Indian summer monsoon (ISM) | M | Rainfall (−) | Planetary albedo over India | 0.5 | NA | ∼1 yr (rapid) | Drought, decreased carrying capacity
Sahara/Sahel and W. African monsoon | M | Vegetation fraction (+) | Precipitation | 100 mm yr−1 | +3 to +5 °C | ∼10 yr (rapid) | Increased carrying capacity
Amazon rainforest | M | Tree fraction (−) | Precipitation, dry-season length | 1100 mm yr−1 | +3 to +4 °C | ∼50 yr (gradual) | Biodiversity loss, decreased rainfall
Boreal forest | L | Tree fraction (−) | Local Tair | ∼+7 °C | +3 to +5 °C | ∼50 yr (gradual) | Change in type of the ecosystem

a ITCZ, Intertropical Convergence Zone.
b EEP, Eastern Equatorial Pacific.
The analysis and prediction of tipping points of climate subsystems is currently being pursued in several streams of research; we should note in particular the excellent book by Marten Scheffer about tipping points in nature and society, which includes ecology and some climate studies, due to appear this year (Scheffer, 2009). Most of the research is devoted to creating climate models from first principles, tuning and initialising these models by assimilating geological data, and then running simulations of these models to predict the future. Climate models come in varying degrees of sophistication and realism, more complex ones employing up to 3 × 10^8 variables (Dijkstra, 2008). Predictions do not rely only on a single 'best model' starting from the 'real initial conditions'. Typically, all qualified models are run from ensembles of initial conditions and then statistical analysis over all generated outcomes is performed (IPCC, 2007).
An alternative to the model-and-simulate approach (and in some sense a short-cut) is to realise that mathematically some of the climate-tipping events correspond to bifurcations (see Section 3.3 for a discussion), and then to use time-series analysis techniques to extract precursors of these bifurcations directly from observational data. This method still benefits from the modelling efforts, because simulations generated by predictive models allow analysts to hone their prediction techniques on masses of high-quality data, with the possibility of seeing whether they can predict what the computer eventually displays as the outcome of its run. Transferring these techniques to real data from the Earth itself is undoubtedly challenging. Still, bifurcation predictions directly from real time-series will be a useful complement to modelling from first principles, because they do not suffer from all the many difficulties of building and initialising reliable computer models.
Our review discusses the current state of bifurcation predictions in climate time-series, focussing on methods, introduced by Held & Kleinen (2004) and Livina & Lenton (2007), for the analysis of the collapse of the global conveyor belt of oceanic water, the thermohaline circulation (THC). This conveyor is important, not so much for the water transport per se, but because of the heat and salt that it redistributes. The paper by Livina & Lenton (2007) is particularly noteworthy in that it includes what seems to be the first bifurcational predictions using real data, namely the Greenland ice-core paleo-temperature data spanning the time from 50 000 years ago to the present. The unevenly spaced data comprised 1586 points, and their DFA propagator (this quantity reaches +1 when the local decay rate vanishes; see Section 3.4.1) was calculated in sliding windows of length 500 data points. The results are shown in Figure 3.1, and the rapid warming at the end of the Younger Dryas event, around 11 500 years before the present, is spectacularly anticipated by an upward trend in the propagator, which heads towards its critical value of +1 at about the correct time.
[Figure 3.1: (a) Greenland paleo-temperature (°C), about −60 to −20, plotted against years before the present (40 000 to 0), marking the end of the Younger Dryas; (b) the DFA1 propagator rising towards its target value of +1, with a typical sliding window indicated.]
Figure 3.1 Results of Livina & Lenton (2007). (a) Greenland ice core (GISP2) paleo-temperature with an unevenly spaced record, visible in the varying density of symbols on the curve. The total number of data points is N = 1586. In (b) the DFA1 propagator is calculated in sliding windows of length 500 points and mapped into the middle points of the windows. A typical sliding window ending near the tipping is shown. Thus, from a prediction point of view, the propagator estimates would end at point A (see remarks at the end of Section 3.4.1).
In a second notable paper, Dakos et al. (2008) systematically estimated the LDR for real data in their analysis of eight ancient tipping events via reconstructed time-series. These are:
(a) the end of the greenhouse Earth about 34 million years ago, when the climate tipped from a tropical state (which had existed for hundreds of millions of years) into an icehouse state with ice caps, using data from tropical Pacific sediment cores;
(b) the end of the last glaciation, and the ends of three earlier glaciations, drawing on data from the Antarctica Vostok ice core;
(c) the Bølling–Allerød transition, dated about 14 000 years ago, using data from the Greenland GISP2 ice core;
(d) the end of the Younger Dryas event about 11 500 years ago, when the Arctic warmed by 7 °C in 50 years, drawing on data from the sediment of the Cariaco Basin in Venezuela;
(e) the desertification of North Africa, when there was a sudden shift from a savanna-like state with scattered lakes to a desert about 5000 years ago, using the sediment core from ODP Hole 658C, off the west coast of Africa.
In all of these cases, the dynamics of the system slows down before the transition. This slow-down was revealed by the short-term autocorrelation coefficient, ARC(1), of the time-series, which examines the extent to which the current points are correlated with preceding points and so gives an estimate of the LDR. It is expected to increase towards unity at an instability, as described in Section 3.4.
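A minimal Python sketch of this sliding-window procedure is given below. It assumes an evenly spaced, detrended series (real paleo-records, being unevenly spaced, first need interpolation and detrending, as in Dakos et al. (2008)); the function and variable names are ours, not drawn from those papers.

```python
import numpy as np

def arc1_series(x, window=500):
    """Lag-1 autocorrelation ARC(1) in sliding windows of an evenly
    spaced, detrended time-series x. ARC(1) estimates the propagator c
    in y_{n+1} = c * y_n; c rising towards +1 signals a vanishing local
    decay rate, i.e. critical slowing down before a bifurcation."""
    out = []
    for i in range(len(x) - window + 1):
        w = x[i:i + window] - np.mean(x[i:i + window])
        out.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(out)

# Synthetic test: an AR(1) process whose decay rate shrinks over time,
# mimicking critical slowing down as a tipping point is approached.
rng = np.random.default_rng(0)
n = 4000
c_true = np.linspace(0.5, 0.99, n)    # propagator drifting towards +1
x = np.zeros(n)
for t in range(1, n):
    x[t] = c_true[t] * x[t - 1] + rng.normal()
print(arc1_series(x)[::500])          # estimates trend towards unity
```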
3.2 Climate models as dynamical systems

Thinking about modelling is a good introduction to the ideas involved in predicting climate change, so we will start from this angle. Now, to an applied mathematician, the Earth's climate is just a very large dynamical system that evolves in time. Vital elements of this system are the Earth itself, its oceans and atmosphere, and the plants and animals that inhabit it (including, of course, ourselves). In summary, the five key components are often listed succinctly as atmosphere, ocean, land, ice and biosphere. Arriving as external stimuli to this system are sunlight and cosmic rays, etc.: these are usually viewed as driving forces, often just called forcing.
In modelling the climate we need not invoke the concepts of quantum mechanics (for the very small) or relativity theory (for the very big or fast). So one generally considers a system operating under the deterministic rules of classical physics, employing, for example, Newton's laws for the forces, and their effects, between adjacent large blocks of sea water or atmosphere. A block in the atmosphere might extend 100 km by 100 km horizontally and 1 km vertically, there being perhaps 20 blocks stacked vertically over the square base: for example, in a relatively low-resolution model, Selten et al. (2004) use blocks of size 3.75° in latitude and longitude with 18 blocks stacked vertically in their simulation. (For current high-resolution models see IPCC (2007).)
So henceforth in this section, we will assume that the climate has been modelled primarily as a large deterministic dynamical system evolving in time according to fixed rules. For physical, rather than biological, entities, these rules will usually relate to adjacent (nearest-neighbour) objects at a particular instant of time (with no significant delays or memory effects). It follows that our climate model will have characteristics in common with the familiar mechanical systems governed by Newton's laws of motion. From a given set of starting conditions (positions and velocities of all the components, for example), and external deterministic forcing varying in a prescribed fashion with time, there will be a unique outcome as the model evolves in time. Plotting the time-evolution of these positions and velocities in a conceptual multidimensional phase space is a central technique of dynamical systems theory. See Kantz & Schreiber (2003) for the relevance of phase space to time-series analysis.
Despite the unique outcome, the results of chaos theory remind us that the response may be essentially unknowable over timescales of interest because it can depend with infinite sensitivity on the starting conditions (and on the numerical approximations used in a computer simulation). To ameliorate this difficulty, weather and climate forecasters now often make a series of parallel simulations from an ensemble of initial conditions which are generated by adding different small perturbations to the original set; and they then repeat all of this on different
models. This ensemble approach, pioneered by Tim Palmer and others, is described by Buizza et al. (1998) and Sperber et al. (2001).
Mechanical systems are of two main types. First is the idealised closed conservative (sometimes called Hamiltonian) system in which there is no input or output of energy, which is therefore conserved. These can be useful in situations where there is very little 'friction' or energy dissipation, such as when studying the orbits of the planets. A conservative system, like a pendulum with no friction at the pivot and no air resistance, tends to move for ever: it does not exhibit transients, and does not have any attractors. Second is the more realistic dissipative system, in which energy is continuously lost (or dissipated). An example is a real pendulum which eventually comes to rest in the hanging-down position, which we call a point attractor. A more complex example is a damped pendulum driven into resonance by steady harmonic forcing from an AC electromagnet: here, after some irregular transient motion, the pendulum settles into a stable 'steady' oscillation, such as a periodic attractor or a chaotic attractor. In general, a dissipative dynamical system will settle from a complex transient motion to a simpler attractor as the time increases towards infinity. These attractors, the stable steady states of the system, come in four main types: the point attractors, the periodic attractors, the quasi-periodic (toroidal) attractors and the chaotic attractors (Thompson & Stewart, 2002). Climate models will certainly not be conservative, and will dissipate energy internally, though they also have some energy input: they can reasonably be expected to have the characteristics of the well-studied dissipative systems of (for example) engineering mechanics, and are, in particular, well known to be highly non-linear.

3.3 Concepts from bifurcation theory

A major component of non-linear dynamics is the theory of bifurcations, these being points in the slow evolution of a system at which qualitative changes or even sudden jumps of behaviour can occur. In the field of dissipative dynamics, co-dimension-1 bifurcations are those events that can be 'typically' encountered under the slow sweep of a single control parameter. A climate model will often have (or be assumed to have) such a parameter, under the quasi-static variation of which the climate is observed to gradually evolve on a 'slow' timescale. Slowly varying parameters are external influences that vary on geological timescales, for example, the obliquity of the Earth's orbit. Another common type of slowly varying parameter occurs if one models only a subsystem of the climate, for example, oceanic water circulation. Then the influence of an interacting subsystem (for example, freshwater forcing from melting ice sheets) acts as a parameter that changes slowly over time.
Table 3.2 Safe bifurcations. These include the supercritical forms of the local bifurcations and the less well-known global 'band merging'. The latter is governed by a saddle-node event on a chaotic attractor. Alternative names are given in parentheses.

(a) Local supercritical bifurcations
1. Supercritical Hopf                                      Point to cycle
2. Supercritical Neimark–Sacker (secondary Hopf)           Cycle to torus
3. Supercritical flip (period-doubling)                    Cycle to cycle
(b) Global bifurcations
4. Band merging                                            Chaos to chaos

These bifurcations are characterised by the following features:
Subtle: continuous supercritical growth of new attractor path
Safe: no fast jump or enlargement of the attracting set
Determinate: single outcome even with small noise
No hysteresis: path retraced on reversal of control sweep
No basin change: basin boundary remote from attractors
No intermittency: in the responses of the attractors

Table 3.3 Explosive bifurcations. These are less common global events, which occupy an intermediate position between the safe and dangerous forms. Alternative names are given in parentheses.

5. Flow explosion (omega explosion, SNIPER)                Point to cycle
6. Map explosion (omega explosion, mode-locking)           Cycle to torus
7. Intermittency explosion: flow                           Point to chaos
8. Intermittency explosion: map (temporal intermittency)   Cycle to chaos
9. Regular-saddle explosion (interior crisis)              Chaos to chaos
10. Chaotic-saddle explosion (interior crisis)             Chaos to chaos

These bifurcations are characterised by the following features:
Catastrophic: global events, abrupt enlargement of attracting set
Explosive: enlargement, but no jump to remote attractor
Determinate: with single outcome even with small noise
No hysteresis: paths retraced on reversal of control sweep
No basin change: basin boundary remote from attractors
Intermittency: lingering in old domain, flashes through the new
An encounter with a bifurcation during this evolution will be of great interest and significance, and may give rise to a dynamic jump on a much faster timescale. A complete list of the (typical) co-dimension-1 bifurcations, to the knowledge of the authors at the time of writing, is given by Thompson & Stewart (2002). It is this list of local and global bifurcations that is used to populate Tables 3.2–3.5. The technical details and terminology of these tables need not concern the general
Table 3.4 Dangerous bifurcations. These include the ubiquitous folds, where a path reaches a smooth maximum or minimum value of the control parameter, the subcritical local bifurcations, and some global events. They each trigger a sudden jump to a remote 'unknown' attractor. In climate studies these would be called tipping points, as indeed might other nonlinear phenomena. Alternative names are given in parentheses.

(a) Local saddle-node bifurcations
11. Static fold (saddle-node of fixed point)               From point
12. Cyclic fold (saddle-node of cycle)                     From cycle
(b) Local subcritical bifurcations
13. Subcritical Hopf                                       From point
14. Subcritical Neimark–Sacker (secondary Hopf)            From cycle
15. Subcritical flip (period-doubling)                     From cycle
(c) Global bifurcations
16. Saddle connection (homoclinic connection)              From cycle
17. Regular-saddle catastrophe (boundary crisis)           From chaos
18. Chaotic-saddle catastrophe (boundary crisis)           From chaos

These bifurcations are characterised by the following features:
Catastrophic: sudden disappearance of attractor
Dangerous: sudden jump to new attractor (of any type)
Indeterminacy: outcome can depend on global topology
Hysteresis: path not reinstated on control reversal
Basin: tends to zero (b), attractor hits edge of residual basin (a, c)
No intermittency: but critical slowing in global events
reader, but they do serve to show the vast range of bifurcational phenomena that can be expected even in the simplest nonlinear dynamical systems, and certainly in climate models as we see in Section 3.6. A broad classification of the co-dimension-1 attractor bifurcations of dissipative systems into safe, explosive and dangerous forms (Thompson et al., 1994) is illustrated in Tables 3.2–3.4 and Figure 3.2, while all are summarised in Table 3.5 together with notes on their precursors. It must be emphasised that these words are used in a technical sense. Even though in general the safe bifurcations are often literally safer than the dangerous bifurcations, in certain contexts this may not be the case. In particular, the safe bifurcations can still be in a literal sense very dangerous: as when a structural column breaks at a ‘safe’ buckling bifurcation! Note carefully here that when talking about bifurcations we use the word ‘local’ to describe events that are essentially localised in phase space. Conversely we use the word ‘global’ to describe events that involve distant connections in phase space. With this warning, there should be no chance of confusion with our use, elsewhere, of the word ‘global’ in its common parlance as related to the Earth.
Table 3.5 List of all co-dimension-1 bifurcations of continuous dissipative dynamics, with notes on their precursors. Here S, E and D signify the safe, explosive and dangerous events, respectively. LDR is the local decay rate, measuring how rapidly the system returns to its steady state after a small perturbation. Being a linear feature, the LDR of a particular type of bifurcation is not influenced by the sub- or supercritical nature of the bifurcation.

Precursors of co-dimension-1 bifurcations
Supercritical Hopf              S: point to cycle   LDR → 0 linearly with control
Supercritical Neimark           S: cycle to torus   LDR → 0 linearly with control
Supercritical flip              S: cycle to cycle   LDR → 0 linearly with control
Band merging                    S: chaos to chaos   Separation decreases linearly
Flow explosion                  E: point to cycle   Path folds. LDR → 0 linearly along path
Map explosion                   E: cycle to torus   Path folds. LDR → 0 linearly along path
Intermittency explosion: flow   E: point to chaos   LDR → 0 linearly with control
Intermittency explosion: map    E: cycle to chaos   LDR → 0 as trigger (fold, flip, Neimark)
Regular interior crisis         E: chaos to chaos   Lingering near impinging saddle cycle
Chaotic interior crisis         E: chaos to chaos   Lingering near impinging chaotic saddle
Static fold                     D: from point       Path folds. LDR → 0 linearly along path
Cyclic fold                     D: from cycle       Path folds. LDR → 0 linearly along path
Subcritical Hopf                D: from point       LDR → 0 linearly with control
Subcritical Neimark             D: from cycle       LDR → 0 linearly with control
Subcritical flip                D: from cycle       LDR → 0 linearly with control
Saddle connection               D: from cycle       Period of cycle tends to infinity
Regular exterior crisis         D: from chaos       Lingering near impinging saddle cycle
Chaotic exterior crisis         D: from chaos       Lingering near impinging accessible saddle
In Tables 3.2–3.4 we give the names of the bifurcations in the three categories, with alternative names given in parentheses. We then indicate the change in the type of attractor that is produced by the bifurcation, such as a point to a cycle, etc. Some of the attributes of each class (safe, explosive or dangerous) are then listed at the foot of each table. Among these attributes, the concept of a basin requires some comment here. In the multidimensional phase space of a dissipative dynamical system (described in Section 3.2) each attractor, or stable state, is surrounded by a region of starting points from which a displaced system would return to the attractor. The set of all these points constitutes the basin of attraction. If the system were displaced to, and then released from any point outside the basin, it would move to a different attractor (or perhaps to infinity). Basins also undergo changes and bifurcations, but for simplicity of exposition in this brief review we focus on the more common attractor bifurcations. In Figure 3.2 we have schematically illustrated three bifurcations that are codimension-1, meaning that they can be typically encountered under the variation of a single control parameter, μ, which is here plotted horizontally in the left column.
[Figure 3.2: three rows of schematic plots for (a) a safe event (supercritical Hopf), (b) an explosive event (flow explosion) and (c) a dangerous event (static fold), each showing stable and unstable paths of the response q against the control μ, together with time series for μ < μcrit and μ > μcrit.]
Figure 3.2 Schematic illustration of the three bifurcation types. On the left the control parameter, μ, is plotted horizontally and the response, q, vertically. The middle column shows the time series of a response to small disturbances if μ < μcrit. On the right we show how the system drifts away from its previously stable steady state if μ > μcrit. The different types of events are (from top to bottom) (a) safe, (b) explosive and (c) dangerous.
The response, q, is plotted vertically. To many people, the most familiar (safe) bifurcation is what is called the supercritical pitchfork, or stable-symmetric point of bifurcation (Thompson & Hunt, 1973). This was first described by Euler (1744) in his classic analysis of the buckling of a slender elastic column, and is taught to engineering students as 'Euler buckling', in which the load carried by the column is the control parameter. Poincaré (1885) explored a number of applications in astrophysics. In this event, the trivial primary equilibrium path, on which the column has no lateral deflection (q = 0), becomes unstable at a critical point, C, where μ = μcrit. Passing vertically through C, and then curving towards increasing μ, is a stable secondary equilibrium path of deflected states, the so-called post-buckling path. The existence of (stable) equilibrium states at values of μ > μcrit is why we call the bifurcation a supercritical pitchfork. In contrast, many shell-like elastic structures exhibit a dangerous bifurcation with an (unstable) post-buckling path that curves towards decreasing values of the load, μ, and is accordingly called a subcritical pitchfork. These two pitchforks are excellent examples of safe and dangerous bifurcations, but they do not appear in our lists because they are not co-dimension-1 events in generic systems. That the bifurcation of a column is not co-dimension-1 manifests itself in the fact that a perfectly straight column is not a typical object; any real column will have small imperfections, lack of straightness being the most obvious one. These imperfections round off the corners of the intersection of the primary and secondary paths (in the manner of the contours of a mountain pass), and destroy the bifurcation in the manner described by catastrophe theory (Poston & Stewart, 1978; Thompson, 1982). We shall see a subcritical pitchfork bifurcation in a schematic diagram of the THC response due to Rahmstorf (2000) in Figure 3.3 below. This is only observed in very simple (non-generic) models and is replaced by a fold in more elaborate ones.
It is because of this lack of typicality of the pitchforks that we have chosen to illustrate the safe and dangerous bifurcations in Figure 3.2 by other (co-dimension-1) bifurcations. As a safe event, we show in Figure 3.2(a) the supercritical Hopf bifurcation. This has an equilibrium path increasing monotonically with μ whose point attractor loses its stability at C in an oscillating fashion, throwing off a path of stable limit cycles which grow towards increasing μ. This occurs, for example, at the onset of vibrations in machining, and triggers the aerodynamic flutter of fins and ailerons in aircraft. Unlike the pitchfork, this picture is not qualitatively changed by small perturbations of the system. As our explosive event, we show in Figure 3.2(b) the flow explosion involving a saddle-node fold on a limit cycle. Here the primary path of point attractors reaches a vertical tangent, and a large oscillation immediately ensues. As with the supercritical Hopf, all paths are re-followed on reversing the sweep of the control parameter μ: there is no hysteresis.
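For reference, the distinctions just drawn can be summarised by the standard normal forms of bifurcation theory (a textbook summary, not taken from the chapter itself):

```latex
\dot{q} = \mu q - q^{3}
  % supercritical pitchfork: stable post-critical branches q = \pm\sqrt{\mu}
\dot{q} = \mu q + q^{3}
  % subcritical pitchfork: unstable branches coexist with q = 0 for \mu < 0
\dot{q} = \varepsilon + \mu q - q^{3}
  % an imperfection \varepsilon \neq 0 destroys the pitchfork, leaving a fold
\dot{q} = \mu - q^{2}
  % static fold: equilibria q^{*} = \pm\sqrt{\mu} merge and vanish at \mu = 0
```

The third form makes the catastrophe-theory point explicit: however small the imperfection ε, the clean intersection of the primary and secondary paths is destroyed.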
Finally, as our dangerous event in Figure 3.2(c), we have chosen the simple static fold (otherwise known as a saddle-node bifurcation), which is actually the most common bifurcation encountered in scientific applications; we shall be discussing one for the THC in Section 3.6.1. Such a fold is in fact generated when a perturbation rounds off the (untypical) subcritical pitchfork, revealing a sharp imperfection sensitivity notorious in the buckling of thin aerospace shell structures (Thompson & Hunt, 1984). In the fold, an equilibrium path of stable point attractors being followed under increasing μ folds smoothly backwards as an unstable path towards decreasing μ, as shown. Approaching the turning point at μcrit there is a gradual loss of attracting strength, with the LDR of transient motions (see Section 3.4) passing directly through zero with progress along the arc-length of the path. This makes its variation with μ parabolic, but this fine distinction seems to have little significance in the climate tipping studies of Sections 3.6–3.7. Luckily, in these studies, the early decrease of LDR is usually identified long before any path curvature is apparent. As μ is increased through μcrit the system finds itself with no equilibrium state nearby, so there is inevitably a fast dynamic jump to a remote attractor of any type. On reversing the control sweep, the system will stay on this remote attractor, laying the foundation for a possible hysteresis cycle.

We see immediately from these bifurcations that it is primarily the dangerous forms that will correspond to, and underlie, the climate tipping points that concern us here. (Though if, for example, we adopt Lenton's relatively relaxed definition of a tipping point based on time-horizons (see Section 3.5), even a safe bifurcation might be the underlying trigger.) Understanding the bifurcational aspects will be particularly helpful in a situation where some quasi-stationary dynamics can be viewed as an equilibrium path of a mainly deterministic system, which may nevertheless be stochastically perturbed by noise. We should note that the dangerous bifurcations are often indeterminate in the sense that the remote attractor to which the system jumps can depend with infinite sensitivity on the precise manner in which the bifurcation is realised. This arises (quite commonly and typically) when the bifurcation point is located exactly on a fractal basin boundary (McDonald et al., 1985; Thompson, 1992, 1996). In a model, repeated runs from slightly varied starting conditions would be needed to explore all the possible outcomes.

Table 3.5 lists the precursors of the bifurcations from Tables 3.2–3.4 that one would typically use to determine if a bifurcation is nearby in a (mostly) deterministic system. One perturbs the observed steady state by a small 'kick'. As the steady state is still stable, the system relaxes back to the steady state. This relaxation decays exponentially in proportion to exp(λt), where t is the time and λ (a negative quantity in this context) is the critical eigenvalue of the destabilizing mode (Thompson & Stewart, 2002). The local decay rate, LDR (called κ in Section 3.4), is the negative of λ.
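To make this 'kick and watch the decay' precursor concrete, the following minimal sketch (our own illustration, not code from the studies cited here) integrates the fold normal form q̇ = (μcrit − μ) − q² after a small kick and recovers the LDR from the transient; for this normal form the exact value is κ = 2√(μcrit − μ), which indeed falls to zero at the fold:

    import numpy as np

    def relax(mu, mu_crit=1.0, kick=0.01, dt=1e-3, t_end=20.0):
        """Integrate dq/dt = (mu_crit - mu) - q**2 from a slightly kicked stable state."""
        q_eq = np.sqrt(mu_crit - mu)            # stable equilibrium of the fold normal form
        q = q_eq + kick
        ts, devs = [], []
        for n in range(int(t_end / dt)):
            q += dt * ((mu_crit - mu) - q * q)  # forward-Euler step
            ts.append((n + 1) * dt)
            devs.append(q - q_eq)               # deviation from the steady state
        return np.array(ts), np.array(devs)

    def estimate_ldr(ts, devs):
        """Fit log(deviation) = const - kappa*t; minus the slope is the LDR."""
        mask = devs > 1e-8                      # ignore numerically vanished transients
        slope, _ = np.polyfit(ts[mask], np.log(devs[mask]), 1)
        return -slope

    for mu in (0.5, 0.9, 0.99):
        ts, devs = relax(mu)
        print(f"mu = {mu:4.2f}: LDR estimate = {estimate_ldr(ts, devs):.3f}, "
              f"exact 2*sqrt(1 - mu) = {2 * np.sqrt(1.0 - mu):.3f}")

Repeating the experiment for μ closer and closer to μcrit shows the decay rate collapsing towards zero: the 'slowing of transients' exploited throughout the rest of this chapter.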
Defined in this way, a positive LDR tending to zero quantifies the 'slowing of transients' as we head towards an instability. We see that the vast majority (though not all) of the typical events display the useful precursor that the LDR vanishes at the bifurcation (although the decay is in some cases oscillatory). Under light stochastic noise, the variance of the critical mode will correspondingly exhibit a divergence proportional to the reciprocal of the LDR. The LDR precursor certainly holds, with monotonic decay, for the static fold, which is what we shall be looking at in Section 3.6.1 in the collapse of the North Atlantic thermohaline circulation. The fact, noted in Table 3.5, that close to the bifurcation some LDRs vary linearly with the control, while some vary linearly along the (folding) path, is a fine distinction that may not be useful or observable in climate studies.

The outline of the co-dimension-1 bifurcations that we have just presented applies to dynamical flows which are generated by continuous systems where time changes smoothly, as in the real world and as in those computer models that are governed by differential equations. There are closely analogous theories and classifications for the bifurcations of dynamical maps governed (for example) by iterated systems, where time changes in finite steps. It is these analogous theories that will be needed when dealing with experimental data sets from ice cores, etc., as we shall show in the following section. Meanwhile, the theory for discrete-time data has direct relevance to the possibility of tipping points in parts of the biosphere where time is often best thought of in generations or seasons; in some populations, such as insects, one generation disappears before the next is born.

The equivalent concept that we shall need for analysing discrete-time data is as follows. The method used in our examples from the recent literature (in Sections 3.6 and 3.7) is to search for an underlying linearised deterministic map of the form yn+1 = cyn which governs the critical slowing mode of the transients. This equation represents exponential decay when the eigenvalue of the mapping, c, is less than 1, but exponential growth when c is greater than 1. So, corresponding to the LDR dropping to zero, we shall be expecting c to increase towards unity.

3.4 Analysis of time-series near incipient bifurcations

Time-series of observational data can help to predict incipient bifurcations in two ways. First, climate models, even if derived from first principles, require initial conditions on a fine mesh and depend on parameters (for example, the effective re-radiation coefficient from the Earth's land surface). Both initial conditions and parameters are often not measurable directly but must be extracted indirectly by
fitting the output of models to training data. This process is called data assimilation. The alternative is to skip the modelling step and search for precursors of incipient dangerous bifurcations directly in a monitored time-series. A typical example of an observational time-series is shown (later) in the upper part of Figure 3.6. The time-series clearly shows an abrupt transition at about 34 million years before the present. One of the aims of time-series analysis would be to predict this transition (and, ideally, its time) from features of the time-series prior to the transition. In this example one assumes that the system is in an equilibrium-like state which then disappears in a static fold, 34 million years ago. According to Table 3.5 the LDR tends to zero as we approach such a bifurcation. A decreasing LDR corresponds to a slowing down of small-scale features in the time-series, which one can expect to be visible in many different ways. If it is possible to apply small pulse-like disturbances (or one knows that this type of disturbance has been present during the recording), the LDR is observable directly as the recovery rate from this disturbance (this was suggested for ecological systems by van Nes & Scheffer (2007)). However, the natural disturbances that are typically present are noise-induced fluctuations around the equilibrium. These fluctuations on short timescales can be used to extract information about a decrease of the LDR. For example, the power spectrum of the noisy time-series shifts toward lower frequencies. This reddening of the spectrum was analysed and tested by Kleinen et al. (2003) as an indicator of a decrease of the LDR using the box models by Stommel (1961), and by Biggs et al. (2009) in a fisheries model. Carpenter & Brock (2006) find that a decreasing LDR causes an increasing variance of the stationary temporal distributions in their study of stochastic ecological models. Also in studies of ecological models, Guttal & Jayaprakash (2008, 2009) find that increasing higher-order moments (such as skewness) of the temporal distribution can be a reliable early warning signal for a regime shift, as can increasing higher-order moments of spatial distributions. Making the step from temporal to spatial distributions is of interest because advancing technology may be able to increase the accuracy of measured spatial distributions more than measurements of temporal distributions (which require data from the past).

3.4.1 Autoregressive modelling and detrended fluctuation analysis

Held & Kleinen (2004) use the noise-induced fluctuations on the short timescale to extract information about the LDR using autoregressive (AR) modelling (see Box et al. (1994) for a textbook on statistical forecasting). In order to apply AR modelling to unevenly spaced, drifting data from geological records, Dakos et al. (2008) interpolated and detrended the time-series. We outline the procedure of Dakos et al. (2008) in more detail for the example of a single-valued time-series
that is assumed to follow a slowly drifting equilibrium of a deterministic, dissipative dynamical system disturbed by noise-induced fluctuations.

(1) Interpolation. If the time spacing between measurements is not equidistant (which is typical for geological time-series), then one interpolates (for example, linearly) to obtain a time-series on an equidistant mesh of time steps Δt. The following steps assume that the time step Δt satisfies 1/κ ≫ Δt ≫ 1/κi, where κ is the LDR of the time-series and the κi are the decay rates of the other, non-critical, modes. For example, Held & Kleinen (2004) found that Δt = 50 years fits roughly into this interval for their tests on simulations (see Figure 3.4 below). The result of the interpolation is a time-series xn of values approximating measurements on a mesh tn with time steps Δt.

(2) Detrending. To remove the slow drift of the equilibrium one finds and subtracts the slowly moving average of the time-series xn. One possible choice is the average X(tn) of the time-series xn taken with a Gaussian kernel of a certain bandwidth d. The result of this step is a time-series yn = xn − X(tn) which fluctuates around zero as a stationary time-series. Notice that X(tn) is the smoothed curve in the upper part of Figure 3.6 below.

(3) Fit LDR in moving window. One assumes that the remaining time-series, yn, can be modelled approximately by a stable scalar linear mapping, the so-called AR(1) model, disturbed by noise
yn+1 = cyn + σηn, where σηn is the instance of a random error at time tn and c (the mapping eigenvalue, sometimes called the propagator) is the correlation between successive elements of the time-series yn. In places we follow other authors by calling c the first-order autoregressive coefficient, written as ARC(1). We note that under our assumptions c is related to the LDR, κ, via c = exp(−κΔt), consistent with the definitions above. If one assumes that the propagator, c, drifts slowly and that the random error, σηn, is independent and identically distributed (i.i.d.), sampled from a normal distribution, then one can obtain the optimal approximation of the propagator c by an ordinary least-squares fit of yn+1 = cyn over a moving time window [tm−k, …, tm+k]. Here the window length is 2k, and the estimation of c will be repeated as the centre of the window, given by m, moves through the field of data. The solution cm of this least-squares fit is an approximation of c(tm) = exp(−κ(tm)Δt) and thus also gives an approximation of the LDR, κ(tm), at the middle of the window. The evolution of the propagator c is shown in the bottom of Figure 3.6. Finally, if one wants to make a prediction about the time tf at which the static fold occurs, one has to extrapolate a fit of the propagator time-series c(tm) to find the time tf such that c(tf) = 1. (A schematic implementation of steps 1–3 is sketched at the end of this section.)

The AR(1) model is only suitable for finding out whether the equilibrium is close to a bifurcation or not. It is not able to distinguish between the possible types of bifurcation listed in Table 3.5. Higher-order AR models can be reconstructed. For the data presented by Dakos et al. (2008) these higher-order AR models confirm that, first,
the first-order coefficient really is dominant, and, second, that this coefficient is increasing before the transition.

Livina & Lenton (2007) modified step 3 of the AR(1) approach of Held & Kleinen (2004), aiming to find estimates also for shorter time-series with a long-range memory, using detrended fluctuation analysis (DFA: originally developed by Peng et al. (1994) to detect long-range correlation in DNA sequences). For DFA one determines the variance V(k) of the cumulated sum of the detrended time-series yn over windows of size k and fits the relation between V(k) and k to a power law: V(k) ∼ k^α. The exponent α approaches 3/2 when the LDR of the underlying deterministic system decreases to zero. The method of Livina & Lenton (2007) was tested for simulations of the GENIE-1 model and on real data for the Greenland ice-core paleo-temperature (GISP2) data spanning the time from 50 000 years ago to the present. Extracting bifurcational precursors such as the ARC(1) propagator from the GISP2 data is particularly challenging because the data set is comparatively small (1586 points) and unevenly spaced. Nevertheless, the propagator estimate extracted via Livina & Lenton's detrended fluctuation analysis shows not only an increase, but its intersection with unity would also have predicted the rapid transition at the end of the Younger Dryas accurately. See Lenton et al. (2009) for further discussion of the GENIE simulations.

Both methods, AR modelling and DFA, can in principle be used for (nearly) model-free prediction of tipping induced by a static fold. When testing the accuracy of predictions on model-generated or real data one should note the following two points. First, assign the ARC(1) estimate to the time in the middle of the moving time window for which it has been fitted. Dakos et al. (2008) have shifted the time argument of their ARC(1) estimate to the endpoint of the fitting interval because they were not concerned with accurate prediction (see Section 3.4.2). Second, use only those parts of the time-series c(t) that were derived from data prior to the onset of the transition. We can illustrate this using Figure 3.1. The time interval between adjacent data points used by Livina & Lenton (2007) and shown in Figure 3.1(a) is not a constant. The length of the sliding window in which the DFA1 propagator is repeatedly estimated is likewise variable. However, we show in Figure 3.1(b) a typical length of the window, drawn as if the right-hand leading edge of the window had just reached the tipping point. For this notional window, the DFA1 result would be plotted in the centre of the window at point A. Since in a real prediction scenario we cannot have the right-hand leading edge of the window passing the tipping point, the DFA1 graph must be imagined to terminate at A. Although when working with historical or simulation data it is possible to allow the leading edge to pass the tipping point (as Livina & Lenton have done), the results after A become increasingly erroneous from a prediction point of view
because the desired results for the pre-tipping DFA1 are increasingly contaminated by the spurious and irrelevant behaviour of the temperature graph after the tip.

3.4.2 Comments on predictive power

Ultimately, methods based on AR modelling have been designed to achieve quantitative predictions, giving an estimate of when tipping occurs with a certain confidence interval (similar to Figure 3.4). We note, however, that Dakos et al. (2008), which is the most systematic study applying this analysis to geological data, make a much more modest claim: the propagator c(t) shows a statistically significant increase prior to each of the eight tipping events they investigated (listed in the introduction); correspondingly the estimated LDR will show a statistically significant decrease. Dakos et al. (2008) applied statistical rank tests to the propagator c(tn) to establish statistical significance.

In the procedures of Section 3.4.1 one has to choose a number of method parameters that are restricted by a priori unknown quantities: for example, the step size Δt for interpolation, the kernel bandwidth d and the window length 2k. A substantial part of the analysis in Dakos et al. (2008) consisted of checking that the observed increase of c is largely independent of the choice of these parameters, thus demonstrating that the increase of c is not an artefact of their method. The predictions one would make from the ARC(1) time-series, c(t), are, however, not as robust on the quantitative level (this will be discussed for two examples of Dakos et al. (2008) in Section 3.7). For example, changing the window length 2k or the kernel bandwidth d shifts the time-series of the estimated propagator horizontally and vertically: even a shift of 10 per cent corresponds to a shift in the estimated tipping time of possibly thousands of years. Also, the interpolation step size Δt (interpolation is necessary due to the unevenly spaced records and the inherently non-discrete nature of the time-series) may cause spurious autocorrelation.

Another difficulty arises from an additional assumption one has to make for accurate prediction: that the underlying control parameter is drifting (nearly) linearly in time during the recorded time-series. Even this assumption is not sufficient. A dynamical system can nearly reach the tipping point under gradual variation (say, increase) of a control parameter but turn back on its own if the parameter is increased further. The only definite conclusion one can draw from a decrease of the LDR to a small value is that generically there should exist a perturbation that leads to tipping. For a recorded time-series this perturbation may simply not have happened. The term 'generic' means that certain second-order terms in the underlying non-linear deterministic system should have a substantially larger modulus than the vanishing LDR (Thompson & Stewart, 2002). This effect may lead to false positives when testing predictions using past data, even if the AR models are perfectly accurate and the assumptions behind them are satisfied.
These difficulties all conspire to restrict the level of certainty that can be gained from predictions based on time series. Fortunately, from a geo-engineering point of view, these difficulties may be of minor relevance because establishing a decrease of the LDR is of the greatest interest in its own right. After all, the LDR is the primary direct indicator of sensitivity of the climate to perturbations (such as geo-engineering measures).
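To summarise Sections 3.4.1–3.4.2 in executable form, the sketch below implements steps 1–3 (interpolation, Gaussian-kernel detrending, moving-window AR(1) fit) together with a bare-bones DFA1 exponent; it is a minimal illustration under the stated assumptions, with function names and default parameters of our own choosing, and is not the exact code of Held & Kleinen (2004), Livina & Lenton (2007) or Dakos et al. (2008):

    import numpy as np

    def ar1_propagator_series(t_raw, x_raw, dt, bandwidth, k):
        """Steps 1-3 of Section 3.4.1; returns mid-window times and c(t_m) estimates."""
        # (1) Interpolate onto an equidistant mesh of time step dt
        t = np.arange(t_raw[0], t_raw[-1], dt)
        x = np.interp(t, t_raw, x_raw)
        # (2) Detrend: subtract a Gaussian-kernel moving average X(t_n)
        w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
        y = x - (w * x[None, :]).sum(axis=1) / w.sum(axis=1)
        # (3) Ordinary least squares for y_{n+1} = c y_n in a moving window of
        #     length 2k, assigning the estimate to the middle of the window
        t_mid, c_est = [], []
        for m in range(k, len(y) - k - 1):
            y0, y1 = y[m - k:m + k], y[m - k + 1:m + k + 1]
            t_mid.append(t[m])
            c_est.append(np.dot(y0, y1) / np.dot(y0, y0))
        return np.array(t_mid), np.array(c_est)

    def dfa1_exponent(y, window_sizes=(8, 16, 32, 64)):
        """DFA1 as described above: fluctuation of the cumulated sum of y in
        windows of size k, fitted to k**alpha (alpha -> 3/2 as the LDR -> 0)."""
        z = np.cumsum(y - np.mean(y))
        fluct = []
        for k in window_sizes:
            resid2 = []
            for w in range(len(z) // k):
                seg = z[w * k:(w + 1) * k]
                tt = np.arange(k)
                seg_fit = np.polyval(np.polyfit(tt, seg, 1), tt)  # per-window linear detrend
                resid2.append(np.mean((seg - seg_fit) ** 2))
            fluct.append(np.sqrt(np.mean(resid2)))
        alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluct), 1)
        return alpha

An approximate LDR then follows from κ(tm) = −ln c(tm)/Δt, and a notional tipping time from extrapolating c(t) forward to +1 (a sketch of that extrapolation step is given in Section 3.6). Note the O(N²) kernel smoothing: adequate for an illustration, but a long record would call for a truncated kernel.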
3.5 Lenton’s tipping elements Work at the beginning of this century which set out to define and examine climate tipping (Lockwood, 2001; Rahmstorf, 2001; National Research Council, 2002; Alley et al., 2003; Rial, et al., 2004) focussed on abrupt climate change: namely when the Earth system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause, with some degree of irreversibility. As we noted in Section 3.3, this makes the tipping points essentially identical to the dangerous bifurcations of non-linear dynamics. As well as tipping points, the concept has arisen of tipping elements, these being well-defined subsystems of the climate which work (or can be assumed to work) fairly independently, and are prone to sudden change. In modelling them, their interactions with the rest of the climate system are typically expressed as a forcing that varies slowly over time. Recently, Lenton et al. (2008) have made a critical evaluation of policy-relevant tipping elements in the climate system that are particularly vulnerable to human activities. To do this they built on the discussions and conclusions of a recent international workshop entitled ‘Tipping points in the Earth system’ held at the British Embassy, Berlin, which brought together 36 experts in the field. Additionally they conducted an expert elicitation from 52 members of the international scientific community to rank the sensitivity of these elements to global warming. In their work, they use the term tipping element to describe a subsystem of the Earth system that is at least sub-continental in scale, and can be switched into a qualitatively different state by small perturbations. Their definition is in some ways broader than that of some other workers because they wish to embrace the following: non-climatic variables; cases where the transition is actually slower than the anthropogenic forcing causing it; cases where a slight change in control may have a qualitative impact in the future without however any abrupt change. To produce their short list of key climatic tipping elements, summarised in Table 3.1 (in the introduction) and below, Lenton et al. (2008) considered carefully to what extent they satisfied the following four conditions guaranteeing their relevance to
international decision-making meetings such as Copenhagen (2009), the daughter of Kyoto.

Condition 1. There is an adequate theoretical basis (or past evidence of threshold behaviour) to show that there are parameters controlling the system that can be combined into a single control μ for which there exists a critical control value μcrit. Exceeding this critical value leads to a qualitative change in a crucial system feature after prescribed times.

Condition 2. Human activities are interfering with the system such that decisions taken within an appropriate political time horizon can determine whether the critical value for the control, μcrit, is reached.

Condition 3. The time to observe a qualitative change plus the time to trigger it lie within an ethical time horizon which recognizes that events too far away in the future may not have the power to influence today's decisions.

Condition 4. A significant number of people care about the expected outcome. This may be because (i) it affects significantly the overall mode of operation of the Earth system, such that the tipping would modify the qualitative state of the whole system, or (ii) it would deeply affect human welfare, such that the tipping would have impacts on many people, or (iii) it would seriously affect a unique feature of the biosphere.

In a personal communication, Tim Lenton kindly summarised his latest views as to which of these are likely to be governed by an underlying bifurcation. They are listed in the headings as follows.

(1) Arctic summer sea-ice: possible bifurcation. Area coverage has strong positive feedback, and may exhibit bi-stability with perhaps multiple states for ice thickness (if the area covered by ice decreases, less energy from insolation is reflected, resulting in increasing temperature and, thus, decreased ice coverage). The instability is not expected to be relevant to Southern Ocean sea-ice because the Antarctic continent covers the region over which it would be expected to arise (Maqueda et al., 1998). Some researchers
think a summer ice-loss threshold, if not already passed, may be very close and a transition could occur well within this century. However, Lindsay & Zhang (2005) are not so confident about a threshold, and Eisenman & Wettlaufer (2009) argue that there is probably no bifurcation for the loss of seasonal (summer) sea-ice cover: but there may be one for the year-round loss of ice cover. See also Winton (2006).

(2) Greenland ice sheet: bifurcation. Ice-sheet models generally exhibit multiple stable states with non-linear transitions between them (Saltzman, 2002), and this is reinforced by paleo-data. If a threshold is passed, the IPCC (2007) predicts a timescale of greater than 1000 years for a collapse of the sheet. However, given the uncertainties in modelling, a lower limit of 300 years is conceivable (Hansen, 2005).

(3) West Antarctic ice sheet: possible bifurcation. Most of the West Antarctic ice sheet (WAIS) is grounded below sea level and could collapse if a retreat of the grounding line (between the ice sheet and the ice shelf) triggers a strong positive feedback. The ice sheet has been prone to collapse, and models show internal instability. There are occasional major losses of ice in the so-called Heinrich events. Although the IPCC (2007) has not quoted a threshold, Lenton estimates a range that is accessible this century. Note that a rapid sea-level rise (of greater than 1 metre per century) is more likely to come from the WAIS than from the Greenland ice sheet.

(4) Atlantic thermohaline circulation: fold bifurcation. A shut-off in Atlantic thermohaline circulation can occur if sufficient freshwater enters in the North to halt the density-driven North Atlantic Deep Water formation. Such THC changes played an important part in rapid climate changes recorded in Greenland during the last glacial cycle (Rahmstorf, 2002): see Section 3.7.2 for predictive studies of the Younger Dryas tipping event. As described in Section 3.6.1, a multitude of mathematical models, backed up by past data, show the THC to exhibit bi-stability and hysteresis with a fold bifurcation (see Figure 3.3 and discussion in Section 3.6.1). Since the THC helps to drive the Gulf Stream, a shut-down would significantly affect the climate of the British Isles.

(5) El Niño Southern Oscillation: some possibility of bifurcation. The El Niño Southern Oscillation (ENSO) is the most significant ocean–atmosphere mode of climate variability, and it is susceptible to three main factors: the zonal mean thermocline depth, the thermocline sharpness in the eastern equatorial Pacific (EEP), and the strength of the annual cycle and hence the meridional temperature gradient across the equator (Guilyardi, 2006). So increased ocean heat uptake could cause a shift from present-day ENSO variability to greater amplitude and/or more frequent El Niños (Timmermann et al., 1999). Recorded data suggests switching between different (self-sustaining) oscillatory regimes: however, it could be just noise-driven behaviour, with an underlying damped oscillation.

(6) Indian summer monsoon: possible bifurcation. The Indian Summer Monsoon (ISM) is driven by a land-to-ocean pressure gradient, which is itself reinforced by the moisture that the monsoon carries from the adjacent Indian Ocean. This moisture–advection feedback is described by Zickfeld et al. (2005). Simple models of the monsoon give bistability and fold bifurcations, with the monsoon switching between 'on' and 'off' states.
Some data also suggest more complexity, with switches between different chaotic oscillations.

(7) Sahara/Sahel and West African monsoon: possible bifurcation. The monsoon shows jumps of rainfall location even from season to season. Such jumps alter the local atmospheric circulation, suggesting multiple stable states. Indeed, past greening of the Sahara occurred in the mid-Holocene and may have occurred rapidly in the earlier Bølling–Allerød warming. Work by de Menocal et al. (2000) suggests that the collapse of vegetation in the Sahara about 5000 years ago occurred more rapidly than could be attributed to changes in the Earth's orbital features. A sudden increase in green desert vegetation would of course be a welcome feature for the local population, but might have unforeseen knock-on effects elsewhere.

(8) Amazon rainforest: possible bifurcation. In the Amazon basin, a large fraction of the rainfall evaporates, causing further rainfall, and for this reason simulations of Amazon deforestation typically generate about 20–30 per cent reductions in precipitation (Zeng et al., 1996), a lengthening of the dry season, and increases in summer temperatures (Kleidon & Heimann, 2000). The result is that it would be difficult for the forest to re-establish itself, suggesting that the system may exhibit bi-stability.

(9) Boreal forest: probably not a bifurcation. The Northern or Boreal forest system exhibits a complex interplay between tree physiology, permafrost and fire. Climate change could lead to large-scale dieback of these forests, with transitions to open woodlands or grasslands (Joos et al., 2001; Lucht et al., 2006). Based on limited evidence, the reduction of the tree fraction may have characteristics more like a quasi-static transition than a real bifurcation.
3.6 Predictions of tipping points in models

3.6.1 Shutdown of the thermohaline circulation

We choose to look, first, at the thermohaline circulation (THC) because it has been thoroughly examined over many years in computer simulations, and its bifurcational structure is quite well understood. The remarkable global extent of the THC is well known. In the Atlantic it is closely related to, and helps to drive, the North Atlantic Current (including the Drift), and the Gulf Stream: so its variation could significantly affect the climate of the British Isles and Europe. It exhibits multi-stability and can switch abruptly in response to gradual changes in forcing which might arise from global warming. Its underlying dynamics are summarised schematically in Figure 3.3 (below), adapted from the paper by Rahmstorf et al. (2005), which itself drew on the classic paper of Stommel (1961). This shows the response, represented by the overturning strength of the circulation (q), versus the forcing control, represented by the freshwater flux (from rivers, glaciers, etc.) into the North Atlantic (μ).
(Figure 3.3 labels recovered from the figure: overturning q (Sv), 0–40, against freshwater forcing (Sv), 0–0.2; THC 'on' and 'off' branches; a fold and a subcritical pitchfork (or fold) bounding the hysteresis cycle; restart of convection; advective spin-down; possible premature shut-down due to noise.)
Figure 3.3 A schematic diagram of the thermohaline response showing the two bifurcations and the associated hysteresis cycle (Rahmstorf, 2000). The subcritical pitchfork bifurcation will be observed in very simple models, but will be replaced by a fold in more elaborate ones: see, for example, Figure 3.5(b). Note that 1 Sv is 10^6 cubic metres per second, which is roughly the combined flow rate of all rivers on Earth.
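The fold-and-hysteresis structure in this figure already appears in the simplest box models. The sketch below traces the equilibria of a scalar caricature, ẏ = p − y[1 + η²(1 − y)²], where y stands for the density difference driving the overturning and p for the freshwater forcing; this double-well reduction of a Stommel-type two-box model is our illustrative stand-in (with the conventional choice η² = 7.5), not one of the circulation models discussed in the text:

    import numpy as np

    ETA2 = 7.5  # eta^2 > 3 is what gives bistability over a window of forcing p

    def equilibria(p, eta2=ETA2):
        """Real roots of p = y*(1 + eta2*(1 - y)**2), the steady states of the model."""
        # In polynomial form: eta2*y^3 - 2*eta2*y^2 + (1 + eta2)*y - p = 0
        roots = np.roots([eta2, -2.0 * eta2, 1.0 + eta2, -p])
        return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

    def is_stable(y, eta2=ETA2):
        """Stability from the sign of d/dy [p - y*(1 + eta2*(1 - y)**2)]."""
        return -(1.0 + eta2 * (1.0 - y) ** 2) + 2.0 * eta2 * y * (1.0 - y) < 0.0

    for p in np.linspace(0.6, 1.4, 9):
        states = ", ".join(f"y = {y:5.3f} ({'stable' if is_stable(y) else 'unstable'})"
                           for y in equilibria(p))
        print(f"p = {p:4.2f}: {states}")

For small and large p there is a single equilibrium (THC 'on' or 'off', respectively), while in an intermediate window of p two stable states coexist, separated by an unstable one; sweeping p slowly up and then down, always following the nearest stable root, traces out exactly the hysteresis cycle sketched in Figure 3.3.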
The suggestion is that anthropogenic (man-induced) global warming may shift this control parameter, μ, past the fold bifurcation at a critical value of μ = μcrit (= 0.2 in this highly schematic diagram). The hope is that by tuning a climate model to available climatological data we could determine μcrit from that model, thereby throwing some light on the possible tipping of the real climate element.

The question of where the tipping shows up in models has been addressed in a series of papers by Dijkstra & Weijer (2003, 2005), Dijkstra et al. (2004), and Huisman et al. (2009) using a hierarchy of models of increasing complexity. The simplest model is a box model consisting of two connected boxes of different temperatures and salinity representing the North Atlantic at low and high latitudes. For this box model it is known that two stable equilibria coexist for a large range of freshwater forcing. The upper end of the model hierarchy is a full global ocean circulation model. Using this high-end model, Dijkstra & Weijer (2005) applied techniques of numerical bifurcation theory to delineate two branches of stable steady-state solutions. One of these had a strong northern overturning in the Atlantic while the other had hardly any northern overturning, confirming qualitatively the sketch shown in Figure 3.3. Finally, Huisman et al. (2009) have discovered four different flow regimes of their computer model. These they call the Conveyor (C), the Southern
Sinking (SS), the Northern Sinking (NS) and the Inverse Conveyor (IC), which appear as two disconnected branches of solutions, where the C is connected with the SS and the NS with the IC. The authors argue that these findings show, significantly, that the parameter volume for which multiple steady states exist is greatly increased.

An intuitive physical mechanism for bi-stability is the presence of two potential wells (at the bottom of each is a stable equilibrium) separated by a saddle, which corresponds to the unstable equilibrium. Applying a perturbation then corresponds to a temporary alteration of this potential energy landscape. Dijkstra et al. (2004) observed that this picture is approximately true for ocean circulation if one takes the average deviation of water density (as determined by salinity and temperature) from the original equilibrium as the potential energy. They showed, first for a box model and then for a global ocean circulation model, that the potential energy landscape of the unperturbed system defines the basins of attraction fairly accurately. This helps engineers and forecasters to determine whether a perturbation (for example, increased freshwater influx) enables the bi-stable system to cross from one basin of attraction to the other.

Concerning the simple box models of the THC, we might note their similarity to the atmospheric convection model in which Lorenz (1963) discovered the chaotic attractor: this points to the fact that we must expect chaotic features in the THC and other climate models. See Dijkstra (2008) for a summary of the current state of ocean modelling from a dynamical systems point of view, and, for example, Tziperman et al. (1994) and Tziperman (1997) for how predictions of ocean models connect to full global circulation models.

Building on these modelling efforts, ongoing research is actively trying to predict an imminent collapse at the fold seen in the models (for example, Figure 3.3) from bifurcational precursors in time series. Held & Kleinen (2004) use the LDR (described earlier in Section 3.4 and in Table 3.5) as the diagnostic variable that they think is most directly linked to the distance from a bifurcation threshold. They demonstrate its use to predict the shutdown of the North Atlantic thermohaline circulation using the oceanic output of CLIMBER2, a predictive coupled model of intermediate complexity (Petoukhov et al., 2000). They make a 50 000-year transient run with a linear increase in atmospheric CO2 from 280 to 800 parts per million (ppm), which generates within the model an increase in the freshwater forcing, which is perturbed stochastically. This run results in the eventual collapse of the THC as shown in Figure 3.4. In Figure 3.4(a) the graph (corresponding approximately to the schematic diagram of Figure 3.3) is fairly linear over much of the timescale: there is no adequate early prediction of the fold bifurcation in terms of path curvature.
(Figure 3.4 axes recovered from the figure: (a) overturning q (Sv), 0–20, against time over 50 000 years; (b) propagator c, with its moving window, a linear fit towards the target c = 1 and a 95% error zone, against the same timescale.)
Figure 3.4 Results of Held & Kleinen (2004) which give a good prediction of the collapse of the thermohaline circulation induced by a four-fold linear increase of CO2 over 50 000 years in a model simulation. Collapse present at t ≈ 0.8 in (a) is predicted to occur when the propagator, c = ARC(1), shown in (b), or its linear fit, reaches +1.
The graph of Figure 3.4(b) shows the variation of the first-order autoregressive coefficient, or propagator, ARC(1), which is described in Section 3.4. Unlike the response diagram of q(t), the time-series of ARC(1), although noisy, allows a fairly good prediction of the imminent collapse using the linear fit drawn: the fairly steady rise of ARC(1) towards its critical value of +1 is indeed seen over a very considerable timescale. Notice that the linear fit is surrounded by a 95 per cent zone, giving probability bounds for the collapse time. These bounds emphasise that much more precise predictions will be needed before they can be used to guide policy on whether to implement geo-engineering proposals.

3.6.2 Global glaciation and desertification of Africa

Alongside their extensive studies of past climatic events using real paleo-data, Dakos et al. (2008) also made some model studies, as illustrated in Figure 3.5. For these, and subsequent figures, the number of data points, N, is quoted in the captions. In pictures of this type it is worth observing that there seems to be no agreed way of plotting the estimated autocorrelation coefficient. Held & Kleinen (2004) and Livina & Lenton (2007) plot ARC(1) at the centre of the moving window in which it has been determined. Meanwhile, Dakos et al. (2008) plot ARC(1) at the final point of this window. Here, we have redrawn the results from the latter article but shifted the ARC(1) back by half the length of the sliding window, bringing the graphs into the format of Held & Kleinen (2004) and Livina & Lenton (2007).
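Whenever such a mid-window series is extrapolated forward to the target c = +1, as in Figure 3.4(b), the calculation involved is essentially the following least-squares sketch; the data here are invented, and the error band is a rough first-order analogue of a 95 per cent zone, not Held & Kleinen's exact construction:

    import numpy as np

    def predict_tipping_time(t, c, z=1.96):
        """Fit c(t) ~ a*t + b and extrapolate to c = 1; return t_f and a rough
        z-sigma half-width from first-order propagation of the fit errors."""
        (a, b), cov = np.polyfit(t, c, 1, cov=True)
        t_f = (1.0 - b) / a
        da, db = np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])
        half_width = z * (abs(t_f) * da + db) / abs(a)  # ignores the a-b covariance
        return t_f, half_width

    # Invented propagator record drifting towards +1 (illustration only)
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 0.6, 120)
    c = 0.55 + 0.5 * t + 0.02 * rng.standard_normal(t.size)

    t_f, hw = predict_tipping_time(t, c)
    print(f"predicted tipping (c = 1) near t = {t_f:.2f} +/- {hw:.2f}")

As the text stresses, this central estimate is only as trustworthy as the assumption of a near-linear drift in the propagator, and shifting the fitted c-series by even a modest fraction of the window length moves the predicted time substantially.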
(Figure 3.5 panel axes recovered from the figure: (a) temperature (K) against relative radiative forcing and time (years); (b) salinity against freshwater forcing (Sv) and time (1000 years); (c) vegetation (%) against summer insolation (W m−2) and years before present; each panel carries an ARC(1) trace plotted mid-window with its moving window and target, and notional hysteresis loops are sketched on (b) and (c).)
Figure 3.5 Results of Dakos et al. (2008) for three examples based on predictive models: (a) runaway to glaciated Earth (N = 800), (b) collapse of thermohaline circulation (N = 1000), (c) desertification of North Africa (N = 6002). Notice the notional hysteresis loops sketched on (b) and (c). These pictures have been redrawn as mid-window plots.
(Figure 3.6 axes recovered from the figure: (a) CaCO3 (%), 0–80, against millions of years before present, 40–32, with 'Tropical climate', 'Ice caps form' and the tip marked; (b) ARC(1), 0.80–1.00, over the same timescale, with the moving window, mid-window and end-window plots, and the target.)
Figure 3.6 The ancient greenhouse to icehouse tipping with N = 482 data points. This is one of the best correlations obtained by Dakos et al. (2008) in their work on eight recorded tipping points. Here the sediments containing CaCO3 were laid down 30–40 million years ago. Redrawn from Dakos et al. (2008), as described in the text.
This is important whenever the intention is to make a forward extrapolation to a target, as we are doing here (see Section 3.4.1). This forward extrapolation can be made by any appropriate method. In fact, approaching (close to) an underlying fold bifurcation, ARC(1) will vary linearly along the solution path, but parabolically with the control parameter: this parabolic effect will only be relevant if the upper solution path is already curving appreciably, which is not the case in most of the present examples displayed here.

3.7 Predictions of ancient tippings

We have already presented the results of Livina & Lenton (2007) on the ending of the Younger Dryas event using Greenland ice-core data in Figure 3.1. Here we turn to Dakos et al. (2008), who present a systematic analysis of eight ancient climate transitions. They show that prior to all eight of these transitions the ARC(1) propagator c extracted from the time-series of observations (as described in Section 3.4) shows a statistically significant increase, thus providing evidence that these ancient transitions indeed correspond to tipping events. We show in the following subsections the results of Dakos et al. (2008) for two of these events (leaving out the statistical tests).

3.7.1 The greenhouse to icehouse tipping

We show first in Figure 3.6 their study of the greenhouse–icehouse tipping event that happened about 34 million years ago.
(Figure 3.7 axes recovered from the figure: (a) greyscale, 170–190, against years before present, 12 400–11 600, with the Younger Dryas, the Pre-Boreal and the tip marked; (b) ARC(1), 0.97–1.00, over the same timescale, with the moving window, mid-window plot and target.)
Figure 3.7 A second illustration taken from Dakos et al. (2008), for the end of the Younger Dryas event, using the greyscale from the Cariaco basin sediments in Venezuela.
The time-series in Figure 3.6(a) is the data, namely the calcium carbonate (CaCO3) content from tropical Pacific sediment cores. The smooth central line is the Gaussian kernel function used to filter out slow trends. The graph in Figure 3.6(b) shows the two plots of ARC(1) that are described in Section 3.6.2, and we notice that the mid-window projection is very close to the target, namely the known tipping point from the paleo-data.

3.7.2 End of the Younger Dryas event

To put things in perspective, Figure 3.7 shows a less well-correlated example from the Dakos paper, this one for the end of the Younger Dryas event using the greyscale from the Cariaco basin sediments in Venezuela. This Younger Dryas event (Houghton, 2004) was a curious cooling just as the Earth was warming up after the last ice age, as is clearly visible, for example, in records of the oxygen isotope δ18O in Greenland ice. It ended in a dramatic tipping point, about 11 500 years ago, when the Arctic warmed by 7 °C in 50 years. Its behaviour is thought to be linked to changes in the thermohaline circulation. As we have seen, this 'conveyor belt' is driven by the sinking of cold salty water in the North and can be stopped if too much fresh melt makes the water less salty, and so less dense. At the end of the ice age, when the ice sheet over North America began to melt, the water first drained down the Mississippi Basin into the Gulf of Mexico. Then, suddenly, it cut a new channel near the St Lawrence River to the North Atlantic. This sudden influx of fresh water cut off part of the ocean 'conveyor belt', the warm Atlantic water stopped flowing North, and the Younger Dryas cooling began. It was the restart of the circulation that could have ended the Younger Dryas at its rapid tipping point, propelling the Earth into the warmer Pre-Boreal era.

In Figure 3.7(b), we see that the (mid-window) plot of the propagator ARC(1) gives a fairly inadequate prediction of the tipping, despite its statistically significant increase. A possible cause for this discrepancy might be the violation of the central assumption underlying the extraction of ARC(1): before tipping the system
is supposed to follow a slowly drifting equilibrium disturbed by noise-induced fluctuations. ARC(1) is very close to its critical value +1 for the whole time before tipping, which suggests that the underlying deterministic system is not at an equilibrium. Note that, due to the detrending procedure, the fitted ARC(1) will always be slightly less than +1.

We might note finally that a very recent paper on the Younger Dryas event by Bakke et al. (2009) presents high-resolution records from two sediment cores obtained from Lake Kråkenes in western Norway and the Nordic seas. Multiple proxies from the former show signs of rapid alternations between glacial growth and melting. Simultaneously, sea temperature and salinity show an alternation related to the ice cover and the inflow of warm, salty North Atlantic waters. The suggestion is that there was a rapid flipping between two states before the fast tip at the end of the Younger Dryas which created the permanent transition to an interglacial state. This strengthens the suspicion that the deterministic component of the dynamics behind the time-series in Figure 3.7(a) is not near a slowly drifting equilibrium. It will be interesting to see if any useful time-series analyses can be made of this rapid fluttering action.

3.8 Concluding remarks

Our illustrations give a snapshot of very recent research showing the current status of predictive studies. They show that tipping events, corresponding mathematically to dangerous bifurcations, pose a likely threat to the current state of the climate because they cause rapid and irreversible transitions. Also, there is evidence that tipping events have been the mechanism behind climate transitions of the past. Model studies give hope that these tipping events are predictable using time-series analysis: when applied to real geological data from past events, prediction is often remarkably good, but it is not always reliable. With today's and tomorrow's vastly improved monitoring, giving time-series that are both longer (higher N) and much more accurate, reliable estimates can be confidently expected. However, if a system has already passed a bifurcation point, one may ask whether it is in fact too late to usefully apply geo-engineering because an irreversible transition is already under way.

Techniques from non-linear dynamical systems enter the modelling side of climate prediction at two points. First, in data assimilation, which plays a role in the tuning and updating of models, the assimilated data is often Lagrangian (for example, it might come from drifting floats in the ocean). It turns out that optimal starting positions for these drifters are determined by stable and unstable manifolds of the vector field of the phase-space flow (Kuznetsov et al., 2003). Second, numerical bifurcation-tracking techniques for large-scale systems have become
applicable to realistic large-scale climate models (Huisman et al., 2009). More generally, numerical continuation methods have been developed (for example, LOCA by Salinger et al. (2002)) that are specifically designed for the continuation of equilibria of large physical systems. These general methods appear to be very promising for the analysis of tipping points in different types of deterministic climate models. These developments will permit efficient parameter studies where one can determine directly how the tipping event in the model varies when many system parameters are changed simultaneously. This may become particularly useful for extensive scenario studies in geo-engineering. For example, Dijkstra et al. (2004) demonstrated how bifurcation diagrams can help to determine which perturbations enable threshold-crossing in the bi-stable THC system, and Biggs et al. (2009) studied how quickly perturbations have to be reversed to avoid jumping to coexisting attractors in a fisheries model.

Furthermore, subtle microscopic non-linearities, currently beyond the reach of climate models, may have a strong influence on the large spatial scale. For example, Golden (2009) observes that the permeability of sea-ice to brine drainage changes drastically (from impermeable to permeable) when the brine volume fraction increases across the 5 per cent mark. This microscopic tipping point may have a large-scale follow-on effect on the salinity of sea water near the Arctic, and thus on the THC. Incorporating microscopic non-linearities into the macroscopic picture is a challenge for future modelling efforts.

Concerning the techniques of time-series analysis, two developments in related fields are of interest. First, theoretical physicists are actively developing methods of time-series analysis that take into account unknown non-linearities, allowing for short-term predictions even if the underlying deterministic system is chaotic (Kantz & Schreiber, 2003). These methods permit, to a certain extent, the separation of the deterministic, chaotic, component of the time-series from the noise (see also Takens, 1981). As several of the tipping events listed in Table 3.1 involve chaos, non-linear time-series analysis is a promising complement to the classical linear analysis. Second, much can perhaps be learned from current predictive studies in the related field of theoretical ecology, discussing how higher-order moments of the noise-induced distributions help to detect tipping points. See Section 3.4 for a brief description and Biggs et al. (2009) for a recent comparison between indicators in a fisheries model.

Acknowledgements

We are deeply indebted to many people for valuable discussions and comments. In particular, we would like to thank Professor Tim Lenton of the University of East
Anglia and his colleague Dr Valerie Livina for their continuous and detailed advice during the writing of the paper. The research group at Wageningen University in the Netherlands has also provided greatly appreciated input, notably from Professor Marten Scheffer and his research student Vasilis Dakos. Other valuable comments were received from Ian Eisenman and Eli Tziperman. Finally, special thanks go to Professor Bernd Krauskopf of the non-linear dynamics group at Bristol University for his careful reading and commentary on the whole manuscript.

References

Alley, R. B., Marotzke, J., Nordhaus, W. D., Overpeck, J. T., Peteet, D. M., Pielke, R. A., Pierrehumbert, R. T., Rhines, P. B., Stocker, T. F., Talley, L. D. & Wallace, J. M. (2003) Abrupt climate change. Science 299, 2005–2010.
Bakke, J., Lie, O., Heegaard, E., Dokken, T., Haug, G. H., Birks, H. H., Dulski, P. & Nilsen, T. (2009) Rapid oceanic and atmospheric changes during the Younger Dryas cold period. Nature Geoscience 2, 202–205. (doi:10.1038/NGEO439)
Biggs, R., Carpenter, S. R. & Brock, W. A. (2009) Turning back from the brink: detecting an impending regime shift in time to avert it. Proc. Nat. Acad. Sci. USA 106, 826–831. (doi:10.1073/pnas.0811729106)
Box, G., Jenkins, G. M. & Reinsel, G. C. (1994) Time Series Analysis, Forecasting and Control, 3rd edn. New York: Prentice-Hall.
Buizza, R., Petroliagis, T., Palmer, T., Barkmeijer, J., Hamrud, M., Hollingsworth, A., Simmons, A. & Wedi, N. (1998) Impact of model resolution and ensemble size on the performance of an ensemble prediction system. Q. J. Roy. Meteorol. Soc. B 124, 1935–1960.
Carpenter, S. R. & Brock, W. A. (2006) Rising variance: a leading indicator of ecological transition. Ecol. Lett. 9, 311–318.
Dakos, V., Scheffer, M., van Nes, E. H., Brovkin, V., Petoukhov, V. & Held, H. (2008) Slowing down as an early warning signal for abrupt climate change. Proc. Nat. Acad. Sci. USA 105, 14 308–14 312.
Dijkstra, H. A. (2008) Dynamical Oceanography. New York: Springer.
Dijkstra, H. A. & Weijer, W. (2003) Stability of the global ocean circulation: the connection of equilibria within a hierarchy of models. J. Marine Res. 61, 725–743.
Dijkstra, H. A. & Weijer, W. (2005) Stability of the global ocean circulation: basic bifurcation diagrams. J. Phys. Oceanogr. 35, 933–948.
Dijkstra, H. A., Raa, L. T. & Weijer, W. (2004) A systematic approach to determine thresholds of the ocean's thermohaline circulation. Tellus Series A, Dynam. Meteorol. Oceanogr. 56, 362–370.
Eisenman, I. & Wettlaufer, J. S. (2009) Nonlinear threshold behaviour during the loss of Arctic sea ice. Proc. Nat. Acad. Sci. USA 106, 28–32.
Euler, L. (1744) Methodus Inveniendi Lineas Curvas Maximi Minimive Proprietate Gaudentes (Appendix, De curvis elasticis). Lausanne and Geneva: Marcum Michaelem Bousquet.
Golden, K. (2009) Climate change and the mathematics of transport in sea ice. Notices of the AMS 56, 562–584.
Guilyardi, E. (2006) El Niño, mean state, seasonal cycle interactions in a multi-model ensemble. Climate Dynam. 26, 329–348.
Predicting climate tipping points
81
Guttal, V. & Jayaprakash, C. (2008) Changing skewness: an early warning signal of regime shifts in ecosystems. Ecol. Lett. 11, 450–460. (doi:10.1111/j.1461-0248.2008.01160)
Guttal, V. & Jayaprakash, C. (2009) Spatial variance and spatial skewness: leading indicators of regime shifts in spatial ecological systems. Theor. Ecol. 2, 3–12. (doi:10.1007/s12080-008-0033-1)
Hansen, J. E. (2005) A slippery slope: how much global warming constitutes "dangerous anthropogenic interference"? Climatic Change 68, 269–279.
Held, H. & Kleinen, T. (2004) Detection of climate system bifurcations by degenerate fingerprinting. Geophys. Res. Lett. 31, L23207. (doi:10.1029/2004GL020972)
Houghton, J. (2004) Global Warming: The Complete Briefing. Cambridge, UK: Cambridge University Press.
Huisman, S. E., Dijkstra, H. A., von der Heydt, A. & de Ruijter, W. P. M. (2009) Robustness of multiple equilibria in the global ocean circulation. Geophys. Res. Lett. 36, L01610. (doi:10.1029/2008GL036322)
IPCC (2007) Climate Change 2007: Contribution of Working Groups I–III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. I The Physical Science Basis, II Impacts, Adaptation and Vulnerability, III Mitigation of Climate Change. Cambridge, UK: Cambridge University Press.
Joos, F., Prentice, I. C., Sitch, S., Meyer, R., Hooss, G., Plattner, G.-K., Gerber, S. & Hasselmann, K. (2001) Global warming feedbacks on terrestrial carbon uptake under the Intergovernmental Panel on Climate Change (IPCC) Emission Scenarios. Glob. Biogeochem. Cycles 15, 891–907.
Kantz, H. & Schreiber, T. (2003) Nonlinear Time Series Analysis, 2nd edn. Cambridge, UK: Cambridge University Press.
Kleidon, A. & Heimann, M. (2000) Assessing the role of deep rooted vegetation in the climate system with model simulations: mechanism, comparison to observations and implications for Amazonian deforestation. Climate Dynam. 16, 183–199.
Kleinen, T., Held, H. & Petschel-Held, G. (2003) The potential role of spectral properties in detecting thresholds in the Earth system: application to the thermohaline circulation. Ocean Dynam. 53, 53–63.
Kuznetsov, L., Ide, K. & Jones, C. K. R. T. (2003) A method for assimilation of Lagrangian data. Monthly Weather Rev. 131, 2247–2260.
Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S. & Schellnhuber, H. J. (2008) Tipping elements in the Earth's climate system. Proc. Nat. Acad. Sci. USA 105, 1786–1793.
Lenton, T. M., Myerscough, R. J., Marsh, R., Livina, V. N., Price, A. R., Cox, S. J. & Genie Team (2009) Using GENIE to study a tipping point in the climate system. Phil. Trans. R. Soc. A 367, 871–884. (doi:10.1098/rsta.2008.0171)
Lindsay, R. W. & Zhang, J. (2005) The thinning of arctic sea ice, 1988–2003: have we passed a tipping point? J. Climate 18, 4879–4894.
Livina, V. N. & Lenton, T. M. (2007) A modified method for detecting incipient bifurcations in a dynamical system. Geophys. Res. Lett. 34, L03712. (doi:10.1029/2006GL028672)
Lockwood, J. G. (2001) Abrupt and sudden climatic transitions and fluctuations: a review. Int. J. Climatol. 21, 1153–1179.
Lorenz, E. N. (1963) Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–141.
Lucht, W., Schaphoff, S., Erbrecht, T., Heyder, U. & Cramer, W. (2006) Terrestrial vegetation redistribution and carbon balance under climate change. Carbon Balance and Management 1, 6. (doi:10.1186/1750-0680-1-6)
82
J. Michael T. Thompson and Jan Sieber
McDonald, S. W., Grebogi, C., Ott, E. & Yorke, J. A. (1985) Fractal basin boundaries. Physica D 17, 125–153.
Menocal, P. de, Ortiz, J., Guilderson, T., Adkins, J., Sarnthein, M., Baker, L. & Yarusinsky, M. (2000) Abrupt onset and termination of the African humid period: rapid climate response to gradual insolation forcing. Quat. Sci. Rev. 19, 347–361.
Maqueda, M. A., Willmott, A. J., Bamber, J. L. & Darby, M. S. (1998) An investigation of the small ice cap instability in the Southern Hemisphere with a coupled atmosphere–sea ice–ocean–terrestrial ice model. Climate Dynam. 14, 329–352.
National Research Council (2002) Abrupt Climate Change: Inevitable Surprises. Washington, DC: National Academy Press.
Nes, E. H. van & Scheffer, M. (2007) Slow recovery from perturbations as a generic indicator of a nearby catastrophic shift. Am. Nat. 169, 738–747.
Peng, C. K., Buldyrev, S. V., Havlin, S., Simons, M., Stanley, H. E. & Goldberger, A. L. (1994) Mosaic organization of DNA nucleotides. Phys. Rev. E 49, 1685–1689.
Petoukhov, V., Ganopolski, A., Brovkin, V., Claussen, M., Eliseev, A., Kubatzki, C. & Rahmstorf, S. (2000) CLIMBER-2: a climate system model of intermediate complexity. Part I: model description and performance for present climate. Climate Dynam. 16, 1–17.
Poincaré, H. (1885) Sur l'équilibre d'une masse fluide animée d'un mouvement de rotation. Acta Math. 7, 259.
Poston, T. & Stewart, I. (1978) Catastrophe Theory and its Applications. London: Pitman.
Rahmstorf, S. (2000) The thermohaline ocean circulation: a system with dangerous thresholds? Climatic Change 46, 247–256.
Rahmstorf, S. (2001) Abrupt climate change. In Encyclopaedia of Ocean Sciences, eds. Steele, J., Thorpe, S. & Turekian, K. London: Academic Press, pp. 1–6.
Rahmstorf, S. (2002) Ocean circulation and climate during the past 120 000 years. Nature 419, 207–214.
Rahmstorf, S., Crucifix, M., Ganopolski, A., Goosse, H., Kamenkovich, I., Knutti, R., Lohmann, G., Marsh, R., Mysak, L. A., Wang, Z. & Weaver, A. J. (2005) Thermohaline circulation hysteresis: a model intercomparison. Geophys. Res. Lett. 32, L23605. (doi:10.1029/2005GL023655)
Rial, J. A., Pielke, R. A., Beniston, M., Claussen, M., Canadel, J., Cox, P., Held, H., de Noblet-Ducoudre, N., Prinn, R., Reynolds, J. F. & Salas, J. D. (2004) Nonlinearities, feedbacks and critical thresholds within the Earth's climate system. Climatic Change 65, 11–38.
Salinger, A. G., Bou-Rabee, N. M., Pawlowski, R. P., Wilkes, E. D., Burroughs, E. A., Lehoucq, R. B. & Romero, L. A. (2002) LOCA 1.0 Library of Continuation Algorithms: Theory and Implementation Manual, Sandia Report SAND2002-0396. Albuquerque, NM: Sandia National Laboratories.
Saltzman, B. (2002) Dynamical Paleoclimatology. London: Academic Press.
Scheffer, M. (2009) Critical Transitions in Nature and Society. Princeton, NJ: Princeton University Press.
Selten, F. M., Branstator, G. W., Dijkstra, H. W. & Kliphuis, M. (2004) Tropical origins for recent and future Northern Hemisphere climate change. Geophys. Res. Lett. 31, L21205.
Sperber, K. R., Brankovic, C., Deque, M., Frederiksen, C. S., Graham, R., Kitoh, A., Kobayashi, C., Palmer, T., Puri, K., Tennant, W. & Volodin, E. (2001) Dynamical seasonal predictability of the Asian summer monsoon. Monthly Weather Rev. 129, 2226–2248.
Predicting climate tipping points
83
Stommel, H. (1961) Thermohaline convection with two stable regimes of flow. Tellus 8, 224–230. Takens, F. (1981), Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, eds. Rand, D. A. & Young, L. S. New York: Springer, p. 366. Thompson, J. M. T. (1982) Instabilities and Catastrophes in Science and Engineering. Chichester, UK: Wiley. Thompson, J. M. T. (1992) Global unpredictability in nonlinear dynamics: capture, dispersal and the indeterminate bifurcations. Physica D 58, 260–272. Thompson, J. M. T. (1996) Danger of unpredictable failure due to indeterminate bifurcation. Zang. Math. Mech. S 4, 199–202. Thompson, J. M. T. & Hunt, G. W. (1973) A General Theory of Elastic Stability. London: Wiley. Thompson, J. M. T. & Hunt, G. W. (1984) Elastic Instability Phenomena. Chichester, UK: Wiley. Thompson, J. M. T. & Stewart, H. B. (2002) Nonlinear Dynamics and Chaos, 2nd edn. Chichester, UK: Wiley. Thompson, J. M. T., Stewart, H. B. & Ueda, Y. (1994) Safe, explosive and dangerous bifurcations in dissipative dynamical systems. Phys. Rev. E 49, 1019–1027. Timmermann, A., Oberhuber, J., Bacher, A., Esch, M., Latif, M. & Roeckner, E. (1999) Increased El Ni˜no frequency in a climate model forced by future greenhouse warming. Nature 398, 694–697. Tziperman, E. (1997) Inherently unstable climate behaviour due to weak thermohaline ocean circulation. Nature 386, 592–595. Tziperman, E., Toggweiler, J. R., Feliks, Y. & Bryan, K. (1994) Instability of the thermohaline circulation with respect to mixed boundary-conditions: is it really a problem for realistic models. J. Phys. Oceanogr. 24, 217–232. Winton, M. (2006) Does the Arctic sea ice have a tipping point? Geophys. Res. Lett. 33, L23504. (doi:10.1029/2006GL028017) Zeng, N., Dickinson, R. E. & Zeng, X. (1996) Climatic impact of Amazon deforestation: a mechanistic model study. J Climate 9, 859–883. Zickfeld, K., Knopf, B., Petoukhov, V. & Schellnhuber, H. J. (2005) Is the Indian summer monsoon stable against global change? Geophys. Res. Lett. 32, L15707.
4 A geophysiologist's thoughts on geo-engineering
James Lovelock
The Earth is now recognized as a self-regulating system that includes a reactive biosphere; the system maintains a long-term steady-state climate and surface chemical composition favourable for life. We are perturbing this steady state by changing the land surface from mainly forests to farmland and by adding greenhouse gases and aerosol pollutants to the air. We appear to have exceeded the system's natural capacity to counter our perturbation, and consequently it is changing to a new and as yet unknown, but probably adverse, state. I suggest here that we regard the Earth as a physiological system and consider amelioration techniques, geo-engineering, as comparable to nineteenth-century medicine.

4.1 Introduction

If geo-engineering is defined as purposeful human activity that significantly alters the state of the Earth, we became geo-engineers soon after our species started using fire for cooking, land clearance and smelting bronze and iron. There was nothing unnatural in this; other organisms have been massively changing the Earth since life began 3.5 Gyr ago. Without oxygen from photosynthesizers, there would be no fires. Morton (2007) in his remarkable book Eating the Sun describes the crucial role of these organisms in shaping the evolution of the Earth and its climate. Organisms change their world locally for purely selfish reasons; if the advantage conferred by the 'engineering' is sufficiently favourable, it allows them and their environment to expand until dominant on a planetary scale.
Our use of fire as a biocide to clear land of natural forests and replace them with farmland was our second act of geo-engineering; together these acts have led the Earth to evolve to its current state. As a consequence, most of us are now urban and our environment is an artefact of engineering. During this long engineering apprenticeship we changed the Earth but, until quite recently, like the photosynthesizers, we were unaware that we were doing it, still less aware of the adverse consequences. It might seem that the Fourth Assessment Report of the IPCC (2007), compiled by over 1000 of the world's most able climate scientists, would provide us with most of what we need to know to ameliorate adverse climate change. Unfortunately, it does not; the conclusions so far are tentative and preliminary. The gaps that exist in our knowledge about the state of the oceans, the cryosphere and even the clouds and aerosols of the atmosphere make prediction unreliable. The response of the biosphere to climate and compositional change is even less well understood; most of all, we are ignorant about the Earth as a self-regulating system and only just beginning to recognize that many separate but connected subsystems exist that can exert positive and negative feedback on a global scale. It was not until 2001 that the Amsterdam Declaration stated as follows: the Earth system is a self-regulating system comprising the atmosphere, oceans and surface rocks and all of the organisms, including humans. Earth system science is acknowledged, but like a new book that one day we will read, it stays on the shelf. Consequently, the climate models of the IPCC are still based on atmospheric physics, and their programs do not yet include the code needed for a self-regulating Earth. Land and ocean surface changes are touched on, but mainly from the viewpoint of their passive effect on the atmosphere. Even Lenton's (2006) review of climate change to the end of the millennium still appears to view the climate as mainly determined by geophysics. This concentration on atmospheric physics is a natural consequence of the evolution of climate science from weather forecasting, but most of all it is because there has been neither the time nor the resources to do more. We may soon need to try geo-engineering, because careful observation and measurement show that the climate is changing faster than forecast by the gloomiest of the IPCC models (Rahmstorf et al. 2007).

4.2 Geo-engineering techniques

Physical means of amelioration, such as changing the planetary albedo, are the subject of other chapters in this volume, and I thought it would be useful here to describe physiological methods for geo-engineering. These include tree planting, the fertilization of ocean algal ecosystems with iron, the direct synthesis of food from inorganic raw materials and the production of biofuels. I will also briefly
describe the idea that the oceans be fertilized to encourage algal growth by mixing into the surface waters the nutrient-rich water from below the thermocline. Tree planting would seem to be a sensible way to remove CO2 naturally from the air, at least for the time it takes the tree to reach maturity. But in practice the clearance of forests for farmland and biofuels is now proceeding so rapidly that there is little chance that tree planting could keep pace. Forest clearance has direct climate consequences through water cycling and atmospheric albedo change and is also responsible for much of the CO2 emissions. Agriculture in total has climatic effects comparable to those caused by fossil-fuel combustion. For this reason, it would seem better to pay the inhabitants of forested regions to preserve their trees than to plant new trees on cleared ground. The charity Cool Earth exists to gather funds for this objective. It is insufficiently appreciated that an ecosystem is an evolved entity comprising a huge range of species: microorganisms, nematodes, invertebrates, small and large plants, animals and trees. While ecosystems have the capacity to evolve with climate change, plantations can only die. Oceans cover over 70 per cent of the Earth's surface and are uninhabited by humans. In addition, most of the ocean surface waters carry only a sparse population of photosynthetic organisms, mainly because the mineral and other nutrients in the water below the thermocline do not readily mix with the warmer surface layer. Some essential nutrients such as iron are present in suboptimal abundance even where other nutrients are plentiful, and this led to the suggestion by John Martin, in a lecture in 1991, that fertilization with the trace nutrient iron would allow algal blooms to develop that would cool the Earth by removing CO2 (see Watson 1997). Lovelock & Rapley (2007) suggested the use of a system of large pipes held vertically in the ocean surface to draw up cooler nutrient-rich water from just below the thermocline. The intention was to cool the surface directly, to encourage algal blooms that would serve to pump down CO2 and also to emit gases such as DMS, volatile amines and isoprene (Nightingale & Liss 2003), which encourage cloud and aerosol formation. The pipes envisaged would be approximately 100 m in length and 10 m in diameter, held vertically in the surface waters and equipped with a one-way valve. Surface waves of average height 1 m would mix in about 4 tonnes of cooler water per second. Our intention was to stimulate interest and discussion in physiological techniques that would use the Earth system's own energy and nutrient resources to reverse global heating. We do not know whether the proposed scheme would help restore the climate, but the idea of improving surface waters by mixing in cooler nutrient-rich water from below has a long history; indeed, it is at present used by the US firm Atmocean Inc. to improve the quality of ocean pastures.
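The quoted mixing rate can be checked with a rough calculation. In the sketch below, the swell period and the assumption that each wave stroke lifts about one wave amplitude of water through the pipe are our own illustrative guesses, not figures from Lovelock & Rapley (2007).

```python
import math

PIPE_DIAMETER_M = 10.0     # from the text
WAVE_HEIGHT_M = 1.0        # average wave height, from the text
SWELL_PERIOD_S = 10.0      # assumed typical ocean swell period
SEAWATER_T_PER_M3 = 1.025  # tonnes of seawater per cubic metre

area_m2 = math.pi * (PIPE_DIAMETER_M / 2.0) ** 2   # ~78.5 m^2
stroke_m = WAVE_HEIGHT_M / 2.0                     # assume one wave amplitude lifted per cycle
flow_m3_s = area_m2 * stroke_m / SWELL_PERIOD_S    # ~3.9 m^3/s

print(f"{flow_m3_s * SEAWATER_T_PER_M3:.1f} tonnes of cool water per second")
# ~4.0 t/s, consistent with the figure quoted in the text
```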
The idea of ocean pipes for geo-engineering was strongly resisted by the scientific community on the grounds that their use would release CO2 from the lower waters to the atmosphere. We were aware of this drawback but, knowing that the low CO2 levels during the glaciations were reached when the ocean was less stratified than now, we thought that algal growth following the mixing might take down more CO2 than was released. The next step would be the experimental deployment of the pipes, with observations and measurements. Planting crops specifically for fuel, although sometimes an economic necessity, is a source, not a sink, of CO2. Biofuels might be made green again if sufficient of the waste carbon from the plants could be permanently buried. Thus, if any of the ocean fertilization schemes work, their value could be enhanced by harvesting the algae, extracting food and fuel and then burying the waste in the deep ocean as heavier-than-water pellets. This would remove a sizeable proportion of the carbon photosynthesized and place it as an insoluble residue on the ocean floor. The temperature of the deep ocean is close to 4 °C and the residence time of water there is at least 1000 years, so the buried carbon would effectively be out of circulation. It might also be possible to bury land-based agricultural waste at these deep ocean sites. This idea may be even more unpopular than the pipes. Critics rightly fear that waste buried in the ocean might be a source of nitrous oxide or other greenhouse gases, but again we may before long reach desperate times; so should we reject an experimental burial of carbon now? Another amelioration technique is the direct synthesis of food from CO2, nitrogen and trace minerals. When food was abundant this seemed an otiose proposal, but not now that food prices are rising. Massive crop failure in future adverse climates would give food synthesis an immediately vital role. The procedure would involve the production of a feedstock of sugars and amino acids from air and water as an industrial chemical operation, using either renewable or nuclear energy. This basic nutrient would be fed to tissue cultures of meat or vegetable cells and then harvested as food. Something similar to this kind of synthesized food already exists in commercial form: a cultured mycoprotein product sold by supermarkets under the brand name 'Quorn'. Misplaced fear stops us from using nuclear energy, the most practical and available geo-engineering procedure of all; we even ignore the use of high-temperature nuclear reactors for the synthesis of food and liquid fuels directly from CO2 and water.

4.3 Geophysiology

The Earth system is dynamically stable but with strong feedbacks. Its behaviour resembles the physiology of a living organism more than the equilibrium box models of the last century (Lovelock 1986). Broecker (1991) has shown by
observation and models that even wholly physical models of the Earth system are non-linear, often because the properties of water set critical points during warming and cooling. These include the heat-driven circulation of the oceans. The phase change from ice to water is accompanied by an albedo change from 0.8 to 0.2, and this strongly affects climate (Budyko 1969). There are other purely physical feedbacks in the system: the ocean surface stratifies at 12–14 °C; the rate of water evaporation from land surfaces becomes a problem for plants at temperatures above 22–25 °C; and atmospheric relative humidity has a large direct effect on the size and effective albedo of aerosols. In a simple energy-balance model, Henderson-Sellers & McGuffie (2005) show the large climate discontinuity between the ice-free and icy worlds, with marked hysteresis. Model systems that include, in addition to geophysics, an active and evolving biota self-regulate at physiologically favourable temperatures. Lovelock & Kump (1994) described a zero-dimensional model of a planet that self-regulated its climate; it had land surfaces occupied by plants and an ocean that was a habitat for algae. This model system was normally in negative feedback with respect to temperature or CO2 increase. When subjected to a progressive increase of CO2 or heat flux, regulation continued at first; but as the critical CO2 abundance of 450 ppm, or heat input of 1450 W m⁻², was approached, the sign of the feedback changed to positive and the system began to amplify rather than resist change. At the critical point, amplification rose steeply and precipitated a 6 °C rise in temperature. Afterwards the system returned to negative feedback and continued to self-regulate at the higher temperature. As with the ice–albedo feedback, there was marked hysteresis: reducing the CO2 abundance or heat flux did not immediately restore the state prior to the discontinuity. The justifications for setting this tiny zero-dimensional model against the powerful forecasts of the giant global climate models are these. First, it is a model in which the biota and the geosphere play an active dynamic role, as in the model Daisyworld (Watson & Lovelock 1983) from which it descends. Second, it makes predictions that are more in accord with the Earth's history. It suggests that attempts at amelioration should take place before the critical point is reached. Unfortunately, when the large effect of unintentional cooling by short-lived pollution aerosols is taken into account, we may already be past this point, and it would be unwise to assume that climate change can simply be reversed by reducing emissions or by geo-engineering.
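The behaviour described, negative feedback giving way to runaway amplification at a critical forcing, with hysteresis on the return path, is generic to systems near a fold bifurcation. The sketch below is a minimal toy model of that behaviour; its linear loss term, sigmoidal feedback and parameter values are invented purely for illustration, and it is not the Lovelock–Kump model itself.

```python
import math

def equilibrium_temperature(forcing, t_start, steps=20000, dt=0.01):
    """Relax dT/dt = forcing - T + feedback(T) to a steady state.

    The linear '-T' term is a crude stand-in for radiative loss; the
    sigmoidal feedback switches from weak (net negative feedback) to
    strong (net positive feedback) as T rises, creating two stable
    branches separated by a fold.
    """
    T = t_start
    for _ in range(steps):
        feedback = 4.0 / (1.0 + math.exp(-(T - 3.0) / 0.5))
        T += dt * (forcing - T + feedback)
    return T

# Sweep the forcing up and then back down, starting each relaxation
# from the previous equilibrium so the system tracks its own branch.
T, upsweep, downsweep = 0.0, [], []
for i in range(31):
    F = 0.1 * i
    T = equilibrium_temperature(F, T)
    upsweep.append((F, round(T, 2)))
for i in range(30, -1, -1):
    F = 0.1 * i
    T = equilibrium_temperature(F, T)
    downsweep.append((F, round(T, 2)))

# The upsweep jumps abruptly to the warm branch near F ~ 1.5; the
# downsweep stays warm until F ~ 0.5. This hysteresis loop is why
# simply reversing the forcing does not immediately undo a transition.
```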
An engineer or physiologist looking at the IPCC forecasts for this century would find their smooth and uninterrupted temperature rise to 2100 unconvincing, something expected of the equilibrium behaviour of a dead planet such as Mars. A glance at the Earth's recent history reveals a climate and atmospheric composition that fluctuates suddenly, as would be expected of a dynamic system with positive feedback. The long-term history of the Earth suggests the existence of hot and cold stable states that geologists refer to as the greenhouses and the icehouses. In between are metastable periods such as the present interglacial. The best-known hot state occurred 55 Myr ago at the beginning of the Eocene period (Tripati & Elderfield 2005; Higgins & Schrag 2006). In that event, between 1 and 2 teratonnes of CO2 were released into the air by a geological accident. Putting so much CO2 into the air caused the temperature of the temperate and Arctic regions to rise by 8 °C and of the tropics by 5 °C, and it took about 200 000 years for conditions to return to their previous states. Soon we will have injected a comparable quantity of CO2, and the Earth itself may release as much again when the ecosystems of the land and ocean are adversely affected by heat. The rise in CO2 55 Myr ago is thought to have occurred more slowly than now; the injection of gaseous carbon compounds into the atmosphere might have taken place over a period of about 10 000 years, instead of the roughly 200 years we are taking. The great rapidity with which we add carbon gases to the air could be as damaging as the quantity. The rapidity of the pollution gives the Earth system little time to adjust, and this is particularly important for the ocean ecosystems: the rapid accumulation of CO2 in the surface water is making them too acidic for shell-forming organisms (Royal Society 2005). This did not appear to happen during the Eocene event, perhaps because there was time for the more alkaline deep waters to mix in and neutralize the surface ocean. Despite the large difference in the injection times of CO2, the change in temperature of approximately 5 °C globally may have occurred as rapidly 55 Myr ago as it may soon do now. The time it takes to move between the two system states is likely to be set by the properties of the system more than by the rate of addition of radiant heat or CO2. There are differences between the Earth 55 Myr ago and now. The Sun was 0.5 per cent cooler and there was no agriculture anywhere, so natural vegetation was free to regulate the climate. Another difference is that the world was not then experiencing global dimming, the 2–3 °C of global cooling caused by the atmospheric aerosol of man-made pollution (Ramanathan et al. 2007). This haze covers much of the northern hemisphere and offsets global heating by reflecting sunlight and, more importantly, by nucleating clouds that reflect even more sunlight. The aerosol particles of the haze persist in the air for only a few weeks, whereas carbon dioxide persists for between 50 and 100 years. Any economic downturn that reduced fossil-fuel use would reduce the aerosol density and intensify the heating; so would rapid implementation of the Bali recommendation for cutting back fossil-fuel use. It is sometimes assumed that the temperature of the sunlit surface of a planet is directly related to the albedo of the illuminated area. This assumption is not true for forested areas. The physiological temperature regulation of a tree normally keeps
leaf temperature below ambient air temperature by evapotranspiration, the active process by which ground water is pumped to the leaves; the trees absorb the solar radiation but disperse the heat insensibly as the latent heat of water vapour. I have observed in the southern English summer that dark conifer leaves maintain a surface temperature more than 20 °C cooler than an inert surface of the same colour.

4.4 Planetary medicine

What are the planetary health risks of geo-engineering intervention? Nothing we do is likely to sterilize the Earth, but the consequences of planetary-scale intervention could hugely affect humans. Putative geo-engineers are in a position similar to that of physicians before the 1940s. The physician and author Lewis Thomas (1983) memorably described in his book The Youngest Science the practice of medicine before the Second World War. There were only five effective medicines available: morphine for pain, quinine for malaria, insulin for diabetes, digitalis for heart disease and aspirin for inflammation, and very little was known of their modes of action. For almost all other ailments, there was nothing available but nostrums and comforting words. At that time, despite a well-founded science of physiology, we were still ignorant about the human body and the host–parasite relationships it has with other organisms. Wise physicians knew that letting nature take its course without intervention would often allow natural self-regulation to make the cure. They were not averse to claiming credit for their skill when this happened. I think the same may be true of planetary medicine; our ignorance of the Earth system is overwhelming, and it is intensified by the tendency to favour model simulations over experiments, observation and measurement.

4.5 Ethics

Global heating would not have happened but for the rapid expansion in the numbers and wealth of humanity. Had we heeded Malthus's warning and kept the human population to less than 1 billion, we would not now be facing a torrid future. Whether we go for Bali or use geo-engineering, the planet is likely, massively and cruelly, to cull us, in the same merciless way that we have eliminated so many species by changing their environment into one where survival is difficult. Before we start geo-engineering we have to ask the following question: are we sufficiently talented to take on what might become the onerous permanent task of keeping the Earth in homeostasis? Consider what might happen if we start by using a stratospheric aerosol to ameliorate global heating; even if it succeeds, it
would not be long before we faced the additional problem of ocean acidification. This would need another medicine, and so on. We could find ourselves enslaved in a Kafka-like world from which there is no escape. Rees (2003), in his book Our Final Century, envisaged a similar but more technologically driven fate brought on by our unbridled creativity. The alternative is the acceptance of a massive natural cull of humanity and a return to an Earth that freely regulates itself, but in the hot state. Garrett Hardin (1968) foresaw consequences of this kind in his seminal essay 'The tragedy of the commons'. Whatever we do is likely to lead to death on a scale that makes all previous wars, famines and disasters look small. To continue business as usual will probably kill most of us during the century. Is there any reason to believe that fully implementing Bali, with sustainable development and the full use of renewable energy, would kill fewer? We have to consider seriously that, as with nineteenth-century medicine, the best option may often be kind words and painkillers, but otherwise to do nothing and let Nature take its course. The usual response to such bitter realism is: then there is no hope for us, and we can do nothing to avoid our plight. This is far from true. We can adapt to climate change, and this will allow us to make the best use of the refuge areas of the world that escape the worst heat and drought. We have to marshal our resources soon, and if a safe form of geo-engineering buys us a little time then we must use it. Parts of the world, such as oceanic islands, the Arctic basin and oases on the continents, will still be habitable in a hot world. We need to regard them as lifeboats and see that there are sufficient sources of food and energy to sustain us as a species. Physicians have the Hippocratic oath; perhaps we need something similar for our practice of planetary medicine. During the global heating of the early Eocene, there appears to have been no great extinction of species, and this may have been because life had time to migrate to the cooler regions near the Arctic and Antarctic and remain there until the planet cooled again. This may happen again, and humans, animals and plants are already migrating. Scandinavia and the oceanic parts of northern Europe, such as the British Isles, may be spared the worst of the heat and drought that global heating brings. This puts a special responsibility upon us to stay civilized and to give refuge to the unimaginably large influx of climate refugees. Perhaps the saddest thing is that if we fail and humans become extinct, the Earth system, Gaia, will lose as much as or more than we do. In human civilization, the planet has a precious resource. We are not merely a disease; we are, through our intelligence and communication, the planetary equivalent of a nervous system. We should be the heart and mind of the Earth, not its malady. Perhaps the greatest value of the Gaia concept lies in its metaphor of a living Earth, which reminds us that we
are part of it and that our contract with Gaia is not about human rights alone, but includes human obligations.

References

Broecker, W. S. 1991 The great ocean conveyor. Oceanography 4, 79–89.
Budyko, M. I. 1969 The effect of solar radiation variations on the climate of the Earth. Tellus 21, 611–619.
Hardin, G. 1968 The tragedy of the commons. Science 162, 1243–1248. (doi:10.1126/science.162.3859.1243)
Henderson-Sellers, A. & McGuffie, K. 2005 A Climate Modelling Primer. Chichester, UK: Wiley.
Higgins, J. A. & Schrag, D. P. 2006 Beyond methane: towards a theory for the Paleocene–Eocene thermal maximum. Earth Planet. Sci. Lett. 245, 523–537. (doi:10.1016/j.epsl.2006.03.009)
IPCC 2007 Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC. Cambridge, UK: Cambridge University Press.
Lenton, T. M. 2006 Climate change to the end of the millennium. Climatic Change 76, 7–29. (doi:10.1007/s10584-005-9022-1)
Lovelock, J. E. 1986 Geophysiology: a new look at earth science. Bull. Am. Met. Soc. 67, 392–397.
Lovelock, J. E. & Kump, L. R. 1994 Failure of climate regulation in a geophysiological model. Nature 369, 732–734. (doi:10.1038/369732a0)
Lovelock, J. E. & Rapley, C. R. 2007 Ocean pipes could help the Earth to cure itself. Nature 449, 403. (doi:10.1038/449403a)
Morton, O. 2007 Eating the Sun. London: Fourth Estate.
Nightingale, P. D. & Liss, P. S. 2003 Gases in seawater. Treatise Geochem. 6, 49–81.
Rahmstorf, S., Cazenave, A., Church, J. A., Hansen, J. E., Keeling, R. F., Parker, D. E. & Somerville, R. C. J. 2007 Recent climate observations compared to projections. Science 316, 709. (doi:10.1126/science.1136843)
Ramanathan, V., Li, F., Ramana, M. V., Praveen, P. S., Kim, D., Corrigan, C. E. & Nguyen, H. 2007 Atmospheric brown clouds: hemispherical and regional variations in long-range transport, absorption, and radiative forcing. J. Geophys. Res. 112, D22S21. (doi:10.1029/2006JD008124)
Rees, M. 2003 Our Final Century. London: Heinemann.
Royal Society 2005 Ocean Acidification due to Increasing Atmospheric Carbon Dioxide, Policy Document no. 12/05. London: The Royal Society.
Thomas, L. 1983 The Youngest Science. New York: Viking.
Tripati, A. & Elderfield, H. 2005 Deep-sea temperature and circulation changes at the Paleocene–Eocene thermal maximum. Science 308, 1894–1898. (doi:10.1126/science.1109202)
Watson, A. J. 1997 Volcanic iron, CO2, ocean productivity and climate. Nature 385, 587–588. (doi:10.1038/385587b0)
Watson, A. J. & Lovelock, J. E. 1983 Biological homeostasis of the global environment: the parable of Daisyworld. Tellus B 35, 284–289.
5 Coping with carbon: a near-term strategy to limit carbon dioxide emissions from power stations
Paul Breeze
Burning coal to generate electricity is one of the key sources of atmospheric carbon dioxide emissions, so targeting coal-fired power plants offers one of the easiest ways of reducing global carbon emissions. Given that the world's largest economies all rely heavily on coal for electricity production, eliminating coal combustion is not an option; indeed, coal consumption is likely to increase over the next 20–30 years. The introduction of more efficient steam cycles will improve the emission performance of these plants over the short term, but to achieve an outright reduction in carbon emissions from coal-fired plant it will be necessary to develop and introduce carbon capture and sequestration technologies. Given adequate investment, these technologies should be capable of commercial development by about 2020.
5.1 Introduction

One of the key sources of atmospheric carbon dioxide emissions is the generation of electric power from fossil fuels. The use of oil, gas and coal for electricity generation accounts for roughly 25 per cent of annual global carbon dioxide emissions (Science Daily 2007). Targeting the emissions from these concentrated sources of carbon dioxide therefore represents one of the best ways of reducing global carbon emissions. Of the three fossil fuels, coal is by some margin the largest source of atmospheric carbon dioxide from power stations.
Table 5.1 Coal use for power generation

Country        Proportion of total electrical power from coal (%)
South Africa   93
China          82
Australia      80
India          75
USA            51
Africa         46
South Korea    36
Europe         30
Russia         30
Japan          22

Source: Breeze (2007b).
According to the US Energy Information Administration (2007), coal accounted for 43 per cent of electricity production in 2004, close to twice the next most important source, natural gas. On top of this, coal produces more carbon dioxide per unit of electricity than natural gas, amplifying its significance further. So, any reduction in emissions from coal-fired plants can have a significant impact globally. (It is worth bearing in mind, however, that much of the emission-reduction technology applicable to coal plants can be applied to gas-fired plants too.) The economics of coal-fired power generation make it most cost-effective to build large power stations. Individual plants are often capable of generating 1000 MW or more of power, and these plants are major sources of carbon dioxide; some of them are among the largest single sources of greenhouse gas emissions on Earth. Equally, reducing or eliminating the emissions from a single power plant of this size would provide a large environmental benefit. From an environmental perspective, it would be preferable to abandon completely the burning of fossil fuels, and especially of coal, as a means to generate electricity. Unfortunately, that is not an option for either the short or the medium term. The world's great economies are driven by coal combustion and none of them is going to abandon its use easily. Table 5.1 lists the proportion of electricity derived from coal combustion for a range of the world's major economies. The USA derives 51 per cent of its electricity from coal, India relies on the fuel for 75 per cent of its electrical power, China for 82 per cent and South Africa for 93 per cent. All these countries have massive coal reserves; but even South Korea and Japan, which have negligible coal reserves of their own, burn large amounts of imported coal to generate electricity.
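The relative carbon intensities of coal- and gas-fired electricity can be estimated from the carbon content of each fuel and a typical plant efficiency. The sketch below uses illustrative round-number emission factors and efficiencies of our own choosing, not figures from this chapter.

```python
# (fuel, kg CO2 per GJ of fuel heat, typical plant efficiency) -- illustrative values
PLANTS = [
    ("coal, steam plant", 94.6, 0.38),
    ("gas, combined cycle", 56.1, 0.52),
]

for fuel, emission_factor, efficiency in PLANTS:
    fuel_heat_gj_per_kwh = 0.0036 / efficiency       # GJ of fuel heat per kWh out
    intensity = emission_factor * fuel_heat_gj_per_kwh
    print(f"{fuel}: {intensity:.2f} kg CO2 per kWh")
# coal ~0.90, gas ~0.39: coal emits roughly 2.3 times more CO2 per kWh
```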
Table 5.2 Predicted global power generation

Year   Global output (million MWh)
2004   16 424
2010   19 554
2015   22 289
2020   24 959
2025   27 537
2030   30 364

Source: US Energy Information Administration (2007).
Indeed, rather than a reduction in its use, the consumption of coal for power generation is expected to increase substantially over the next 20 years. Coal's contribution to total world energy consumption, covering all uses including power generation, is expected to rise from 26 per cent in 2004 to 28 per cent in 2030 (US Energy Information Administration 2007). Approximately 65 per cent of all coal shipped is used to generate electricity, and coal's share of power generation is expected to increase from 43 per cent in 2004 to 45 per cent by 2030. This must be seen against a background of increasing power generation to meet rising demand. Again according to figures from the US Energy Information Administration, generation output is expected to increase by 2.4 per cent each year from 2004 to 2030. The result, as Table 5.2 shows, is that total output rises from 16 424 million MWh in 2004 to 30 364 million MWh in 2030.
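As a check, the endpoint in Table 5.2 is essentially the 2.4 per cent annual growth rate compounded from the 2004 baseline, as the short sketch below confirms.

```python
baseline_mwh_million = 16_424   # 2004 output from Table 5.2
growth_rate = 0.024             # annual growth, US EIA projection

for year in (2010, 2015, 2020, 2025, 2030):
    projected = baseline_mwh_million * (1 + growth_rate) ** (year - 2004)
    print(f"{year}: {projected:,.0f} million MWh")
# The 2030 value comes out near 30,400 million MWh, within 0.2 per cent of
# the table's 30,364; the EIA's intermediate years grow slightly faster
# early in the period and slower later, so they sit a few per cent above.
```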
The message from these figures is that coal consumption is likely to rise substantially over the medium term. Coal is a cheap source of electricity and it is available in large quantities in many parts of the world. The Western world has built much of its prosperity on coal and now the developing world is intent on doing the same. If greenhouse gas emissions are to be reduced, then this must, at least for the next 20–30 years, take place against a background of increased coal use for power generation.

5.2 Alternatives to coal combustion

One of the reasons why coal use will not fall is that no viable alternative exists today. The only other comparable source in terms of size and technology is nuclear power; but nuclear power is unlikely to be able to replace even a small part of current coal capacity. A nuclear power station is cost-effective and produces much lower greenhouse gas emissions than any fossil-fuel power plant. There are some major environmental objections to increased nuclear generation, but perhaps the largest hindrance to a massive growth in nuclear capacity is the availability of uranium to fuel the plants. While the nuclear fuel industry would almost certainly argue otherwise, it is not clear today that uranium supply can support anything more than a modest growth in global capacity (IEA 2006; Breeze 2007a). So, while existing nuclear plants may be replaced in, for example, the USA and the UK, additional plants are unlikely to be built. Elsewhere capacity may grow, but never sufficiently to reduce coal use. Renewable technologies offer the other major alternative to fossil-fuel combustion. Today, however, these technologies are simply not in a position to meet the growing demand for new capacity across the globe. Growth in wind power capacity, perhaps the best option for the medium term, is already showing signs of being constrained by global manufacturing capacity. Solar power is almost certainly the Earth's long-term solution to electricity supply, but it will probably be another generation before it can begin to provide the sort of capacity the world needs. Hydropower might be able to provide significantly more output, particularly in Africa, where the infrastructure associated with increased hydro capacity can have other major benefits. Biomass, marine technologies, tidal power: all these will have a role to play, but none can match coal for cheapness, reliability and gross capacity. The other important way of constraining growth in coal consumption is by introducing energy-efficiency measures. There are simple measures that can lead to major savings, but these will mostly be taken in the developed world, whereas the growth in the use of coal will mostly take place in the developing world. Inevitably, therefore, coal use will increase.

5.3 Facing up to coal

The politics of coal have already been alluded to briefly, but their significance is worth emphasizing once again. Coal is a cheap, widely available, high energy-density fuel. Since the Industrial Revolution, it has provided the energy that has driven industry in the West. Indeed, coal is arguably the fount of Western prosperity. Today it continues to provide both energy and energy security in many Western countries, particularly the USA where, as noted, it accounts for over 50 per cent of electricity generation. The recognition of greenhouse warming and the identification of carbon dioxide emissions as a primary cause have led to a reappraisal of fossil-fuel use. As a result, international efforts are taking place, under the auspices of the United Nations, to reach a comprehensive agreement to control and eventually reduce greenhouse gas
emissions. Unfortunately, this comes at a time when the economies of two major developing nations, India and China, are growing rapidly. And, like the Western nations before them, they are growing on the back of coal. It is unrealistic to expect either of these nations, or any of the other developing nations that currently rely on coal, to sacrifice their prosperity for the sake of the planet. Any international agreement will therefore have to take this into account. In practice, this means that while Europe (and one hopes, eventually, the USA) will aim for drastic cuts in its greenhouse gas emissions, coal use will continue to increase. If, therefore, overall emissions are to be limited, then technological solutions based around coal use will have to be implemented. Fortunately these already exist. Applied pragmatically, they can do a lot to ameliorate the problems associated with coal combustion.

5.3.1 Conventional solutions

The traditional coal-burning power plant in use today was developed over the course of the twentieth century. In such a plant, coal is burnt in air to generate heat that raises steam in a boiler, and the steam drives a steam turbine generator, producing electricity. The most highly developed plant of this type, and the one of most interest here, is the pulverized coal-fired power plant. This plant burns coal that has first been reduced in grinders to a fine powder, which can be injected pneumatically into the boiler combustion chamber, where it is burnt under carefully controlled conditions in order to minimize the production of nitrogen oxides. Such plants probably account for 90 per cent of current coal-fired generating capacity. The temperature in the combustion zone of a pulverized coal-fired power plant may reach 1500–1700 °C. Under these conditions, nitrogen oxides are easily produced from the nitrogen in air, so the amount of air (and hence oxygen) is restricted in order to maintain reducing conditions in this hottest region; further air is added in a cooler part of the furnace to complete the combustion reaction. The heat released during combustion is both radiant and convective. Radiant heat is captured by passing water through piping within the walls of the furnace, while convective heat is captured higher up by bundles of water-containing tubes placed in the path of the exhaust gases. The carbon product from the combustion of coal in air is, bar some traces of impurities, carbon dioxide, and it may amount to 15 per cent by volume of the exhaust gases from the plant. The production of this carbon dioxide cannot be avoided in a plant of this type, but the environmental performance of the plant may be improved significantly by improving its overall energy-to-electricity conversion efficiency. A steam turbine cycle is, like any heat engine, bounded by the Carnot efficiency, so the key to higher efficiency is to increase the operating temperature and pressure.
The steam system of most conventional coal-fired plant includes a drum, located part way through the system, where water is converted to steam. Steam beyond this stage is superheated before being passed to the steam turbine, from which it is eventually condensed and returned as water to the boiler. A typical plant of this type might operate with a steam temperature of 540 °C and a steam pressure of 17.5 MPa; the resultant efficiency is approximately 38 per cent. Plants based on this type of steam cycle are known as subcritical plants. Some of the most modern plants operate with steam under conditions that are above the critical point of water. In these plants there is no need for a steam drum within the steam cycle, since above the critical point there is no longer a distinction between the liquid and vapour phases for a drum to separate. Plants that operate under these conditions are referred to as supercritical plants. Two types of supercritical plant are in use today. The first, called simply a supercritical plant, will typically operate at a steam temperature of 580 °C and a steam pressure of 29 MPa, giving an overall energy conversion efficiency of 41 per cent, three percentage points higher than the subcritical plant. The second type, the ultra-supercritical plant, uses yet more extreme conditions: steam temperatures may be as high as 720 °C and steam pressures 37–38 MPa, which allow an operating efficiency of over 44 per cent, some seven percentage points more than the best subcritical station. These efficiency gains are extremely important when it comes to environmental performance. Many old coal-fired plants commonly found in developing countries operate at an efficiency of only 30 per cent. Even in a technically advanced country such as the USA, the average efficiency of the coal-fired fleet is only 33 per cent (Science Daily 2007). Increasing the efficiency of a coal-fired plant from 33 to 45 per cent would reduce the amount of carbon dioxide produced for each unit of electricity by 27 per cent. The extreme operating conditions in supercritical and ultra-supercritical coal-fired plants require technically advanced materials; as these are developed further, even higher performance can be expected. The European Commission-funded AD700 programme aims to achieve an efficiency of 50–55 per cent for an advanced coal-fired plant by 2015. Meanwhile, it is encouraging to note that supercritical plants are already being built in both China and India, with more than 100 on order in China, where the first units have already entered service. This appears to put the USA in the shade: of approximately 160 proposed new coal-fired plants currently under consideration there, only 14 are supercritical and four ultra-supercritical. Converting the world's coal-burning fleet to high-efficiency plant offers one effective, if modest, short-term strategy for limiting carbon dioxide emissions. It is a measure that can begin to be implemented immediately, and while it will not reduce overall emissions it will slow their increase. Emission reduction from coal plants will require another set of technologies: carbon capture and sequestration.
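The 27 per cent figure follows from the fact that, for a fixed fuel, CO2 intensity scales inversely with plant efficiency. The sketch below makes the arithmetic explicit; the coal emission factor is an illustrative value and cancels out of the relative comparison.

```python
COAL_EMISSION_FACTOR = 94.6   # kg CO2 per GJ of fuel heat; illustrative value
KWH_TO_GJ = 0.0036

def co2_intensity(efficiency):
    """kg CO2 per kWh of electricity for a coal plant of given efficiency."""
    fuel_heat_gj = KWH_TO_GJ / efficiency     # fuel heat needed per kWh out
    return COAL_EMISSION_FACTOR * fuel_heat_gj

old, new = co2_intensity(0.33), co2_intensity(0.45)
print(f"33% plant: {old:.2f} kg/kWh, 45% plant: {new:.2f} kg/kWh")
print(f"reduction: {(1 - new / old) * 100:.0f}%")
# -> 27%, equal to 1 - 33/45 regardless of the emission factor chosen
```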
5.3.2 Advanced power cycles

The aim of carbon capture and sequestration technologies is to create a (near) zero-emissions coal-fired power plant. If higher efficiency in existing plants represents a short-term strategy to limit power plant carbon dioxide emissions, then the zero-emissions plant is the medium-term solution. If we accept the arguments presented here that coal burning will continue well into the middle of this century, it is important that these technologies are brought into service as quickly as possible. In practice, that timescale is likely to be at least another decade. To date there has been no coal-plant-scale demonstration of either capture or sequestration of carbon dioxide, and significant problems remain to be solved before either can be implemented widely. Nevertheless, the basic technologies are in place and, with sufficient investment, there is no reason why they cannot be perfected over this timescale. Three different carbon-capture strategies are being developed in parallel today: post-combustion capture, pre-combustion capture and oxy-fuel combustion. It is likely that each will have a part to play in the future of coal combustion, and all three are considered briefly here. Post-combustion capture, as its name suggests, involves capturing carbon dioxide from the flue gases of a power plant after combustion has taken place. The carbon dioxide, approximately 15 per cent by volume of the exhaust gases, is mixed primarily with nitrogen, so post-combustion capture involves the separation of these two gases. Separation can be achieved today most easily using a chemical absorption technique involving an aqueous solution of the solvent monoethanolamine (MEA), carried out in a plant similar to that used for sulphur dioxide scrubbing. The flue gases from the power plant are passed up a tall tower, from the sides of which the MEA solution is sprayed into their path. With currently available technology, this can capture 80–95 per cent of the carbon dioxide in the exhaust gases. The scrubbed flue gases are released into the atmosphere. Meanwhile, the CO2-laden MEA solution is collected at the bottom of the tower, pumped to a second reactor and heated to release the carbon dioxide and regenerate the solvent. This is an energy-intensive process that is likely to reduce the overall plant conversion efficiency by some 15 per cent. The released carbon dioxide must also be compressed prior to being pumped away for sequestration, and this adds to the total energy burden, making the overall efficiency reduction from the two processes together approximately 25 per cent (Breeze 2006). The major advantage of post-combustion technology is that it can, in principle, be retrofitted to existing power plants. The effectiveness of this will depend both on the existing efficiency of the plant and on the availability of space for the
capture plant. Modern supercritical and ultra-supercritical plants would make good candidates for future post-combustion capture owing to their already high efficiencies. Pre-combustion capture, the second of the capture strategies envisaged for coal combustion plants, takes a different approach. In this case, the idea is to avoid entirely the need to capture carbon dioxide after combustion, by removing all the carbon from the fuel before combustion takes place using a process of coal gasification. This process involves reacting coal at high temperature with a limited proportion of either air or oxygen, together with steam. The gasification reaction, which takes place under reducing conditions, produces a mixture of carbon monoxide and hydrogen (C + H2O → CO + H2) called synthesis gas, or syngas for short. The syngas still contains carbon in the form of carbon monoxide; this is converted in a second process, called the shift reaction, in which it reacts again at high temperature with steam (CO + H2O → CO2 + H2). The result is a mixture of carbon dioxide and hydrogen. The separation of carbon dioxide from hydrogen can be carried out relatively simply using current pressure swing adsorption technology, which selectively removes the carbon dioxide, leaving almost pure hydrogen. Hydrogen produced from coal in this way can be burned in a supercritical boiler of the type described above; it can be burned in a gas-turbine-based power plant; or it can be used in an integrated gasification combined cycle (IGCC) power plant, which closely couples the gasification and power generation processes. The latter is the configuration that the US Department of Energy (DOE) chose in 2003 for its FutureGen project, aimed at demonstrating a zero-emissions coal-fired power plant by 2012. (In 2008 the FutureGen project was modified, with the DOE proposing to sponsor carbon dioxide capture and sequestration rather than the whole plant.) Oxy-fuel combustion is, in reality, another form of post-combustion capture of carbon dioxide, but it takes a radical approach in order to eliminate the problem of separating carbon dioxide from nitrogen. In an oxy-fuel plant, combustion of the coal takes place in pure oxygen, so the only carbon combustion product is carbon dioxide and there is no separation problem. (In practice, the combustion of coal produces some water vapour too, but this is easily separated from the carbon dioxide by condensation.) However, such a plant requires an air separation plant to provide the source of pure oxygen. The combustion of coal in oxygen can reach temperatures of 3500 °C, far too high for the best of modern materials; in order to reduce the temperature, part of the carbon dioxide from the exhaust is recycled, cooling the combustion zone. The technology required to implement oxy-fuel combustion is still in the early stages of development and has yet to be demonstrated in a large-scale plant. However, it could potentially be retrofitted to existing and future supercritical power plants.
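When comparing capture options it helps to separate CO2 captured from CO2 avoided: the energy penalty means more fuel is burnt per kWh delivered, so the net benefit is smaller than the capture fraction. The sketch below combines the 80–95 per cent capture range with the approximately 25 per cent overall efficiency reduction quoted above, read here, as an assumption of ours, as a relative derating of plant efficiency.

```python
def fraction_avoided(capture_fraction, relative_efficiency_loss):
    """Net CO2 avoided per kWh delivered, relative to the same plant without capture.

    With capture, efficiency falls by the given relative factor, so the fuel
    burnt (and gross CO2 produced) per kWh rises by 1 / (1 - loss); only the
    uncaptured share of that larger gross amount is actually emitted.
    """
    gross_increase = 1.0 / (1.0 - relative_efficiency_loss)
    emitted = (1.0 - capture_fraction) * gross_increase
    return 1.0 - emitted

for cap in (0.80, 0.95):
    print(f"{cap:.0%} captured -> {fraction_avoided(cap, 0.25):.0%} avoided")
# 80% captured -> 73% avoided; 95% captured -> 93% avoided
```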
Table 5.3 Power plant efficiency with carbon capture

Type of plant                                                      Efficiency (%)
Subcritical pulverized coal with post-combustion capture           25.1
Supercritical pulverized coal with post-combustion capture         29.3
Ultra-supercritical pulverized coal with post-combustion capture   34.1
Supercritical pulverized coal, oxy-fuel combustion                 30.5
Integrated gasification combined cycle                             31.2

Source: Ansolabehere et al. (2007).
Which of these different approaches to carbon capture results in the most efficient power plant? Table 5.3 compares the efficiencies of the main types. These figures suggest that an ultra-supercritical pulverized coal power plant with carbon capture is the most efficient plant type. This reflects the higher underlying efficiency of this type of plant compared with the supercritical and subcritical plants, and makes these plants excellent candidates for future retrofitting. The figure for oxy-fuel combustion should be treated with some caution, as this technique has yet to be tested fully; it may yet offer higher efficiency when coupled with an ultra-supercritical boiler.

5.3.3 Carbon sequestration

If carbon capture is to provide a successful strategy for removing carbon dioxide from coal-burning power plants, then a means of storing the resultant gas must be found. The amount of carbon dioxide produced by power plants across the globe is approximately 10 billion tonnes each year (Science Daily 2007). The USA alone produces 1.4 billion tonnes, equivalent to approximately one-third of the natural gas piped around the USA annually (Science Daily 2007). As with carbon capture, the technology to transport and store carbon dioxide is already available and has been tested in three pilot-scale projects: injection of carbon dioxide from a coal gasification plant in the USA into an oilfield at Weyburn, Canada, where it is used to force additional oil from the field (enhanced oil recovery); injection into a sandstone reservoir in the Saharan region of Algeria; and sequestration of carbon dioxide separated from natural gas from the Norwegian Sleipner gas field into a brine aquifer under the North Sea, this last a response to Norway's carbon tax. None of these projects stores the quantity of carbon dioxide that would be produced by a base-load coal-fired power plant, and the technology has yet to be proved at that scale. If a significant proportion of the carbon dioxide from coal combustion is to be sequestered, then very large stores will have to be identified. The most promising
and cheapest of these available today are exhausted oil and gas wells. Such sites will provide cheap and convenient places to test the viability of carbon dioxide sequestration, but they cannot, alone, provide anything like the capacity needed if sequestration is to have a major impact on global emissions (Ansolabehere et al. 2007). Fortunately, there are many other types of geological formation and these, between them, should be able to accommodate all the carbon dioxide that is likely to be sequestered over the next 50 years. Sequestration, however, is only part of the problem. The sequestered carbon dioxide must remain isolated indefinitely if the capture and storage strategy is to be effective. This means that all sequestration sites will have to be monitored for decades, probably longer. While these sites will almost certainly be operated initially by private-sector companies, the responsibility for their security is likely, eventually, to fall to national governments. It is important that this is understood from the outset if the strategy is to be effective.

5.3.4 Costs

While the efficiency gains with supercritical and ultra-supercritical coal-fired plants tend to balance the increased cost of the high-technology boiler systems, the introduction of capture and sequestration will have a significant effect on the cost of electricity generated from coal. The loss of efficiency alone, even without the costs associated with running a capture plant and the transportation and storage of carbon dioxide, will push prices up by approximately 25 per cent. The capital cost of a new subcritical coal-fired plant without carbon capture is $1323 per kW, according to a study carried out jointly by the US National Energy Technology Laboratory and Parsons (Ciferno et al. 2006); the same plant with carbon capture would cost $2358 per kW. A new supercritical coal plant would cost $1355 per kW without capture and $2365 per kW with it. The same study found that a new IGCC plant would cost $1557 per kW without capture and $1950 per kW with it. Meanwhile, MIT reported the cost of a new ultra-supercritical coal-fired plant to be $1360 per kW, rising to $2090 per kW with capture, and put the cost of an oxy-fuel plant at $1900 per kW (Ansolabehere et al. 2007). Table 5.4 presents estimates from Ansolabehere et al. (2007) for the cost of electricity from these various power plant configurations. Figures from a further NETL–Parsons study, while not quoted here, are broadly consistent with them for the configurations studied. For the plants without capture, the cost of electricity from the conventional coal-burning plants varies from $0.0478 to $0.0496 per kWh, a spread of just under 4 per cent, which is probably insignificant. The IGCC plant, with an estimated generating cost of $0.0513 per kWh, is somewhat higher. When carbon capture is
Table 5.4 Cost of electricity from different coal-fired power plant configurations

Type of plant          Without carbon capture ($ per kWh)   With carbon capture ($ per kWh)
Subcritical            0.0484                               0.0816
Supercritical          0.0478                               0.0769
Ultra-supercritical    0.0496                               0.0734
IGCC                   0.0513                               0.0652
Oxy-fuel combustion    –                                    0.0698

Source: Ansolabehere et al. (2007).
added, however, the IGCC plant becomes the cheapest generator at $0.0652 per kWh, an increase of 27 per cent over its cost without capture. Oxy-fuel combustion also looks competitive on this estimate, at $0.0698 per kWh, while of the conventional coal-burning plants with carbon capture the ultra-supercritical plant offers the most cost-effective means of generation, producing electricity for $0.0734 per kWh, 48 per cent higher than its cost without capture. The most expensive configuration with capture is the subcritical plant, at $0.0816 per kWh, 69 per cent more than the same plant without capture. The cost of transportation and storage must be added to the costs in Table 5.4; this will depend on the type of storage and the distance of the storage site from the power plant, with typical estimates putting it at between $1 and $8 per tCO2 (Breeze 2006). When evaluating these figures with a view to forming future strategy, there are two further points to bear in mind. First, while the IGCC plant appears to offer the cheapest source of electricity with carbon capture, this type of plant is still relatively new and so far its reliability has proved lower than that of the more conventional boiler-based plants (Ansolabehere et al. 2007). Second, supercritical and ultra-supercritical plants built today will be able to be retrofitted with carbon capture technologies at a later date. This makes less sense for an IGCC plant because of the tight integration between the different plant components required at the time of construction, and it could make the economics of the ultra-supercritical plant the most favourable. These figures can be placed in perspective by comparing them with the estimated cost of power from various other technologies. The cost of electricity from a new nuclear power plant is likely to be between $0.030 and $0.067 per kWh; a new large hydropower plant can generate for $0.040–0.080 per kWh; and an onshore wind plant might be expected to produce power for between $0.060 and $0.090 per kWh (Breeze 2007b).
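The percentage increases quoted above follow directly from Table 5.4, and dividing the extra cost by an assumed quantity of CO2 avoided turns them into a rough cost per tonne. The 0.7 tCO2 avoided per MWh used below is our illustrative assumption, not a figure from this chapter.

```python
# ($ per kWh without capture, $ per kWh with capture), from Table 5.4
PLANTS = {
    "subcritical": (0.0484, 0.0816),
    "supercritical": (0.0478, 0.0769),
    "ultra-supercritical": (0.0496, 0.0734),
    "IGCC": (0.0513, 0.0652),
}
CO2_AVOIDED_T_PER_MWH = 0.7   # illustrative; depends on plant and capture rate

for plant, (base, cap) in PLANTS.items():
    increase_pct = (cap / base - 1.0) * 100.0
    cost_per_t = (cap - base) * 1000.0 / CO2_AVOIDED_T_PER_MWH
    print(f"{plant}: +{increase_pct:.0f}% on the kWh price, ~${cost_per_t:.0f}/tCO2 avoided")
# subcritical +69%, supercritical +61%, ultra-supercritical +48%, IGCC +27%,
# reproducing the 27, 48 and 69 per cent figures quoted in the text
```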
All these figures suggest that alternatives to coal-fired power generation with carbon capture might be more cost-effective. But, as has already been stressed, these alternatives cannot replace coal-fired generation in the short or medium term. So, while a shift to coal-fired generation with carbon capture may well create future economic opportunities for a range of other technologies, the shift is still necessary in the interests of the planet. There is one final question: how is this shift to be achieved? Two things are required. The first is investment, primarily from Western governments, to develop the technologies for carbon capture and storage to a state where they can be deployed economically on a wide scale. The second is the introduction of global carbon emission limits, with cost penalties for emitting carbon that are sufficiently stringent to persuade generators across the globe to build plants based on these technologies. Both are within grasp but, as the arguments at the UN conference in Bali in December 2007 showed, there are some hard bargains to be struck if consensus is to be reached. If, indeed, such a consensus can be achieved, then there is no reason why carbon emissions from coal combustion should not fall significantly by 2050, even while the amount of coal burnt continues to rise. That may not be the solution sought by many environmentalists, but it does provide a realistic route towards a carbon-free energy economy. If it can prevent catastrophic global warming, then there is no obvious reason why it should not be pursued.

References

Ansolabehere, S. et al. 2007 The Future of Coal: Options for a Carbon-Constrained World – An Interdisciplinary MIT Study. Cambridge, MA: Massachusetts Institute of Technology.
Breeze, P. A. 2006 The Future of Carbon Sequestration: Key Drivers and Resistors, Costs and Technologies. London: Business Insights.
Breeze, P. A. 2007a The Future of Nuclear Power: Growth Opportunities, Market Risk and the Impact of Future Technologies. London: Business Insights.
Breeze, P. A. 2007b The Future of Clean Coal: The Impact of New Technologies and Legislation on the Economics of Coal-Fired Power Generation. London: Business Insights.
Ciferno, J., Klara, J., Schoff, R. & Capicotto, P. 2006 Cost and performance comparison of fossil energy power plants. In 5th Ann. Conf. on Carbon Capture and Sequestration, Alexandria, VA, May 2006.
IEA 2006 World Energy Outlook. Paris: International Energy Agency.
Science Daily 2007 Carbon dioxide emissions from power plants rated worldwide. See www.sciencedaily.com.
US Energy Information Administration 2007 International Energy Outlook. Washington, DC: EIA.
Part II Carbon dioxide reduction
6 Capturing CO2 from the atmosphere: rationale and process design considerations

David W. Keith, Kenton Heidel and Robert Cherry
In this chapter, our aim is to provide an overview of air capture technologies that focuses on three broad topics. First, we provide an economic and technical rationale for considering direct capture of CO2 from ambient air. Second, we describe some of the more important constraints and trade-offs involved in the design of air capture systems from the standpoint of chemical engineering and physics. Third, we provide an overview of a particular air capture technology that we are developing, which combines a contactor using a high-molarity sodium hydroxide solution with a titanate-cycle caustic recovery process.
6.1 Rationale: physical carbon arbitrage

Capturing CO2 from the air at a concentration of 0.04% may seem absurd given that, after roughly two decades of research and development, there are still no full-scale commercial power plants with CO2 capture – for which the exhaust gas CO2 concentrations are greater than 10% – and only a handful of large commercial pilots appear to have financing in place to move ahead. Yet the basic thermodynamics and physics suggest that capturing CO2 from the air need not be much harder than post-combustion capture from a power plant (Section 6.2). It nevertheless seems clear that if an air capture plant and a post-combustion capture facility at a large electric power plant are designed and operated under the same economic conditions, that is with the same costs for construction and energy
and the same cost of capital, then the cost of air capture will always be significantly higher than the cost of post-combustion capture. This might seem a decisive argument against air capture, but it implicitly assumes that (a) air capture is competing against capture from power plants, and (b) both kinds of capture plants will be built in the same location with the same capital and operating costs. Both these assumptions are false.

First, air capture is not likely to compete against capture from large fixed sources or power plants but rather against means of reducing emissions at mobile or small sources such as airplanes or small industrial facilities. These are carbon emissions which are hard to eliminate by process modification or fuel substitution and for which collection of the captured CO2 would be impractical. The cost of emissions reductions and sequestration in such places can easily exceed hundreds or even thousands of US dollars per tonne of CO2.

Second, the economic argument for air capture is all about arbitrage (Lackner, 2003; Zeman and Lackner, 2004; Keith et al., 2005). Assuming that air capture can be achieved at a modest premium above the cost of power plant capture, its defining advantages are that (a) it allows us to apply industrial economies of scale to a myriad of small and hard-to-control CO2 emitters such as aircraft and home furnaces; (b) it enables the (admittedly partial) decoupling of carbon capture from the energy infrastructure, easing the constraints that arise when new energy technologies must be integrated into the existing infrastructures; and (c) it provides the (partial) freedom to build a capture plant where it is cheapest or near the best sequestration sites.

An economic carbon market allows efficient allocation of the cost of emitting carbon to the atmosphere. Compared to command-and-control regulation, the flexibility of a carbon market allows greater economic efficiency. An ideal carbon market allows carbon mitigation to follow the global supply curve, starting with the least expensive options and moving upwards towards the most expensive. In combination with the carbon market, air capture allows carbon to be physically removed in the most cost-effective circumstances, eliminating – in theory – the necessity for controlling the most expensive sources (such as aircraft) even if it is necessary to reduce net carbon emissions to zero. We call this attribute of air capture physical carbon arbitrage, distinct from the arbitrage achievable by carbon markets.

Consider some of the driving cost differences:

- The capital cost of building the same large energy infrastructure varies by factors of two to four between low-cost locations like China and expensive places like Alberta, Canada.
- The cost of energy needed to drive the capture process varies enormously between, for example, commercial natural gas in Europe and stranded gas in a remote location or coal.
- The cost of CO2 disposal ranges from essentially infinite in locations far from adequate reservoirs or without regulatory regimes that allow storage, to negative costs of order 25–50 $/tCO2 if CO2 is delivered to large oilfields suitable for enhanced oil recovery.
- While the design of air capture facilities will necessarily vary somewhat depending on local factors, it seems likely that designs could be substantially more standardized and therefore less expensive than is typical for large energy infrastructure. This same argument applies to the individual equipment items that might be used in a standardized process.
- Free-standing capture systems can be made as large as desired and do not have to match the CO2 output of any particular facility. This is especially important for relatively small CO2 sources such as small factories. Based on the rule of thumb that process plant capital costs scale as roughly the six-tenths exponent of capacity, building a single large plant of 100 units capacity costs about 16% as much as 100 small plants of unit capacity (see the sketch following this list).
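The six-tenths arithmetic in the final point is easily made concrete; a minimal sketch (the 0.6 exponent is the rule of thumb cited above, applied outside any specific plant design):

```python
# Six-tenths rule: process-plant capital cost ~ capacity ** 0.6.
def relative_capital_cost(capacity_units, exponent=0.6):
    """Capital cost relative to a plant of unit capacity."""
    return capacity_units ** exponent

one_large  = relative_capital_cost(100)       # one plant, 100 units of capacity
many_small = 100 * relative_capital_cost(1)   # 100 plants, 1 unit each

print(f"one large plant : {one_large:.1f}")               # ~15.8
print(f"100 small plants: {many_small:.1f}")              # 100.0
print(f"cost ratio      : {one_large / many_small:.0%}")  # ~16%
```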
There is also a less tangible benefit of air capture. Construction of air capture plants in remote or underdeveloped areas can be a form of economic development, creating long-lasting, skilled jobs. If such development were desired for sociopolitical reasons, air capture provides an environmentally valuable option.

Large-scale technologies for capturing CO2 directly from the air would fill a crucial hole in the toolbox of carbon-climate technologies. Air capture breaks the link between emission source and capture technology; it enables carbon-neutral hydrocarbon fuels (CNHCs, see Section 6.6 and Chapter 7) made from the captured CO2; and, in the long run, it enables negative emissions as a tool to manage global climate risks.
6.2 Physical and thermodynamic constraints

While any individual air capture technology will face a host of engineering challenges, there are two fundamental factors that make air capture more difficult than conventional post-combustion CO2 capture processes: first, the energy and materials cost of moving air through an absorber, and second, the thermodynamic barrier due to the low concentration of CO2 in air. In this section we describe the physics that constrains an idealized air capture system with respect to these two factors.

There is no lower bound to the energy required to move air through an absorber. The energy required to move air can be made arbitrarily low if the speed of air through the absorber approaches zero. However, there is a strong trade-off between this energy cost and the capital cost of the absorber. As the flow velocity approaches zero, the rate of capture per unit absorber surface will also approach zero; so while energy costs approach zero, the amortized cost of capital will approach infinity. This is because each unit of absorber structure captures CO2 at a rate that approaches zero yet still has a finite cost for amortizing the capital required for its construction
along with the cost of maintenance to keep it operational. Any practical design must balance the cost of energy required to drive air through the absorber against the cost of capital and maintenance.

If we make a reasonable – though not universally applicable – assumption that the flow through the CO2-absorbing structure is laminar, then the energy required to drive air through the absorber and the capture rate are linked by the fact that – for laminar flow – the transport of both CO2 and momentum are diffusive processes that occur in a fixed ratio. Neglecting factors of order unity that depend on the specific geometry, we can compute the energy required as compression work to capture a unit of CO2 in a laminar-flow absorber made from a substance that is sufficiently absorptive that the uptake rate is limited by CO2 transport in air (air-side limited). Under these assumptions, the specific energy per unit CO2, E, is

$$E = \frac{1}{r\,\rho_{\mathrm{CO}_2}}\,\frac{\nu}{D}\,\rho V^2,$$
where air has density ρ, velocity V and kinematic viscosity ν; and CO2 has diffusion constant D, density ρ_CO2 and mole fraction r. For air at standard conditions with 400 ppm CO2 at an air velocity of 10 m/s, the minimum energy is 0.15 GJ/tCO2.

This result can be obtained in a physically intuitive way by thinking about a parcel of air that moves through a structure coated with a perfect CO2-absorbing material. As the parcel moves, gas molecules contact the surface by diffusion, transferring momentum to the surface and losing CO2. When the parcel has travelled far enough that most of the CO2 has been absorbed, it will also have transferred most of its initial momentum to the surface, which would be sufficient to bring the parcel to a stop if there were no pressure gradient sustaining the flow. The minimum pressure drop needed to capture a minor constituent gas in a perfect laminar absorber is therefore, to order unity, the stagnation pressure at the flow velocity, that is, $p_{\min} = \frac{1}{2}\rho V^2$, scaled by the ratio of the relative concentrations of momentum and CO2 and their transport coefficients.

The thermodynamic minimum energy required to separate CO2 from the air is given by the free energy of mixing,

$$RT \ln\frac{p}{p_0},$$

where p is the final pressure of pure CO2, p0 is the initial partial pressure, R is the ideal gas constant and T is the working temperature. (Note that this formula ignores the change of free energy of the air when CO2 is extracted, a ∼1% correction.) At a 20 °C operating temperature, the minimum energy required to capture CO2 from the atmosphere at 400 ppm and deliver it at one atmosphere is 0.43 GJ/tCO2.
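Both minima are straightforward to evaluate; in the sketch below the property values (air density and viscosity, CO2 diffusivity and density) are standard handbook numbers rather than values stated in the text:

```python
import math

# Laminar-flow (air-side-limited) minimum: E = (1/(r*rho_CO2)) * (nu/D) * rho * V**2
rho     = 1.2      # air density, kg/m^3 (assumed standard value)
V       = 10.0     # air velocity, m/s
nu      = 1.5e-5   # kinematic viscosity of air, m^2/s (handbook value)
D       = 1.6e-5   # diffusivity of CO2 in air, m^2/s (handbook value)
r       = 400e-6   # CO2 mole fraction
rho_co2 = 1.83     # density of pure CO2 at 20 C, 1 atm, kg/m^3

E_flow = (1.0 / (r * rho_co2)) * (nu / D) * rho * V**2  # J per kg of CO2
print(f"laminar-flow minimum: {E_flow/1e6:.2f} GJ/tCO2")  # ~0.15 (J/kg / 1e6 = GJ/t)

# Thermodynamic minimum: free energy of mixing, R*T*ln(p/p0), per mole of CO2
R, T, M = 8.314, 293.15, 0.044   # J/(mol K); 20 C; kg CO2 per mol
E_mix = R * T * math.log(1.0 / 400e-6) / M               # J per kg of CO2
print(f"separation minimum  : {E_mix/1e6:.2f} GJ/tCO2")  # ~0.43
```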
Now, consider the comparison between capturing CO2 from the air and from the exhaust stream of a coal-fired power plant, assuming that the CO2 is to be delivered in compressed form suitable for pipeline transport at a pressure of 150 bar. A process which captures CO2 from the air can, in principle, trade off the cost of scrubbing a larger fraction of CO2 from the air against the cost of moving additional air if the fraction captured is smaller. In practice, the fraction captured will depend on a complex optimization between capital and operating costs. Suppose that half of the CO2 is captured, so that CO2 must be removed from air at an average concentration of about 300 ppm and, for the last molecules captured, a final concentration of 200 ppm. Assuming the worst-case 200 ppm requirement for all the capture, the minimum energy to go from 200 ppm to 1 bar is 0.47 GJ/tCO2 and the minimum energy cost of compressing CO2 from 1 to 150 bar is 0.28 GJ/tCO2, for a total of 0.75 GJ/tCO2. Most designs for post-combustion capture from power plants assume that at least 90% of the CO2 must be scrubbed from the exhaust gases, with a representative concentration of 15% CO2. Again, what matters is the minimum energy to capture the final bit of CO2 from exhaust gases at 1.5%, or 15 000 ppm, for which the minimum energy is 0.23 GJ/tCO2; counting the same compression cost to 1 bar, the total minimum energy to deliver it at 150 bar is 0.51 GJ/tCO2.

Comparing CO2 capture from air and power plant exhausts, the intrinsic thermodynamic penalty due to the lower starting concentration of CO2 in air is therefore about a factor of 2 if the product is a 1 bar pure CO2 stream and a factor of about 1.5 if the product is pipeline-pressure CO2. These ratios move towards a factor of 1.0 as the percentage capture of CO2 increases; however, the energy cost per tonne of CO2 increases simultaneously. The primary reason to move in this direction is the possibility of building and operating smaller air-contacting equipment, because not as much air must be treated to capture the target tonnage of CO2.

In practice, proposed designs for both air capture and post-combustion capture are a long way from thermodynamic efficiency limits. Aqueous amines, the most commonly considered method for post-combustion capture, require about 2–3 GJ/tCO2 of regeneration heat (IPCC, 2005; Rao et al., 2006), and the NaOH solutions which we are exploring for air capture have a thermodynamic minimum regeneration energy of 2.4 GJ/tCO2. The physical limits are, nevertheless, an important guide to the development of energy technologies (Keith et al., 2005):

These thermodynamic arguments do not, of course, prove that practical air capture systems can be realized, nor is the performance of air capture technologies likely to approach thermodynamic limits in the near future. The ultimate thermodynamic limits are nevertheless an important basis for suggesting that air capture can be achieved at comparatively low cost. From the liberation of pure metals from their oxides to the performance of internal combustion engines, electric motors and heat pumps, the historical record strongly supports the view that thermodynamic and other physical limits serve as an important guide to the long-run performance of energy technologies.
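The air-versus-flue-gas comparison above can be reproduced with the same free-energy expression, treating delivery at pipeline pressure as isothermal ideal-gas compression; a sketch under those idealizations:

```python
import math

R, T, M = 8.314, 293.15, 0.044   # J/(mol K); 20 C; kg CO2 per mol

def min_work_GJ_per_t(x_from, p_to_bar=1.0):
    """Ideal minimum work to bring CO2 from mole fraction x_from (at 1 bar
    total pressure) to a pure stream at p_to_bar, in GJ per tonne."""
    return R * T * math.log(p_to_bar / x_from) / M / 1e6

air  = min_work_GJ_per_t(200e-6)       # last molecules for 50% air capture
flue = min_work_GJ_per_t(0.015)        # last molecules from 1.5% exhaust
comp = min_work_GJ_per_t(1.0, 150.0)   # 1 bar -> 150 bar pipeline pressure

print(f"air : {air:.2f} + {comp:.2f} = {air + comp:.2f} GJ/tCO2")    # 0.47+0.28=0.75
print(f"flue: {flue:.2f} + {comp:.2f} = {flue + comp:.2f} GJ/tCO2")  # 0.23+0.28=0.51
print(f"penalty: {air/flue:.1f}x at 1 bar, {(air+comp)/(flue+comp):.1f}x at 150 bar")
```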
6.3 Process design considerations for the primary gas separation step

6.3.1 Physical separations

Physical separation processes are among the most widely used gas separation techniques in chemical engineering, but most of them can be ruled out almost immediately because of the burden of processing the non-CO2 components of air. Physical separations typically depend on temperature or pressure variations, yet the dilute concentration of CO2 makes direct application of these techniques impractical. Heating air by 1 °C or pressurizing it by 1000 Pa (1% of an atmosphere) both require approximately 1.5 GJ/tCO2, about three times the thermodynamic minimum energy required for separating CO2 from air to produce a 1-bar product; yet a temperature swing of 1 °C or a pressure swing of 1000 Pa is still grossly insufficient to drive common physical separation processes. This 1.5 GJ/tCO2 also corresponds to the net energy input if there is a very high 95% energy recovery on a still-modest 20 °C temperature swing or 20 kPa pressure swing.

Cryogenic separation

CO2 could be recovered by cooling air at 1 atmosphere pressure to the point that CO2 condenses as a solid. At a 400 ppm concentration, this requires an initial temperature near −160 °C and requires cooling not only the CO2 but also the accompanying mass of oxygen and nitrogen. Conceivably the air could be maintained above the 0.53 MPa (approximately 5 atm) triple-point pressure while being cooled so that the CO2 can be recovered as a liquid from the system, perhaps after distillation of a condensed nitrogen/oxygen/CO2 mixture. While this type of liquefied gas processing is established technology, cryogenic separations are expensive. Moreover, since the whole air mass must be cooled, an order-of-magnitude estimate for the cost of capturing CO2 may be derived from the cost of cryogenic O2 separation using the 500:1 ratio of O2:CO2 in ambient air, implying an energy cost for CO2 capture of many hundreds of GJ/tCO2.

Physisorption

Physisorption to a solid surface, for instance a molecular sieve, is currently used in the front end of air separation plants to remove CO2 and water to prevent them freezing out in the later cryogenic distillation. Adapting such a process for air capture would require overcoming significant problems such as preferential
adsorption of water over CO2. Moreover, both pressure-swing and temperature-swing adsorption are batch processes with inefficiencies in recovering the energy required to swing the beds and their contents through their operating cycles. The mechanics of moving extremely large quantities of air through packed beds of solids also presents a major design problem, since having large amounts of surface area to improve mass transfer rates also means large areas for momentum loss, i.e. pressure drop.
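Returning to the estimate that opened this subsection, the ~1.5 GJ/tCO2 penalty for a 1 °C or 1000 Pa swing applied to the whole air stream follows directly from the air-to-CO2 ratio at 400 ppm; a sketch using standard property values (the heat capacity and molar masses are assumptions, not from the text):

```python
# Energy to heat or pressurize the whole air stream, per tonne of CO2 carried.
x_co2, M_co2, M_air = 400e-6, 0.044, 0.029   # mole fraction; kg/mol
cp_air = 1005.0                               # J/(kg K), assumed value
R, T, p = 8.314, 293.0, 101_325.0             # gas constant; K; Pa

n_co2 = 1000.0 / M_co2          # moles of CO2 in one tonne
n_air = n_co2 / x_co2           # moles of air that carry it
m_air = n_air * M_air           # mass of that air, kg

heat_1C   = m_air * cp_air * 1.0              # J for a 1 C temperature swing
pump_1kPa = (n_air * R * T / p) * 1000.0      # J ~ air volume x 1000 Pa swing

print(f"1 C swing   : {heat_1C/1e9:.1f} GJ/tCO2")    # ~1.7
print(f"1 kPa swing : {pump_1kPa/1e9:.1f} GJ/tCO2")  # ~1.4
```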
Membranes

Membranes that separate CO2 on the basis of its molecular size or its solubility in the polymeric matrix are under active development for application to flue gases (IPCC, 2005). Using them to separate CO2 from air, where the driving force for CO2 is at most 40 Pa, seems implausible given the relatively low molecular fluxes through membranes. Increasing the CO2 driving force by pressurizing the air feed is not practical because of the capital costs of compression and the energy losses in recovering that compression work. Operating the downstream (CO2 collection) side of the membrane at vacuum conditions does not increase the 40 Pa driving force, and therefore the flux, because that value already assumes zero pressure on the collection side of the membrane. Alternatively, the membranes could pass oxygen, nitrogen and argon while leaving concentrated CO2 behind. This approach requires a tremendous membrane area because of the quantity of gases that must be transmitted, the last fraction of which has little driving force because of its low residual concentration in the CO2.
Gas centrifuges

Gas centrifuges suffer from low throughputs and relatively low separation per stage, problems which are worsened by the complexity of the equipment in each separation stage. Further, advances in design and operation of these systems are subject to government classification and export-control limitations because of their potential use to separate and enrich nuclear materials.
Physisorption into a liquid

Physisorption into a liquid is the basis for processes that absorb CO2 into a simple solvent such as cold methanol. Applying such processes to air capture incurs penalties from incomplete energy recovery while cooling and reheating the air stream, and from the cost of any volatility loss of the solvent to the extremely large flow of air through the system.
6.3.2 Chemical separations

While other methods are no doubt possible, in practice most existing air capture development is focused on two broad separation methods: chemisorption in aqueous solutions and chemisorption on solid surfaces.

Aqueous absorption

Separations that take advantage of CO2's acidity in solution are the current standard for industrial processing. Literally dozens of such processes and solvents have been developed for the removal of CO2 and H2S (collectively called acid gases) from natural gas, either separately or as a mixed gas stream. These processes have more recently been used for treating synthesis gas mixtures from gasification of coal, natural gas or heavy petroleum fractions (IPCC, 2005). These processes differ functionally from each other in their selectivity for CO2 against H2S; their ability to remove these gases to very low (ppm or lower) levels; their sensitivity to other gases such as ammonia; their maintenance and operating costs; and their trade-off of capital and energy costs.

The primary barrier to using such processes is that the kinetics of CO2 dissolution into water are limited by the initial reaction to form carbonic acid (CO2 + H2O → H2CO3). While this reaction is sufficiently fast to make aqueous systems cost-competitive for capture of CO2 from power plant exhaust streams, it is too slow at the much lower concentrations in ambient air. Two methods are being explored to get around the kinetic limitation in aqueous systems.

One option is to accelerate the reaction using a catalyst. The naturally occurring enzyme carbonic anhydrase can accelerate the CO2 + H2O reaction by a factor of ∼10⁹ and facilitates respiration in living cells by catalysing the reverse reaction (all catalysts speed their target reactions in both directions). Using an enzyme as a catalyst is challenging because, to name only a few issues, enzymes operate only in a narrow pH and temperature range and, as organic compounds, they may be decomposed by micro-organisms (Bao and Trachtenberg, 2006). Roger Aines and collaborators at Lawrence Livermore National Laboratory are developing synthetic catalysts that would be somewhat less effective in accelerating the reaction than carbonic anhydrase but which could be tailored for the air-capture application (Aines and Friedmann, 2008).

An alternative to catalysis is to use aqueous solutions with very high pH. For example, our group has focused on using NaOH solutions with a concentration between 1 and 6 mol/L and pH near 13. In these solutions the kinetics are dominated by the direct reaction with the hydroxide ion (CO2 + OH− → HCO3−), which enables mass fluxes of ∼3 gCO2/hr-m² for the applicable case in which mass transfer is liquid-side limited. The advantages of strong bases are that (a) they
use simple inorganic chemistry which is insensitive to contamination, (b) vapour pressures are low so evaporative loss of the base to the atmosphere is minimal, (c) their high molarity enables low liquid-side fluid pumping work, (d) at sufficiently high molarity evaporative water loss can be eliminated, and (e) the technique does not depend on the development of novel solids or catalysts. The primary disadvantage is the difficulty of regenerating the resulting carbonate solution back to hydroxide.

Recovery of NaOH from Na2CO3 is closely related to 'caustic recovery', one of the oldest processes in industrial chemistry. In Kraft pulping for paper making, wood is digested using sodium hydroxide to liberate cellulose and produce pulp. The remaining solution, so-called 'black liquor', consists mainly of other organic material originating from wood (e.g. lignin) along with sodium carbonate. The standard process for recovering NaOH from Na2CO3 depends on a calcium cycle, a process that has been used on a continuous basis for more than 80 years. Several studies have investigated adaptation of this process to recovery of NaOH for air capture (Zeman and Lackner, 2004; Keith et al., 2005; Baciocchi et al., 2006); alternative caustic recovery methods include the titanate cycle (Mahmoudkhani and Keith, 2009).
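The quoted flux of ∼3 gCO2/hr-m² gives a feel for the wetted packing area a large plant implies; a rough sizing sketch for the nominal 1 MtCO2/yr facility discussed in Section 6.4.1 (it assumes the liquid-side-limited flux applies uniformly and ignores availability, so it is illustrative only):

```python
# Wetted packing area implied by a liquid-side-limited flux of ~3 g CO2/(hr m^2).
flux_g_per_hr_m2 = 3.0
target_tonnes_per_year = 1_000_000      # nominal plant scale from Section 6.4.1
hours_per_year = 8766

area_m2 = target_tonnes_per_year * 1e6 / (flux_g_per_hr_m2 * hours_per_year)
print(f"required packing surface: {area_m2:.1e} m^2 (~{area_m2/1e6:.0f} km^2)")
# ~3.8e7 m^2 -- hence the appeal of structured packings, which supply
# hundreds of m^2 of surface per m^3 of packed volume.
```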
Sorption into solids

Using an alkaline solvent, typically an organic amine compound, allows good selectivity and solvent loading without excessive regeneration costs. However, for air capture any evaporative loss of solvent to the air stream is a significant loss compared to the amount of CO2 captured and will make the overall economics untenable. The same type of chemistry used in aqueous absorption processes can be adapted to solid sorbent phases, which will not evaporate. A high-surface-area material can be chemically modified so that it reacts with CO2 and can remove even low concentrations from air. The challenge then is to provide a large surface area for CO2 capture without having a large mass of solid support which must be heated to drive off the bound CO2.

Two solid sorbent systems are being actively developed for air capture. Both processes offer the advantage of low regeneration energy. Klaus Lackner and collaborators at Global Research Technologies (GRT) are developing an ion exchange membrane which captures CO2 using a carbonate-to-bicarbonate swing driven by changes in humidity. A significant challenge is that the partial pressure of CO2 achieved during the regeneration phase is only about 0.1 bar, so to obtain pure CO2 suitable for sequestration it is necessary either to purge all the air from the sorbent beds (including the internal pores of the sorbent material) before performing the regeneration step in a vacuum, or to regenerate in air and then capture the CO2 from air at a 10% concentration. Large-scale vacuum operations, especially
repeated batch processes, present a variety of engineering issues with sealing and with the generally low energy efficiency of vacuum pumping systems.

An alternative solid sorbent system is being developed by Peter Eisenberger and Graciela Chichilnisky of Global Thermostat, using solid amines on a mesoporous silica substrate similar to those that are being developed for CO2 capture from power plants (Gray et al., 2007). Capture is accomplished using temperature-swing regeneration driven by low-grade heat (∼100 °C or less).

In general, these solid sorption methods offer the potential to achieve low capture energies with minimal water loss. Perhaps the central challenge in commercializing them is the need to build a solid surface with very high surface area at low manufacturing cost while simultaneously achieving long service lifetimes when operating in the free atmosphere, which will inevitably be contaminated with various trace chemicals and windblown dust.

6.3.3 Energy integration and energy supply

In general, a direct air-capture process will require electrical power to drive systems such as fans and pumps as well as thermal input to drive the regeneration process itself. For a stand-alone air-capture system, thermal power could in principle be supplied by a wide range of energy sources including solar thermal power, natural gas, coal or nuclear heat. Electrical power could be imported from the grid or (co-)generated on site depending on economics and the opportunities for heat integration.

Solar offers zero energy cost and no CO2 production, but at the expense of relatively high capital costs and intermittent operation which does not integrate well with the desired steady operation of chemical processes. Because the plant would not operate at full capacity for a large fraction of the time, the capital costs per tonne of CO2 collected rise. While it is possible to reach high temperatures (>1000 °C) in solar furnaces, most of the commercial development of solar thermal electricity is now focused on parabolic troughs, which produce lower-grade heat. Nikulshina et al. (2009) have explored the use of solar furnaces for CaCO3 (limestone) calcination as part of an air-capture process, and such solar kilns could also be applied to the titanate process we describe below (Section 6.5).

In general, natural gas combustion offers the simplest and lowest-capital-cost plant design at the expense of relatively high energy costs. Combustion gas turbines offer the possibility of efficient cogeneration of power and process heat. Because our air-capture approach needs substantial quantities of high-grade heat and we wish to minimize initial technical risk, all of our current design efforts focus on natural-gas-driven systems using cogeneration so that the plant has no significant external electricity demand. We do note, however, that large air-capture plants, like
any large chemical process, will operate for long periods at steady high rates; this situation is analogous to base-load electrical power production, where coal and nuclear systems have proven to be the most economical technologies. Where cost-effective, CO2 emissions from the natural gas combustion in the process are captured using either oxy-fuel combustion or post-combustion capture.

Coal offers low energy cost but requires integrated carbon capture. The simplest technical approach for supplying high-grade heat to a calcination process would be direct combustion of coal with the material to be calcined (e.g. CaCO3 or titanates). This process is widely used for lime production. The disadvantage is the management of the coal ash and the possibility that the ash interferes with process chemistry once the lime or titanates are cycled into the capture process. Alternatively, coal could be used in a gasification system to supply syngas or hydrogen to heat the air-capture process, with CO2 capture and storage for the CO2 emissions from gasification.

Nuclear heat offers high capital and low operating costs without generating additional carbon dioxide, but it has some disadvantages beyond the well-known issue of local public acceptance. One is the requirement to manage design standards and safety engineering in the integrated plant, because different engineering standards apply to chemical/thermal and to nuclear systems. This primarily affects the documentation and engineering analyses needed to license the nuclear plant for this integrated operation. A second disadvantage is the potential operational difficulty of running a nuclear plant closely coupled to a chemical facility. This concerns economic performance rather than safety performance. Almost all existing commercial nuclear plants have been used for electrical power production, some with modest cogeneration of heat for local district heating. In operating the first of any new technology such as nuclear-powered air capture, there is always the possibility of unexpected maintenance needs or poor performance.

6.4 Operational and environmental constraints

In this section we examine operational and environmental constraints. While many of the issues discussed would apply to any air capture technology, others are specific to the large-scale strong-base system we are developing.

6.4.1 Transport issues

In addition to the capture and recovery chemistry, there are problems in moving the great quantity of air which must be processed. Consider a facility that captures 1 MtCO2/yr. This value has been used in our work for the nominal full-scale plant, in part to match the scale of some unit operations in current industrial
practice, and because a large capacity is needed to make progress against total global emissions. Current commodity chemical plants for products such as ethylene/propylene, ammonia and methanol (as well as coal-fired power plants) are already being built at this scale.

Assuming 400 ppm CO2, 50% capture and 90% annual availability of the capture system, the system would have to process 500 million kg/hr of air, or 6.5 million m³/min, or a cubic kilometre of air at standard conditions in about 2.5 hours. This amount of air could be moved by either natural draft or forced draft using low-head fans. In either case, in the absence of an ambient wind of greater velocity than the flows into or out of the air capture plant, there would be a tendency for the air capture facility to recirculate low-CO2 effluent air to its intake. This condition sets the facility scale. Assuming a typical wind velocity of 5 m/s, about 20 000 m² of intake area – or a square 150 metres on a side – is needed to collect the necessary amount of air. It is plausible to imagine a facility comparable in size to an open-roof sports stadium or, more likely, a number of separate smaller air contacting units all feeding a CO2-rich sorbent liquid to a central CO2 recovery facility.

Restricting air-side pressure drops to the stagnation pressure ρV²/2 obtainable without external energy input corresponds to operation at a pressure drop in the range 50–150 Pa. We can imagine developing this pressure difference through a combination of near-stagnation pressure at the inlet and a reduced pressure in a venturi or aerofoil system at the exhaust point, or we may consider it to be an initial estimate of the head to be developed by whatever type of auxiliary (i.e. wind-augmenting) fan system might be used. While this pressure drop is low, it is in the range of other large industrial systems. Chemical plant distillation towers designed for vacuum conditions use low-pressure-drop packing to minimize the pressure and temperature at the bottom of the column and therefore the degradation of thermally sensitive chemicals. Evaporative cooling towers perform a mass- (and heat-) transfer function similar to a CO2 absorber and have been built at very large scales. Windmills demonstrate that several megawatts of useful work can be recovered from a moderate wind over a swept area similar to the intake area of the air-capture plant. This energy is available to move air through the contacting system and is probably best tapped as the original kinetic energy rather than as converted electricity. This value also suggests the magnitude of fan work that would be needed to maintain production on windless days.

The large amount of air to be handled also affects the amount of sorbent liquid which must be circulated and brought into contact with the air. If a conventional countercurrent contacting system were to be used, there would be a significant energy penalty for lifting to the top of that contactor an amount of liquid of a similar order of magnitude as the mass of air to be processed. That lifting work cannot be recovered because the liquid remains at one atmosphere pressure (i.e. does not develop any elevation head) as it falls freely down over a packing material which spreads it into thin layers and streams with a large surface area (note that the mixing of these layers as they flow moderates the decline in the absorption rate as the surface liquid becomes saturated with CO2). In our exploration of contactor operation we have found that continuous liquid circulation is not necessary, and that periodic pulsed addition of sodium hydroxide sorbent solution to the top of the contactor is sufficient (Section 6.5).
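These air-handling figures can be re-derived from the stated assumptions; the sketch below recovers the same order of magnitude (the quoted 500 million kg/hr is somewhat above this simple estimate, suggesting it carries additional design margin):

```python
# Air flow and intake area for a 1 MtCO2/yr plant at 400 ppm CO2,
# 50% capture per pass and 90% annual availability.
x_co2, M_co2, M_air, rho_air = 400e-6, 0.044, 0.029, 1.2
capture_frac, availability = 0.5, 0.9

kg_co2_per_kg_air = x_co2 * M_co2 / M_air          # ~6.1e-4
air_kg_per_yr = 1e9 / (kg_co2_per_kg_air * capture_frac * availability)

air_kg_per_hr  = air_kg_per_yr / 8766
air_m3_per_min = air_kg_per_hr / rho_air / 60
print(f"{air_kg_per_hr/1e6:.0f} million kg/hr, {air_m3_per_min/1e6:.1f} million m^3/min")
# ~420 million kg/hr and ~5.8 million m^3/min

wind = 5.0                                          # m/s, typical ambient wind
intake_area_m2 = air_m3_per_min / 60 / wind
print(f"intake area at {wind} m/s: {intake_area_m2:,.0f} m^2")   # ~19 000 m^2
```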
6.4.2 Operational issues: environmental

An air-capture system must not only perform well at the design conditions but must be designed to be robust to a variety of external influences. Perhaps the most obvious of these are the local atmospheric conditions. The contactor will take in enormous amounts of air. If it is raining or snowing, some amount of liquid water will come in with the air, and in a thunderstorm or tropical storm this amount could be substantial. Even when it is not raining the water balance is important. Except in the coincidence that the water fugacity in the caustic sorbent solution equals the partial pressure of water in the atmosphere, the system will either extract water vapour from the atmosphere or will evaporate water into the air stream. Both possible situations must be compensated for, since the trend might continue for several days or weeks.

Two types of freezing problems must be considered. Temperatures below 0 °C will freeze pure water, creating icing problems in the air intake system if it collects rain, snow or condensation. Colder temperatures run the risk of freezing the sorbent liquid or causing the dissolved caustic to precipitate if its saturation temperature is reached. As one easy solution, air-capture systems could be located only in semitropical or tropical areas.

Performance of the air-capture system will depend on atmospheric pressure, because this affects both the partial pressure of CO2 – and therefore the driving force for capture – and the density of air, and consequently the volume that must be moved to bring a certain amount of CO2 into the absorber. Normal fluctuations in atmospheric pressure and water vapour pressure (absolute humidity) each have a total range on the order of 3% of the pressure and do not pose a major problem, since these differences should be within the normal design conservatism. More important is the elevation at which the air-capture system is installed. At an example elevation of 1400 m, air pressure is about 85% of the value at sea level. This suggests that low-elevation siting is preferable, although that also corresponds in many places to greater population densities. Air density will also change with the local temperature. Depending on the location, seasonal absolute temperature changes can be up to about 20%, and the resulting density changes will affect plant performance throughout the year.
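The 85% figure for 1400 m follows from the isothermal barometric formula; a sketch with standard-atmosphere constants (assumed values, not from the text):

```python
import math

# Isothermal barometric formula: p(h)/p(0) = exp(-M g h / (R T)).
M, g, R, T = 0.029, 9.81, 8.314, 288.0   # kg/mol, m/s^2, J/(mol K), K

def pressure_ratio(height_m):
    return math.exp(-M * g * height_m / (R * T))

print(f"1400 m: {pressure_ratio(1400.0):.0%} of sea-level pressure")   # ~85%
```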
6.4.3 Operational issues: contaminants and emissions

A system designed to recover a component of air present at about 400 ppm, such as CO2, also has the potential to recover other species present at ppm levels. Such unintended species must either be innocuous in the system or they must be specifically considered in the design. If the basic process discharges only pure CO2, other purge streams must be added to remove problematic low-concentration air contaminants to prevent their continual accumulation in the process.

The chemistry of capturing the acid gas CO2 will naturally lead to capture of other acid gases such as SO2, NOx and, if present, H2S. Solid sulphates present as aerosols could also be captured. The build-up of these materials must be understood to prevent formation of sulphate or other precipitates. However, this incidental capture creates an opportunity to generate SOx or NOx removal credits which might conceivably contribute to the overall process economics.

The atmosphere, especially near the ground, contains suspended particulates. Collecting and purging inert solids or tar and soot particles from vehicle exhaust is not difficult. The bigger problem is likely to be materials somewhat soluble in caustic solutions, like silicate compounds, which might dissolve, accumulate and eventually precipitate elsewhere in the process. A related problem is capture of biological contamination such as wind-blown pollen, seeds, leaves, insects or birds. Unless screens remove these objects, their organic molecular constituents could be decomposed by the caustic sorbent and accumulate in the sorbent loop. One resulting concern is hydrolysis of lipid molecules (fats and oils) to form soap, which would create either foam or a solid scum build-up.

Finally, the air contactor might be expected to generate its own emissions issues. The reduced CO2 concentration in the effluent plume will hinder plant growth in the region downwind of the capture facility. This might stress natural vegetation, leading to ecological changes, and in the case of planted crops could create economic damages. Depending on the changes to the temperature and humidity of the air as it passes through the contactor, the effluent plume might be denser than the local atmosphere and have a tendency to remain near the ground. At a wind speed of 5 km/hr under stable conditions the plume recovers to 90% of upwind CO2 concentration within 2 km. Under unstable conditions or faster flow velocities recovery is much faster. The facility air effluent, especially at night when the atmosphere is cooler, might also form a large fog plume because of its water vapour content. This phenomenon is identical to what happens in conventional cooling towers but will be of greater scale. Its potential effect on visibility for automobile and aeroplane traffic, as well as for those individuals living or working in close proximity, must be considered. The extremely large volume of air can still lead to downwind problems even if concentrations of effluent emissions are quite low. Two that must be evaluated are the emission of aerosolized caustic droplets and the possible release of malodorous ammonia vapour generated by protein degradation in the high-pH sorbent solution.
6.5 An example

We are currently developing an air-capture process based around NaOH capture with a titanate-based hydroxide regeneration system (Keith et al., 2005; Stolaroff et al., 2008; Mahmoudkhani and Keith, 2009). This is a multistep chemical process with many components similar or identical to existing components used in current chemical processes. The first step in the process is a system that contacts a strong hydroxide solution with atmospheric air, referred to as the contactor. This device will accomplish its task using structured packing similar to the material found in packed towers that are used in many chemical processing plants, but with a few key differences. The first is scale; in our current conceptual design, for example, a 1 MtCO2/yr facility requires 13 contacting units, each 20 m tall × 200 m long. Second, unlike a traditional packing-based system, the contactor will be cross-flow (the liquid flowing perpendicular to the air) to reduce the maximum velocity to which the air must be accelerated and thus the energy needed to drive the air. Third, the contactor is operated in an intermittent wetting mode in which the packing is supplied with fresh sorbent for only a fraction of the total operating time, with sorbent hold-up allowing continued CO2 collection between these times.

Regarding the use of a commercial packing in cross-flow, some experimental data are shown in Table 6.1. All values in the table are scaled to 1 square metre of intake surface. The values are taken from experiments performed during the summer of 2008 and are representative of Sulzer 250X packing with a total thickness of 1.54 m.

Table 6.1 Power to drive the air at various velocities

Velocity (m/s)   Pressure drop (Pa)   Compression work (W)   Capture rate (tCO2/yr)   Power input (kWh/tCO2)
0.9              25                   23                     7.7                      24
1.2              37                   45                     9.1                      39
1.5              42                   63                     9.9                      50

Any change in the geometry or thickness of the packing will change the values. Note that the data seem to indicate that lower air velocity is less expensive to operate, but this does not take into account the capital cost of a capture system. When the capital cost is taken into account the optimal air velocity may be higher than shown here.
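A useful self-consistency check on Table 6.1: per square metre of intake, the compression work should be close to volumetric flow times pressure drop, i.e. V × Δp; a sketch:

```python
# Compression work per m^2 of intake should be roughly V * dp (flow x head).
rows = [  # (velocity m/s, pressure drop Pa, reported compression work W)
    (0.9, 25, 23),
    (1.2, 37, 45),
    (1.5, 42, 63),
]
for v, dp, reported in rows:
    print(f"V = {v} m/s: V*dp = {v * dp:.1f} W vs reported {reported} W")
# 22.5, 44.4 and 63.0 W -- in good agreement with the measured values.
```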
Figure 6.1 Flowsheet of a NaOH–titanate-based air-capture process. The kiln is depicted with a CO2 recycle loop which may or may not be used depending on the kiln configuration. The solid circle–arrow symbols show major points of heat addition or subtraction (Mahmoudkhani and Keith, 2009) (see also colour plate).
Table 6.2 Heat balance and exergy for a titanate-based NaOH recovery system. Note the large enthalpy change in the titanate compounds (Mahmoudkhani and Keith, 2009).

                                     Enthalpy change ΔH   Temperature range   Exergy change
                                     (kJ/mol-CO2)         (°C)                (kJ/mol-CO2)
Crystallizer
  Crystallization of Na2CO3·10H2O    −68.8                10                  1.7
Combined crystallizer/leacher unit
  Heating Na2CO3·10H2O                 8.8                10 → 31             1.9
  Dissolution of Na2CO3·10H2O         67.9                31                  −1.2
  Crystallization of Na2CO3           45.3                103                 −8.3
  Leaching reaction                   15.2                103                 −1.3
Heater
  Heating Na2CO3                     123.4                100 → 860           93.4
  Heating sodium trititanate         146.9                100 → 860           84.0
Reactor
  Decarbonization reaction            65                  860                 −33.8
Cooler
  Cooling CO2                        −40.7                860 → 25            −22.1
  Cooling sodium pentatitanate      −213                  860 → 100           −129.1
Total                                150
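As a bookkeeping aid, the enthalpy column of Table 6.2 sums to the stated total of 150 kJ/mol-CO2, and the two cooling steps show how much heat is available for recuperation against the heating steps; a sketch:

```python
# Enthalpy bookkeeping for the titanate-based NaOH recovery cycle (Table 6.2).
dH = {  # kJ per mol of CO2
    "crystallization of Na2CO3.10H2O": -68.8,
    "heating Na2CO3.10H2O":              8.8,
    "dissolution of Na2CO3.10H2O":      67.9,
    "crystallization of Na2CO3":        45.3,
    "leaching reaction":                15.2,
    "heating Na2CO3":                  123.4,
    "heating sodium trititanate":      146.9,
    "decarbonization reaction":         65.0,
    "cooling CO2":                     -40.7,
    "cooling sodium pentatitanate":   -213.0,
}
print(f"net enthalpy demand     : {sum(dH.values()):.1f} kJ/mol-CO2")   # 150.0
released = -(dH["cooling CO2"] + dH["cooling sodium pentatitanate"])
print(f"heat released on cooling: {released:.1f} kJ/mol-CO2")           # 253.7
# Recuperating this cooling duty against the heating steps is why heat
# recovery from solids is flagged below as critical to the economics.
```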
The other process steps are shown in Figure 6.1 and will not be explored in depth here, as they each have many analogues in current industrial practice. The thermodynamics of the idealized system are listed in Table 6.2.

A titanate-based hydroxide regeneration process has a number of advantages and a number of disadvantages when compared with the standard calcium-based process.

Advantages:
- Lower high-temperature heat requirement: 90 kJ/mol-CO2 vs. 179 kJ/mol-CO2 for a calcium process.
- More concentrated NaOH solutions can be generated: the calcium process is limited to one-sixth the output concentration of the titanate process.
- Elimination of calcium, which may cause fouling in the contactor.
Disadvantages:
- Contamination or degradation of titanate particles must be carefully controlled.
- Heat recovery from solids is very important, since large masses of titanate particles must be heated and cooled cyclically.
When comparing the two processes it is difficult to determine which method will cost less per tonne of captured CO2 because, although the titanate process has lower energy requirements, it has some added complexities. These will probably manifest themselves as extra capital cost and inefficiencies in heat integration, both of which increase the cost per tonne of captured CO2 . The two processes will probably be similar in cost and it will require more work to determine the preferred regeneration method for the air-capture system.
6.6 Discussion

It is technically possible to capture CO2 from air at industrial scale. Indeed, technologies for industrial air capture were commercialized in the 1950s for pretreating air prior to cryogenic air separation. The cost of air capture is, however, uncertain and disputed. Some have argued that the costs would be prohibitively high (Herzog, 2003) or that investing funds in research on air capture is a mistake because it diverts attention from more important areas (Mark Jacobson as quoted in Jones, 2008). In sharp contrast, others have argued that air capture might be comparatively inexpensive and that it could play a central role in managing carbon dioxide emissions (Pielke, 2009).

Our view is that air capture is simply another large chemical engineering technology. Its cost will depend on the technology employed, as well as the cost of materials, labour and energy. The economics of the process will determine its feasibility but will not be well defined until more work has been done on specific processes. As with other significant climate-related energy technologies, it will not be possible to determine the cost with precision by small-scale academic research. Instead, costs will only become clear through pilot-scale process development and through costing by contract engineering firms with relevant expertise.

In our view, such development is justified for three reasons. First, early estimates suggest that the CO2-abatement cost of air capture is less than that of other technologies that are receiving very large research and development investments. For example, the cost of cutting CO2 emissions by displacing carbon-intensive electricity production with roof-mounted solar photovoltaic panels can easily exceed 1000 $/tCO2. We are confident that a straightforward combination of existing process technologies could achieve air capture at costs under 1000 $/tCO2. Indeed, neither we nor others working in this area would be commercializing these approaches if we were not
able to convince investors that we could develop technologies to capture CO2 from air at costs many times less than 1000 $/tCO2.

Second, air capture offers a route to making carbon-neutral hydrocarbon fuels (CNHCs) for vehicles by using captured CO2 and clean energy sources to make synthetic fuels with desirable handling and combustion properties. Deep reductions in emissions from the transportation sector will require a change in vehicle fuel. Each of the three leading alternative fuel options – electricity, biofuels and hydrogen – faces technical and economic hurdles which preclude near-term, major reductions in transportation emissions by using these technologies. Carbon-neutral hydrocarbons represent a fourth, fundamentally different alternative: a method for converting primary energy from sources such as solar or nuclear power into high-energy-density vehicle fuels compatible with the current vehicle fleet. As stated in Zeman and Keith (Chapter 7, this volume):

We argue for the development of CNHC technologies because they offer an alternate path to carbon neutral transportation with important technical and managerial advantages. We do not claim that CNHCs are ready for large-scale deployments or that they will necessarily prove superior to the three leading alternatives. We do argue that they are promising enough to warrant research and development support on a par with efforts aimed at advancing the alternatives.
Finally, air capture allows negative global CO2 emissions. While the prospect of achieving negative global emissions is distant, it is important because it represents one of the few ways to remediate human impact on climate. Without the ability to take CO2 out of the air, the climate change arising from current emissions is essentially irreversible (Solomon et al., 2009).

References

Aines, R. and Friedmann, J. 2008 Enabling Cost Effective CO2 Capture Directly from the Atmosphere, Report no. LLNL-TR-405787. Livermore, CA: Lawrence Livermore National Laboratory.
Baciocchi, R., Storti, G. and Mazzotti, M. 2006 Process design and energy requirements for the capture of carbon dioxide from air. Chemical Engineering and Processing 45: 1047–1058.
Bao, L. and Trachtenberg, M. C. 2006 Facilitated transport of CO2 across a liquid membrane: comparing enzyme, amine, and alkaline. Journal of Membrane Science 280: 330–334.
Gray, M. L. et al. 2007 Performance of immobilized tertiary amine solid sorbents for the capture of carbon dioxide. International Journal of Greenhouse Gas Control 2: 3–8. (doi:10.1016/S1750-5836(07)00088-6)
Herzog, H. 2003 Assessing the Feasibility of Capturing CO2 from the Air, Technical Report no. 2003-002 WP. Cambridge, MA: MIT Laboratory for Energy and the Environment.
IPCC 2005 Carbon Dioxide Capture and Storage, Special Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
Jones, N. 2008 Climate crunch: sucking it up. Nature 458: 1094–1097.
Keith, D., Ha-Duong, M. and Stolaroff, J. 2005 Climate strategy with CO2 capture from the air. Climatic Change 74: 17–45.
Lackner, K. 2003 A guide to CO2 sequestration. Science 300: 1677–1678.
Mahmoudkhani, M. and Keith, D. W. 2009 Low-energy sodium hydroxide recovery for CO2 capture from atmospheric air: thermodynamic analysis. International Journal of Greenhouse Gas Control 3: 376–384. (doi:10.1016/j.ijggc.2009.02.003)
Nikulshina, V., Gebalda, C. and Steinfeld, A. 2009 CO2 capture from atmospheric air via consecutive CaO-carbonation and CaCO3-calcination cycles in a fluidized-bed solar reactor. Chemical Engineering Journal 146: 244–248.
Pielke, R. 2009 An idealized assessment of the economics of air capture of carbon dioxide in mitigation policy. Environmental Science and Policy 12: 216–225.
Rao, A., Rubin, E., Keith, D. and Morgan, M. 2006 Evaluation of potential cost reductions from improved amine-based CO2 capture systems. Energy Policy 34: 3765–3772.
Solomon, S. et al. 2009 Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences of the USA 106: 1704–1709.
Stolaroff, J., Keith, D. and Lowry, G. 2008 Carbon dioxide capture from atmospheric air using sodium hydroxide spray. Environmental Science and Technology 42: 2728–2735.
Zeman, F. and Lackner, K. 2004 Capturing carbon dioxide directly from the atmosphere. World Resource Review 16: 157–172.
7 Carbon neutral hydrocarbons

Frank S. Zeman and David W. Keith
Reducing greenhouse gas emissions from the transportation sector may be the most difficult aspect of climate change mitigation. We suggest that carbon neutral hydrocarbons (CNHCs) offer an alternative pathway for deep emission cuts, one that complements the use of decarbonized energy carriers. Such fuels are synthesized from atmospheric carbon dioxide (CO2) and carbon neutral hydrogen. The result is a liquid fuel compatible with the existing transportation infrastructure and therefore capable of gradual deployment with minimal supply disruption. Capturing the atmospheric CO2 can be accomplished using biomass or industrial methods referred to as air capture. The viability of biomass fuels is strongly dependent on the environmental impacts of biomass production; strong constraints on land use may favour the use of air capture. We conclude that CNHCs may be a viable alternative to hydrogen or conventional biofuels and warrant a comparable level of research effort and support.

7.1 Introduction

Stabilizing atmospheric levels of carbon dioxide (CO2) will eventually require deep reductions in anthropogenic emissions from all sectors of the economy. Managing CO2 emissions from the transportation sector may be the hardest part of this challenge. In sectors such as power generation, several options are currently available, including wind power, nuclear power and carbon capture and storage (CCS) technologies. Each can be implemented, in the near term, at a scale large
enough to enable deep reductions in CO2 emissions at costs of under $100 per tonne CO2 (tCO2 ) or an electrical premium of the order of $37 (MWh)−1 , based on representative CO2 emissions from a pulverized coal plant (IPCC 2005). It is recognized that in the last few years, the capital costs of constructing heavy equipment have escalated rapidly here and elsewhere; but we assume that recent capital cost increases are a transient phenomenon and use cost estimates prevailing in the 2000–2005 time window. Adding this premium to the lowest cost of electricity, subcritical coal at $48 (MW h)−1 (Breeze, Chapter 5), we establish a total electricity cost of $85 (MW h)−1 . All of the options listed by Breeze are below this threshold with the exception of expensive wind power sites. The transportation sector does not have such low-cost solutions. While there is ample opportunity for near-term gains in overall vehicle efficiency, these improvements cannot deliver deep cuts in emissions in the face of increasing global transportation demand. Beyond efficiency, deep reductions in emissions from the transportation sector will require a change in vehicle fuel. Changes in fuel are challenging owing to the tight coupling between vehicle fleet and refuelling infrastructure. Economic network effects and technological lock-in arise because users demand ubiquitous refuelling, yet investments in new fuel infrastructure are typically uneconomic without a large vehicle fleet. Moreover, each of the three leading alternative fuel options, hydrogen, ethanol and electricity, faces technical and economic hurdles precluding near-term, major reductions in transportation emissions using these technologies. We consider a fourth alternative: carbon neutral hydrocarbons (CNHCs). Hydrocarbons can be carbon neutral if they are made from carbon recovered from biomass or captured from ambient air using industrial processes. The individual capture technologies required to achieve CNHCs have been considered elsewhere; our goal is to systematically consider CNHCs as an alternative and independent route to achieving carbon neutral transportation fuels. We compare various methodologies for producing CNHCs, in terms of dollars ($) per gigajoule (GJ) of delivered fuel, using hydrogen as a reference case. We argue for the development of CNHC technologies because they offer an alternative path to carbon neutral transportation with important technical and managerial advantages. We do not claim that CNHCs are ready for large-scale deployment or that they will necessarily prove superior to the three leading alternatives. We do argue that they are promising enough to warrant research and development support on a par with efforts aimed at advancing the alternatives. CNHCs are effectively an alternative method for using carbon-free hydrogen, as shown in Figure 7.1. Converting CO2 into fuel by adding hydrogen can be viewed
Figure 7.1 Two pathways for using centrally produced hydrogen in the transportation sector.
as a form of hydrogen storage (Kato et al. 2005). Once the hydrogen is produced, a choice exists between distribution and incorporation into a hydrocarbon fuel. The latter is potentially attractive because the energy cost of centrally produced hydrogen is inexpensive compared with crude oil or gasoline at the pump. Even with CCS, hydrogen can be produced from coal or natural gas at costs in the range $7.5–13.3 GJ−1 (IPCC 2005), whereas the current cost of crude is $17 GJ−1 (at $100 per barrel) and the cost of gasoline exceeds $20 GJ−1 (neglecting taxes). The barrier to the use of hydrogen in transportation systems is distribution and vehicle design rather than the cost of central hydrogen. When CNHCs are considered, the competition is between developing a new distribution and use infrastructure and capturing CO2 to synthesize a hydrocarbon.

7.2 Carbon neutral hydrocarbons

7.2.1 Overview

We define CNHCs as those whose oxidation does not result in a net increase in atmospheric CO2 concentrations. Hydrocarbon fuels can be made carbon neutral either directly, by manufacturing them using carbon captured from the atmosphere, or indirectly, by tying the production of fossil fuels to a physical transfer of atmospheric carbon to permanent storage. The indirect route allows for a gradual transition from the current infrastructure, based on petroleum, to a sustainable system based on atmospheric sources of carbon.

It is vital to distinguish negative emissions achieved by permanent physical storage from economic offsets (carbon credits) or the sequestration of carbon in the active biosphere. While the use of carbon offsets such as those allowed under the clean development mechanism may have some benefits, they are not equivalent to non-emission (Wara 2007). There are also tangible benefits to increasing stocks
Figure 7.2 Schematic of routes to CNHCs.
There are also tangible benefits to increasing stocks of carbon in soils or standing biomass, but such organic stores are highly labile and may be quickly released back to the atmosphere by changes in management practices or climate. Geological storage reservoirs for CO2 may also leak; however, the retention time for CO2 in geological reservoirs is at least 10³ times longer than that for carbon stored in the biosphere. In most cases, a very large fraction of CO2 placed in geological storage is expected to be retained for time scales exceeding 10⁸ years (IPCC 2005).

Direct and indirect routes to CNHCs both begin by capturing CO2 from the atmosphere. Carbon can be captured from the atmosphere either by harvesting biomass from sustainable plantations or by direct industrial processes referred to as air capture (Keith et al. 2006). Once captured, the CO2 can be transferred to storage either in geological formations or by other means such as mineral sequestration (IPCC 2005). Alternatively, it may be returned to the fuel cycle through incorporation into a synthetic fuel or conventional biofuels. The synthetic fuel pathway depends on a source of primary energy to drive the required chemical reactions, including the supply of hydrogen. As with hydrogen and electricity, these synthetic hydrocarbons are an energy carrier produced from a primary energy source such as wind, nuclear power or fossil fuels with CCS. Unlike hydrogen and electricity, they are carbonaceous fuels that are nevertheless carbon neutral because their carbon was derived from the atmosphere. The relationship between all of the options is presented in Figure 7.2.
We first review the technologies for capturing carbon from the air, using either biomass growth or air capture. The review is followed by discussions of transforming the carbon, in the form of high-purity CO2, into hydrocarbon fuels. The objective is to outline the important process steps so that they can be quantified in the economic comparison that follows. The comparison does not include fugitive emissions from individual process steps. These include emissions associated with harvesting and processing biomass, potential leakage from industrial air capture and CO2 emissions associated with hydrogen production from fossil fuels (estimated at 7–28 kg CO2 GJ−1; IPCC 2005). As such, the processes considered here will not be completely 'carbon neutral' unless accompanied by the removal of additional CO2 from the atmosphere.

7.2.2 Biomass systems

The large carbon fluxes between the atmosphere and the terrestrial biosphere (approx. 60 Gtonne carbon per year), in combination with our substantial control over terrestrial biotic productivity (Vitousek et al. 1986), grant us a powerful lever for manipulating atmospheric CO2. If biomass is cyclically harvested so that stocks of standing biomass are not decreased, it provides a means of simultaneously capturing both carbon and solar energy. As discussed below, once harvested, biomass can be used to produce conventional biofuels or CNHCs; alternatively, the carbon can be permanently sequestered, allowing for the indirect production of CNHCs.

Large-scale use of biomass presents enormous challenges and poses risks of substantial environmental, social and economic side effects. As noted by the IPCC (2007), 'biomass production and use imply the resolution of issues relating to competition for land and food, water resources, biodiversity and socio-economic impact'. These competing issues make it very unlikely that conventional biomass fuels can be used as the dominant solution to emissions from the transportation sector. There is also a degree of risk associated with solving the climate change problem using a technology that is itself dependent on the climate (Fargione et al. 2008; Searchinger et al. 2008).

Estimates of biomass cost and availability vary widely. For example, the cost of switchgrass in Oklahoma ranges from $33 tonne−1 at a yield of 11 tonnes biomass ha−1 to $44 tonne−1 at 7 tonnes ha−1 (Epplin 1996). Walsh et al. (2003) estimated switchgrass costs at $20–25 tonne−1, depending on the location in the USA, with woody crops (poplar) in the range $22–35 tonne−1. Other researchers estimate the cost of short-rotation crops in Sweden at $89 tonne−1, with forestry residues slightly more expensive at $110 tonne−1 (Gustavsson et al. 2007), using a conversion value of $1.00 = €0.72 and an energy content for dry woody biomass of 20 GJ tonne−1 (Khesghi et al. 2000). Studies of biomass combined with CCS have assumed costs of $50–54 tonne−1 (Audus & Freund 2004; Rhodes & Keith 2005).
The dedication of large amounts of land to energy crops may also raise the price of agricultural products. Estimates vary from 10 per cent (Walsh et al. 2003) to 40 per cent (Searchinger et al. 2008). The current biofuel boom in North America appears to have increased agricultural prices significantly, even though its contribution to fuel supplies is minimal. This illustrates a negative impact of biofuel production, although it does not prove that larger-scale biomass production could not succeed with better choices of crops and incentive mechanisms.

Ignoring the negative side effects of biomass harvesting discussed above, we assume that the cost of large-scale biomass delivered to centralized facilities ranges from $40 to $80 per dry tonne, or $2–4 GJ−1. It seems plausible that negative non-market and macroeconomic impacts of biomass production might limit biomass availability. Since we have no basis on which to estimate such impacts, we treat them parametrically, calculating the biomass price that would make biomass more expensive than air-capture routes to CNHCs.
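The per-tonne and per-GJ figures above are linked by the assumed energy content of dry biomass. A minimal check in Python (illustrative only; the 20 GJ tonne−1 value is the one cited above from Khesghi et al. (2000)):

```python
# Convert a delivered biomass cost ($/dry tonne) to an energy cost ($/GJ),
# assuming dry woody biomass contains 20 GJ per tonne (Khesghi et al. 2000).
ENERGY_CONTENT_GJ_PER_TONNE = 20.0

def biomass_cost_per_gj(cost_per_tonne):
    return cost_per_tonne / ENERGY_CONTENT_GJ_PER_TONNE

for cost in (40, 80):  # $/dry tonne, the range assumed in this chapter
    print(f"${cost}/tonne -> ${biomass_cost_per_gj(cost):.0f}/GJ")
# -> $2/GJ and $4/GJ, the $2-4 GJ-1 range quoted above
```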
7.2.3 Biomass-based fuels

Biomass contains both carbon and energy. Production of ethanol from biomass uses the energy content of the biomass to drive the conversion process. In order to provide process energy, most of the carbon in the input biomass stream is oxidized and released to the atmosphere as CO2. Even in advanced cellulosic ethanol production, which has not yet been applied at a commercial scale, only about one-third of the carbon content in the input biomass ends up in the fuel. As a means of recycling atmospheric carbon to liquid fuels, these processes make inefficient use of biomass carbon.

Here, we consider only the production of CNHCs from biomass using external energy inputs to make more efficient use of the carbon captured in the biomass. This choice is based on the assumption that land-use constraints will be the most important barrier to biomass-based fuels, and on the observation that the cost of large-scale carbon-free energy at a biomass processing plant is substantially less than that of delivered fuel energy. For example, hydrogen and heat might be supplied from coal with CCS at costs substantially less than those of delivered CNHCs or conventional gasoline. The use of external energy/hydrogen can convert a larger fraction of the input carbon to hydrocarbon fuel and reduce land use by a factor of 2–3 when compared with systems based on biomass alone (Agrawal et al. 2007).

There is a large suite of methods that might be employed to produce CNHCs from biomass. Examples include the following:

• electricity production with CO2 capture followed by CO2 hydrogenation using externally supplied H2; the CO2 capture step might use oxy-fuel, post- or pre-combustion capture;
• gasification to produce synthesis gas followed by production of CNHCs using the Fischer–Tropsch (F–T) process and CO2 hydrogenation using externally supplied H2; and
• biological processing to produce hydrocarbons or alcohols, combined with carbon capture followed by CO2 hydrogenation using externally supplied H2.
For simplicity, we examine only the first route, since there have been several assessments of biomass electricity with CO2 capture. Moreover, it allows a direct comparison with air capture, enabling us to consider both direct and indirect routes from biomass to CNHCs. In reality, biomass co-firing or co-feeding with fossil fuels seems a more likely near-term prospect. Such methods would produce hydrocarbon fuels with reduced life-cycle CO2 emissions but would not produce CNHCs. One might consider these options as a blend of CNHCs with conventional fossil fuel use.
7.2.4 Air-capture systems

The process of air capture comprises two components: absorption and regeneration. The absorption phase refers to dissolving the CO2 contained in the atmosphere into solution or onto a solid sorbent, while the regeneration phase refers to producing a concentrated stream of CO2 from the medium used for absorption. Most recently, published work has addressed systems that use a strong base, typically NaOH, as the sorbent and chemical caustic recovery as the means of regeneration. Earlier works assumed an electrochemical system based on carbon-free electricity. The challenge with electrochemical methods is the electricity consumption during regeneration, 308 kJe mol−1 CO2 (Bandi et al. 1995), which corresponds to a cost of $100–200 tCO2−1 for carbon neutral electricity costing $0.05–0.10 (kW h)−1.
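That conversion is easily reproduced. A minimal sketch (the only inputs beyond the 308 kJe mol−1 figure are the molar mass of CO2 and the electricity price):

```python
# Electricity demand and cost of electrochemical regeneration, from the
# 308 kJ(e) per mol CO2 of Bandi et al. (1995).
KJ_PER_MOL = 308.0
G_PER_MOL_CO2 = 44.01

kwh_per_tonne = KJ_PER_MOL * (1e6 / G_PER_MOL_CO2) / 3600.0  # ~1944 kWh/tCO2
for price in (0.05, 0.10):  # $ per kWh of carbon neutral electricity
    print(f"${price:.2f}/kWh -> ${price * kwh_per_tonne:.0f}/tCO2")
# -> about $97 and $194 per tCO2, i.e. the $100-200 tCO2-1 range above
```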
The thermal process, as outlined in the literature, consists of four reactions: absorption, causticization, regeneration and hydration (Baciocchi et al. 2006; Keith et al. 2006; Zeman 2007). The CO2 is absorbed into sodium hydroxide to form sodium carbonate. The carbonate ion is transferred from sodium to calcium ions in the causticization process, which results in the precipitation of calcium carbonate. The CO2 is regenerated by thermal decomposition of the calcium carbonate in a kiln, while the lime produced is hydrated to complete the cycle. The absorption reaction is an established engineering technology dating back several decades (Spector & Dodge 1946). The other reactions are at the heart of the pulp and paper industry and can be directly applied to air capture with the addition of conventional CCS technologies (Keith et al. 2006), although conversion to an oxygen kiln significantly reduces energy demand (Baciocchi et al. 2006; Zeman 2007). Experimental work has shown conventional vacuum filtration technology to be sufficient for dewatering the precipitate, and causticization at ambient temperatures to be feasible (Zeman 2008).

While technically feasible, the amount of energy consumed, and its form, are critical in terms of neutralizing emissions from the transportation sector. The thermal requirements are dominated by the decomposition of calcium carbonate, which requires a minimum of 4.1 GJ tCO2−1 of high-temperature heat (Oates 1998), with the potential to recover 2.4 GJ tCO2−1 at lower temperatures via steam hydration (Zeman 2007). Using existing technologies, the actual thermal load is 5.1 GJ tCO2−1 for an 80 per cent efficient kiln (Oates 1998). Some electrical energy is also required to power blowers for air movement and pumps for sorbent circulation, as well as for oxygen production, with load estimates varying from 126 to 656 kW h tCO2−1 (0.45–2.36 GJe tCO2−1; Zeman 2007).

The cost of air capture for the system above was estimated, using technologies from other industries, at approximately $150 tCO2−1 (Keith et al. 2006). We note that this cost estimate was intended to be a minimum estimate for long-run costs and that it was based on a simplistic combination of existing technologies rather than on an integrated plant design. Other capture processes have been considered (Steinberg & Dang 1977), as have advanced regeneration cycles based on titanates that reduce high-temperature heat requirements by approximately 55 per cent (Nohlgren 2004). In this work, we consider air-capture costs ranging from $100 to 200 tCO2−1, recognizing that there are no a priori reasons why costs could not eventually be lower.

Air capture, as opposed to biomass growth, is not limited by land area but rather by the rate of CO2 diffusion to the boundary layer, analogous to the physics that limits wind turbine spacing (Keith et al. 2006). Previous work has shown that air-capture rates are at least one order of magnitude larger than biomass growth (Johnston et al. 2003). This value reflects the large-scale limitations of CO2 transport in the atmospheric boundary layer. Note that air-capture systems need only occupy a small portion of the land area in order to capture the maximum large-scale CO2 flux, just as wind turbines have a small footprint yet capture much of the large-scale kinetic energy flux. The effective flux, based on the air-capture plant boundary, can be expected to be at least two and quite likely three orders of magnitude larger than that of biomass growth, with the remainder of the land area available for other uses, such as agriculture.
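These energy figures are internally consistent, as a short calculation confirms (an illustrative sketch using only the values quoted above):

```python
# Thermal and electrical energy budget for the NaOH/lime air-capture cycle.
MIN_HEAT_GJ = 4.1       # minimum high-temperature heat, GJ per tCO2 (Oates 1998)
KILN_EFFICIENCY = 0.80  # existing kiln technology

print(f"actual kiln load: {MIN_HEAT_GJ / KILN_EFFICIENCY:.1f} GJ/tCO2")  # ~5.1

# Electrical loads (blowers, pumps, oxygen production), kWh to GJ(e):
for load_kwh in (126, 656):
    print(f"{load_kwh} kWh/tCO2 = {load_kwh * 3.6e-3:.2f} GJe/tCO2")
# -> 0.45 and 2.36 GJe/tCO2, matching the range from Zeman (2007)
```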
7.2.5 CO2 fuels

The production of synthetic fuels via CO2 hydrogenation requires hydrogen addition and oxygen removal (Inui 1996). We consider the production of octane (C8H18) as a simple proxy for synthetic fuels that could replace current automobile gasoline; in practice, fuels with a range of hydrocarbons of approximately this molecular weight would probably be produced.

Table 7.1 Comparison of synthetic fuels from CO2 hydrogenation (HHV; Lide 2000)

Fuel     ΔHf0 (kJ mol−1 CO2)   Energy content(a) (kJ mol−1 CO2)   Energy content(a) (GJ tCO2−1)   H2/CO2 ratio(b)   Energy efficiency(c)
C2H6O    −62                   730                                16.6                            3                 0.85
C8H18    −118                  685                                15.6                            3.125             0.77

(a) Based on heat of combustion.
(b) Molar ratio (H2/CO2) of reactants in equations (7.1) and (7.2).
(c) Ratio of the energy content of the produced fuel to that of the reactant H2.
We also consider dimethylether (DME) as a replacement for diesel fuel, even though it is not technically a hydrocarbon. We note that DME synthesis commonly takes the form of methanol dehydration (Jia et al. 2006) and that higher-order hydrocarbons can be formed on composite catalysts that include zeolites, with methanol as an intermediary (Kieffer et al. 1997). The synthesis reactions are listed in equation (7.1) for DME and equation (7.2) for C8H18:

2CO2(g) + 6H2(g) → C2H6O(g) + 3H2O(g),   (7.1)
8CO2(g) + 25H2(g) → C8H18(l) + 16H2O(g).   (7.2)
The production of hydrocarbons from CO2 and H2 feedstock is a high-pressure catalytic process in which the choice of catalyst, operating pressure and temperature affects the reaction products (Inui 1996; Halmann & Steinberg 1999). A summary of relevant reaction characteristics is presented in Table 7.1. While the synthesis reaction is exothermic, producing high-purity streams of hydrogen and CO2 requires a significant amount of energy. The choice of product does not significantly affect the ratio of reactants, with approximately 3 mol of hydrogen required for every mole of CO2. The energy component of hydrogen efficiency reflects the energy embedded in the feed hydrogen that is lost to steam production. Typically, the reactors operate at temperatures of approximately 200–300 °C and pressures of 20–50 bar. It is likely that a use would be found for the reaction heat (second column of Table 7.1) in such a process, e.g. solids drying (in air capture) or employment in a low-temperature Rankine cycle. It is worth noting that synthetic fuel production does not currently employ CO2 as feed (Mignard & Pritchard 2006; Galindo Cifre & Badr 2007), although much experimental work has been performed (Inui 1996; Kieffer et al. 1997; Halmann & Steinberg 1999).
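The hydrogen-efficiency entries of Table 7.1 follow directly from the stoichiometry of equations (7.1) and (7.2). A minimal sketch (the higher heating value of hydrogen, approximately 286 kJ mol−1, is an input assumed here, not stated in the table):

```python
# Reproduce the hydrogen-efficiency columns of Table 7.1.
HHV_H2 = 286.0  # kJ per mol H2 (standard HHV; an assumption here)

fuels = {
    # fuel: (mol H2 per mol CO2 from eqs (7.1)/(7.2), fuel HHV per mol CO2, kJ)
    "C2H6O (DME)":    (6 / 2, 730.0),
    "C8H18 (octane)": (25 / 8, 685.0),
}
for name, (h2_per_co2, fuel_hhv) in fuels.items():
    eff = fuel_hhv / (h2_per_co2 * HHV_H2)  # fuel energy over reactant H2 energy
    print(f"{name}: H2/CO2 = {h2_per_co2:.3f}, energy efficiency = {eff:.2f}")
# -> ratios 3 and 3.125, efficiencies ~0.85 and ~0.77, as in Table 7.1
```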
Table 7.2 Coefficients for offsetting emissions from conventional fuels

Coefficient   Units             Value       Source
Coil          $/barrel          50–100      –
fO/G          barrel GJ−1 (a)   0.27–0.24   EIA (2008)
CCO2          $ tCO2−1          –           –
Cstorage      $ tCO2−1 (b)      8           IPCC (2005)
fC/E          tCO2 GJ−1         0.067       EPA (2005)
fLC           –                 1.40        Farrell et al. (2006)

(a) Data analysis of monthly costs for the cost of oil (West Texas Intermediate) and the refinery price of gasoline since October 1993 gives ($ GJ−1) = 0.223 × ($/barrel) + 2.11 (n = 170, r² = 0.952).
(b) Representing a range of $3–12 tCO2−1.
7.3 Economic comparisons

A systematic comparison is used to identify the most important cost drivers for the various methods of producing CNHCs. We have chosen a metric based on the cost of delivering the fuel to the end-user, measured in dollars per GJ of energy content ($ GJ−1). We employ four different bases for comparison: indirect methods to offset conventional oil use; air capture to hydrocarbons; hydrogen for use onboard vehicles; and biomass CO2 to hydrocarbons. In each case, the derivation is first given in equation form, followed by a table listing the relevant coefficients and then text explaining their use and origins.
7.3.1 Conventional fuels with indirect CCS

The refining of crude oil produces the current transportation fuels of choice, gasoline and diesel. The existing transportation infrastructure is built around these fuels and they are likely to remain in use for the immediate future. Under these circumstances, CO2 emissions would be neutralized using indirect methods. The cost of neutralizing oil can be estimated using equation (7.3) and Table 7.2; the symbols used are described in the discussion below:

costfuel ($ GJ−1) = Coil × fO/G + (CCO2 + 1.5 × Cstorage) × fC/E × fLC.   (7.3)

The cost per unit energy of neutralizing conventional fuels is determined, in equation (7.3), by adding the energy cost of the fuel to the cost of offsetting the resultant emissions through CCS. The energy cost of oil is the product of the cost per barrel and a conversion factor (fO/G) that translates the cost into the appropriate units. The conversion factor (fO/G), relating the refinery cost of a gallon of gasoline to the cost of a barrel of oil, was obtained by comparing price data over the last
14 years and using a value of 130.8 MJ gal−1 of gasoline (Keith & Farrell 2003). The cost of offsetting the emissions is the cost of capture and storage multiplied by the emissions per GJ of energy. The cost of capture (CCO2) refers to the cost of removing CO2 from the atmosphere. Underground geological storage is projected to cost $1–8 tCO2−1, with transportation of the compressed CO2 by pipeline costing $2–4 tCO2−1 for a 250-km run with an annual flow rate of 5 Mt CO2 (IPCC 2005). The cost of storage is multiplied by a factor of 1.5, as air capture produces an amount of CO2 equivalent to 50 per cent of the amount captured if coal is used in the regeneration phase (Zeman 2007). The use of other methods of producing high-temperature heat, e.g. nuclear, would reduce this factor. The tonnes of CO2 released per unit energy contained in gasoline (fC/E) are derived using an emission factor of 8.8 kg CO2 gal−1 (EPA 2005). The final coefficient (fLC) relates the life-cycle CO2 emissions to the energy content of the final fuel, gasoline. In this manner, we include the process CO2 emissions associated with converting oil to gasoline.

Using the values listed in Table 7.2 and values of $100–200 tCO2−1 for air capture, the fuel cost is in the range $24–44 GJ−1. The cost of conventional gasoline is in the range $13–24 GJ−1 as oil prices fluctuate between $50 and $100 per barrel. Given the assumptions above, air capture increases the cost of fuel by $10–20 GJ−1; alternatively, with oil at $100 per barrel and air capture at $100 tCO2−1, the cost of vehicle fuel increases by 42 per cent.
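Equation (7.3) with the Table 7.2 coefficients reproduces these figures; a minimal sketch:

```python
# Cost of conventional fuel neutralized by indirect CCS, equation (7.3).
F_LC = 1.40         # life-cycle factor (Farrell et al. 2006)
C_STORAGE = 8.0     # $/tCO2 (IPCC 2005)
F_CE = 8.8 / 130.8  # tCO2/GJ: 8.8 kg CO2 per gallon over 130.8 MJ per gallon

def oil_cost_per_gj(dollars_per_barrel):
    # Table 7.2, footnote (a): refinery gasoline price versus oil price.
    return 0.223 * dollars_per_barrel + 2.11

def neutralized_fuel_cost(oil_barrel, c_co2):
    return oil_cost_per_gj(oil_barrel) + (c_co2 + 1.5 * C_STORAGE) * F_CE * F_LC

print(f"{neutralized_fuel_cost(50, 100):.0f} $/GJ")   # ~24
print(f"{neutralized_fuel_cost(100, 200):.0f} $/GJ")  # ~44
```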
7.3.2 Synthetic fuels using atmospheric CO2

Producing synthetic fuels from CO2 requires hydrogen and a high-pressure catalytic reactor to synthesize the fuel. The cost of producing hydrogen with CCS on an industrial scale has been estimated at $7.5–13.5 GJ−1 (IPCC 2005). The cost estimates for hydrogen vary depending on the price of natural gas, as steam reforming of methane is the dominant method for H2 production (Ogden 1999; Galindo Cifre & Badr 2007). Thermochemical methods of production are expected to be more economical than electrolysis except under circumstances where electricity is available at prices below $0.02 (kW h)−1 (Ogden 1999; Sherif et al. 2005) combined with a 50 per cent reduction in electrolysis capital costs (Ogden 1999). Conditions favourable to renewable electrolytic hydrogen may exist, e.g. excess wind power during early morning hours, but any gains from low electricity costs must outweigh the increased capital costs associated with intermittent use of the electrolysers. We believe that only under exceptional circumstances would one form of secondary energy (electricity) be converted to another (hydrogen). We use a representative value of $10.5 GJ−1 for carbon neutral hydrogen.
Table 7.3 Coefficients for synthetic fuels from atmospheric CO2

Coefficient   Units          Value   Source
CCO2          $ tCO2−1       –       –
fC/E          tCO2 GJ−1      0.067   EPA (2005)
CH2           $ GJ−1         10.5    IPCC (2005)
fS/P          kJin kJout−1   1.24    Lide (2000)
Csynthesis    $ GJ−1         3.5     Michel (1999) and Gustavsson et al. (2007)
Cstorage      $ tCO2−1       8       IPCC (2005)
The CO2 resulting from the production of hydrogen is stored underground with any CO2 produced during the regeneration phase of air capture:

costfuel ($ GJ−1) = CCO2 × fC/E + CH2 × fS/P + Csynthesis + 0.5 × Cstorage × fC/E.   (7.4)

The cost of producing synthetic fuels from atmospheric CO2 is calculated from equation (7.4). The total cost is the sum of the costs of producing the CO2 and the hydrogen, added to the cost of the synthesis reactors along with that of storing fugitive CO2 emissions. Again, the cost of air capture is estimated while the emissions per GJ of fuel produced (fC/E) are equated to gasoline. The values of fC/E based on Table 7.1 are 4–10 per cent lower than that of gasoline, which is a small advantage. The cost of hydrogen, a representative value, is multiplied by a factor (fS/P) representing the efficiency of transferring the embedded energy to the fuel. This hydrogen usage factor is calculated as the average of the inverse of the hydrogen energy efficiencies in Table 7.1. It represents the extra cost associated with H2 feed lost to water formation, as shown in equations (7.1) and (7.2). The synthesis costs vary with the fuel produced as the operating conditions and choice of catalysts vary; we used a representative value for multi-reactor synthesis with CO2 as feedstock (Michel 1999; Gustavsson et al. 2007).

Using the values given in Table 7.3 and air-capture costs of $100–200 tCO2−1, the production cost for synthetic fuels is in the range $23.5–30 GJ−1. Mitigating the fugitive emissions from H2 production using air capture would add $2–4 GJ−1 to the total cost, based on 0.3 tCO2 per tCO2 used in fuel production.
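A minimal sketch of equation (7.4) with the Table 7.3 coefficients:

```python
# Cost of synthetic fuel from atmospheric CO2, equation (7.4).
F_CE = 0.067       # tCO2/GJ (EPA 2005)
C_H2 = 10.5        # $/GJ carbon neutral hydrogen (IPCC 2005)
F_SP = 1.24        # mean of 1/0.85 and 1/0.77 from Table 7.1
C_SYNTHESIS = 3.5  # $/GJ (Michel 1999; Gustavsson et al. 2007)
C_STORAGE = 8.0    # $/tCO2 (IPCC 2005)

def synthetic_fuel_cost(c_co2):
    return c_co2 * F_CE + C_H2 * F_SP + C_SYNTHESIS + 0.5 * C_STORAGE * F_CE

for c_co2 in (100, 200):  # $/tCO2 air-capture cost
    print(f"air capture at ${c_co2}/tCO2 -> ${synthetic_fuel_cost(c_co2):.1f}/GJ")
# -> ~$23.5/GJ and ~$30.2/GJ, the range quoted above
```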
7.3.3 Hydrogen use in transportation

The use of hydrogen as a large-scale fuel for distributed road transportation requires a complete replacement of the associated infrastructure, including production, distribution and fuelling stations. We consider only the cost to deliver hydrogen to the vehicle, in $ GJ−1. On the one hand, this neglects the cost reductions that will arise where hydrogen enables the use of fuel cells that are more efficient than the conventional engines used with CNHCs; on the other hand, it neglects the extra vehicle costs associated with hydrogen storage and hydrogen power plants.

Table 7.4 Coefficients for hydrogen transportation systems

Coefficient   Units    Value     Source
CH2           $ GJ−1   10.5      IPCC (2005)
Cdist         $ GJ−1   10–22.5   Yang & Ogden (2007)
Cstation      $ GJ−1   5         Ogden (1999)
Our hydrogen cost assumptions are based on large-scale central-station hydrogen production using fossil fuels with CCS, since this is the most immediately available technology. Currently, over 90 per cent of hydrogen production is derived from the steam reforming of methane, at production scales of up to 100 million standard cubic feet of hydrogen per day (Ogden 1999; Sherif et al. 2005; Galindo Cifre & Badr 2007). The estimates often use a purchase price of less than $6 GJ−1 for natural gas, which may not reflect prices for large-scale hydrogen production. Above this price point, production costs are similar for coal costing $1.5 GJ−1. The cost of hydrogen fuels delivered to the vehicle is estimated using

costfuel ($ GJ−1) = CH2 + Cdist + Cstation.   (7.5)

Using the values given in Table 7.4, the cost of hydrogen fuels delivered to the vehicle ranges from $25.5 GJ−1 to $38 GJ−1. The range is strongly dependent on the cost of distributing the hydrogen to fuelling stations, which is in turn dependent on the market penetration of hydrogen vehicles (Yang & Ogden 2007). The range in Table 7.4 reflects penetration levels of 5 per cent ($22.5 GJ−1) and 50 per cent ($10 GJ−1), where market penetration is taken as the proportion of vehicles using hydrogen fuel. These values are taken from the 'base case' of Yang & Ogden; the cost of other scenarios varies from −22.5 to +37 per cent for the 50 per cent market penetration and from −17 to +70 per cent for 10 per cent. Equation (7.5) does not include any CO2 removal from the air to compensate for the fugitive emissions associated with H2 production from fossil fuels, as mentioned in Section 7.2.1. Using an air-capture cost of $100 tCO2−1, the additional cost ranges from $1 GJ−1 to $3 GJ−1 including storage.
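Equation (7.5) with the Table 7.4 values reproduces the quoted range (illustrative sketch):

```python
# Delivered cost of hydrogen fuel, equation (7.5), as a function of the
# distribution cost, which varies with vehicle market penetration.
C_H2 = 10.5      # $/GJ, central production with CCS
C_STATION = 5.0  # $/GJ, fuelling stations (Ogden 1999)

for penetration, c_dist in (("low (5%)", 22.5), ("high (50%)", 10.0)):
    print(f"{penetration}: ${C_H2 + c_dist + C_STATION:.1f}/GJ")
# -> $38/GJ at low penetration and $25.5/GJ at high penetration
```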
7.3.4 Carbon neutral fuels from biomass

As discussed in Section 7.2.3, there are many ways to convert biomass to CO2 and fuels. We focus on fuel synthesis in a manner similar to that described in Section 7.3.2, using CO2 derived from a biomass power plant. The cost of CO2 derived from such a plant is a function of the cost of biomass and the difference between the balance-of-system costs for the biomass plant and those of a carbon neutral fossil plant. The cost for the balance of system is the difference between the cost of biomass electricity with CCS and the fuel cost. The fuel cost is obtained by dividing the biomass cost by its energy content; the result is then divided by the thermal efficiency of the plant. The cost of carbon neutral electricity was taken as $0.073 (kW h)−1 (IPCC 2005). The ideal case is where the revenues generated from the sale of electricity offset the cost of the plant and capture system. The cost of CO2 is then the cost of biomass divided by the tonnes of CO2 produced per tonne of biomass, or $ tCO2−1 = 0.59 × ($ tonne−1), based on the chemical formula for woody biomass (Petrus & Noordermeer 2006). In practice, the relationship will depend on the capture efficiency and the specific technology. Using studies from the literature, we establish two relationships between the cost of CO2 and the cost of biomass, based on steam gasification (Rhodes & Keith 2005), equation (7.6), and oxygen gasification (Audus & Freund 2004), equation (7.7). The CO2 and dry biomass costs are expressed in $ tonne−1:

CCO2 = 1.07 Cbio − 16,   (7.6)
CCO2 = 0.69 Cbio + 55.   (7.7)
Steam gasification is the most cost-effective method over the biomass cost range used in this work. The resulting cost for CO2 is in the range $27–70 tCO2−1. The cost of producing carbon neutral fuels by indirect methods, as per equation (7.3), using these values ranges from $16.5 GJ−1 to $31.5 GJ−1. By comparison, direct methods based on equation (7.4) produce a delivered cost of fuel ranging from $18.5 GJ−1 to $21 GJ−1. These costs are lower than the associated values for air capture by 28–33 per cent for indirect methods and 23–30 per cent for direct methods.

Biomass can also be converted directly to hydrocarbons via F–T synthesis, labelled here as biomass F–T. This technology has reached the commercial stage, with a plant in Freiberg, Germany, producing 15 ktonnes yr−1. The cost of these fuels has been estimated at $21 GJ−1 (Fairley 2006). The process uses biomass residues, and the sensitivity to increasing feedstock prices, owing to increased demand for biofuels, was not discussed.
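Equations (7.6) and (7.7) give the quoted CO2 cost range and the crossover between the two gasification routes (a sketch; the crossover is quoted as approximately $180 tonne−1 in the text):

```python
# Cost of biomass-derived CO2 ($/tCO2) versus biomass cost ($/dry tonne).
def steam_gasification(c_bio):   # equation (7.6), Rhodes & Keith (2005)
    return 1.07 * c_bio - 16.0

def oxygen_gasification(c_bio):  # equation (7.7), Audus & Freund (2004)
    return 0.69 * c_bio + 55.0

for c_bio in (40, 80):  # $/dry tonne, the range assumed in this chapter
    print(f"${c_bio}/t: steam ${steam_gasification(c_bio):.0f}, "
          f"oxygen ${oxygen_gasification(c_bio):.0f} per tCO2")
# steam gasification spans ~$27-70/tCO2 over this range

crossover = (55.0 + 16.0) / (1.07 - 0.69)
print(f"curves cross at ~${crossover:.0f}/tonne biomass")  # ~$187/tonne
```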
7.3.5 Comparison of methodologies

A cost comparison of the various delivery methods for carbon neutral transportation fuels is presented in Figure 7.3. The figure contains the upper- and lower-bound estimates, based on the previous sections, for the different methods.
Figure 7.3 Comparison of delivered costs for carbon neutral transportation fuels.
The total cost has been divided into subsections (oil, air capture, biomass, hydrogen, oxygen, fuel synthesis including reforming, and fuel distribution including fuel stations) to illustrate the relative importance of each component. Reviewing Figure 7.3, we observe that the cost of oil, ranging from $50 to $100 per barrel, has a strong effect on the cost of indirect methods. The cost of oil accounts for 56–55 per cent of the cost of indirect routes using air capture and 80–77 per cent of the costs for biomass-based systems. Direct routes are characterized by the need for hydrogen. The contribution from hydrogen production ranges from 56 to 43 per cent for air-capture systems, 71 to 61 per cent for direct biomass systems and 41 to 28 per cent for hydrogen-based systems. The lower percentage for the hydrogen infrastructure highlights its dependence on a hydrogen distribution system and fuelling stations. At oil costs above approximately $150 per barrel, oil substitutes win out, and direct routes to CNHCs are uniformly preferred to indirect routes.

The choice of mitigation technology will not, of course, depend simply on cost, owing to the strong path dependency in the coupled development of vehicle technologies and refuelling infrastructures. Considering costs alone, and ignoring the large uncertainties in technology, we can nevertheless draw some interesting conclusions about how the relative cost competitiveness of the various routes to CNHCs depends on the cost of carbon, biomass and crude oil.
Figure 7.4 Mitigation options based on synthetic fuel production using $100 tCO2−1 air capture with $10.5 GJ−1 H2.

We first consider the comparison between direct and indirect routes as a function of the cost of petroleum. We previously established, in Section 7.3.2, that the cost of delivering synthetic fuel is approximately $24 GJ−1 when air capture costs $100 tCO2−1 and H2 costs $10.5 GJ−1. Using the correlation from Table 7.2 (footnote a), we can convert this value to an oil cost of $96 per barrel. Thus, at higher oil prices, it is economical to produce synthetic fuels under the assumed conditions. This value does not include any price on CO2 emissions. Based on the emission factor fC/E and the life-cycle factor fLC, the combustion of the gasoline produces the equivalent of 0.288 tCO2 per barrel of oil. These values define a line with a negative slope, as shown in Figure 7.4, which also contains a vertical line whose abscissa represents the cost of air capture. Above the line, mitigation by direct air capture is the most economical option. At emission prices lower than the cost of air capture, the most economical option is to pay for the emissions, while at higher prices, indirect methods are preferable. The area bounded by low oil and emission costs represents the 'business-as-usual' (BAU) scenario. A similar graph can be produced for CNHCs using biomass by drawing a parallel solid line with the y-axis ordinate at $73.5 per barrel ($18.5 GJ−1) and a vertical dashed line with the x-axis abscissa at $27 tCO2−1 ($40 tonne−1 biomass).

The balance between the cost of fuels using CO2 produced from biomass and from air capture can also be assessed in this manner. In this case, the common metrics are the cost of hydrogen and fuel synthesis. The comparison, as shown in Figure 7.5, contains curves for the 'ideal' case as well as for steam gasification (Rhodes & Keith 2005) and oxygen gasification (Audus & Freund 2004). In the ideal case, the revenues from the sale of electricity exactly offset the capital and operating costs of the facility, resulting in a direct relationship between the cost of biomass and that of CO2. The area where air capture is economical is below and to the right of the gasification lines. The intersection of the gasification curves occurs at a biomass price of approximately $180 tonne−1, beyond which it is more economical to use oxygen gasification, owing to its higher capture rate (85 per cent as opposed to 55 per cent).
Figure 7.5 Effect of biomass cost on the cost of producing CO2 for fuel synthesis (dashed line, ideal; solid line, Audus & Freund (2004); dot-dashed line, Rhodes & Keith (2005)).
The study area can expand to the right if the external costs of biomass are included. The threshold for $100 tCO2−1 air capture is reached if the total cost of biomass, including non-market costs, rises above $105 tonne−1.

Similarly, a comparison can be made between fuels produced using CO2 from air capture and the use of hydrogen on-board vehicles. Given that the cost of producing the hydrogen is identical, the comparison is between producing the CO2 and synthesizing the fuel, for air capture, and distributing the hydrogen to fuelling stations, as shown in Figure 7.6. We have not included the cost of fuelling stations, which would add $15 tCO2−1 to the allowable cost of air capture for each $1 GJ−1 of cost. The cost estimate for fuel stations used in Section 7.3 ($5 GJ−1) would add $75 tCO2−1 to the allowable cost of air capture. Even without the fuelling stations, the required cost of air capture to be competitive with a fully developed hydrogen economy is not unreasonable. We also investigated the effect of a 50 per cent reduction in the cost of fuel synthesis, shown as the dashed line in Figure 7.6.
7.4 Non-economic considerations

The work presented here has focused on estimating the comparative costs of various methods of producing and delivering carbon neutral transportation fuels. There are, of course, external factors, such as food production and geopolitical realities, which will exert market pressures on the chosen method of fuel production. Some efforts have been made to quantify the effects of biofuel production on food prices (Walsh et al. 2003) and the effects of a global change to a Western, meat-based diet
Figure 7.6 Effect of reduced cost of hydrogen distribution, as measured by market penetration, on the allowable cost of air capture.
(Hoogwijk et al. 2003). These effects are difficult to quantify and also depend on the pathway. Projections to 2050 and beyond may reflect gradual changes, but annual changes, such as switching crops, may have a dramatic impact on food prices. We will not delve into the food debate beyond raising the question of who gets to decide when it is time to grow food or fuel.

A carbon neutral fuel is of little use unless consumers purchase vehicles designed to use it. Currently, the consumer and commercial vehicle fleet is dominated by hydrocarbon-based internal combustion engines. The large volume of vehicle sales, 20 million annually in the USA, has led to economies of scale in production that make introducing a new vehicle type more expensive. Some hydrogen vehicles are available, but their cost is two orders of magnitude higher than that of conventional vehicles (Mayersohn 2007). Additionally, electric vehicles are at least one order of magnitude more expensive than gasoline-powered cars (Tesla Motors 2008, http://www.teslamotors.com/buy/buyPage1.php). While electric vehicles require a different infrastructure, the expected price of electricity with CCS, which ranges from $0.05 to $0.07 (kW h)−1 ($14–20 GJ−1; IPCC 2005), can be compared with the fuel costs in Figure 7.3. Additionally, the consumer has come to expect certain attributes in a vehicle, including range and interior space, which may drive up the effective cost of alternative technologies. The premium afforded to hydrocarbon fuels may be quite substantial (Keith & Farrell 2003).

Land-use change is also an important consideration for carbon neutral transportation fuels. Recent work has highlighted the importance of considering emissions
over the complete life cycle of the fuel production process, including indirect emissions from land-use change as a result of biofuel production. Once these are included, the life-cycle emissions of food-based biofuels change from slight reductions (20 per cent) to net emissions for at least 40 years (Fargione et al. 2008; Searchinger et al. 2008). Only cellulosic fuels from abandoned or degraded cropland do not result in an immediate release of carbon stores, although the potential carbon accrued through reforestation is not included. Cellulosic fuels were found to be the most economical in this work, but their net greenhouse gas reductions must be subject to the same analysis as food-based fuels. Specifically, the carbon accounting must include roots and soil carbon as well as detritus from harvesting. Fargione et al. (2008) suggested that slash and thinnings from sustainable forestry, along with the use of carbonaceous wastes, may provide the lowest net emissions. By contrast, CNHCs based on air capture, as well as hydrogen and electricity, are not dependent on the biosphere. They have the benefit of simple accounting with readily verifiable emission profiles, as opposed to the indirect emissions from biofuels, which may span the globe.
7.5 Conclusions

We have introduced a methodology for a systematic comparison of solutions to greenhouse gas emissions from the transportation sector. We focused on solutions built around the concept of CNHCs and compared these against the production and distribution of hydrogen for consumption in the transportation sector. We find that CNHCs may be a cost-effective way to introduce hydrogen into the transportation infrastructure in a gradual manner. The continued use of liquid hydrocarbon fuels would minimize the disruptions and delays caused by requiring a different transportation infrastructure. It is far from evident, however, that any of these solutions, including electric vehicles, will be the method of choice. The lack of a clear technological 'winner' warrants equal attention and funding for all potential solutions.

References

Agrawal, R., Singh, N., Ribeiro, F. & Delgass, W. N. 2007 Sustainable fuel for the transportation sector. Proc. Natl Acad. Sci. USA 104, 4828–4833. (doi:10.1073/pnas.0609921104)
Audus, H. & Freund, P. 2004 Climate change mitigation by biomass gasification combined with CO2 capture and storage. In Proc. 7th Int. Conf. on Greenhouse Gas Control Technologies, vol. I (eds E. Rubin, D. W. Keith & C. Gilboy). Vancouver, Canada: Elsevier, pp. 187–197.
Baciocchi, R., Storti, G. & Mazzotti, M. 2006 Process design and energy requirements for the capture of carbon dioxide from air. Chem. Eng. Process. 45, 1047–1058. (doi:10.1016/j.cep.2006.03.015)
Bandi, A., Specht, M., Weimer, T. & Schaber, K. 1995 CO2 recycling for hydrogen storage and transportation: electrochemical CO2 removal and fixation. Energy Convers. Manage. 36, 899–902. (doi:10.1016/0196-8904(95)00148-7)
EPA 2005 Emission Facts: Average Carbon Dioxide Emissions Resulting from Gasoline and Diesel Fuel, Report no. EPA420-F-05-001. Washington, DC: Environmental Protection Agency.
Epplin, F. M. 1996 Cost to produce and deliver switchgrass biomass to an ethanol-conversion facility in the Southern Plains of the United States. Biomass Bioenergy 11, 459–467. (doi:10.1016/S0961-9534(96)00053-0)
Fairley, P. 2006 Growing biofuels. Bioenergy News. Wellington, NZ: Bioenergy Association of New Zealand.
Fargione, J., Hill, J., Tilman, D., Polasky, S. & Hawthorne, P. 2008 Land clearing and the biofuel carbon debt. Science 319, 1235–1238. (doi:10.1126/science.1152747)
Farrell, A. E., Plevin, R. J., Turner, B. T., Jones, A. D., O'Hare, M. & Kammen, D. M. 2006 Ethanol can contribute to energy and environmental goals. Science 311, 506–508. (doi:10.1126/science.1121416)
Galindo Cifre, P. & Badr, O. 2007 Renewable hydrogen utilisation for the production of methanol. Energy Convers. Manage. 48, 519–527. (doi:10.1016/j.enconman.2006.06.011)
Gustavsson, L., Holmberg, J., Dornburg, V., Sathre, R., Eggers, T., Mahapatra, K. & Marland, G. 2007 Using biomass for climate change mitigation and oil use reduction. Energy Policy 35, 5671–5691. (doi:10.1016/j.enpol.2007.05.023)
Halmann, M. M. & Steinberg, M. 1999 Greenhouse Gas Carbon Dioxide Mitigation: Science and Technology. Boca Raton, FL: Lewis Publishers.
Hoogwijk, M., Faaij, A., van den Broek, R., Berndes, G., Gielen, D. & Turkenburg, W. 2003 Exploration of the ranges of the global potential of biomass for energy. Biomass Bioenergy 25, 119–133. (doi:10.1016/S0961-9534(02)00191-5)
Inui, T. 1996 Highly effective conversion of carbon dioxide to valuable compounds on composite catalysts. Catal. Today 29, 329–337. (doi:10.1016/0920-5861(95)00300-2)
IPCC 2005 Special Report on Carbon Dioxide Capture and Storage (eds B. Metz, O. Davidson, H. C. de Coninck, M. Loos & L. A. Meyer). Cambridge, UK: Cambridge University Press.
IPCC 2007 Summary for policy makers. In Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
Jia, G., Tan, Y. & Han, Y. 2006 A comparative study on the thermodynamics of dimethyl ether synthesis from CO hydrogenation and CO2 hydrogenation. Ind. Eng. Chem. Res. 45, 1152–1159. (doi:10.1021/ie050499b)
Johnston, N. A. C., Blake, D. R., Rowland, F. S., Elliot, S., Lackner, K. S., Ziock, H. J., Dubey, M. K., Hanson, H. P. & Barr, S. 2003 Chemical transport modeling of potential atmospheric CO2 sinks. Energy Convers. Manage. 44, 681–689. (doi:10.1016/S0196-8904(02)00078-X)
Kato, Y., Liu, C. Y., Otsuka, K.-I., Okuda, Y. & Yoshizawa, Y. 2005 Carbon dioxide zero-emission hydrogen system based on nuclear power. Prog. Nucl. Energy 47, 504–511. (doi:10.1016/j.pnucene.2005.05.051)
Keith, D. & Farrell, A. 2003 Rethinking hydrogen cars. Science 301, 315–316. (doi:10.1126/science.1084294)
Keith, D. W., Ha-Duong, M. & Stolaroff, J. 2006 Climate strategy with CO2 capture from the air. Clim. Change 74, 17–45. (doi:10.1007/s10584-005-9026-x)
Khesghi, H., Prince, R. & Marland, G. 2000 The potential for biomass fuels in the context of global climate change: focus on transportation fuels. Annu. Rev. Energy Environ. 25, 199–244. (doi:10.1146/annurev.energy.25.1.199)
Kieffer, R., Fujiwara, M., Udron, L. & Souma, Y. 1997 Hydrogenation of CO and CO2 toward methanol, alcohols and hydrocarbons on promoted copper–rare earth oxide catalysts. Catal. Today 36, 15–24. (doi:10.1016/S0920-5861(96)00191-5)
Lide, D. R. 2000 CRC Handbook of Chemistry and Physics, 81st edn. Boca Raton, FL: CRC Press.
Mayersohn, N. 2007 Hydrogen car is here, a bit ahead of its time. New York Times, 9 December 2007.
Michel, S. 1999 Methanol production costs. Rep. Sci. Technol. Linde 61, 60–65.
Mignard, D. & Pritchard, C. 2006 Processes for the synthesis of liquid fuels from CO2 and marine energy. Chem. Eng. Res. Des. 84, 828–836. (doi:10.1205/cherd.05204)
Nohlgren, I. 2004 Non-conventional causticization technology: a review. Nordic Pulp Pap. Res. J. 19, 467–477. (doi:10.3183/NPPRJ-2004-19-04-p470-480)
Oates, J. A. H. 1998 Lime and Limestone: Chemistry and Technology, Production and Uses. Weinheim, Germany: Wiley–VCH.
Official Energy Statistics from the US Government (EIA) 2008 See http://tonto.eia.doe.gov/dnav/pet/hist/mg_tt_usM.htm and http://tonto.eia.doe.gov/dnav/pet/hist/rwtcM.htm.
Ogden, J. 1999 Prospects for building a hydrogen energy infrastructure. Annu. Rev. Energy Environ. 24, 227–279. (doi:10.1146/annurev.energy.24.1.227)
Petrus, L. & Noordermeer, M. A. 2006 Biomass to biofuels, a chemical perspective. Green Chem. 8, 861–867. (doi:10.1039/b605036k)
Rhodes, J. S. & Keith, D. W. 2005 Engineering economic analyses of biomass IGCC with carbon capture and storage. Biomass Bioenergy 29, 440–450. (doi:10.1016/j.biombioe.2005.06.007)
Searchinger, T., Heimlich, R., Houghton, R. A., Dong, F., Elobeid, A., Fabiosa, J., Tokgoz, S., Hayes, D. & Yu, T.-H. 2008 Use of U.S. croplands for biofuels increases greenhouse gases through emissions from land-use change. Science 319, 1238–1240. (doi:10.1126/science.1151861)
Sherif, S. A., Barbir, F. & Veziroglu, T. N. 2005 Wind energy and the hydrogen economy: review of the technology. Sol. Energy 78, 647–660. (doi:10.1016/j.solener.2005.01.002)
Spector, N. A. & Dodge, B. F. 1946 Removal of carbon dioxide from atmospheric air. Trans. Am. Inst. Chem. Eng. 42, 827–848.
Steinberg, M. & Dang, V.-D. 1977 Production of synthetic methanol from air and water using controlled thermonuclear reactor power. I. Technology and energy requirement. Energy Convers. Manage. 17, 97–112. (doi:10.1016/0013-7480(77)90080-8)
Vitousek, P. M., Ehrlich, P. R., Ehrlich, A. H. & Matson, P. A. 1986 Human appropriation of the products of photosynthesis. BioScience 36, 368–373. (doi:10.2307/1310258)
Walsh, M. E., De La Torre Ugarte, D., Shapouri, H. & Slinsky, S. P. 2003 Bioenergy crop production in the United States. Environ. Resour. Econ. 24, 313–333. (doi:10.1023/A:1023625519092)
Wara, M. 2007 Is the global carbon market working? Nature 445, 595–596. (doi:10.1038/445595a)
Yang, C. & Ogden, J. 2007 Determining the lowest cost hydrogen delivery mode. Int. J. Hydrogen Energy 32, 268–286. (doi:10.1016/j.ijhydene.2006.05.009)
Zeman, F. S. 2007 Energy and material balance of CO2 capture from ambient air. Environ. Sci. Technol. 41, 7558–7563. (doi:10.1021/es070874m)
Zeman, F. S. 2008 Experimental results for capturing CO2 from the atmosphere. AIChE J. 54, 1396–1399. (doi:10.1002/aic.11452)
8 Ocean fertilization: a potential means of geo-engineering?

R. S. Lampitt, E. P. Achterberg, T. R. Anderson, J. A. Hughes, M. D. Iglesias-Rodriguez, B. A. Kelly-Gerreyn, M. Lucas, E. E. Popova, R. Sanders, J. G. Shepherd, D. Smythe-Wright and A. Yool
The oceans sequester carbon from the atmosphere partly as a result of biological productivity. Over much of the ocean surface, this productivity is limited by essential nutrients, and we discuss whether it is likely that sequestration can be enhanced by supplying limiting nutrients. Various methods of supply have been suggested, and we discuss the efficacy of each and the potential side effects that may develop as a result. Our conclusion is that these methods have the potential to enhance sequestration but that the current level of knowledge from the observations and modelling carried out to date does not provide a sound foundation on which to make clear predictions or recommendations. For ocean fertilization to become a viable option to sequester CO2, we need more extensive and targeted fieldwork and better mathematical models of ocean biogeochemical processes. Models are needed both to interpret field observations and to make reliable predictions about the side effects of large-scale fertilization. They would also be an essential tool with which to verify that sequestration has effectively taken place. There is considerable urgency to address climate change mitigation, and this demands that new fieldwork plans be developed rapidly. In contrast to previous experiments, these must focus on the specific objective, which is to assess the possibilities of CO2 sequestration through fertilization.
8.1 Background

It is now generally accepted (IPCC 2007) that emissions of CO2 to the atmosphere are the dominant cause of global warming, and that cuts in these emissions (currently approx. 8 GtC yr−1 and rising) are needed in the next few years. Concurrent with these cuts, it may be possible to develop technological strategies to enhance the sequestration of carbon dioxide from the atmosphere. Within the ocean, carbon is cycled between the surface and deep waters as part of a natural cycle that has two major components of similar magnitude (Sarmiento et al. 1995). One is the 'solubility pump', driven by ocean circulation and by the solubility of CO2 in seawater; the other is the 'biological pump', driven by the primary production of plant (phyto-)plankton and the subsequent gravitational settling of carbon-rich detrital particles or dispersion of dissolved organic carbon. The solubility pump has increased since pre-industrial times as a result of increasing atmospheric CO2 concentration, and this historically recent ocean sink for anthropogenic CO2 is currently estimated to be approximately 2 GtC yr−1 (IPCC 2007). Ways in which direct action could be taken to enhance the ocean solubility pump still further have been considered, with the conclusion that they are 'highly unlikely to ever be a competitive method of sequestering carbon in the deep ocean' (Zhou & Flynn 2005) because the CO2 cost of implementing them would outweigh the benefits.

With regard to the biological pump, the production of organic matter by phytoplankton absorbs CO2 from solution, lowering its concentration in the surface zone and thus leading to uptake from the atmosphere as a result of an increased concentration gradient. Over almost the entire ocean, primary production of phytoplankton is limited by light and nutrient supply, and in the sunlit surface ocean (the 'euphotic zone', approximately the top 100 m) one or other of the essential nutrients is almost always exhausted at some time during the growing cycle (Figure 8.1). An important point is that the relief of limitation by one nutrient will normally allow production to increase only to the point where it is limited by another. Concentrations deeper in the water column remain high, and upward mixing of these nutrients is responsible for much of the productivity (Figure 8.2). It is this surface feature of nutrient deficiency that underlies the suggestion that ocean fertilization could be used to enhance the productivity of the euphotic zone, stimulate the biological pump and hence increase the uptake of anthropogenic CO2. In this chapter, the relevant component of productivity is that based on a nutrient supply from outside the euphotic zone. This is termed 'new production' and is, in effect, the production available for export. Production based on nutrients that are recycled within the euphotic zone, such as ammonia, is not relevant here but is an important component of the total productivity, often referred to simply as 'primary production'.
Figure 8.1 Global annual minimum distribution of surface concentrations of nitrate, one of the principal macronutrients limiting primary production (Levitus World Ocean Atlas 1994) (see also colour plate).
Figure 8.2 Distribution of phosphate from south to north in the Pacific along 170° W, showing the near-surface depletion and the increase in concentration with depth (see also colour plate).
Figure 8.3 Schematic of the decrease in downward flux of organic carbon as a function of depth in the water column. This is based on Martin et al. (1987) depicting the values that may be encountered in the temperate North Atlantic Ocean but the general principle is common to other regions. The two factors that determine the shape of the curve are the sinking rate of the particles and their rate of degradation.
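Profiles of this kind are commonly described by the power-law fit of Martin et al. (1987). A minimal sketch (the exponent b = 0.858 is the canonical open-ocean value from that study, assumed here rather than taken from this chapter, and varies regionally):

```python
# Martin et al. (1987) power-law profile for downward POC flux below 100 m.
def martin_flux(depth_m, f100=1.0, b=0.858):
    """Flux at depth_m, as a fraction of the flux f100 at 100 m."""
    return f100 * (depth_m / 100.0) ** (-b)

for z in (100, 200, 500, 1000):
    print(f"{z:5d} m: {martin_flux(z):.2f} of the 100 m flux")
# only ~14% of the export flux survives to 1000 m with this parameterization
```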
If ocean fertilization is to be useful as a geo-engineering option, any carbon removed from the atmosphere must remain separated from the sea surface, and hence out of contact with the atmosphere, for periods in excess of a century (IPCC 2007). This requirement demands that any enhanced production must lead to 'sequestration' of the material by settling into the deeper water masses (200–1000 m), below the depth of winter mixing. Losses by sinking from the euphotic zone are referred to as the 'export flux', as distinct from this 'sequestration flux' to deep water (Figure 8.3). The process of biological production generates either particulate inorganic carbon (PIC; calcite) or particulate organic carbon (POC), or both, and these two pathways are significantly different in terms of the uptake of CO2. The production of organic matter removes CO2 from solution, while the process of calcification decreases surface ocean alkalinity and in addition releases CO2 to solution, thereby partly counteracting the biological pump (Zeebe & Wolf-Gladrow 2001; Iglesias-Rodriguez et al. 2002). The potential for carbon sequestration is therefore influenced by the balance between calcifying and non-calcifying organisms. The majority of synthesized POC is remineralized to dissolved inorganic carbon (DIC)
in the upper few hundred metres of the ocean as a result of biological degradation, leading inevitably to the decrease in flux with depth (Figure 8.3). However, a small fraction of it escapes into the zone below the depth of winter mixing, and it is this component that can be considered as sequestered. This depth exhibits considerable regional variability, from approximately 200 to 1000 m.

In this chapter, we present the background to the suggestion that it may be possible to enhance ocean sequestration of carbon from the atmosphere. We provide a brief account of the role of nutrients in the oceans and their supply routes, and of the modelling approaches that are required to understand these complex interactions of physical, chemical and biological processes. Such models are central to any discussion of the efficacy of any proposed strategy for ocean fertilization and of the potential side effects. We review in Section 8.2 the four means that have so far been suggested to enhance sequestration and comment on their likelihood of success. In Section 8.3, we discuss the side effects that may occur and, in Section 8.4, we describe the research that is required to provide a basis for rational decisions about the way forward.

A crucial issue for all of these methods is whether (and to what extent) they can be demonstrated to enhance carbon sequestration to the satisfaction of the international scientific community, policy makers and the commercial sector. Before carbon credits could be granted or traded, the extent of sequestration must be adequately verified. There is no single method that must be adopted for verification, but considerable discussion is required in order to agree upon the principles and practices to be adopted. It is likely that verification will involve direct measurement of the downward flux of particulate carbon to deep oceanic layers, measurement of the concentrations of elements affected by an enhanced sequestration flux, such as oxygen and nitrogen, and good satellite images. Furthermore, and crucially, all this must be supported by a suitable modelling framework within which the observations can be adequately interpreted and the confidence in the level of sequestration expressed in an objective and analytical manner. This difficulty of establishing a means of verification is by no means new; the agricultural livestock industry, for instance, is facing very similar problems (DEFRA report 2006: RCCF 06/09). Nevertheless, the principles need to be addressed and agreement reached between the industrial, policy-making and scientific communities as to what constitutes appropriate verification.

8.1.1 Macro- and micronutrients

A variety of nutrients are essential for the growth of phytoplankton (Arrigo 2005). These can be divided into macronutrients, such as nitrate and phosphate, that are required in relatively high concentrations, and micronutrients, such as iron and zinc, that are required in much smaller quantities.
Table 8.1 Elements important for oceanic new primary production and their principal supply routes (upwelling and mixing of deep water; atmospheric supply as gas; atmospheric supply as dust). (Elements that generally limit new production in some areas are in italics. The concentration limiting productivity is not a fixed value, as co-limitation is a frequent occurrence during which a low concentration of one nutrient can render the community more sensitive to limitation by another. Silicon does not affect the total annual new production directly but will alter the temporal trend of new production during the growing season (earlier peak in the presence of silica). However, the presence or absence of Si will affect the community structure, which will probably affect the remineralization length scale of settling particles. This, in turn, will affect the vertical distribution of nutrients in the mesopelagic, which will then change subsequent new production levels. Note that, in contrast to terrestrial ecosystems, carbon (as DIC) is not usually a limiting nutrient.)

Element        Conc. limiting productivity   Conc. range in the oceanic euphotic zone
Phosphorus     < 0.01 μM                     0.005–2.0 μM
Nitrogen       < 0.02 μM                     0.002–30.0 μM
Silicon        0.2 μM                        0.05–130 μM
Iron           0.2 nM                        0.005–1.0 nM
Zinc           –                             0.01–1.0 nM
Carbon (DIC)   –                             2.0 mM
Some elements, such as silicon, are essential for the growth of certain phylogenetic groups (diatoms) but do not necessarily limit overall production. Nutrients are supplied to the euphotic zone by a variety of mechanisms (Table 8.1), all of which are relevant to this discussion. Macronutrients below the euphotic zone generally occur in a constant 'Redfield ratio' of N : P of 16 : 1 (Redfield 1934) and the elemental ratio of particulate matter in surface waters often does not deviate far from this (e.g. Chen et al. 1996). With regard to carbon, the vast majority is present as DIC, but the particulate matter in the surface and subsurface has a ratio of C : N : P of approximately 106 : 16 : 1, while dissolved organic matter (DOM) has a ratio of 199 : 20 : 1 (Hopkinson & Vallino 2005). The implication is that if nutrients were provided solely from deep water, and if the settling biogenic particles (or DOM) had the same composition as the upwelled water, sequestration could not be enhanced in any sustained way. However, such simple first-order statements are not precisely correct, and second-order effects allow some scope for sequestration by artificial ocean fertilization. For example, as seen from Table 8.1, some nutrient supplies are not associated with carbon, such as
nitrogen gas from the atmosphere; furthermore, settling particles do not always have a Redfield composition.

The availability of nutrients in the oceans and their means of supply vary considerably from one region to another, due largely to differences in physical characteristics. For example, approximately 25 per cent of the ocean surface has consistently high concentrations of macronutrients and yet the plant biomass (as indicated by chlorophyll) remains low. Production in these high-nutrient low-chlorophyll (HNLC) waters is primarily limited by micronutrients, especially iron. By contrast, low-nutrient low-chlorophyll (LNLC) waters can be found in the subtropical gyre systems of the oceans. These oligotrophic regions comprise approximately 40 per cent of the ocean surface and are characterized by wind-driven downwelling and a strong thermocline (both of which impede the nutrient supply from deeper water by vertical mixing) and hence exhibit very low surface-water nutrient concentrations. To overcome the deficiency of nitrogen, fixation of nitrogen gas (diazotrophy) by cyanobacteria forms a crucial component of the biogeochemical cycle in many of these waters as it provides a major source of available nitrogen. In effect, diazotrophy ultimately prevents the ocean from losing the nitrogen required for photosynthesis (Falkowski 1997; Tyrrell 1999). For phosphorus, however, there is no alternative supply route and it can therefore be considered the ultimate limiting macronutrient (Tyrrell 1999): the only sources available to fuel primary production are the stocks in deep water and those supplied by rivers or on airborne dust, and where such sources are absent, productivity will cease once local production exhausts the upper-ocean pool.
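The first-order argument above can be made concrete with a toy carbon budget. The sketch below is illustrative only: it uses the Redfield C : N ratio quoted above and contrasts a nutrient supply that arrives accompanied by Redfield-proportioned DIC (upwelled deep water) with one that arrives without carbon (nitrogen fixed from atmospheric N2).

```python
# Toy budget for new production fuelled by a nutrient supply (illustrative).
# Redfield molar ratio from the text: C : N = 106 : 16.
C_TO_N = 106.0 / 16.0

def net_co2_drawdown(n_supplied, dic_supplied):
    """Carbon (mol) that must be drawn from the atmosphere to support the
    new production fuelled by n_supplied mol of nitrogen, given that
    dic_supplied mol of DIC arrives with the nutrients."""
    c_fixed = C_TO_N * n_supplied    # carbon fixed into settling particles
    return c_fixed - dic_supplied    # shortfall met by air-sea CO2 flux

# Upwelled deep water: nutrients arrive with Redfield-proportioned DIC.
print(net_co2_drawdown(16.0, C_TO_N * 16.0))  # 0.0 -> no net sequestration

# Diazotrophy: nitrogen fixed from atmospheric N2 arrives with no DIC.
print(net_co2_drawdown(16.0, 0.0))            # 106.0 mol C drawn down
```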
8.2 Efficacy of ocean fertilization for carbon sequestration

To date, four distinct ways have been proposed to fertilize the oceans in order to enhance carbon sequestration. Two of these involve the supply of large quantities of macronutrients (or nutrient cocktails) and two involve micronutrient supply designed to facilitate the efficient usage of existing macronutrients. We now consider the potential efficacy of each of the schemes. Where possible, we present costs of implementing these schemes that can then be compared with other geo-engineering proposals and with the current trading value of carbon emissions, which at present is of the order of $75 tonne−1 of carbon (note that 1 t (C) corresponds to 3.67 t (CO2)). Clearly, it is the net sequestration that must ultimately be calculated, after taking into account the energetic costs of the technology concerned. Although such a cost comparison may seem the correct approach, not all industrial sectors are involved in carbon trading and cost is not always the appropriate comparator. Potential side effects are considered in Section 8.3.
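For orientation, converting between the two accounting bases used in this chapter (price per tonne of carbon and price per tonne of CO2) is a one-line calculation; the sketch below simply restates the figures given above.

```python
# Convert the carbon trading price quoted in the text between the two
# common accounting bases ($ per tonne C versus $ per tonne CO2).
T_CO2_PER_T_C = 44.01 / 12.011        # ~3.67, ratio of molar masses CO2/C

price_per_t_c = 75.0                  # $ per tonne of carbon (text value)
print(price_per_t_c / T_CO2_PER_T_C)  # ~20.5 $ per tonne of CO2
```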
An aspect that is often overlooked is that, although fertilization may be carried out in a particular area leading to sequestration, the efficacy of this action must be considered in a global context and on a timescale of at least 100 years (as defined by the IPCC; see Section 8.1). The rationale behind this statement is that if a water mass is fertilized and leads to local sequestration, this may not be globally significant if that water mass would, over the subsequent weeks and months, have been fertilized naturally. While the process of fertilization may be local (e.g. 10^4–10^6 km^2) and of limited duration (1–10 years), the assessment of the effects must be global and address the 100-year time period adopted by the IPCC. This demands that the macro- and micronutrient cycles, with their associated ecosystem interactions, are embedded in high-resolution three-dimensional global circulation models. When combined with appropriate field observations, this will be the only effective way of assessing the long-term efficiency and remote consequences of any type of ocean fertilization.

8.2.1 Nutrient supply from land

It has been proposed that fertilizer cocktails of macro- and micronutrients should be manufactured on land and transported by submarine pipe to a region significantly beyond the edge of the continental shelf. The nutrient ratios and the temporal supply rates could be controlled so that biological populations develop that optimize sequestration. Such environmental manipulation is today carried out in a sophisticated manner in terrestrial glasshouses, where the physical conditions can be controlled. With close monitoring, there is no a priori reason why nutrient supply should not be managed similarly in the open ocean, even though control of the physical environment there is unlikely to be possible. Empirical support for this approach has been based largely on observations that the substantial leakage of agricultural fertilizer to coastal seas increases production greatly (with the associated problems of excessive eutrophication; see Section 8.3.1). In the open oceans, observations have been limited to a purposeful release of phosphate and iron (Rees et al. 2006) and to various modelling studies that highlight the intimate and intricate relationships between the various essential elements and the forms in which they are present in the ocean (Dutkiewicz et al. 2005; Parekh et al. 2005).

On the face of it, macronutrient supply from land has much to recommend it. We believe that, if properly implemented, it is likely that such a scheme would lead to enhanced oceanic carbon sequestration. The drawback of the scheme is that the energetic costs of producing the cocktail and piping it from the land to regions of nutrient limitation are likely to be large, with a carbon footprint that may be greater than the carbon sequestered. Nevertheless, it is worthwhile assessing the approximate costs of this proposal as there is a likelihood of successful sequestration. As phosphorus is the ultimate limiting nutrient in the oceans, mining of phosphate-bearing rocks would have to be substantially increased. This is a demanding process, and some sources currently used for fertilizer have a phosphate content as low as 4 per cent, with many sources highly enriched in carbonate (Zapata & Roy 2004). One of the most common water-soluble fertilizers is diammonium phosphate (DAP), (NH4)2HPO4, which currently has a market price of approximately $420 tonne−1 FOB (http://www.icispricing.com/il_shared/Samples/SubPage181.asp), or $1700 tonne−1 of phosphorus. The costs of purification and injection to nutrient-poor regions of the ocean are likely to be large, but if one excludes these costs and uses the C : P Redfield ratio of 106 : 1, one obtains a sequestration cost of the order of $45 tonne−1 of carbon, a figure that is substantially less than the current trading price for carbon emissions. However, such a simple calculation may not be appropriate as a result of the details of timing. Although new production would increase rapidly in response to this nutrient supply (days), CO2 is absorbed by the ocean slowly (months), and depending on the local physics of the water column there is a possibility that this mismatch would prevent sequestration of atmospheric CO2.
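The arithmetic behind these cost figures is easily reproduced. The sketch below is our own back-of-envelope check using only the market price and the Redfield C : P ratio quoted above; as in the text, purification and injection costs are excluded.

```python
# Back-of-envelope cost of sequestration via diammonium phosphate (DAP),
# reproducing the figures quoted in the text (injection costs excluded).
M_P, M_C = 30.974, 12.011                                    # g/mol
M_DAP = 2 * (14.007 + 4 * 1.008) + 1.008 + M_P + 4 * 15.999  # (NH4)2HPO4

dap_price = 420.0                       # $ per tonne DAP (text value, FOB)
p_fraction = M_P / M_DAP                # mass fraction of P in DAP (~0.23)
price_per_t_p = dap_price / p_fraction  # ~$1790 per tonne of phosphorus

c_per_p_mass = 106 * M_C / M_P          # Redfield C:P of 106:1 -> ~41 t C per t P
price_per_t_c = price_per_t_p / c_per_p_mass
print(f"${price_per_t_p:.0f} per t P; ${price_per_t_c:.0f} per t C")
# ~$1791 per t P and ~$44 per t C: consistent, given rounding, with the
# ~$1700 and ~$45 figures quoted above.
```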
The extraction of nitrogen gas from the atmosphere and its conversion to urea has already been proposed, and small-scale applications carried out (Ocean Nourishment Corporation; http://www.oceannourishment.com/). Initial studies of the cost of this process suggested that it was a viable proposition (Shoji & Jones 2001), with approximately 12 tonnes of CO2 captured per tonne of ammonia provided and a cost of approximately $25 tonne−1 of carbon claimed to be 'sequestered'. However, these calculations were based on the assumption that phosphate would always be present in unlimited quantities at the site of injection, an assumption that is incorrect in almost all regions of the oceans. As mentioned previously, relief of limitation by one nutrient will normally allow production to increase only to the point where it is limited by another. Thus, unless persuasive data are released demonstrating that this is likely to lead to sustained sequestration, we conclude that it will only provide a short-term and localized enhancement of the biological pump and possibly no effective sequestration of atmospheric CO2.

The issue of distance from the shelf edge is an important one and obviously affects the economic and engineering viability of this scheme substantially. From the modelling perspective, this raises particular problems that will need to be addressed. Shelf zones have rarely been included in ocean general circulation models and there are some very particular difficulties in accurately representing the physics in areas where the shelves meet the open ocean. Until recently, large-scale ocean circulation and shelf sea modelling have progressed in parallel with
little interaction between the two. Recent increases in computational resources have, however, allowed the refinement of finite-difference grids and the application of finite-element approaches that increase the resolution towards the shelf and represent the shallow topography and coastline accurately (e.g. Davies & Xing 2005). Thus, although some of the ocean circulation models encompass the shelf edge, it will be a major challenge to embed in these models the appropriate ecosystem dynamics and carbon cycle.

8.2.2 Macronutrient supply from the deep ocean

An alternative method proposed to supply nutrients to the oceanic euphotic zone is the use of local wave power to pump deep nutrient-rich water from depths of several hundred metres to the surface (http://www.atmocean.com/sequestration.htm; Lovelock & Rapley 2007). It is claimed that this would lead to enhanced production and sequestration of organic carbon via a direct stimulation of the biological carbon pump. This claim has been disputed on the grounds that deep waters also contain elevated concentrations of dissolved carbon dioxide that may be released to the atmosphere when these deep waters reach the surface (Shepherd et al. 2007). To first order, assuming Redfield stoichiometry, the net supply of DIC in the upwelled water will be just sufficient to supply the carbon required for the additional photosynthesis generated by the upwelled nutrients, without requiring drawdown of CO2 from the atmosphere. However, there are second-order effects to be considered, the most significant of which is the way in which the composition of the water pumped from depth (C : N : P) differs from that of the settling particles. It is widely accepted that nitrogen is preferentially remineralized relative to carbon from sinking organic material (e.g. Anderson & Sarmiento 1994; Christian et al. 1997). Consequently, upward flux of this relatively nitrate-rich water will allow a sinking flux of carbon larger than that contained in the upwelled water, thus potentially allowing a net air–sea flux of CO2 to occur. Whether the offset between the ratios of these two elements, C and N, in the upwelled water and in the sinking particles could be sufficiently large for this strategy to become a plausible means of sequestering CO2 from the atmosphere is unclear at present. As far as we are aware, no comprehensive studies have examined the effects of these pipes at the time and space scales most pertinent to the anticipated effects. Preliminary calculations (A. Yool et al. unpubl. data) using upper-limit assumptions for the effective translocation of nutrients indicate that the efficiency of sequestration is low. A very large number of pipes (approaching 1000 km^2 in total area) would therefore be needed to achieve sequestration of 1 GtC yr−1.
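The stoichiometric argument can be illustrated with a simple sketch. The C : N ratios below are for demonstration only: Redfield 106 : 16 for the remineralized matter in the upwelled water, and a modestly higher, assumed value for the sinking particles to represent the preferential remineralization of nitrogen.

```python
# Sketch of the C:N offset argument for ocean pipes (illustrative ratios).
CN_UPWELLED = 106.0 / 16.0  # Redfield C:N of remineralized deep water (~6.6)
CN_SINKING = 7.3            # assumed C:N of sinking particles: N is
                            # preferentially remineralized, so exported
                            # material is carbon-rich relative to Redfield

n_up = 1.0                       # mol N upwelled per unit time
dic_up = CN_UPWELLED * n_up      # DIC arriving with that nitrogen
c_exported = CN_SINKING * n_up   # carbon leaving in sinking particles

print(f"net drawdown: {c_exported - dic_up:.2f} mol C per mol N upwelled")
# ~0.68 mol C per mol N: small relative to the gross fluxes of ~6.6-7.3,
# consistent with the low efficiency found in the preliminary calculations
# cited above.
```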
An inherent part of this scheme's design is that the pipes will supply not only nutrients but also denser water to the surface. This leads to a statically unstable situation: denser fluid overlying lighter fluid. Depending on the upward flow rate and the rate of lateral surface mixing, this will increase mixing and deepening of the upper mixed layer, with consequences for the light field experienced by phytoplankton. These detailed concerns may best be assessed by high-resolution non-hydrostatic physical models or finite-element models with adaptive meshes. Although early results suggest that the usefulness of the pipes scheme may be limited by its efficiency, large uncertainties still exist in the pipes' precise operation at the local scale and in how this translates into their ability to enhance oceanic uptake of CO2. We believe that this scheme cannot yet be dismissed as a potential solution and that further research is warranted.

8.2.3 Iron supply to HNLC regions (enhance macronutrient uptake)

Ice-core records indicate that during past glacial periods, naturally occurring iron fertilization repeatedly drew massive amounts of carbon from the atmosphere. Several observational programmes have been carried out in present-day HNLC regions where there are regionally restricted natural supplies of iron, and it has been suggested that this supply of iron is sufficient to relieve macronutrient limitation and hence enhance local productivity (Figure 8.4, red squares). The two most recent observational programmes, both in the Southern Ocean, examined the region around the Crozet Islands and that associated with a shallow plateau near Kerguelen Island (Blain et al. 2007; Pollard et al. 2007). These studies have shown not only that there is enhanced surface production and nitrate reduction as a result of the local iron supply, but also that this enhancement leads to increased fluxes of organic carbon below the euphotic zone, some of which reaches the sediments. The conclusion has been that natural iron fertilization in such HNLC regions promotes carbon export and sequestration by measurable amounts. The amount of carbon sequestered per unit addition of iron is of considerable interest and is termed the iron fertilization efficiency (IFE). Results from the field programmes indicate that the value of IFE at Crozet is approximately a quarter of that calculated for Kerguelen, although the uncertainties at both locations are large and the difference between them is probably not statistically significant. A key goal of future observational programmes must be to refine this value.

Twelve artificial iron fertilization experiments have been carried out since 1993 to examine the effects of in situ addition of this micronutrient on upper-ocean biogeochemistry (summarized in de Baar et al. 2005; Boyd et al. 2007). These experiments have shown that supplementing these areas with iron has a significant effect on biological processes in these regions and on the cycles of the major
Figure 8.4 Annual average surface nitrate showing the locations of iron experiments referred to in Boyd et al. (2007); red, natural Fe studies; white, Fe addition experiments; green, Fe+P addition experiments (see also colour plate).
elements such as carbon, nitrogen, silicon and sulphur. Although all experiments enhanced the growth of phytoplankton, they were not all designed to measure export from the upper ocean and none was designed to measure sequestration. There was nevertheless evidence of enhanced export flux in several of the experiments, and one may expect that this led to enhanced sequestration, though to an unknown extent.

This fertilization method has been the focus of more publicity than other methods, largely stemming from an informal sound bite by John Martin in 1988 that an ice age could be initiated with 'half a tanker full of iron'. The laboratory experiments that formed the basis for Martin's comments indicated that every tonne of iron added to HNLC regions could sequester 30 000–100 000 tonnes of carbon. Models of progressively increasing resolution and realism have been used during the last 20 years to evaluate the potential of iron fertilization of HNLC regions as a means of consuming nutrients and sequestering carbon. Early simplistic models (e.g. Peng & Broecker 1985) indicated a possible reduction in atmospheric CO2 of 50–100 ppm; however, recent studies with higher-resolution three-dimensional models coupled to ecosystem dynamics including iron have suggested that the addition of iron is much less efficient (of the order of 10 ppm) because the other limiting factors of light and grazing become dominant (e.g. Dutkiewicz et al. 2005; Aumont & Bopp 2006).
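These numbers can be put on a common footing with a little arithmetic. The sketch below uses the laboratory-derived sequestration ratios quoted above, plus the standard conversion of 1 ppm of atmospheric CO2 to roughly 2.13 GtC (a textbook value, not from this chapter), to indicate the scale of iron addition implied by a 1 GtC yr−1 target.

```python
# Scale of iron additions implied by the laboratory-derived ratios quoted
# in the text (30 000-100 000 t C sequestered per t Fe).
target_gt_c = 1.0                    # desired sequestration, GtC per year
for c_per_fe in (30_000, 100_000):   # t C per t Fe
    fe_needed = target_gt_c * 1e9 / c_per_fe
    print(f"{c_per_fe} t C / t Fe -> {fe_needed:,.0f} t Fe per year")
# 10 000-33 333 t Fe per year, i.e. of the order of the total additions
# currently projected by the OIF industry (see Section 8.3.4).

# Atmospheric significance of the modelled drawdowns (1 ppm CO2 ~ 2.13 GtC).
GT_C_PER_PPM = 2.13
for ppm in (10, 50, 100):
    print(f"{ppm} ppm ~ {ppm * GT_C_PER_PPM:.0f} GtC")
```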
The link between nutrient supply and ecosystem dynamics is complex, especially for micronutrients. Formulations suitable for global ecosystem models are only now becoming available, with detailed physiological models of iron cycling (e.g. Flynn 2001) being implemented in ecosystem models (e.g. Fasham et al. 2006). However, modellers still face many problems in representing aspects of iron cycling such as the complex speciation of iron in the marine environment, bioavailability (e.g. binding by organic ligands), photochemical processes and interactions with colloids (Weber et al. 2005). Global biogeochemical models are not yet capable of accurately predicting both upper-ocean production and the consequent export of organic matter to deep waters (e.g. Gehlen et al. 2006), let alone the impact of a perturbation due to iron fertilization on the system. Our understanding of the mechanisms contributing to export remains incomplete, compromising the ability to predict successfully the ecosystem response to perturbations in iron supply. The data from the iron fertilization experiments are in themselves inconclusive, further contributing to the difficulties in reducing uncertainties in IFE through modelling. The final conclusion from Aumont & Bopp (2006) was that 'the tool used in this study is a simplified (and simplistic) representation of reality. Thus, large uncertainties remain concerning the efficiency of iron fertilization that should be explored using more observations and/or other models.' We concur entirely with this conclusion: until these further studies are carried out, it will be impossible to state with confidence whether iron fertilization in HNLC regions is likely to be effective in sequestering anthropogenic carbon. Only after these studies are completed will it be possible to determine the net benefit of the activity after taking into account the carbon costs.
8.2.4 Iron supply to LNLC regions (enhance nitrogen fixation)

In areas of the ocean where surface waters contain residual phosphate but are deficient in nitrate, nitrogen fixation (which has an especially high dependency on iron) is limited by this micronutrient (e.g. Falkowski 1997). The supply of iron could, if supported by sufficient local supplies of phosphorus, facilitate nitrogen fixation, leading to enhanced productivity and thus possibly also carbon sequestration. As for HNLC areas, an important question regarding the efficacy of iron fertilization in LNLC regions is the extent to which other factors, notably phosphorus, become limiting. The problem is exacerbated by our relatively poor understanding of the mechanisms of nutrient supply, including that of P, to the oligotrophic gyres. The addition of more 'plankton functional types', such as N2 fixers, to marine ecosystem models is fraught with difficulty given our limited understanding of plankton
physiology (Anderson 2005), but this is clearly a crucial task in the context of iron fertilization of LNLC regions. The good correlation between the abundance of the diazotroph Trichodesmium sp. and estimated dust deposition in the subtropical North Atlantic Ocean (Tyrrell et al. 2003) gives further support to the notion that iron supply limits nitrogen fixation in such waters. Similarly, the South Atlantic oligotrophic gyre has low nitrate and iron concentrations but residual phosphate. The effect of this on carbon sequestration has yet to be determined through large-scale field experiments, although the evidence is strong that iron and phosphorus provide pivotal co-limitation of nitrogen fixation (Mills et al. 2004).
8.2.5 Conclusions

None of these four methods has yet been fully explored, either by adequate field experimentation or by appropriate computational modelling of the system. Both are required to determine the likelihood that sequestration can be enhanced, and by how much, but there is definite potential that some or all of the proposed methods could enhance sequestration. However, no serious and detailed assessments have been published of the full economic and/or energetic costs of implementing any of the methods. At present, the carbon trading market is developing at great speed ($10.9 billion in 2005 and $30.2 billion in 2006) but not all industrial sectors are involved. The consequence of this partial involvement of industry is that a direct comparison of costs between the various fertilization methods is much more difficult and will require detailed and thorough analyses. Nevertheless, it seems likely that iron fertilization would be the most cost-effective, simply because the quantity and cost of the fertilizing material required are both small.
8.3 Side effects of ocean fertilization

Before any commercial application of ocean fertilization is considered, it is essential that adequate attention be given to potential unintended consequences, some of which may be deleterious to the marine environment or its users (sensu the London Convention and Protocol), either in the short term (1–10 years) or in the longer term (centuries). Here, we can provide only a brief description of these potential side effects, but subsequently they must be explored in detail so that any benefits of sequestration can be balanced against any potential damage. There will be significant uncertainties in the scientific assessment of several of these side effects. Nevertheless, it will be necessary to estimate probabilities so that a cost–benefit–risk analysis can be carried out in a rational and well-informed manner. We identify and briefly discuss seven areas of potential side effect that will require specific
attention in the future, although we cannot discount the possibility that others will occur.

8.3.1 Eutrophication and anoxia

Defined as the detrimental response of an ecosystem to excess macronutrients, eutrophication is a coastal phenomenon of worldwide concern (Diaz et al. 2004; UNEP 2004). The key features of eutrophication of relevance here include reductions in oxygen levels, changes in phytoplankton species including the development of harmful algal blooms (HABs), and a lowering of biological diversity. It is important to note that the degree to which eutrophication might occur in artificially fertilized areas of the open ocean is debatable on account of differences in circulation patterns, nutrient supply mechanisms and biological communities compared with coastal seas.

Responses of marine organisms to low oxygen are almost entirely negative (Diaz 2001; Levin et al. 2001; Cowie 2005; Domenici et al. 2007). While physiological adaptation can occur, extended exposure (more than 60 days) to anoxia leads to total mortality (Knoll et al. 2007). The likelihood of such prolonged exposure will depend on how well different parts of the deep sea are ventilated. Closer to the continental margins, artificially enhanced POC fluxes may combine with the already highly productive shelf systems to increase the risk of low-oxygen conditions in bottom waters. Such changes potentially reduce the capacity of the system to support commercial fisheries. Prolonged (more than 1 year) anoxia promotes burial of organic carbon into the long-term geological record (Hedges & Keil 1995) and may be a means to sequester carbon, but the degree of success will again depend on circulation patterns and/or proximity to the highly productive shelf–ocean margin systems. However, the promotion of bottom-water anoxia as a sequestration strategy has to be judged against its serious detrimental effects on marine life. Furthermore, purposefully lowering the oxygen content of waters increases the risk of enhanced release of N2O, a greenhouse gas more potent than CO2, negating any potential benefit from fertilization (Fuhrman & Capone 1991; Jin & Gruber 2003). In more extreme situations, 'sulphur eruptions' can occur and the so-called 'black tides' of H2S-laden water can cause extensive and prolonged mortality for almost all marine organisms (Weeks et al. 2002). Interestingly, the susceptibility of organisms to hypoxia is also tied to temperature ranges (i.e. a thermal envelope that varies from species to species; Pörtner et al. 2005), suggesting the possibility of identifying, by latitude, higher- and lower-risk regions for fertilization.

Changes in nutrient input ratios (N : P : Si) can alter phytoplankton community composition. Enrichment of N relative to Si has been accompanied by shifts in species dominance from diatoms to dinoflagellates (see Cloern 2001), whereas
changes in N : P ratios (below Redfield) may have promoted 'nuisance' phytoplankton species such as Phaeocystis sp. (Riegman et al. 1992). Eutrophication also causes HABs (e.g. Chrysochromulina polylepis), which in productive fishery regions have serious economic impacts (Underdal et al. 1989) and can lead to human fatalities through the consumption of contaminated shellfish (Hallegraeff 1993). Hence, tampering with natural oceanic nutrient ratios through fertilization may promote phytoplankton that are harmful to marine life and human health.

In oligotrophic oceanic regions, artificially enhanced POC fluxes may have a positive effect on the benthic biomass (Section 8.3.7). However, closer to the productive continental shelves, an increase in productivity due to eutrophication may reduce diversity in the benthos. Consequently, ocean fertilization strategies need to consider ecosystem characteristics (e.g. biological community structure), proximity to shallower shelf environments and circulation patterns, which can transport organic matter horizontally over large (more than 100 km) distances.

8.3.2 Modification of ocean pH

Ocean pH has fluctuated between 8.0 and 8.3 for the last 25 Myr and, since industrialization, the rate of increase in atmospheric CO2 has caused unprecedented changes in seawater pH and carbonate chemistry (Caldeira & Wickett 2003; Bellerby et al. 2005; Orr et al. 2005). These changes are predicted to impact on biological functions, including calcification, reproduction and physiology (Raven et al. 2005). Fertilization is likely to reduce the current trend of decreasing pH in the euphotic zone, although in deeper water the increased POC supply would tend to lower pH, albeit to only a small degree. Understanding the effect of increasing anthropogenic pCO2 on marine biota is both ecologically relevant and of major significance for managing the global carbon cycle on human timescales. Ocean acidification may shift the phytoplankton community, and this has important implications for sinking particulate organic matter, whose flux may be partly controlled by ballast materials such as dust, silica and calcium carbonate (Klaas & Archer 2002). The available evidence suggests that the increase in atmospheric CO2 absorbed by the oceans will increase photosynthesis by coccolithophores (Leonardos & Geider 2005; Riebesell et al. 2007), and some studies also demonstrate an associated decrease in calcification (Riebesell et al. 2000). Other studies confirm the increase in photosynthesis but also show an increase in calcification in response to elevated pCO2 (Iglesias-Rodriguez et al. 2008). Additionally, evidence from the geological record suggests that past periods of ocean acidification such as the Palaeocene–Eocene Thermal Maximum did not result in a productivity crisis (Stoll et al. 2007), and calcification appeared insensitive to changes in pH (Gibbs et al. 2006). Controls on calcification under high CO2 are
extremely complex (e.g. Riebesell et al. 2007) and probably influenced by other environmental parameters that naturally covary with pCO2.

8.3.3 Modification of global macronutrient balance

Any attempt to fertilize the ocean with nutrients to stimulate production and hence carbon sequestration will inevitably result in a redistribution of nutrients on a global scale. The significance of this statement is that some areas may subsequently experience a decrease in nutrient supply, leading to a reduction in biological productivity and possibly a reduction in economic activities such as fisheries. It is therefore important in any discussion of the side effects of purposeful fertilization that the downstream effects of nutrient redistribution are adequately considered. Once again the crucial requirement is the development of global models with sufficient resolution and appropriate parametrization to examine each potential method of fertilization. Results to date (Sarmiento & Gruber 2002) indicate, for example, that a side effect of HNLC iron fertilization around Antarctica will be to reduce macronutrient concentrations in the equatorial and coastal upwelling regions some decades to centuries later. This may cause a reduction in productivity leading to a reduction in fishery yield. It is essential that such 'costs' are considered in the cost–benefit–risk analyses that will be carried out when appropriate data are available.

8.3.4 Modification of global iron balance

It has been suggested that we should not alter the global balance of this essential trace element as it will become scarce elsewhere. The alternative view has been expressed that the addition of iron to specific regions should be considered as pollution. We demonstrate below that neither of these concerns is justified. Iron is supplied to the surface ocean via the atmospheric transport of dust and its deposition, as well as by the upwelling, entrainment or mixing of deeper waters that are relatively rich in iron and other nutrients (Watson 1997). These sources supply new iron (i.e. iron not acquired via recycling) to the euphotic zone. Rivers and continental-margin sediments are also a significant source of iron to coastal waters (Tappin 2002; Laës et al. 2003). However, uptake by coastal phytoplankton and sedimentation of the fluvial inputs are likely to render this iron supply inaccessible to oceanic phytoplankton. In the oceanic euphotic zone, iron is also recycled from living matter to sustain regenerated biological production. The currently projected iron additions by the ocean iron fertilization (OIF) industries are estimated as 10 000 tonnes per year, which is less than 0.1 per cent of the amount delivered to the ocean by dust (15.5 × 10^6 tonnes yr−1) or rivers (650 × 10^6 tonnes yr−1). Consequently, the projected OIF activities will not significantly upset the global oceanic balance of iron.
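As a quick check on the percentages just quoted (our arithmetic, using the figures in the text):

```python
# Projected OIF iron additions relative to natural inputs (text values).
oif = 10_000.0     # t Fe per year, projected OIF additions
dust = 15.5e6      # t Fe per year, atmospheric dust
rivers = 650e6     # t Fe per year, fluvial input
print(f"vs dust:   {100 * oif / dust:.3f} %")    # ~0.065 %
print(f"vs rivers: {100 * oif / rivers:.4f} %")  # ~0.0015 %
```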
Iron is a highly reactive element and is subject to very rapid removal through inorganic precipitation and scavenging processes, in addition to biological uptake. Iron added to the ocean by natural or anthropogenic processes will consequently be rapidly removed from the ocean surface waters. A consequence of this strong removal mechanism is that oceanic surface waters are depleted in iron, with concentrations increasing with depth (Measures et al. 2008). Iron added through OIF will hence be rapidly removed from the surface waters (1–5 months; Sarthou et al. 2003), and continual additions would be required to replenish iron concentrations.

8.3.5 Generation of other climate-relevant gases (greenhouse/cloud forming)

It is important to consider ocean fertilization in the context of radiative forcing and not simply in terms of the carbon cycle. Carbon dioxide is only partially responsible for greenhouse warming and, although this gas is intimately linked to the biological production of the oceans, other climate-relevant gases are also controlled to a large extent by the biological and chemical processes taking place in the oceans. Some of these processes increase radiative forcing while others cause a reduction and, in Table 8.2, we provide an overview of the processes involved in the budgets of these various gases and the factors that are likely to be affected by ocean fertilization. Fluxes are potentially large, particularly if the anticipated decrease in oxygen concentration is sufficient to generate larger quantities of methane and nitrous oxide. The interactions are complex and not well constrained, with a number of potential positive and negative feedbacks. The critical research now needed is to determine and model the production rates of these gases in response to fertilization and hence to determine the influence on greenhouse forcing.

8.3.6 Change to pelagic ecosystem structure

While the purpose of ocean fertilization is to enhance carbon sequestration, one probable consequence is a change to the structure and function of the biological communities, especially in the euphotic zone. These changes may affect fisheries directly or indirectly, or they may alter the details of the export process, such as by modifying the characteristics of the settling particles produced by the euphotic-zone communities (chemical composition, sinking rates, palatability for sub-euphotic-zone communities, etc.). Approximately 1.3 billion people depend on fisheries for a major part of their sustenance and economic welfare, so it is appropriate that we consider ecosystem changes that might be a consequence of ocean fertilization and which might affect such human communities either negatively or positively.
Table 8.2 Gases and aerosols affecting the radiative balance of the Earth, their current effects, the fluxes to and from the ocean and the ways in which ocean fertilization is likely to alter their influence.

CO2. Radiative forcing: 1.6 W m−2. Ocean-to-atmosphere supply rate: −1.4 × 10^14 mol yr−1. Factors causing increase or decrease: increased sequestration and carbon export will reduce forcing, but not well constrained. Reference: IPCC (2001).

Methane. Radiative forcing: 0.5 W m−2. Ocean-to-atmosphere supply rate: 8.0 × 10^12 mol yr−1. Factors causing increase or decrease: anoxia increases production. Reference: Houweling et al. (2000).

Halocarbons. Radiative forcing: 0.3 W m−2. Ocean-to-atmosphere supply rate: greater than 1 × 10^11 mol yr−1 (summation of various compounds). Factors causing increase or decrease: enhanced production due to phytoplankton metabolic processes; bromo and chloro compounds increase forcing; iodine compounds may lead to increases in aerosols and albedo, enhancing cooling (cf. DMS). References: Harper (2000), Quack & Wallace (2003) and Smythe-Wright et al. (2006).

Ozone. Radiative forcing: 0.3 W m−2. Factors causing increase or decrease: reduction in stratospheric ozone due to increased halocarbons will reduce its negative effect on global warming; conversely, depletion of tropospheric ozone will reduce its radiative forcing. References: Solomon et al. (1994), Dvortsov et al. (1999) and Vogt et al. (1999).

Nitrous oxide. Radiative forcing: 0.1 W m−2. Ocean-to-atmosphere supply rate: 1.2 × 10^11 mol yr−1. Factors causing increase or decrease: increase in forcing due to biological production by phytoplankton. Reference: Jin & Gruber (2003).

Aerosols (direct). Radiative forcing: −0.5 W m−2. Ocean-to-atmosphere supply rate: 3.3 × 10^15 g yr−1. Factors causing increase or decrease: any increase in sea salt input will increase aerosol production. Reference: IPCC (2001).

DMS (albedo). Radiative forcing: −0.7 W m−2. Ocean-to-atmosphere supply rate: 6.9 × 10^11 mol yr−1.
Although most fisheries are on the continental shelves and the ocean fertilization schemes we discuss are in oceanic areas, it has been claimed by enthusiasts of ocean fertilization that such schemes will inevitably enhance both carbon sequestration and fisheries yield. Computational models have to date been extremely poor at predicting community
structure and, in spite of the massive efforts over the past century to provide accurate predictions of fish yield, the uncertainties are usually very large even in relatively well-constrained coastal environments. The hope of a double benefit therefore seems optimistic.

There are various examples where environmental change appears to have caused alterations in community structure. For example, it has been suggested that jellyfish replace bony fish in some ecosystems in response to climate change (Mills 2001; Purcell et al. 2007). Elsewhere, the salp Salpa thompsoni appears to be replacing Antarctic krill in the Southern Ocean (Atkinson et al. 2004). Similarly, the decline in the cod population of the North Sea is thought to be due largely to subtle changes in the timing of the zooplankton communities that are the staple diet of juvenile cod (Beaugrand et al. 2003). This latter case is a classic example of the match–mismatch hypothesis, whereby the food for larval growth, and hence adult recruitment, is required at precisely the correct time (Cushing 1975). Similarly, changes in global environmental indicators such as the North Atlantic Oscillation (NAO) or the El Niño Southern Oscillation (ENSO) have been shown to elicit ecosystem changes, albeit ones that are hard to predict (Stenseth et al. 2002). In addition to the direct effects on fisheries, indirect impacts such as the promotion of HABs should not be ignored. As described above, HABs sometimes occur in response to coastal eutrophication and, although unlikely to become a feature of fertilization of the open ocean, they provide examples of major community changes that are demonstrably difficult to predict with confidence (Cloern 2001). The possibility that ocean fertilization will elicit comparable effects cannot be ruled out, although we think it unlikely.

As mentioned above, ecosystem changes in response to ocean fertilization may also affect the nature of the export process. The biological pump is mediated by the members of the euphotic-zone community, and changes to that community will necessarily change the nature of the settling particles in terms of their morphology (e.g. marine snow aggregates versus faecal pellets as the principal vehicles for flux) or their chemical composition, affecting, for instance, the Redfield ratio of the particles and the balance between the production of POC and PIC. It is widely accepted that changes in nutrient input ratios (N : P : Si) affect phytoplankton community composition (Arrigo 2005). For example, long-term regime shifts in species dominance from diatoms to dinoflagellates in the North Sea are thought to be a reflection of nitrogen enrichment relative to silicon (see Cloern 2001), whereas changes in N : P ratios (below Redfield) may have promoted undesirable phytoplankton species such as Phaeocystis sp. in northwest European coastal waters (Riegman et al. 1992). Similarly, during the CROZEX natural fertilization project, iron fertilization had the somewhat unexpected result of increasing the abundance, diameter and biomass
of the colonial forms of Phaeocystis antarctica, which proved unpalatable to mesozooplankton and were inefficiently exported (Lucas et al. 2007). Our conclusion is that ocean fertilization is likely to change pelagic ecosystem structure and function. This may have a direct effect on fisheries and will certainly modify the details of the biological pump. The types of change will depend heavily on the proposed method of fertilization, but a clear conclusion about either of these is not possible until the large-scale fieldwork and associated modelling have been completed.

8.3.7 Change to benthic ecosystem structure

Approximately 0.4 Gt of carbon is deposited on the abyssal seafloor each year, the end member of the biological pump (Jahnke 1996). Of this, approximately 96 per cent dissolves or is remineralized each year to DIC and hence influences air–sea CO2 exchange on a timescale of a few centuries (Tyson 1995). The remaining 4 per cent is buried and incorporated into the geological sediment and hence removed from atmospheric interaction for many millions of years. The processes that determine the proportion of the sedimented material that is buried are largely driven by the benthic biota, and it is therefore important to determine the potential effects of fertilization on this community. With this in mind, it will be possible to estimate the effects of ocean fertilization on sequestration on the centennial timescale agreed upon by the IPCC and on the much longer timescales of geological significance. From the strict perspective of the 100-year timescale we are considering here, the effects of changes to the benthic communities can probably be ignored.

The abundance, biomass and diversity of the deep-sea benthos are intimately linked to inputs of organic matter from the euphotic zone (Gage & Tyler 1991). In general, there is a decrease in benthic biomass and abundance with decreasing organic carbon flux (Figure 8.5a) (Rowe 1983; Rex et al. 2006). Diversity generally increases from regions of low to moderate productivity, and then declines towards regions of higher productivity (Figure 8.5b). The response of the benthos to increases in organic carbon inputs will therefore depend on where it sits on this continuum. In the characteristically low-productivity oligotrophic gyres where ocean fertilization has been suggested, it is likely that enhanced POC fluxes to the seafloor would result in increased biomass and abundance (Glover et al. 2002; Hughes et al. 2007) and enhanced diversity (Levin et al. 2001). This change in the assemblages may influence ecosystem functioning (Sokolova 2000; Danovaro et al. 2008). However, the relationship between POC fluxes and benthic response is not simple; for example, recent changes in megafaunal species dominance in the abyssal North Atlantic (Billett et al. 2001) appear to be related to changes in the
Figure 8.5 (a) The relationship between estimated POC flux and wet weight biomass and abundance of the deep-sea macrobenthos in the western North Atlantic (adapted from Johnson et al. 2007). (b) Schematic showing the pattern of diversity change with POC flux (adapted from Levin et al. 2001).
composition of the organic matter (Wigham et al. 2003), and not simply to changes in total export flux (Lampitt et al. 2001). Benthic ecosystems are in a complex state of dynamic equilibrium. While this equilibrium may be altered by enhanced fluxes (e.g. seasonal phytodetritus; Beaulieu 2002), after the period of fertilization has ceased, the system may revert to the earlier equilibrium. It is not clear what will happen to the carbon that was contained in the increased biomass; some of this may be incorporated into the geological record, although the majority will be released into the water column by remineralization.

8.4 Research and developments required to reduce uncertainties

As is apparent above, the commercial and engineering sectors urgently need to carry out a substantive assessment of the financial and energetic costs of each method, necessarily in parallel with further scientific research. Although not the principal focus of this chapter, we identify here the scientific approaches that are required to assess the efficacy of the various proposed methods and the likelihood of unacceptable side effects. We divide these into experiments carried out in the laboratory, in mesocosms, in the field, and with computational models. Each approach has advantages and disadvantages and provides complementary insights into the complex biogeochemical interactions implicated in any ocean fertilization proposal.

8.4.1 Laboratory experiments

Experiments in the laboratory cannot simulate many aspects of the natural world, such as diurnal changes in mixing, but they have the great advantage of providing
environments that can be controlled and manipulated to simulate a wide variety of conditions. Changes in climate-driven biogeochemical processes such as photosynthesis, calcification, nitrogen fixation and silicification have been investigated in the laboratory (on land and at sea) by manipulating environmental variables including macronutrients (Krauk et al. 2006), micronutrients (Hudson & Morel 1989; Timmermans et al. 1994; Yoshida et al. 2002) and carbonate chemistry (Zondervan et al. 2001). However, we face two main problems in interpreting the laboratory data. The first is the variety of experimental approaches used: for example, continuous versus batch-mode cultures, and different medium compositions and nutrient concentrations. The second is that the available measurements are mostly limited to biogeochemical rates or to ecological/biogeochemical function at the cellular level, frequently for single species, although some studies contain detailed and comprehensive information at several levels of biological organization.

The use of relevant organisms and, if possible, communities will be required for laboratory experiments (Brewer et al. 2000). Understanding the long-term (chronic) effects of eutrophication and of increased concentrations of carbon dioxide on organisms is a prerequisite, in addition to in situ field observations and manipulations. For example, these could include, among others, the long-term (chronic) effects on organisms of Fe fertilization, vertical macronutrient flux, eutrophication, anoxia, changing pH, changing light environment, changing temperature and increased concentrations of carbon dioxide, in appropriate combinations. Laboratory and field experiments are required to assess the effects of nutrient, CO2 and iron availability on the community structure of phytoplankton taxa and how such changes may alter biogeochemical fluxes. In particular, observations should be made where time series are available in regions that have undergone changes in pCO2 or in nutrient dynamics, and tests should be conducted to assess to what extent these processes have played a role in determining trophic interactions and carbon fluxes.

8.4.2 Mesocosm experiments

Mesocosms are containers, currently with volumes of 1–400 m^3, which may (or may not) contain a benthic sediment environment (Harada et al. 1996). They represent an approximation of a natural environment that can be controlled by the addition of nutrients, pollutants, predators or CO2 and hence are intermediate between laboratory and field experiments. They have proven useful in providing information about trophic interactions and biogeochemical functions (Howarth 1988; Dam & Drapeau 1995; Riemann et al. 2000; Delille et al. 2005). They enable scaling from the individual organism up to the community level, and interdisciplinary programmes that involve manipulative experiments using benthic and pelagic mesocosms can
begin to address the complexity of the responses of the biota. Problems associated with mesocosm experiments include their predominantly coastal location and the inevitable uncertainty as to whether the response of the community is really representative of open-ocean environments. Furthermore, they do not reflect some key processes, such as variations in physical mixing. To overcome the first of these problems, offshore pelagic mesocosm experiments can be used to quantify the effects of manipulations on species composition and succession, photosynthesis, macro- and micronutrient and carbon removal, nitrogen fixation, organic matter production and gas exchange in natural open-sea plankton communities. Some technical problems remain to be solved, but this approach offers significant opportunities for simulated open-ocean fertilization experiments. As discussed above, sequestration is defined as the removal of carbon from the system for over a century, and this demands flux to depths of 200–1000 m, which are probably impossible to replicate in any future design of mesocosm.
8.4.3 Field experiments

Of the four potential means of ocean fertilization identified above, significant field experiments have been carried out only to address the effects of iron fertilization, and almost all have been in HNLC regions. As described above, these studies were not designed to address the issue of sequestration and, if the feasibility of all four potential strategies is to be evaluated, further relevant experiments in the field will need to be undertaken. In order to address issues of natural spatial and temporal variability, these would need to be of sufficient duration (more than 10 weeks) and scale (in excess of 100 × 100 km). Because such experiments potentially have considerable social and economic importance, they need to be carried out by well-qualified and experienced teams of independent oceanographers skilled in the state-of-the-art observations that will be necessary to verify that sequestration has been globally enhanced as a result of the localized fertilization. This will involve oceanographers from the disciplines of physics, biology, biogeochemistry and chemistry, and the observations will need to be interpreted rigorously by assimilation into an appropriate modelling framework (see below). The cost of such experiments will be large (millions of pounds), including ship time on suitably equipped research vessels. The scientific and commercial communities are now ready to collaborate in the pursuit of appropriate field experiments. Given the appropriate financial and legal support, we are optimistic that major advances can and should be made using large-scale field experiments that will address explicitly the effects of various types of ocean fertilization on carbon sequestration.
8.4.4 Modelling

Modelling studies addressing the issue of artificial ocean fertilization broadly fall into two categories: regional modelling of localized field experiments and global modelling to assess the long-term and remote consequences of proposed fertilization schemes. A new generation of ecosystem models is being developed that includes the cycling of iron and other elements, providing the link to carbon export. It is important that the models in question are sufficiently complex to reproduce the essential features of the experiments, yet without including complexity beyond that which can be verified by observation. Effective model verification is then possible, the aim being to provide accurate simulations of the ecosystem dynamics as observed in the field. By successfully modelling fertilization experiments, the models provide a formal assessment of our understanding of marine ecosystems and their potential response to nutrient enrichment. Important processes and nutrient budgets are constrained in a way that could not be achieved solely on the basis of measurements. The results of these models, focusing on particular field experiments, then provide the basis for the GCMs addressing impacts at the global scale.

Modelling the long-term and large-scale (remote) effects of iron fertilization requires high-resolution global GCMs coupled with suitable ecosystem models. A necessary prerequisite is good physics, biogeochemical models being only 'as good as the physical circulation framework in which they are set' (Doney 1999). Further, a full description of the ocean carbon system and carbon exchange with the atmosphere, spun up to equilibrium (involving model runs of thousands of years), is required. One such model, at the forefront of the field, is that of Aumont & Bopp (2006), which has been used to study the global effects of iron enrichment experiments. With a resolution of 2 × 2°, the model suffers deficiencies in reproducing biophysical interactions, particularly in the oligotrophic gyres where mesoscale effects play an important role in nutrient budgets. The resolution required for a good representation of nutrients and carbon in the ocean (e.g. a few kilometres) is currently unachievable. Improvements may, however, be made through alternative approaches, notably developments in numerical methods for the acceleration of global models (e.g. Li & Primeau 2008) and finite-element modelling on adaptive meshes (e.g. Piggott et al. 2008). The latter is a promising new method that allows increased resolution where and when it is required as simulations are run. Finally, the need for experimental and observational data to underpin modelling studies cannot be overemphasized. In order to be effective as management tools, the models need to undergo rigorous validation to ensure that the assumptions employed are realistic and lead to reliable predictions.
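Why kilometre-scale global resolution remains out of reach can be seen from a rough cost-scaling argument. The sketch below is a standard back-of-envelope estimate (not from this chapter): refining the horizontal grid multiplies the cell count in both horizontal directions and, through the CFL condition, forces a proportionally smaller timestep, so computational cost grows roughly as the cube of the refinement factor.

```python
# Rough cost scaling for refining a global ocean model's horizontal grid
# (standard CFL argument; illustrative values, not from the chapter).
coarse_km = 200.0   # roughly a 2 x 2 degree grid, as in Aumont & Bopp (2006)
fine_km = 4.0       # "a few kilometres", as called for above

r = coarse_km / fine_km    # refinement factor in each horizontal direction
cost_multiplier = r ** 3   # r^2 more cells, ~r more timesteps (CFL)
print(f"refinement x{r:.0f} -> cost multiplier ~{cost_multiplier:.1e}")
# of the order of 1e5 times the computation, before adding multi-element
# ecosystem dynamics or the thousands of years of spin-up mentioned above.
```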
8.5 Conclusion

The proposition that the biological pump could be stimulated by the purposeful supply of essential nutrients is sound in principle. Several methods have been proposed to achieve this, and it is very likely that some or all of these could lead to an enhanced flux of carbon into the deeper layers of the ocean in a localized area. We have considered the two main issues surrounding these proposals, the first being 'Will they work?'. By this we ask whether a particular scheme will sequester more carbon than it consumes when the timescale of consideration is 100 years and the spatial scale is global. Local or short-term sequestration is irrelevant if counteracted by carbon release when the scale of time or space is enlarged. The second consideration is whether there are likely to be unacceptable side effects of the proposed scheme. All methods of ocean fertilization must, by design, substantially modify the natural biological processes of the marine ecosystem. Without doubt, the effects of fertilization will extend far beyond any sequestration of carbon from the atmosphere. It is possible that some of these effects will be significant and may be considered an unacceptable cost when set against the calculated benefits in terms of carbon sequestration.

There is at present a clear and urgent need for tightly focused research into the effects of ocean fertilization. The critical areas of research will involve large-scale field experiments (100 × 100 km) tightly coupled to high-resolution three-dimensional computational models with embedded biogeochemistry. This is required for each of the four classes of fertilization scheme that have been proposed. Until such work is completed satisfactorily, it is impossible to provide a rational judgement about whether the schemes proposed are (i) likely to be effective and (ii) likely to cause unacceptable side effects. Once this research has been carried out, it will be the responsibility of the science community to perform appropriate cost–benefit–risk analyses in order to inform policy. At the same time, discussions between the commercial, regulatory and scientific communities must take place so that the principles and practices of verification can be established.

References

Anderson, L. A. & Sarmiento, J. L. 1994 Redfield ratios of remineralization determined by nutrient data analysis. Glob. Biogeochem. Cycles 8, 65–80. (doi:10.1029/93GB03318)
Anderson, T. R. 2005 Plankton functional type modelling: running before we can walk? J. Plankton Res. 27, 1073–1081. (doi:10.1093/plankt/fbi076)
Arrigo, K. R. 2005 Marine micro-organisms and global nutrient cycles. Nature 437, 349–355. (doi:10.1038/nature04159)
Atkinson, A., Siegel, V., Pakhomov, E. & Rothery, P. 2004 Long-term decline in krill stocks and increase in salps within the Southern Ocean. Nature 432, 100–103. (doi:10.1038/nature02996)
Aumont, O. & Bopp, L. 2006 Globalizing results from ocean in situ iron fertilization. Glob. Biogeochem. Cycles 20, GB2017. (doi:10.1029/2005GB002591)
Beaugrand, G., Brander, K. M., Lindley, J. A., Souissi, S. & Reid, P. C. 2003 Plankton effect on cod recruitment in the North Sea. Nature 426, 661–664. (doi:10.1038/nature02164)
Beaulieu, S. R. 2002 Accumulation and fate of phytodetritus on the sea floor. Oceanogr. Mar. Biol. Annu. Rev. 40, 171–232.
Bellerby, R. G. J., Olsen, A., Furevik, T. & Anderson, L. A. 2005 Response of the surface ocean CO2 system in the Nordic Seas and North Atlantic to climate change. In Climate Variability in the Nordic Seas, eds. H. Drange, T. M. Dokken, T. Furevik, R. Gerdes & W. Berger. Washington, DC: American Geophysical Union, pp. 189–198.
Billett, D. S. M., Bett, B. J., Rice, A. L., Thurston, M. H., Galéron, J., Sibuet, M. & Wolff, G. A. 2001 Long-term change in the megabenthos of the Porcupine Abyssal Plain (NE Atlantic). Prog. Oceanogr. 50, 325–348. (doi:10.1016/S0079-6611(01)00060-X)
Blain, S. et al. 2007 Effect of natural iron fertilization on carbon sequestration in the Southern Ocean. Nature 446, 1070–1074. (doi:10.1038/nature05700)
Boyd, P. W. et al. 2007 Mesoscale iron enrichment experiments 1993–2005: synthesis and future directions. Science 315, 612–617. (doi:10.1126/science.1131669)
Brewer, P. G., Peltzer, E. T., Friederich, G., Aya, I. & Yamane, K. 2000 Experiments on the ocean sequestration of fossil fuel CO2: pH measurements and hydrate formation. Mar. Chem. 72, 83–93. (doi:10.1016/S0304-4203(00)00074-8)
Caldeira, K. & Wickett, M. E. 2003 Anthropogenic carbon and ocean pH. Nature 425, 365. (doi:10.1038/425365a)
Chen, C.-T. A., Lim, C.-M., Huang, B.-T. & Chang, L.-F. 1996 Stoichiometry of carbon, hydrogen, nitrogen, sulfur and oxygen in the particulate matter of the western North Pacific marginal seas. Mar. Chem. 54, 179–190. (doi:10.1016/0304-4203(96)00021-7)
Christian, J. R., Lewis, M. R. & Karl, D. M. 1997 Vertical fluxes of carbon, nitrogen, and phosphorus in the North Pacific Subtropical Gyre near Hawaii. J. Geophys. Res.-Oceans 102, 15 667–15 677. (doi:10.1029/97JC00369)
Cloern, J. E. 2001 Our evolving conceptual model of the coastal eutrophication problem. Mar. Ecol. Prog. Ser. 210, 223–253. (doi:10.3354/meps210223)
Cowie, G. 2005 The biogeochemistry of Arabian Sea surficial sediments: a review of recent studies. Prog. Oceanogr. 65, 260–289. (doi:10.1016/j.pocean.2005.03.003)
Cushing, D. H. 1975 Natural mortality of plaice. J. du Conseil 36, 150–157.
Dam, H. G. & Drapeau, D. T. 1995 Coagulation efficiency, organic-matter glues and the dynamics of particles during a phytoplankton bloom in a mesocosm study. Deep Sea Res. II 42, 111–123. (doi:10.1016/0967-0645(95)00007-D)
Danovaro, R., Gambi, C., Dell'Anno, A., Corinaldesi, C., Fraschetti, S., Vanreusel, A., Vincx, M. & Gooday, A. J. 2008 Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss. Curr. Biol. 18, 1–8. (doi:10.1016/j.cub.2007.11.056)
Davies, A. M. & Xing, J. 2005 Modelling processes influencing shelf edge exchange of water and suspended sediment. Continental Shelf Res. 25, 973–1001. (doi:10.1016/j.csr.2004.12.006)
de Baar, H. J. W. et al. 2005 Synthesis of iron fertilization experiments: from the iron age in the age of enlightenment. J. Geophys. Res.-Oceans 110, C9. (doi:10.1029/2004JC002601)
Delille, B., Harlay, J. & Zondervan, I. 2005 Response of primary production and calcification to changes of pCO2 during experimental blooms of the coccolithophorid Emiliania huxleyi. Glob. Biogeochem. Cycles 19, GB2023. (doi:10.1029/2004GB002318)
Diaz, R. J. 2001 Overview of hypoxia around the world. J. Environ. Qual. 30, 275–281.
Diaz, R. J., Solan, M. & Valente, R. M. 2004 A review of approaches for classifying benthic habitats and evaluating habitat quality. J. Environ. Manage. 73, 165–181. (doi:10.1016/j.jenvman.2004.06.004)
Domenici, P., Lefrançois, C. & Shingles, A. 2007 Hypoxia and the antipredator behaviours of fishes. Phil. Trans. R. Soc. B 362, 2105–2121. (doi:10.1098/rstb.2007.2103)
Doney, S. C. 1999 Major challenges confronting marine biogeochemical modelling. Glob. Biogeochem. Cycles 13, 705–714. (doi:10.1029/1999GB900039)
Dutkiewicz, S., Follows, M. J. & Parekh, P. 2005 Interactions of the iron and phosphorus cycles: a three-dimensional model study. Glob. Biogeochem. Cycles 19, GB1021. (doi:10.1029/2004GB002342)
Dvortsov, V. L., Geller, M. A., Solomon, S., Schauffler, S. M., Atlas, E. L. & Blake, D. R. 1999 Rethinking reactive halogen budgets in the midlatitude lower stratosphere. Geophys. Res. Lett. 26, 1699–1702. (doi:10.1029/1999GL900309)
Falkowski, P. G. 1997 Evolution of the nitrogen cycle and its influence on the biological sequestration of CO2 in the ocean. Nature 387, 272–275. (doi:10.1038/387272a0)
Fasham, M. J. R., Flynn, K. J., Pondaven, P., Anderson, T. R. & Boyd, P. W. 2006 Development of a robust ecosystem model to predict the role of iron in biogeochemical cycles: a comparison of results for iron-replete and iron-limited areas, and the SOIREE iron-enrichment experiment. Deep Sea Res. I 53, 333–366. (doi:10.1016/j.dsr.2005.09.011)
Flynn, K. J. 2001 A mechanistic model for describing dynamic multi-nutrient, light, temperature interactions in phytoplankton. J. Plankton Res. 23, 977–997. (doi:10.1093/plankt/23.9.977)
Fuhrman, J. A. & Capone, D. G. 1991 Possible biogeochemical consequences of ocean fertilization. Limnol. Oceanogr. 36, 1951–1959.
Gage, J. D. & Tyler, P. A. 1991 Deep Sea Biology: A Natural History of Organisms at the Deep-Sea Floor. Cambridge, UK: Cambridge University Press.
Gehlen, M., Bopp, L., Emprin, N., Aumont, O., Heinze, C. & Ragueneau, O. 2006 Reconciling surface ocean productivity, export fluxes and sediment composition in a global biogeochemical ocean model. Biogeosciences 3, 521–537.
Gibbs, S. J., Bown, P. R., Sessa, J. A., Bralower, T. J. & Wilson, P. A. 2006 Nannoplankton extinction and origination across the Paleocene–Eocene Thermal Maximum. Science 314, 1770–1773. (doi:10.1126/science.1133902)
Glover, A. G., Smith, C. R., Paterson, G. L. C., Wilson, G. D. F., Hawkins, L. & Sheader, M. 2002 Polychaete species diversity in the central Pacific abyss: local and regional patterns, and relationships with productivity. Mar. Ecol. Prog. Ser. 240, 157–170. (doi:10.3354/meps240157)
Hallegraeff, G. M. 1993 A review of harmful algal blooms and their apparent global increase. Phycologia 32, 79–99.
Harada, S., Watanabe, M., Kohata, K., Ioriya, T., Kunugi, M., Kimura, T., Fujimori, S., Koshikawa, H. & Sato, K. 1996 Analyses of planktonic ecosystem structure in coastal seas using a large-scale stratified mesocosm: a new approach to understanding the effects of physical, biochemical and ecological factors on phytoplankton species succession. Water Sci. Technol. 34, 219–226. (doi:10.1016/S0273-1223(96)00748-2)
Harper, D. 2000 The global chloromethane cycle: biosynthesis, biodegradation and metabolic role. Nat. Prod. Rep. 17, 337–348. (doi:10.1039/a809400d)
Hedges, J. I. & Keil, R. G. 1995 Sedimentary organic-matter preservation: an assessment and speculative synthesis. Mar. Chem. 49, 81–115. (doi:10.1016/0304-4203(95)00008-F)
Hopkinson, C. S. & Vallino, J. J. 2005 Efficient export of carbon to the deep ocean through dissolved organic matter. Nature 433, 142–145. (doi:10.1038/nature03191)
Houweling, S., Dentener, F., Lelieveld, J., Walter, B. & Dlugokencky, E. 2000 The modeling of tropospheric methane: how well can point measurements be reproduced by a global model? J. Geophys. Res. 105, 8981–9002. (doi:10.1029/1999JD901149)
Howarth, R. W. 1988 Nutrient limitation of net primary production in marine ecosystems. Annu. Rev. Ecol. Syst. 19, 89–110. (doi:10.1146/annurev.es.19.110188.000513)
Hudson, R. J. M. & Morel, F. M. M. 1989 Distinguishing between extra- and intracellular iron in marine phytoplankton. Limnol. Oceanogr. 34, 1113–1120.
Hughes, J. A., Smith, T., Chaillan, F., Bett, B. J., Billett, D. S. M., Boorman, B., Fisher, E. H., Frenz, M. & Wolff, G. A. 2007 Two abyssal sites in the Southern Ocean influenced by different organic matter inputs: environmental characterization and preliminary observations on the benthic foraminifera. Deep Sea Res. II 54, 2275–2290. (doi:10.1016/j.dsr2.2007.06.006)
Iglesias-Rodriguez, M. D., Armstrong, R., Feely, R., Hood, R., Kleypas, J., Sabine, C. & Sarmiento, J. 2002 Progress made in study of ocean's calcium carbonate budget. EOS, Trans. Am. Geophys. Union 83, 365. (doi:10.1029/2002EO000267)
Iglesias-Rodriguez, M. D. et al. 2008 Phytoplankton calcification in a high-CO2 world. Science 320, 336–340. (doi:10.1126/science.1154122)
IPCC 2001 Climate Change 2001: The Scientific Basis. Cambridge, UK: Cambridge University Press.
IPCC 2007 Climate Change 2007: Synthesis Report. Contributions of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK: Cambridge University Press.
Jahnke, R. A. 1996 The global ocean flux of particulate organic carbon: areal distribution and magnitude. Glob. Biogeochem. Cycles 10, 71–88. (doi:10.1029/95GB03525)
Jin, X. & Gruber, N. 2003 Offsetting the radiative benefit of ocean iron fertilization by enhancing N2O emissions. Geophys. Res. Lett. 30, 4. (doi:10.1029/2003GL018458)
Johnson, N. A., Campbell, J. W., Moore, T. S., Rex, M. A., Etter, R. J., McClain, C. R. & Dowell, M. D. 2007 The relationship between the standing stock of deep-sea macrobenthos and surface production in the western North Atlantic. Deep Sea Res. I 54, 1350–1360. (doi:10.1016/j.dsr.2007.04.011)
Klaas, C. & Archer, D. E. 2002 Association of sinking organic matter with various types of mineral ballast in the deep sea: implications for the rain ratio. Glob. Biogeochem. Cycles 16, 1116. (doi:10.1029/2001GB001765)
Knoll, A. H., Bambach, R. K., Payne, J. L., Pruss, S. & Fischer, W. W. 2007 Paleophysiology and end-Permian mass extinction. Earth Planet. Sci. Lett. 256, 295–313. (doi:10.1016/j.epsl.2007.02.018)
Krauk, J. M., Villareal, T. A., Sohm, J. A., Montoya, J. P. & Capone, D. G. 2006 Plasticity of N:P ratios in laboratory and field populations of Trichodesmium spp. Aquat. Microb. Ecol. 42, 243–253. (doi:10.3354/ame042255)
Laës, A., Blain, S., Laan, P., Achterberg, E. P., Sarthou, G. & de Baar, H. J. W. 2003 Deep dissolved iron profiles in the eastern North Atlantic in relation to water masses. Geophys. Res. Lett. 30, 1902. (doi:10.1029/2003GL017902)
Lampitt, R. S., Bett, B. J., Kiriakoulakis, K., Popova, E. E., Ragueneau, O., Vangriesheim, A. & Wolff, G. A. 2001 Material supply to the abyssal seafloor in the Northeast Atlantic. Prog. Oceanogr. 50, 27–63. (doi:10.1016/S0079-6611(01)00047-7)
Leonardos, N. & Geider, R. J. 2005 Elevated atmospheric carbon dioxide increases organic carbon fixation by Emiliania huxleyi (Haptophyta), under nutrient-limited high-light conditions. J. Phycol. 41, 1196–1203. (doi:10.1111/j.1529-8817.2005.00152.x)
Levin, L. A., Etter, R. J., Rex, M. A., Gooday, A. J., Smith, C. R., Pineda, J., Stuart, C. T., Hessler, R. R. & Pawson, D. 2001 Environmental influences on regional deep-sea species diversity. Annu. Rev. Ecol. Syst. 32, 51–93. (doi:10.1146/annurev.ecolsys.32.081501.114002)
Li, X. & Primeau, F. W. 2008 A fast Newton–Krylov solver for seasonally varying global ocean biogeochemistry models. Ocean Model. 23, 13–20. (doi:10.1016/j.ocemod.2008.03.001)
Lovelock, J. E. & Rapley, C. G. 2007 Ocean pipes could help the Earth to cure itself. Nature 449, 403. (doi:10.1038/449403a)
Lucas, M., Seeyave, S., Sanders, R., Moore, C. M., Williamson, R. & Stinchcombe, M. 2007 Nitrogen uptake responses to a naturally Fe-fertilised phytoplankton bloom during the 2004/2005 CROZEX study. Deep Sea Res. II 54, 2138–2173. (doi:10.1016/j.dsr2.2007.06.017)
Martin, J. H., Knauer, G. A., Karl, D. M. & Broenkow, W. W. 1987 VERTEX: carbon cycling in the northeast Pacific. Deep Sea Res. A 34, 267–285. (doi:10.1016/0198-0149(87)90086-0)
Measures, C. I., Landing, W. M., Brown, M. T. & Buck, C. S. 2008 High-resolution Al and Fe data from the Atlantic Ocean CLIVAR-CO2 repeat hydrography A16N transect: extensive linkages between atmospheric dust and upper ocean geochemistry. Glob. Biogeochem. Cycles 22, GB1005. (doi:10.1029/2007GB003042)
Mills, C. E. 2001 Jellyfish blooms: are populations increasing globally in response to changing ocean conditions? Hydrobiologia 451, 55–68. (doi:10.1023/A:1011888006302)
Mills, M. M., Ridame, C., Davey, M., La Roche, J. & Geider, R. J. 2004 Iron and phosphorus co-limit nitrogen fixation in the eastern tropical North Atlantic. Nature 429, 292–294. (doi:10.1038/nature02550)
Orr, J. C. et al. 2005 Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms. Nature 437, 681–686. (doi:10.1038/nature04095)
Parekh, P., Follows, M. J. & Boyle, E. A. 2005 Decoupling of iron and phosphate in the global ocean. Glob. Biogeochem. Cycles 19, GB2020. (doi:10.1029/2004GB002280)
Peng, T. H. & Broecker, W. S. 1985 The utility of multiple tracer distributions in calibrating models for uptake of anthropogenic CO2 by the ocean thermocline. J. Geophys. Res.-Oceans 90, 7023–7035. (doi:10.1029/JC090iC04p07023)
Piggott, M. D., Gorman, G. J., Pain, C. C., Allison, P. A., Candy, A. S., Martin, B. T. & Wells, M. R. 2008 A new computational framework for multi-scale ocean modelling based on adapting unstructured meshes. Int. J. Numer. Methods Fluids 56, 1003–1015. (doi:10.1002/fld.1663)
Pollard, R., Sanders, R., Lucas, M. & Statham, P. 2007 The CROzet natural iron bloom and EXport experiment (CROZEX). Deep Sea Res. II 54, 1905–1914. (doi:10.1016/j.dsr2.2007.07.023)
Pörtner, H. O., Langenbuch, M. & Michaelidis, B. 2005 Synergistic effects of temperature extremes, hypoxia, and increases in CO2 on marine animals: from Earth history to global change. J. Geophys. Res.-Oceans 110, C09S10. (doi:10.1029/2004JC002561)
Purcell, J. E., Uye, S. & Lo, W.-T. 2007 Anthropogenic causes of jellyfish blooms and their direct consequences for humans: a review. Mar. Ecol. Prog. Ser. 350, 153–174. (doi:10.3354/meps07093)
Quack, B. & Wallace, D. W. R. 2003 Air–sea flux of bromoform: controls, rates and implications. Glob. Biogeochem. Cycles 17, 1023. (doi:10.1029/2002GB001890)
Raven, J. A. et al. 2005 Ocean Acidification due to Increasing Atmospheric Carbon Dioxide, Policy document no. 12/05. London: The Royal Society.
Redfield, A. C. 1934 On the proportions of organic derivatives in sea water and their relation to the composition of plankton. In James Johnstone Memorial Volume. Liverpool, UK: University of Liverpool, pp. 176–192.
Rees, A. P., Law, C. S. & Woodward, E. M. S. 2006 High rates of nitrogen fixation during an in-situ phosphate release experiment in the Eastern Mediterranean Sea. Geophys. Res. Lett. 33, L10607–L10608. (doi:10.1029/2006GL025791)
Rex, M. A. et al. 2006 Global bathymetric patterns of standing stock and body size in the deep-sea benthos. Mar. Ecol. Prog. Ser. 317, 1–8. (doi:10.3354/meps317001)
Riebesell, U., Zondervan, I., Rost, B., Tortell, P. D., Zeebe, R. E. & Morel, F. M. M. 2000 Reduced calcification of marine plankton in response to increased atmospheric CO2. Nature 407, 364–367. (doi:10.1038/35030078)
Riebesell, U. et al. 2007 Enhanced biological carbon consumption in a high CO2 ocean. Nature 450, 545–549. (doi:10.1038/nature06267)
Riegman, R., Noordeloos, A. A. M. & Cadee, G. C. 1992 Phaeocystis blooms and eutrophication of the continental coastal zones of the North Sea. Mar. Biol. 112, 479–484. (doi:10.1007/BF00356293)
Riemann, L., Steward, G. F. & Azam, F. 2000 Dynamics of bacterial community composition and activity during a mesocosm diatom bloom. Appl. Environ. Microbiol. 66, 578–587. (doi:10.1128/AEM.66.2.578-587.2000)
Rowe, G. T. 1983 Biomass and production of the deep-sea macrobenthos. In The Sea, vol. 8, ed. G. T. Rowe. New York: Wiley Interscience, pp. 97–121.
Sarmiento, J. L. & Gruber, N. 2002 Sinks for anthropogenic carbon. Phys. Today 55, 30–36. (doi:10.1063/1.1510279)
Sarmiento, J. L., Murnane, R., Le Quéré, C., Keeling, R. & Williams, R. G. 1995 Air–sea CO2 transfer and the carbon budget of the North Atlantic. Phil. Trans. R. Soc. B 348, 211–219. (doi:10.1098/rstb.1995.0063)
Sarthou, G. et al. 2003 Atmospheric iron deposition and sea-surface dissolved iron concentrations in the eastern Atlantic Ocean. Deep Sea Res. I 50, 1339–1352. (doi:10.1016/S0967-0637(03)00126-2)
Shepherd, J. G., Iglesias-Rodriguez, D. & Yool, A. 2007 Geo-engineering might cause, not cure, problems. Nature 449, 781. (doi:10.1038/449781a)
Shoji, K. & Jones, I. S. F. 2001 The costing of carbon credits from ocean nourishment plants. Sci. Total Environ. 277, 27–31. (doi:10.1016/S0048-9697(01)00832-4)
Smythe-Wright, D., Boswell, S. M., Breithaupt, P., Davidson, R. D., Dimmer, C. H. & Eiras Diaz, L. 2006 Methyl iodide production in the ocean: implications for climate change. Glob. Biogeochem. Cycles 20, GB3003. (doi:10.1029/2005GB002642)
Sokolova, M. N. 2000 Feeding and Trophic Structure of the Deep-Sea Macrobenthos. Washington, DC: Smithsonian Institution.
Solomon, S., Garcia, R. R. & Ravishankara, A. R. 1994 On the role of iodine in ozone depletion. J. Geophys. Res. 99, 20 491–20 499. (doi:10.1029/94JD02028)
Stenseth, N. C., Mysterud, A., Ottersen, G., Hurrell, J. W., Chan, K.-S. & Lima, M. 2002 Ecological effects of climate fluctuations. Science 297, 1292–1296. (doi:10.1126/science.1071281)
Stoll, H. M., Shimizu, N., Archer, D. & Ziveri, P. 2007 Coccolithophore productivity response to greenhouse event of the Paleocene–Eocene Thermal Maximum. Earth Planet. Sci. Lett. 258, 192–206. (doi:10.1016/j.epsl.2007.03.037)
Tappin, A. D. 2002 An examination of the fluxes of nitrogen and phosphorus in temperate and tropical estuaries: current estimates and uncertainties. Estuar. Coast. Shelf Sci. 55, 885–901. (doi:10.1006/ecss.2002.1034)
Timmermans, K. R., Stolte, W. & de Baar, H. J. W. 1994 Iron-mediated effects on nitrate reductase in marine phytoplankton. Mar. Biol. 121, 389–396. (doi:10.1007/BF00346749)
Tyrrell, T. 1999 The relative influences of nitrogen and phosphorus on oceanic primary production. Nature 400, 525–531. (doi:10.1038/22941)
Tyrrell, T., Maranon, E., Poulton, A. J., Bowie, A. R., Harbour, D. S. & Woodward, E. M. S. 2003 Large-scale latitudinal distribution of Trichodesmium spp. in the Atlantic Ocean. J. Plankton Res. 25, 405–416. (doi:10.1093/plankt/25.4.405)
Tyson, R. V. 1995 Sedimentary Organic Matter. London: Chapman and Hall.
Underdal, B., Skulberg, O. M., Dahl, E. & Aune, T. 1989 Disastrous bloom of Chrysochromulina polylepis (Prymnesiophyceae) in Norwegian coastal waters 1988: mortality in marine biota. Ambio 18, 265–270.
UNEP 2004 GEO Yearbook 2003. Nairobi, Kenya: United Nations Environment Programme.
Vogt, R., Sander, R., Glasow, R. V. & Crutzen, P. J. 1999 Iodine chemistry and its role in halogen activation and ozone loss in the marine boundary layer: a model study. J. Atmos. Chem. 32, 375–395. (doi:10.1023/A:1006179901037)
Watson, A. J. 1997 Volcanic iron, CO2, ocean productivity and climate. Nature 385, 587–588. (doi:10.1038/385587b0)
Weber, L., Volker, C., Schartau, M. & Wolf-Gladrow, D. A. 2005 Modeling the speciation and biogeochemistry of iron at the Bermuda Atlantic time-series study site. Glob. Biogeochem. Cycles 19, GB1019. (doi:10.1029/2004GB002340)
Weeks, S. J., Currie, B. & Bakun, A. 2002 Satellite imaging: massive emissions of toxic gas in the Atlantic. Nature 415, 493–494. (doi:10.1038/415493b)
Wigham, B. D., Hudson, I. R., Billett, D. S. M. & Wolff, G. A. 2003 Is long-term change in the abyssal Northeast Atlantic driven by qualitative changes in export flux? Evidence from selective feeding in deep-sea holothurians. Prog. Oceanogr. 59, 409–441. (doi:10.1016/j.pocean.2003.11.003)
Yool, A., Shepherd, J. G., Bryden, H. L. & Oschlies, A. 2009 Low efficiency of nutrient translocation for enhancing oceanic uptake of carbon dioxide. J. Geophys. Res. 114, C08009. (doi:10.1029/2008JC004792)
Yoshida, T., Hayashi, K. & Ohmoto, H. 2002 Dissolution of iron hydroxides by marine bacterial siderophore. Chem. Geol. 184, 1–9. (doi:10.1016/S0009-2541(01)00297-2)
Zapata, F. & Roy, R. N. 2004 Use of Phosphate Rocks for Sustainable Agriculture, Fertilizer and Plant Nutrition Bulletin no. 13. Rome: FAO Land and Water Development Division and the International Atomic Energy Agency.
Zeebe, R. E. & Wolf-Gladrow, D. 2001 CO2 in Seawater: Equilibrium, Kinetics, Isotopes. Amsterdam, The Netherlands: Elsevier.
Zhou, S. & Flynn, P. 2005 Geo-engineering downwelling ocean currents: a cost assessment. Clim. Change 71, 203–220. (doi:10.1007/s10584-005-5933-0)
Zondervan, I., Zeebe, R. E., Rost, B. & Riebesell, U. 2001 Decreasing marine biogenic calcification: a negative feedback on rising atmospheric pCO2. Glob. Biogeochem. Cycles 15, 507–516. (doi:10.1029/2000GB001321)
9 The next generation of iron fertilization experiments in the Southern Ocean

V. Smetacek and S. W. A. Naqvi
Of the various macro-engineering schemes proposed to mitigate global warming, ocean iron fertilization (OIF) is one that could be started at short notice on relevant scales. It is based on the reasoning that adding trace amounts of iron to the iron-limited phytoplankton of the Southern Ocean will lead to blooms, mass sinking of organic matter and, ultimately, sequestration of significant amounts of atmospheric carbon dioxide in the deep sea and sediments. This iron hypothesis, proposed by John Martin in 1990 (Martin 1990), has been tested by five mesoscale experiments that provided strong support for its first condition: stimulation of a diatom bloom accompanied by significant CO2 drawdown. Nevertheless, a number of arguments pertaining to the fate of bloom biomass, the ratio of iron added to carbon sequestered and various side effects of fertilization continue to cast doubt on its efficacy. The idea is also unpopular with some environmental groups because it is perceived as meddling with nature. However, the opposition to OIF is premature, because none of the published experiments was specifically designed to test its second condition, pertaining to the fate of iron-induced organic carbon production. Furthermore, the arguments on side effects are based on worst-case scenarios. These doubts, formulated as hypotheses, need to be tested in the next generation of OIF experiments. We argue that such experiments, if carried out at appropriate scales and localities, will not only show whether the technique is feasible, but will also lead to a better understanding of the structure and functioning of pelagic ecosystems in general and of the krill-based Southern Ocean ecosystem in particular. The outcomes of current models on the efficacy and side effects of OIF differ widely, so additional data from properly designed experiments are urgently needed for realistic parametrization. OIF is likely to boost zooplankton stocks, including krill, which could have a positive effect on the recovery of the great whale populations. Negative effects of possible commercialization of OIF can be controlled by the establishment of an international body headed by scientists to supervise and monitor its implementation.
9.1 Introduction

Iron fertilization of the open ocean, by both natural and artificial means, has been in the limelight ever since John Martin formulated the 'iron hypothesis' (Martin 1990). It postulates that adding iron to nutrient-rich but low-productivity ocean regions – by dust in the past and artificial fertilization in the future – would stimulate phytoplankton blooms, which would draw down significant amounts of atmospheric carbon dioxide and, by mass sinking, sequester the carbon for long timescales in the deep ocean and sediments. The hypothesis was welcomed by biogeochemists and palaeoceanographers as a plausible mechanism to explain the lower glacial atmospheric CO2 levels that coincided with higher dust deposition rates compared with the interglacials. Plankton ecologists, on the other hand, were sceptical that the trace nutrient iron could limit phytoplankton growth to the same extent as light and the macronutrients, nitrate and phosphorus. Unfortunately, the spectre of wanton commercialization of OIF put the scientific community as a whole on its guard. The precautionary principle was applied, and discussions of the iron hypothesis have centred on the possible negative effects (Chisholm et al. 2001; Lawrence 2002). However, given the paucity of evidence regarding both its efficacy and the effects it might engender, scepticism of OIF is premature, particularly when compared with other options and in view of the recent consensus on the wide-reaching effects of inevitable climate change (Lampitt et al., Chapter 8).

Ten OIF experiments have been carried out so far in different oceans by scientists from many countries. Phytoplankton blooms dominated by diatoms have been induced in all but one experiment (SEEDS II), thus solving a long-standing paradox: what regulates the productivity of high-nutrient, low-chlorophyll (HNLC) oceans (Chisholm & Morel 1991)? Despite the overwhelming evidence, acceptance of iron as a growth-limiting factor on a par with light or macronutrient (nitrogen and phosphorus) limitation has been slow (de Baar et al. 2005), and basic questions pertaining to the fate of iron-induced bloom biomass are still under debate (Boyd et al. 2007). Furthermore, the bio-oceanography community has been slow to explore the new research avenues opened up by OIF experiments, and funding agencies have not been doing their utmost to encourage them.
This lack of enthusiasm is reflected in the proportion of ship time allocated to the OIF experiments (a total of only 11 months over 10 years) relative to the total research ship time (much greater than 2500 months) made available during the same period to other disciplines of observational oceanography. It has recently been pointed out that the experiments were not designed to address the question of whether iron fertilization is a feasible mitigation measure for ongoing global change (Buesseler et al. 2008), implying that they could not have provided the answers attributed to them: that OIF is not likely to be efficient (Buesseler & Boyd 2003), will require massive continuous injections of iron sulphate (FeSO4) with a large carbon footprint (Zeebe & Archer 2005) and will have negative side effects on the oceans and atmosphere (Lawrence 2002). Clearly, there is a need to develop alternative scenarios based on our current understanding of plankton ecology and ocean biogeochemistry, but under consideration of the limiting role played by iron over most of the nutrient-rich open ocean.

The Southern Ocean (SO) is the only ocean region where substantial amounts of the phytoplankton nutrients nitrate and phosphate, supplied to surface waters by upwelling of deep ocean water in the south, are returned to the deep ocean by downwelling along its northern fringe. In contrast to N and P, the CO2 remineralized from the breakdown of organic particles can escape to the atmosphere, the amount depending on water temperature and the concentration of atmospheric CO2. Thus, at pre-industrial CO2 levels of 280 ppmv the SO was a source but, with the subsequent rise in CO2 levels, it is now considered a weak sink of atmospheric CO2 (Le Quéré et al. 2007), without productivity having changed significantly. It follows that enhancing productivity by OIF will enhance this sink to the extent that CO2, converted to organic matter by fertilized phytoplankton and exported in sinking particles to the deep ocean, is removed from the surface layer. This can be assessed only in experiments designed for the purpose.

Quantitative assessment of the effects of OIF requires the application of models of SO ecology and biogeochemistry embedded in hydrography. However, the predictions of the various models developed for the purpose, reviewed by Lampitt et al. (Chapter 8), differ widely depending on the parameters employed: in particular, the C : Fe ratios of phytoplankton and the depth-dependent remineralization rates in the water column relative to the sinking rate of vertical flux. Recent studies of naturally fertilized ecosystems in the vicinity of the SO islands of Crozet (Pollard et al. 2007) and Kerguelen (Blain et al. 2007) indicate higher C : Fe ratios and higher deep export rates of carbon than those derived from the experiments. Clearly, data from appropriately designed OIF experiments are a prerequisite for running realistic models exploring local and global effects of iron fertilization.
In this chapter, we argue that it is high time to launch the next generation of OIF experiments in the SO. We examine the state of Martin's iron hypothesis and show how the current impasse regarding the role of iron in the ecology and biogeochemistry of the SO can be overcome by carrying out a series of experiments specifically designed to assess the magnitude, depth and composition of vertical flux in relation to surface productivity and the structure of the pelagic food web. We present a series of hypotheses amenable to testing in future experiments and show how the scientific community can take charge of the situation and prevent the negative effects of OIF commercialization.

9.2 The concept of iron limitation

Iron has long been suspected to be a growth-limiting factor in the ocean, based on well-established facts: its conversion to highly insoluble ferric hydroxide (rust) in alkaline, oxygenated seawater, and its obligate requirement by all organisms. Furthermore, the consistently higher plankton productivity of near-land compared with open-ocean waters of the SO was taken as evidence for the provisioning of trace elements (including iron) from the land and their limiting role away from it (Hart 1942). The perennially high macronutrient concentrations in large tracts of ocean – the Sub-Arctic and Equatorial Pacific and the entire SO, known as the HNLC regions – were considered a paradox compared with the equatorial and high-latitude regions of the Atlantic Ocean, where nutrients were exhausted over much of the growth season. Three mutually inclusive reasons were proposed to explain the HNLC condition: low light levels in the deep mixed layers of high latitudes; iron limitation of phytoplankton growth; and heavier grazing pressure due to the life histories of the dominant grazers in HNLC regions. Although light limitation cannot apply to the Equatorial Pacific, and zooplankton populations vary widely both seasonally and regionally, iron limitation – the only factor common to all regions – was regarded with scepticism, partly because early iron addition experiments could show no difference between control and iron-supplemented bottles. With hindsight, this was due to contamination of the controls by trace amounts of iron.

John Martin's group overcame the contamination problem by applying ultraclean methods to field measurements and bottle experiments. They demonstrated very low iron concentrations in HNLC waters and, in bottle experiments, a strong response of natural plankton to iron addition, in striking contrast to the absence of growth in control bottles (Martin & Fitzwater 1988). This evidence failed to convince many in the bio-oceanography community, but biogeochemists took up the challenge more readily and have since opened a booming marine trace metal field (Jickells et al. 2005). Not surprisingly, most of the OIF experiments focused on biogeochemical processes, with much less attention paid to the underlying ecological processes unfolding at the species level.
The reluctance of bio-oceanographers to place iron limitation of phytoplankton growth on an equal footing with light or macronutrients can be attributed to several factors. The issue of whether N or P is the primary limiting nutrient in oceans relative to lakes (where P has long been accepted as the limiting nutrient) was hotly debated for both coastal eutrophic waters and the open ocean. However, the concept was not transferred to iron by the mainstream community, possibly because many of the scientists involved were working in coastal areas where iron was not an issue. Indeed, an early, albeit inadvertent, iron fertilization experiment was carried out by a titanium factory in the form of acid waste dumping in the North Sea, where no noticeable effects on phytoplankton productivity were reported. The practice was stopped owing to popular protest, and the acid waste is now converted to FeSO4 and applied to lawns and sewage treatment plants. No doubt the methodological problems associated with contamination-free measurements of iron, as well as the maintenance of iron-clean conditions for experimentation, have deterred many biological laboratories from studying the role of iron.

Lack of clarity regarding the sources and availability of iron to offshore phytoplankton – how much of it is derived from upwelling from the deep sea vis-à-vis the amount introduced by dust (Cassar et al. 2007), and how much is retained in the surface layer by binding to organic molecules with an affinity for iron (ligands; Jickells et al. 2005) – is another factor hindering the acceptance of iron limitation by mainstream bio-oceanographers. Thus, the sources of the iron enabling the annual spring bloom in the open North Atlantic, which sets this ocean apart from the other high-latitude oceans, are still not unequivocally established. Furthermore, there are indications that the iron demands of coastal versus oceanic phytoplankton differ (Strzepek & Harrison 2004), implying that the impact of iron deficiency, unlike that of macronutrients, is also a question of adaptation, and hence of species composition. However, this view overlooks the extensive, obligate role played by iron in the cellular machinery, where it is as much an essential building block as, for example, phosphate. Both chlorophyll synthesis and nitrate reduction, the two gateways to light and macronutrient usage, respectively, require iron. Thus, by increasing cellular chlorophyll, iron-sufficient phytoplankton can shade-adapt, thereby improving the efficiency of light usage (Gervais et al. 2002). The accompanying disadvantage of suffering photodamage under calm sunny conditions is of minor importance in the cloudy, stormy SO.

An example of the reluctance to accept the limiting role of iron is encapsulated in a sentence from a recent review of iron fertilization experiments (de Baar et al. 2005): 'Hence having dumped a total of 8975 kg of Fe into HNLC waters and using approximately 1 year of shiptime, we may conclude that light is the ultimate determinate of the phytoplankton biomass response.'
This statement is based on a comparison of the changes in the concentrations of dissolved and particulate properties measured in the various experiments, which vary with the depth of the mixed layer. The lower chlorophyll concentrations induced by OIF in deeper water columns are interpreted as light limitation. However, budgetary analyses are based not on concentration (g C m−3) but on the magnitude of biomass, which is given by the total stock in the mixed-layer water column: concentration multiplied by mixed layer depth (g C m−2). It is the magnitude of the stock (the amount of CO2 fixed in the water column) that determines the amount of CO2 exchanged between atmosphere and ocean and the amount that can sink to the deep sea. Indeed, the deeper the mixed layer, the larger the amount of biomass that can be built up, given sufficient iron, owing to the larger nutrient inventory of the deeper water column.

The dilution factor, which increases with the depth of the mixed layer, has several different effects on the rate of biomass build-up and its eventual fate. Thus, the deeper the depth of mixing, the smaller the percentage of the phytoplankton population within the euphotic zone (the layer where sufficient light is available to enable net growth), which will accordingly retard population growth rate (Smetacek & Passow 1990). The efficiency of light usage by phytoplankton will also be lower, owing to attenuation by the deeper water column. However, these effects on the rate of biomass accumulation can be partly compensated by shade adaptation, which effectively increases the depth of the euphotic zone (Gervais et al. 2002), and by the reduction in grazing pressure due to the greater dilution of algal cells, which decreases the predator/prey encounter rate (Landry & Hassett 1982). Algal concentration could also affect the rate of aggregation of cells and chains into flocs (marine snow), and hence the magnitude of export from the surface and the sinking rate through the deep water column. However, the relevant threshold concentrations for these effects need to be quantified in further experiments.

The fact that all SO OIF experiments induced blooms in a range of mixed layer depths and from spring to late summer (Boyd et al. 2007) indicates that iron availability, and not light or grazing, controlled the build-up of biomass. Thus, the standing stock of chlorophyll attained by the EisenEx bloom in a 70-m mixed layer after 21 days (200 mg Chl m−2) was the same as that reached by the SEEDS I bloom in the North Pacific following nitrate exhaustion after 13 days in a 10-m deep layer (Tsuda et al. 2003). Field observations also demonstrate that, apart from the winter months, light availability cannot be regarded as a limiting factor for phytoplankton biomass build-up. Standing stocks of more than 200 mg chlorophyll m−2, equivalent to those of North Sea spring blooms but in mixed layers three times as deep, have been recorded along the Polar Front (Bathmann et al. 1997) and the continental slope (Turner & Owens 1995). Indeed, the standing stock of the latter diatom bloom – 7 mg Chl m−3 homogeneously distributed in a 70-m mixed layer, stretching in a band along the shelf break of the western Antarctic Peninsula – ranks among the highest recorded in the ocean. Such a standing stock (approx. 15 g C m−2) could also be reached by OIF if the patch were large enough to prevent dilution with outside water. Clearly, this is a best-case scenario highlighting the need for more ambitious experiments.
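To make the stock arithmetic concrete, the short sketch below integrates the chlorophyll concentrations quoted above over their mixed layers. The bloom values are those given in the text; the C : Chl mass ratio of 30 used to convert to carbon is our own illustrative assumption, not a figure from the chapter.

```python
# A minimal sketch of the depth-integration argument: budgets compare stocks
# (mg Chl m-2 = concentration x mixed layer depth), not concentrations alone.
# Bloom values are those quoted in the text; C:Chl ~ 30 is an assumed ratio.

def stock_per_m2(conc_mg_m3, mld_m):
    """Depth-integrated stock for a homogeneously mixed surface layer."""
    return conc_mg_m3 * mld_m

# Antarctic Peninsula slope bloom: 7 mg Chl m-3 through a 70 m mixed layer
chl_stock = stock_per_m2(7.0, 70.0)            # 490 mg Chl m-2
carbon_stock = chl_stock * 30.0 / 1000.0       # ~15 g C m-2 at C:Chl ~ 30
print(f"Chl stock: {chl_stock:.0f} mg m-2 -> ~{carbon_stock:.0f} g C m-2")

# EisenEx (70 m layer) and SEEDS I (10 m layer) reached the same 200 mg m-2
# stock, i.e. very different concentrations but identical integrated biomass.
for name, mld in (("EisenEx", 70.0), ("SEEDS I", 10.0)):
    print(f"{name}: 200 mg Chl m-2 corresponds to {200.0 / mld:.1f} mg Chl m-3")
```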
Over the past decades, environmental-scale experiments carried out by terrestrial and lake ecologists have revealed unexpected and fundamental insights into the relationships between ecological and biogeochemical processes. So, when the first in situ OIF experiments, IronEx I and II, demonstrated that a patch of surface ocean, marked with sulphur hexafluoride (SF6), could be successfully manipulated and followed for a significant period of time (Martin et al. 1994; Coale et al. 1996), the feat was hailed as the transition of ocean ecology from an observational to an experimental science (Frost 1996). Nevertheless, the response since then has been muted. If only 10 per cent of the ship time available to the relevant chemical, biological and geological oceanography disciplines had been dedicated to carrying out OIF experiments, we would now be discussing the results of at least 30 experiments, of which one or more would be large-scale, long-term, multi-ship international projects akin to the North Atlantic Bloom Experiment coordinated by JGOFS in 1988 (Ducklow & Harris 1993). Such an experiment would by now have quantified the fate of iron-fertilized bloom biomass, provided new insights into the reaction of bacteria and zooplankton communities and their predators to enhanced productivity, and monitored the reaction of the deep-sea benthos to an enhanced food supply. Furthermore, the much debated relationship between species diversity and ecosystem productivity could also have been effectively addressed by OIF. In short, we would have answers to the open questions raised in recent reviews of OIF (de Baar et al. 2005; Boyd et al. 2007; Lampitt et al., Chapter 8) and have acquired many more unexpected insights into pelagic ecosystem structure and functioning and their impact on the deep sea and sediments.

Cost, pollution and expertise cannot be the reasons for the lack of enthusiasm for OIF experiments. Because the ship operates in much the same place, in situ experiments actually burn less fuel than conventional transect oceanography racing from station to station. The ferrous sulphate required is sold at low cost in garden shops to improve lawns, for which the recommended dosage is 20 g m−2; the dosage required to fertilize a bloom is only 0.05 g m−2 for a 50-m deep mixed layer. Clearly, hazardous impurities in the commercially available ferrous sulphate will be diluted to insignificant levels. Dispersing iron is straightforward, and using SF6 as a tracer is no longer required, as other easily monitored parameters such as photosynthetic efficiency, increasing chlorophyll and declining pCO2 accurately track the patch. Running the experiment is not very different from interdisciplinary observational oceanography. Clearly, the prerequisites for oceanographic experimentation have long been in place.
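As a rough illustration of the quantities involved, the sketch below scales the 0.05 g m−2 dose quoted above to two patch sizes; the areas themselves are our assumptions for illustration, chosen to bracket past experiments and the eddy cores discussed later in the chapter.

```python
# Back-of-envelope FeSO4 tonnage implied by the 0.05 g m-2 bloom dose quoted
# above (versus 20 g m-2 recommended for lawns). The patch areas are
# illustrative assumptions, not figures from the text.

BLOOM_DOSE_G_M2 = 0.05   # dose quoted for fertilizing a ~50 m mixed layer
LAWN_DOSE_G_M2 = 20.0    # garden-shop recommendation, for comparison

for label, area_km2 in (("experiment-scale patch", 300), ("eddy-core patch", 3000)):
    grams = BLOOM_DOSE_G_M2 * area_km2 * 1e6   # 1 km2 = 1e6 m2
    print(f"{label} ({area_km2} km2): {grams / 1e6:.0f} t FeSO4")

print(f"The ocean dose is {LAWN_DOSE_G_M2 / BLOOM_DOSE_G_M2:.0f}x weaker per m2 "
      f"than the lawn dose")
```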
9.3 The efficacy of iron fertilization

A key question addressed by the iron hypothesis pertains to the sources and sinks of atmospheric CO2 over glacial/interglacial climate cycles, which are of obvious relevance to future OIF. Since more or less the same amount of CO2 – approximately 210 Gt (the difference between 180 and 280 ppmv) – has appeared and disappeared from the atmosphere over the past four cycles (Petit et al. 1999), it seems logical to assume that the same process and its cessation would have been responsible for removing and returning the CO2. However, we are far from understanding the processes regulating atmospheric CO2 levels (Falkowski et al. 2000), given that its turnover time at the pre-industrial inventory of approximately 600 Gt was approximately 4 years. Primary production and its subsequent remineralization are responsible for the lion's share – 65 and 40 Gt yr−1 by land plants and marine phytoplankton, respectively (Falkowski et al. 1998; Haberl et al. 2007) – the remainder being exchanged with the oceans by seasonal cooling and warming combined with the replacement of surface water by upwelling and downwelling (the physical solubility pump). Given the high turnover rate, combined with the broad range of interannual variability observed at regional scales in both marine and terrestrial productivity, e.g. over El Niño cycles, the near constancy of CO2 levels over timescales of decades to centuries is remarkable: a sustained 10 per cent change on either side of the balance would result in a halving or doubling of atmospheric CO2 concentrations within 50 years.
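This sensitivity can be checked roughly from the inventory and turnover figures given above; the linear extrapolation in the sketch below is our own simplification.

```python
# Sketch of the sensitivity claim above: a ~600 Gt C atmospheric inventory
# turned over every ~4 years implies ~150 Gt C/yr of gross exchange, so a
# sustained 10% mismatch between uptake and release moves ~15 Gt C/yr.
# Linear extrapolation is our simplification; feedbacks would modify this.

inventory_gt = 600.0              # pre-industrial atmospheric carbon, Gt C
gross_flux = inventory_gt / 4.0   # Gt C/yr implied by a 4-year turnover
imbalance = 0.10 * gross_flux     # 10% mismatch, Gt C/yr

years = (inventory_gt / 2.0) / imbalance
print(f"Gross exchange: ~{gross_flux:.0f} Gt C/yr; "
      f"10% imbalance: ~{imbalance:.0f} Gt C/yr")
print(f"Halving (or doubling) time: ~{years:.0f} years, "
      f"i.e. comfortably within the 50 years stated in the text")
```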
Net losses from and gains to the CO2 inventory that have an effect on timescales of more than 10 000 years are associated with burial in ocean sediments – estimated at 0.02 Gt yr−1 (Tyson 1995) – and outgassing from volcanic activity, respectively. These fluxes must balance each other, given the near-constant concentrations prevailing during the periods of stability lasting thousands of years between the transitional phases indicated by ice-core records (Petit et al. 1999). Such longer-term steady states are again remarkable, given that they maintain different CO2 concentrations and have repeated themselves over six cycles during the past 650 000 years (Siegenthaler et al. 2005). It should be pointed out that the transitions between warm and cold states differ inasmuch as CO2 removal takes place gradually, with intermediate steady states, over tens of thousands of years, whereas its release is much more rapid: a few thousand years. This pattern, but not its triggering events, coincides with that of dust in the ice-core records: a key factor in the iron hypothesis.

The amount of CO2 missing during the glacials is equivalent to approximately one-third of the biomass of the terrestrial biosphere, the magnitude of which is a direct function of annual rainfall, as exhibited by the gradient from rainforests to deserts. Owing to the drier glacial climate, the area covered by vegetation was substantially smaller than during the interglacials, so the carbon released by retreating forests and wetlands during glacials would also have had to be 'sunk' by the same process removing CO2 from the atmosphere. Thus, we are left with the ocean as the most likely source and sink of atmospheric CO2. It contains 50 times more carbon than is present in the atmosphere. Most of this carbon is bound in bicarbonate and carbonate anions, but approximately 1 per cent is present as dissolved CO2 + H2CO3. The concentration of CO2 in the surface layer is in equilibrium with the atmosphere, but deep ocean water has excess CO2 due to remineralization of the organic matter transported there by the biological carbon pump (BCP; Lampitt et al., Chapter 8). If, in a thought experiment, the BCP were shut down but ventilation of the deep ocean by thermohaline circulation were maintained at today's rate of one cycle per 1000 years, atmospheric CO2 concentrations would increase approximately twofold (Maier-Reimer et al. 1996), implying that the rate of long-term sequestration (centuries) is on average 1.2 Gt yr−1 over a full ventilation cycle of 1000 years. Total export production (organic carbon sinking out of the surface mixed layer) in today's ocean is estimated at 16 Gt yr−1 (Falkowski et al. 1998), implying that most of the sinking carbon flux is retained close to the surface, from where CO2 is recycled with the atmosphere on decadal timescales. This is consistent with models of depth-dependent organic matter remineralization in the deep water column (Martin et al. 1987).
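The depth dependence cited here is commonly parametrized with the power-law flux profile introduced by Martin et al. (1987). The sketch below uses their open-ocean composite exponent (b = 0.858); the exponent varies regionally, so this is illustrative rather than a universal value.

```python
# The 'Martin curve' flux profile: particulate organic carbon flux declines
# as a power law below the ~100 m export depth. b = 0.858 is the open-ocean
# composite exponent of Martin et al. (1987); regional values differ.

def martin_flux(z_m, flux_100=1.0, b=0.858):
    """POC flux at depth z (m), normalized to the flux at 100 m."""
    return flux_100 * (z_m / 100.0) ** (-b)

for z in (100, 500, 1000, 4000):
    print(f"{z:5d} m: {martin_flux(z):.3f} of the 100 m export flux")
# Only ~4% of the exported flux reaches 4000 m -- consistent with the point
# that most sinking carbon is remineralized close to the surface.
```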
A voluminous literature has developed around the interpretation of data gathered from ice cores and ocean sediments as evidence for and against the iron hypothesis. The bulk of the productivity proxies (indirect indicators of surface productivity, such as stable isotope ratios of C and N) recorded in the sediments does not support higher glacial productivity of the Antarctic Circumpolar Current (ACC) (Anderson et al. 2002). However, interpretation of the proxies is fraught with uncertainty, and new, less ambiguous proxies need to be identified to achieve progress in the field. One such proxy is the resistant spores of the diatom genus Chaetoceros, characteristic of dense phytoplankton blooms, which have been shown to dominate the fossil assemblage over a much larger area of the Atlantic ACC than they do in modern sediments, implying that phytoplankton blooms were much more extensive during glacial periods (Abelmann et al. 2006). Indeed, as we shall see below, large-scale OIF experiments provide an ideal framework to test the hypotheses underlying interpretations of the various proxies.

9.4 Is the current opposition to OIF justified?

Most of the arguments against OIF invoke worst-case scenarios justified by the precautionary principle, but these need to be weighed against best-case scenarios on the basis of cost and benefit within the context of the global biosphere.
For example, two often-cited potential impacts of OIF are (i) that the organic matter produced may be recycled in the upper part of the water column, resulting in CO2 sequestration on a timescale of decades rather than the centuries expected from deeper organic matter export, and (ii) that excessive oxygen consumption by the decomposition of organic matter in a restricted subsurface layer would create conditions conducive to the production of nitrous oxide (N2O) and methane (CH4). Since these gases have higher global warming potentials than CO2, it is argued that their release to the atmosphere might offset the gains of CO2 uptake by the ocean (e.g. Fuhrman & Capone 1991). However, ocean methane production is largely restricted to the sediments underlying the intense upwelling regions of low latitudes (Bakun & Weeks 2004), so the oceans as a whole are a minor player in the atmospheric CH4 budget (Crutzen 1991). Even in the most intense oceanic oxygen minimum zones (OMZs), CH4 production does not occur, except perhaps within microanaerobic sites in the interior of particles (e.g. Jayakumar et al. 2001). By contrast, the oceans are estimated to account for one-quarter to one-third of the total (natural + anthropogenic) N2O inputs to the atmosphere (Nevison et al. 2004). Moreover, N2O cycling – production as well as consumption – is very sensitive to the ambient oxygen concentration in the low range (less than approx. 0.5 ml l−1); hence, its enhanced production as a result of OIF deserves thorough investigation.

Measurements of N2O have been made during two of the SO OIFs: the Southern Ocean Iron Enrichment Experiment (SOIREE; Law & Ling 2001) and the European Iron Fertilization Experiment (EIFEX; Walter et al. 2005). Whereas some accumulation of N2O was found to occur within the thermocline underlying the SOIREE patch, no such increase was observed during EIFEX over a threefold longer period. The discrepancy was attributed by Walter et al. to the rapid sedimentation of organic particles to the deep ocean, which was also reflected in the vertical distribution of barium during EIFEX (Jacquet et al. 2008). Thus, the depth of remineralization of the organic matter produced as a result of OIF will determine not only the storage time in the ocean of the carbon sequestered from the atmosphere, but also the climatic feedback by N2O. Moreover, the magnitude and even the nature of this feedback will depend strongly on the location as well as the duration of fertilization, as shown by the modelling study of Jin & Gruber (2003). These authors found that, while enhanced production of N2O could almost completely offset the radiative benefit of OIF in the tropics, this offset would be much smaller in the SO: 6–18 per cent in the case of large-scale (approx. 100 million km2) and long-term (100 years) fertilization. This difference arises from the differential effect of organic carbon mineralization on the subsurface oxygen distributions in the two regions. That is, owing to the high oxygen concentrations in the subsurface waters of the SO, the volume of water with low oxygen concentration (approx. 0.5 ml l−1, below which the production of N2O increases non-linearly) developed as a result of fertilization will be much smaller than in the tropics, where the mesopelagic oxygen minimum is already quite intense and expanding (Stramma et al. 2008).
In fact, the model results suggest a relaxation of the tropical OMZs as a result of a decrease in export production, caused by the lower preformed nutrient concentrations in intermediate waters at their source following fertilization of the SO (Sarmiento & Orr 1991; Jin & Gruber 2003). On the other hand, deep-water oxygen concentrations may be expected to fall everywhere in the oceans if the particles emanating from the fertilized blooms sink to the deep sea. This can be expected because massive deep sinking of diatom cells and chains in the aftermath of natural blooms is a well-known phenomenon (Smetacek 1985) that results in the accumulation of fluffy layers on the deep-sea floor underlying productive waters (Beaulieu 2002). Because the volume of the deep ocean is three to four times greater than that of the mesopelagic layer, where N2O production occurs due to hypoxia, diatom bloom sinking results in dilution of the organic matter, and hence also of the oxygen deficit resulting from its breakdown. The dilution effect on deep-water oxygen depletion can, in fact, have a beneficial effect on atmospheric CO2 content (Boyle 1988; Sarmiento & Orr 1991). Thus, the palaeoceanographic evidence from the Indian and Pacific Oceans suggests a rearrangement of the vertical chemical structure during glacial periods, with the deep waters being more oxygen depleted (Kallel et al. 1988; Galbraith et al. 2007). The atmospheric N2O concentration during the last glacial maximum reached a minimum of 200 ppbv or less (Spahni et al. 2005). Thus, if the decrease in atmospheric CO2 during glacial periods was due to sequestration of carbon in the deep ocean by enhanced SO export, as proposed by Martin (1990), this could account for the inferred lower deep-water oxygen concentrations. Apparently, this oxygen depletion did not lead to an increase in atmospheric N2O. It is therefore not unreasonable to assume that artificial OIF would likewise fail to raise atmospheric N2O.

According to Bakun & Weeks (2004), the projected intensification of low-latitude coastal upwelling due to global warming is likely to result in the spreading of the extreme hypoxia and greenhouse-gas build-up conditions now regularly observed off Namibia. If true, then diverting nutrients from the source waters of coastal upwelling to the deep sea and sediments by OIF will have an additional mitigation effect by constraining the low-latitude, coastal N2O and CH4 production resulting from the feedback effects of global warming.

The production of other climatically important gases, such as dimethyl sulphide (DMS), also needs to be investigated in greater detail through additional observations and modelling. The OIF experiments have been found to increase the concentrations of DMS and its precursor dimethylsulphoniopropionate (DMSP) typically by a factor of 3 (Turner et al. 2004). Increases of this magnitude, if occurring globally, could lead to atmospheric cooling by 1–2 °C (Gunson et al. 2006; Liss 2007), which must also be considered when evaluating the impacts and benefits of OIF.
The upper limit for CO2 drawdown by ocean-scale application of OIF is set by the average nitrate concentration (approx. 20 mmol m−3) left in the surface mixed layer (approx. 50 m deep) of the entire ice-free SO (approx. 50 million km2) at the end of the summer, assuming a Redfield C : N ratio of 6 : 1. Removal of all of this nitrate in one fell swoop would amount to approximately 4 Gt of carbon. The nitrate is renewed by upwelling deep water every 3–4 years, implying that approximately 1 Gt of CO2 per year is the minimum amount that could be taken up by an iron-replete SO. More sophisticated models suggest that OIF of the SO could reduce atmospheric CO2 by over 60 ppmv (Jin & Gruber 2003). Clearly, this is no substitute for reducing emissions, yet it is too large to be ignored – again on the basis of the precautionary principle, applied here in a global warming context.

The carbon footprint of deploying large-scale OIF is not likely to be large. Ferrous sulphate is freely available as an unwanted by-product of the growing titanium industry and used to be dumped in the past. An average factory produces approximately 400 000 tonnes a year and, assuming a range of C : Fe ratios from worst- to best-case scenarios – 1000 : 1 to 10 000 : 1 – from 14 to 1.4 million tonnes of granular FeSO4 will be required to sequester 1 Gt of carbon. Shipping this amount of iron to the SO would require diversion of a minuscule percentage of the tanker fleet currently transporting 2 billion tonnes of petroleum annually around the globe. The means by which the iron could be added will not be discussed here, but there are a number of options to choose from that harness ACC wind and current fields for dispersal.
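Both back-of-envelope figures above can be reproduced from the stated inputs, as the sketch below shows. Treating the C : Fe ratios as molar ratios and the product as anhydrous FeSO4 are our assumptions; they reproduce the quoted tonnages to within rounding.

```python
# (i) Nitrate-set ceiling on drawdown and (ii) FeSO4 tonnage per Gt C, from
# the figures quoted above. Treating C:Fe as a molar ratio and the product as
# anhydrous FeSO4 (152 g/mol; Fe 56, C 12 g/mol) are our assumptions.

NITRATE_MOL_M3 = 20e-3        # residual nitrate, mol m-3 (20 mmol m-3)
MLD_M = 50.0                  # mixed-layer depth, m
AREA_M2 = 50e6 * 1e6          # 50 million km2 of ice-free Southern Ocean
C_TO_N = 6.0                  # ratio used in the text

carbon_gt = NITRATE_MOL_M3 * MLD_M * AREA_M2 * C_TO_N * 12.0 / 1e15
print(f"One-off drawdown ceiling: ~{carbon_gt:.1f} Gt C")   # ~3.6, i.e. ~4 Gt

for c_to_fe in (1000.0, 10000.0):          # worst- to best-case molar C:Fe
    mol_c = 1e15 / 12.0                    # moles of carbon in 1 Gt C
    feso4_mt = mol_c / c_to_fe * 152.0 / 1e12   # million tonnes of FeSO4
    # ~12.7 and ~1.3 Mt -- close to the 14 and 1.4 Mt quoted in the text
    print(f"C:Fe {c_to_fe:.0f}:1 -> ~{feso4_mt:.1f} Mt FeSO4 per Gt C")
```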
9.5 Designing future OIF experiments

All experiments to date have used the fertilization technique employed in IronEx: releasing a solution of weakly acidified FeSO4 into the propeller wash of the ship while steaming along tracks at intervals of 1–3 km, for instance spiralling outward from a drifting, surface-tethered buoy. One would expect the weakly acidified iron solution to be quickly neutralized by mixing with alkaline seawater, and the dissolved iron oxidized to its insoluble state, i.e. colloidal rust particles that would be difficult for phytoplankton to take up, but results from the experiments demonstrate otherwise. For example, the C : Fe uptake ratio exceeded 15 000 : 1 in the SEEDS I experiment (Boyd et al. 2007), similar to values for natural blooms (Blain et al. 2007) and laboratory experiments, demonstrating that the physiological ability to use iron in the form supplied is present in the phytoplankton. In other words, we see no need yet to replace FeSO4.

Small-scale experiments employing several tonnes of FeSO4 have not lost their relevance, because they can be carried out at regional scales to test whether the plankton of a given water body in a given season is iron limited or not. For instance, such experiments will help in elucidating the causes of the regime shift that has occurred in the eastern Bering Sea accompanying the retreat of the winter ice cover (Smetacek & Nicol 2005). Nutrient and light availability have not changed, yet the annual phytoplankton bloom, which used to occur during the melting of the ice cover in early spring, is now delayed by several months, resulting in a shift in the timing of the annual peak of food supply to the benthos.

Further testing of the iron hypothesis will require larger-scale experiments at sites where the fertilized surface layer is coherent with the underlying deep-water column. Such conditions are met within the closed cores of mesoscale eddies formed by meanders of frontal jets that enclose a water mass from the adjacent branch of the ACC. Eddies are of two types depending on the direction of rotation: clockwise-rotating eddies enclose a water column – the core of the eddy – from south of the respective front. Owing to its lower temperature compared with the surrounding frontal jet, the core has a smaller volume and appears as a depression in altimeter images of sea surface height. The opposite holds for anticlockwise eddies, which appear as bulges in the images. Such quasi-stationary eddies extend to the sea floor and can have lifetimes of several months. They are approximately 100 km in diameter (including the enclosing frontal jet) and 4 km deep, and hence are best visualized as slowly rotating, flat discs completing a revolution once a week. Such mesoscale eddies are ideal for studying the growth and demise of iron-fertilized blooms all the way down to the underlying sediments, but their lifetime is too short for longer-term experiments spanning the growth season.

Experiments carried out in the closed cores of eddies have the added advantage that the fertilized patch retains its initial circular shape over the course of the experiment (Cisewski et al. 2008) and is not distorted into a streak, as happened during SOIREE and at the SOFEX north patch (Boyd et al. 2000; Bishop et al. 2004). Furthermore, the closed core tends to be horizontally homogeneous in its properties; hence, the effects of local patchiness are greatly diminished. The disadvantage is that horizontal mixing within the core results in the patch spreading rapidly, diluting the effects of fertilization by mixing with non-fertilized water from within the eddy core (Cisewski et al. 2005). The dilution effect can be lessened by taking care to position the patch as close to the centre of the eddy as possible, where current speeds are lowest and hence coherence between the surface and the underlying deep-water column is greatest. Unfortunately, the connection with the sediment surface is looser, because eddies can be displaced laterally by approximately 30 km during their lifetime (Losch et al. 2006). The dilution effect can also be lessened by fertilizing as large an area as is feasible from a research ship: the larger the area of a patch, the smaller the effects of dilution at its centre. However, the area of the closed core of an average eddy of 100 km diameter is approximately 3000 km2, which is approximately 5–50 times the area of the patches fertilized in previous experiments.
Clearly, fertilizing the centre of the core is the best option, but locating it at sea is not a trivial task.
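A quick check of the core-area figure (a sketch: the core diameter of approximately 60 km is our assumption, inferred from the 100 km overall diameter, which includes the enclosing frontal jet):

A_core = π r_core^2 ≈ π × (30 km)^2 ≈ 2.8 × 10^3 km^2,

which, for previous patch areas of roughly 60–600 km2 (implied by the stated ratio), reproduces the factor of approximately 5–50 quoted above.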
9.6 Hypotheses to be tested in future in situ experiments

9.6.1 Interpretation of proxies

Apart from testing the second condition of the iron hypothesis, i.e. determining the magnitude and depth of vertical particle flux from an iron-induced bloom, future in situ experiments can address a host of issues currently under investigation with conventional approaches. A central issue in Earth system science, and hence also for the future of OIF as a mitigation measure, is the influence of evolving biological properties on the global carbon cycle. The role of organisms in shaping and maintaining Earth's atmosphere on geological timescales is universally accepted. Less clear are the potential impacts of changing biology on climate over shorter timescales of decades. Given the short turnover time of atmospheric CO2 by the biosphere (approx. 4 years), a change in input or output terms of only a few per cent would have a significant impact within decades. Stability of the global biosphere is thus a crucial factor in maintaining the steady states that prevailed over the Holocene and the Last Glacial Maximum. It follows that the transitions between steady states could well have been triggered and maintained by positive feedback mechanisms within the terrestrial and marine biospheres that disrupt the checks and balances maintaining glacial or interglacial steady states. The iron hypothesis postulates a balancing mechanism mediating between terrestrial and oceanic biospheres such that, when the land was brown, the oceans were green and took up CO2, but when the land was green the oceans were blue and released it back to the atmosphere. This alternating desertification of land and ocean, mediated by rainfall and dust and embedded in ocean circulation, is too plausible to be discarded on the basis of evidence obtained to date from sedimentary proxies. Clearly, OIF experiments offer a new approach to test the hypotheses underlying interpretation of the various proxies.
9.6.2 Biology of diatom species

All OIF experiments have reported stimulation of the entire phytoplankton assemblage by iron addition, although biomass accumulated almost exclusively in diatoms. Clearly, this algal group is subject to lower mortality rates than the smaller nano- and picophytoplankton of the microbial food web, which, because of their larger surface : volume ratios, should be more competitive in resource uptake, and
should have higher growth rates than the much larger diatoms (Smetacek 1999). It follows that only diatoms accumulate biomass because they are better defended against pathogens (viruses) and grazers that keep the smaller phytoplankton in check (Smetacek 2002), in all probability by virtue of their silica shells (Hamm & Smetacek 2007). The diatom species commonly reported from phytoplankton blooms span a size range equivalent to that of the land plants (from herbs to trees). They also differ vastly in the shape and thickness of their silica shells, reflected in Si : N ratios from less than 0.5 to more than 3. Some resemble smooth needles; others are embellished with long, barbed spines many times the length of the cells. Clearly, the morphology of diatom species will have an effect on the rate of aggregation and the size of the aggregates in the senescent phase of blooms, hence also on the magnitude and composition of vertical flux. The blooms in OIF experiments have been dominated by a broad range of diatom species from many centric and pennate families, generally typical of the fertilized region and including the shapes mentioned above, indicating that they all have the ability to take up iron in the form provided. The fact that different species assemblages dominated the experiments despite the presence of many more species is a strong indication of internal regulation of biomass build-up (Assmy et al. 2007). Unfortunately, our understanding of the factors determining the species composition of blooms is rudimentary; it can be advanced by investigating bloom dynamics at the species-specific level, not only under in situ turbulence and light fields but also in the presence of the full complement of pathogens and grazers. OIF experiments meet all these conditions and enable one to follow relevant processes in the surface and deeper layers ensuing from a defined perturbation of a natural system; in this case, providing a limiting nutrient to the entire community. The terrestrial equivalent would be to water an arid ecosystem. Based on our current understanding of succession in vegetation types across the rainfall gradient – from deserts to grassland to rainforest – we know what the long-term outcome of such an undertaking would be. By analogy, one could postulate the outcome of sustained fertilization of nutrient-rich SO waters by examining the species composition across the iron gradient from coastal and shelf to open ocean waters.

9.6.3 Effect of neritic versus oceanic species on vertical carbon flux

The diatom flora of iron-rich continental margins differs significantly from that of the adjoining HNLC open ocean. The diatom genera, and in some cases species, typical of the Antarctic coastal and shelf environments are the same as, or very similar to, those that inhabit equivalent, more thoroughly studied environments in the northern ocean margins. These neritic species tend to be thin-shelled, small-celled, long-chained forms among which the spore-forming species of the spiny
genus Chaetoceros are particularly prominent. They dominate the spring blooms of high latitudes, and their aggregated chains are the major component of the mass sedimentation representing the annual peak in vertical flux that generally occurs after nutrient exhaustion (Smetacek 1985). Many of these coastal species (including Chaetoceros spp.) form compact resting spores under unfavourable growth conditions that, entrained within sinking diatom aggregates or zooplankton faecal pellets, settle out on the sediment surface and, in shallow regions, provide the seed stock for next year's bloom (Smetacek 1985). Such blooms are also typical of the shelves of Antarctica and Sub-Antarctic islands, where peak annual flux also occurs in their aftermath (Wefer 1989; Blain et al. 2007; Salter et al. 2007). However, the factor triggering mass flux – invariably nitrate or silicate exhaustion in the northern Atlantic – will be different, because macronutrients only rarely reach limiting concentrations anywhere in the SO. The studies carried out on the shelves of the islands of Kerguelen and Crozet have shown that iron plays the same role in terminating blooms here as do the macronutrients in northern waters, with the same consequences for the biological carbon pump (Blain et al. 2007; Salter et al. 2007). These typical coastal species extend oceanward in the productive plume of water downstream of the Antarctic Peninsula that stretches across the open southwestern Atlantic Sector of the SO beyond South Georgia to approximately 30°W (Hart 1942). The higher productivity of this region during the spring, reflected in satellite images of chlorophyll, is apparently due to seaward entrainment of iron-rich winter water, emanating from contact with the convoluted shelf, land runoff and upwelling along the continental margin, possibly augmented by Patagonian dust. The dominance of Chaetoceros spores – representatives of a typically coastal flora – in the diatom assemblage of the underlying deep-sea sediments (Abelmann et al. 2006) also indicates that this coastal flora is advected seaward with the iron-rich waters. During the Last Glacial Maximum the zone dominated by these Chaetoceros spores extended much further across the southern Atlantic, coinciding with a higher iron content of the sediment than during the Holocene, indicating that this coastal flora could be maintained over the deep ocean given an adequate supply of iron. Because the compact spores have evolved tough, grazer-resistant silica shells, their organic contents are more likely to survive breakdown during sinking and be buried in the sediments than is the case for vegetative cells. We suggest that iron fertilization of the phytoplankton community of the Southwest Atlantic Sector will result in deeper drawdown and greater burial of organic matter, accompanied by less loss of silica, than in other regions of the ACC. Indeed, we intend to test this hypothesis in the iron fertilization experiment LOHAFEX ('loha' is Hindi for iron), to be carried out jointly by India and Germany in the austral summer of 2009. We have selected the post-spring period and a region of
quasi-stable eddies north of South Georgia for our experiment to test whether iron fertilization leads to a revival of the coastal assemblage. Such a revival has a precedent: during the experiment SEEDS in the Northwest Pacific, a massive bloom of the typically coastal, spore-forming species Chaetoceros debilis was induced in HNLC waters. Unfortunately, the ship had to leave just as the bloom peaked, so its fate could not be determined (Tsuda et al. 2003). In a subsequent experiment (SEEDS II) carried out under the same environmental conditions, a bloom did not develop, although the phytoplankton assemblage responded to iron addition by increasing photosynthetic efficiency (Suzuki et al. 2006). The lack of biomass accumulation was attributed to heavier grazing pressure (Tsuda et al. 2006), although the absence of an adequate seeding stock of C. debilis might have played a more decisive role. Chaetoceros debilis also exhibited very high growth rates during EisenEx, but the initial seed stock was too small to contribute significantly to bloom biomass during the three weeks of the experiment (Assmy et al. 2007). Clearly, there are a number of hypotheses pertaining to the relative roles of growth and mortality (bottom–up and top–down factors, respectively) in selecting species dominance in phytoplankton blooms (Smetacek et al. 2004; Poulton et al. 2007) that can be tested in longer-term OIF experiments.

9.6.4 Impact of grazing on iron-induced blooms

All SO OIF experiments were carried out outside the region inhabited by Antarctic krill (Euphausia superba), which forms the food base of the Antarctic fauna: whales, seals and penguins. The zooplankton of HNLC regions is dominated by salps (large, watery, barrel-shaped organisms that filter particles indiscriminately from water pumped through their gills) and by copepods, which, as the experiments demonstrate, were unable to prevent diatom biomass from accumulating. The situation might be different in the more productive regions around the continental margin frequented by krill, in particular the Southwest Atlantic Sector, reported to harbour more than 60 per cent of the entire population (Atkinson et al. 2004). Krill feed voraciously on diatom blooms, macerating the shells in gastric mills and producing copious amounts of faecal matter in the form of loose strings with potentially high sinking rates. However, the friable faeces are also recycled in the surface layer (González 1992), so it has yet to be established whether krill grazing enhances carbon export relative to the flux of aggregated algal chains and phytodetritus from a senescent, ungrazed diatom bloom. Krill roam their feeding grounds in compact swarms, grazing down local diatom blooms and moving on in search of others (Smetacek et al. 1990). It has recently been shown that krill release substantial amounts of iron to their environment (Tovar-Sánchez et al. 2007), possibly leached from faecal material, in which case their grazing should recycle iron and enable the bloom to recuperate after their departure. OIF experiments carried
out in the krill habitat provide the ideal conditions to study the mechanisms driving the biological carbon pump under natural conditions. Longer-term experiments will be required to follow the effect of OIF on zooplankton stocks. The experiments EisenEx and EIFEX significantly enhanced grazing and reproduction rates of the copepods (Jansen et al. 2006; Henjes et al. 2007), and it is likely that krill will respond in a similar manner. Following decimation of their major consumers, the great whales, krill stocks, instead of increasing (Laws 1977), have actually decreased. A statistical analysis of all net catches carried out in the former whaling grounds in the Southwest Atlantic Sector indicates an ongoing declining trend, with current stock size less than 20 per cent of that in the 1970s (Atkinson et al. 2004). Salps, an indicator of HNLC waters, have increased concomitantly, which is a sign of decreasing productivity (Smetacek 2007). It follows that the ongoing krill decline, which will seriously jeopardize recovery of the whale populations, might be reversed by larger-scale OIF in the Southwest Atlantic. This hypothesis could be tested with a series of large-scale experiments carried out along the melting sea-ice edge of the Weddell–Scotia Confluence in austral spring.

9.7 Concluding remarks

The decision to carry out large-scale OIF should be based on a comprehensive assessment of the long- and short-term as well as the local and global effects derived from experiments and models, and it should hence be taken by an international body, preferably an agency of the UN specifically established for the purpose. This science-oriented body would subsequently manage and monitor larger-scale OIF, whether carried out by the body itself (analogous to the deployment of experts by the International Atomic Energy Agency, IAEA, of the UN) or by private companies under contract. OIF can be opened to the carbon credit market only if such an independent body allots a specific carbon quota per tonne of iron added in a specific season and region. For reasons dealt with below, inherent uncertainties are likely to prevent the scientific community from ever being in a position to allot fixed quotas for the SO. Indeed, we feel it is highly unlikely that a free-for-all carbon market could ever take over the SO, as feared by many scientists (Chisholm et al. 2001), owing to the verification problem. It is up to the scientific community to exercise tight control over any large-scale OIF operation by generating the knowledge required to assess its effects. Statements calling for more experiments have been made by several scientific and other international bodies (Buesseler et al. 2008). We have outlined hypothetical differences between open ACC and land-influenced waters above; another region where a very different type of phytoplankton assemblage is present is the silicon-limited band of the ACC north of the Polar Front, in particular the regions where Antarctic Intermediate Water is
formed. The SOFEX north-patch experiment, carried out in low-silicon waters, induced a bloom of thin-shelled pennate diatoms followed by significant enhancement of vertical flux (Bishop et al. 2004; Coale et al. 2004). However, in addition to thin-shelled diatoms, haptophyte flagellates known to produce copious quantities of DMSP – in particular the colonial Phaeocystis and the calcifying coccolithophorids – also dominate the phytoplankton of this region (Seeyave et al. 2007; Lampitt et al. 2008). If OIF stimulates growth of coccolithophorids, then the pCO2 decrease caused by the uptake and export of organic carbon will be partly offset by the increase resulting from CaCO3 formation, which reduces the alkalinity as well as the pH of seawater. Experiments in this area would shed light on the much-debated factors selecting for diatom or haptophyte dominance of SO phytoplankton blooms (Smetacek et al. 2004). Thus, Phaeocystis blooms are recurrent in the Ross Sea but not in the Atlantic Sector, and it is possible that sustained fertilization may lead to sustained blooming of this species, which does not appear either to provide high-quality food to zooplankton or to contribute as efficiently to vertical flux as do diatoms. The toxic algal blooms reported from coastal waters around the world are unlikely to occur in the SO, although it cannot be excluded that toxic species of diatoms (Pseudo-nitzschia spp.), not currently present, may appear in the future. These possibilities underline the need first to carry out experiments and then, if these prove successful, to monitor the effects of any large-scale OIF carefully and continuously, stopping fertilization if and when needed. Prolonged OIF will certainly boost zooplankton stocks, in particular copepods, and possibly also their predator populations. Surface-living, copepod-feeding fish are absent in the HNLC ACC (Smetacek et al. 2004), so it is impossible to predict which other predator groups – from coelenterates (jellyfish) and amphipods to mesopelagic fish and squid, and even the copepod-feeding, endangered southern right whales – might profit from copepod population build-up. It will also be necessary to follow the effects of sustained fertilization on the mesopelagic community of copepods and radiolarians because, if their population density increases over time, they will intercept an increasing proportion of the deep-sinking flux. In such a case, it would be advisable to interrupt OIF for appropriate periods. It would also be necessary to monitor closely the oxygen concentrations in the water column and sediments underlying OIF regions and to halt operations if and when significant oxygen depletion begins to take place. Even under the best possible conditions, OIF will have only a limited effect on the rate at which atmospheric CO2 is projected to rise, but the amount involved is too large to be discounted; in short, we cannot afford not to investigate thoroughly the potential of this technique. A further benefit accruing from large-scale experiments is the attention they draw from the media, eager to report new developments in
the global struggle, still in its infancy, to meet the challenge of global warming. The case for and against OIF can be used as a platform to educate the public on the workings of the global carbon cycle and the anthropogenic impact on it by providing a perspective on the quantities involved. OIF experiments would also serve as an ideal training ground for the next generation of ocean scientists faced with the challenge of coping with ongoing climate change in a global context.

References

Abelmann, A., Gersonde, R., Cortese, G., Kuhn, G. & Smetacek, V. 2006 Extensive phytoplankton blooms in the Atlantic Sector of the glacial Southern Ocean. Paleoceanography 21, PA1013. (doi:10.1029/2005PA001199)
Anderson, R. F., Chase, Z., Fleisher, M. Q. & Sachs, J. 2002 The Southern Ocean's biological pump during the last glacial maximum. Deep Sea Res. II 49, 9–10. (doi:10.1016/S0967-0645(02)00018-8)
Assmy, P., Henjes, J., Klaas, C. & Smetacek, V. 2007 Mechanisms determining species dominance in a phytoplankton bloom induced by the iron fertilisation experiment EisenEx in the Southern Ocean. Deep Sea Res. I 54, 340–362. (doi:10.1016/j.dsr.2006.12.005)
Atkinson, A., Siegel, V., Pakhomov, E. A. & Rothery, P. 2004 Long-term decline in krill stock and increase in salps within the Southern Ocean. Nature 432, 100–103. (doi:10.1038/nature02996)
Bakun, A. & Weeks, S. J. 2004 Greenhouse gas buildup, sardines, submarine eruptions and the possibility of abrupt degradation of intense marine upwelling ecosystems. Ecol. Lett. 7, 1015–1023. (doi:10.1111/j.1461-0248.2004.00665.x)
Bathmann, U. V., Scharek, R., Klaas, C., Dubischar, C. D. & Smetacek, V. 1997 Spring development of phytoplankton biomass and composition in major water masses of the Atlantic sector of the Southern Ocean. Deep Sea Res. II 44, 51–67.
Beaulieu, S. E. 2002 Accumulation and fate of phytodetritus on the sea floor. In Oceanography and Marine Biology: An Annual Review, eds. Gibson, R. N., Barnes, M. & Atkinson, R. J. London: Taylor and Francis, pp. 171–232.
Bishop, J. K. B., Wood, T. J., Davis, R. E. & Sherman, J. T. 2004 Robotic observations of enhanced carbon biomass and export at 55 degrees S during SOFeX. Science 304, 417–420. (doi:10.1126/science.1087717)
Blain, S. et al. 2007 Effect of natural iron fertilization on carbon sequestration in the Southern Ocean. Nature 446, 1070–1074. (doi:10.1038/nature05700)
Boyd, P. et al. 2000 Phytoplankton bloom upon mesoscale iron fertilisation of polar Southern Ocean waters. Nature 407, 695–702. (doi:10.1038/35037500)
Boyd, P. W. et al. 2007 Mesoscale iron enrichment experiments 1993–2005: synthesis and future directions. Science 315, 612–617. (doi:10.1126/science.1131669)
Boyle, E. A. 1988 Vertical oceanic nutrient fractionation and glacial-interglacial CO2 cycles. Nature 331, 55–56. (doi:10.1038/331055a0)
Buesseler, K. O. & Boyd, P. W. 2003 Will ocean fertilization work? Science 300, 67–68. (doi:10.1126/science.1082959)
Buesseler, K. O. et al. 2008 Ocean iron fertilization: moving forward in a sea of uncertainty. Science 319, 162. (doi:10.1126/science.1154305)
Cassar, N., Bender, M. L., Barnett, B. A., Songmiao, F., Moxim, W. J., Levy, H. II & Tilbrook, B. 2007 The Southern Ocean biological response to aeolian iron deposition. Science 317, 1067–1070. (doi:10.1126/science.1144602)
Chisholm, S. W. & Morel, F. M. M. 1991 What regulates phytoplankton production in nutrient-rich areas of the open sea? Limnol. Oceanogr. 36, U1507–U1511.
Chisholm, S., Falkowski, P. & Cullen, J. 2001 Dis-crediting ocean fertilisation. Science 294, 309–310. (doi:10.1126/science.1065349)
Cisewski, B., Strass, V. H. & Prandke, H. 2005 Upper-ocean vertical mixing in the Antarctic Polar Frontal Zone. Deep Sea Res. II 52, 1087–1108. (doi:10.1016/j.dsr2.2005.01.010)
Cisewski, B., Strass, V. H., Losch, M. & Prandke, H. 2008 Mixed layer analysis of a mesoscale eddy in the Antarctic Polar Front Zone. J. Geophys. Res. Ocean. 113, C05017. (doi:10.1029/2007JC004372)
Coale, K. H. et al. 1996 A massive phytoplankton bloom induced by an ecosystem-scale iron fertilization experiment in the Equatorial Pacific Ocean. Nature 383, 495–501. (doi:10.1038/383495a0)
Coale, K. H. et al. 2004 Southern Ocean iron enrichment experiment: carbon cycling in high- and low-Si waters. Science 304, 408–414. (doi:10.1126/science.1089778)
Crutzen, P. J. 1991 Methane's sinks and sources. Nature 350, 380–381. (doi:10.1038/350380a0)
de Baar, H. J. W. et al. 2005 Synthesis of iron fertilization experiments: from the iron age in the age of enlightenment. J. Geophys. Res. 110, C09S16. (doi:10.1029/2004JC002601)
Ducklow, H. W. & Harris, R. P. (eds.) 1993 JGOFS: The North Atlantic Bloom Experiment. Deep Sea Res. II 40, 1–461.
Falkowski, P. G., Barber, R. T. & Smetacek, V. 1998 Biogeochemical controls and feedbacks on ocean primary production. Science 281, 200–206. (doi:10.1126/science.281.5374.200)
Falkowski, P. et al. 2000 The global carbon cycle: a test of our knowledge of earth as a system. Science 290, 291–296. (doi:10.1126/science.290.5490.291)
Frost, B. W. 1996 Phytoplankton bloom on iron rations. Nature 383, 475–476. (doi:10.1038/383475a0)
Fuhrman, J. A. & Capone, D. G. 1991 Possible biogeochemical consequences of ocean fertilization. Limnol. Oceanogr. 36, 1951–1959.
Galbraith, E. D., Jaccard, S. L., Pedersen, T. F., Sigman, D. M., Haug, G. H., Cook, M., Southon, J. R. & François, R. 2007 Carbon dioxide release from the North Pacific abyss during the last deglaciation. Nature 449, 890–893. (doi:10.1038/nature06227)
Gervais, F., Riebesell, U. & Gorbunov, M. Y. 2002 Changes in primary productivity and chlorophyll a in response to iron fertilization in the Southern Polar Frontal Zone. Limnol. Oceanogr. 47, 1324–1335.
González, H. E. 1992 The distribution and abundance of krill faecal material and oval pellets in the Scotia and Weddell Seas (Antarctica) and their role in particle flux. Polar Biol. 12, 81–91. (doi:10.1007/BF00239968)
Gunson, J. R., Spall, S. A., Anderson, T. R., Jones, A., Totterdell, I. J. & Woodage, M. J. 2006 Climate sensitivity to ocean dimethylsulphide emissions. Geophys. Res. Lett. 33, L07701. (doi:10.1029/2005GL024982)
Haberl, H., Erb, K. H., Krausman, F., Gaube, V., Bondeau, A., Plutzar, C., Gingrich, S., Lucht, W. & Fischer-Kowalski, M. 2007 Quantifying and mapping the human appropriation of net primary production in earth's terrestrial ecosystems. Proc. Natl Acad. Sci. USA 104, 12 942–12 947. (doi:10.1073/pnas.0704243104)
Hamm, C. & Smetacek, V. 2007 Armor: why, when, and how. In Evolution of Primary Producers in the Sea, eds. Falkowski, P. G. & Knoll, A. H. London: Elsevier, pp. 311–332.
Hart, T. J. 1942 Phytoplankton periodicity in Antarctic surface waters. Discovery Reports 21, 261–356.
Henjes, J., Assmy, P., Klaas, C., Verity, P. & Smetacek, V. 2007 Response of microzooplankton (protists and small copepods) to an iron-induced phytoplankton bloom in the Southern Ocean (EisenEx). Deep Sea Res. I 54, 363–384. (doi:10.1016/j.dsr.2006.12.004)
Jacquet, S. H. M., Savoye, N., Dehairs, F., Strass, V. H. & Cardinal, D. 2008 Mesopelagic carbon remineralization during the European iron fertilization experiment. Glob. Biogeochem. Cycles 22, GB1023. (doi:10.1029/2006GB002902)
Jansen, S., Klaas, C., Krägefsky, S., von Harbou, L. & Bathmann, U. 2006 Reproductive response of the copepod Rhincalanus gigas to an iron-induced phytoplankton bloom in the Southern Ocean. Polar Biol. 29, 1039–1044. (doi:10.1007/s00300-006-0147-0)
Jayakumar, D. A., Naqvi, S. W. A., Narvekar, P. V. & George, M. D. 2001 Methane in coastal and offshore waters of the Arabian Sea. Mar. Chem. 74, 1–13. (doi:10.1016/S0304-4203(00)00089-X)
Jickells, T. D. et al. 2005 Global iron connections between desert dust, ocean biogeochemistry, and climate. Science 308, 67–71. (doi:10.1126/science.1105959)
Jin, X. & Gruber, N. 2003 Offsetting the radiative benefit of ocean iron fertilisation by enhancing N2O emissions. Geophys. Res. Lett. 30, 2249. (doi:10.1029/2003GL018458)
Kallel, N., Labeyrie, L. D., Juillet-Leclerc, A. & Duplessy, J. C. 1988 A deep hydrological front between intermediate and deep-water masses in the glacial Indian Ocean. Nature 333, 651–655. (doi:10.1038/333651a0)
Landry, M. R. & Hassett, R. P. 1982 Estimating the grazing impact of marine micro-zooplankton. Mar. Biol. 67, 283–288. (doi:10.1007/BF00397668)
Law, C. S. & Ling, R. D. 2001 Nitrous oxide flux and response to increased iron availability in the Antarctic Circumpolar Current. Deep Sea Res. II 48, 2509–2527. (doi:10.1016/S0967-0645(01)00006-6)
Lawrence, M. G. 2002 Side effects of oceanic iron fertilization. Science 297, 1993. (doi:10.1126/science.297.5589.1993b)
Laws, R. M. 1977 Seals and whales of the Southern Ocean. Phil. Trans. R. Soc. B 279, 81–96. (doi:10.1098/rstb.1977.0073)
Le Quéré, C. et al. 2007 Saturation of the Southern Ocean CO2 sink due to recent climate change. Science 316, 1735–1738. (doi:10.1126/science.1136188)
Liss, P. S. 2007 Trace gas emissions from the marine biosphere. Phil. Trans. R. Soc. A 365, 1697–1704. (doi:10.1098/rsta.2007.2039)
Losch, M., Schukay, R., Strass, V. & Cisewski, B. 2006 Comparison of a NLOM data assimilation product to direct measurements in the Antarctic Polar Frontal Zone: a case study. Ann. Geophys. 24, 3–6.
Maier-Reimer, E., Mikolajewicz, U. & Winguth, A. 1996 Future ocean uptake of CO2: interaction between ocean circulation and biology. Climate Dyn. 12, 711–721. (doi:10.1007/s003820050138)
Martin, J. H. 1990 Glacial–interglacial CO2 change: the iron hypothesis. Paleoceanography 5, 1–13. (doi:10.1029/PA005i001p00001)
Martin, J. H. & Fitzwater, S. E. 1988 Iron deficiency limits phytoplankton growth in the northeast Pacific subarctic. Nature 331, 341–343. (doi:10.1038/331341a0)
Martin, J. H., Knauer, G. A., Karl, D. M. & Broenkow, W. W. 1987 Carbon cycling in the northeast Pacific. Deep Sea Res. A 34, 267–285. (doi:10.1016/0198-0149(87)90086-0)
Martin, J. H. et al. 1994 Testing the iron hypothesis in ecosystems of the equatorial Pacific Ocean. Nature 371, 123–129. (doi:10.1038/371123a0)
Nevison, C. D., Lueker, T. J. & Weiss, R. F. 2004 Quantifying the nitrous oxide source from coastal upwelling. Glob. Biogeochem. Cycles 18, GB1018. (doi:10.1029/2003GB002110)
Petit, J. R. et al. 1999 Climate and atmospheric history of the past 420 000 years from the Vostok ice core, Antarctica. Nature 399, 429–436. (doi:10.1038/20859)
Pollard, R., Sanders, R., Lucas, M. & Statham, P. 2007 The Crozet natural iron bloom and export experiment (CROZEX). Deep Sea Res. II 54, 1905–1914. (doi:10.1016/j.dsr2.2007.07.023)
Poulton, A. J., Mark Moore, C., Seeyave, S., Lucas, M. I., Fielding, S. & Ward, P. 2007 Phytoplankton community composition around the Crozet Plateau, with emphasis on diatoms and Phaeocystis. Deep Sea Res. II 54, 2085–2105. (doi:10.1016/j.dsr2.2007.06.005)
Salter, I., Lampitt, R. S., Sanders, R., Poulton, A., Kemp, A. E. S., Boorman, B., Saw, K. & Pearce, R. 2007 Estimating carbon, silica and diatom export from a naturally fertilised phytoplankton bloom in the Southern Ocean using PELAGRA: a novel drifting sediment trap. Deep Sea Res. II 54, 2233–2259. (doi:10.1016/j.dsr2.2007.06.008)
Sarmiento, J. L. & Orr, J. C. 1991 Three-dimensional simulations of the impact of Southern Ocean nutrient depletion on atmospheric CO2 and ocean chemistry. Limnol. Oceanogr. 36, 1928–1950.
Seeyave, S., Lucas, M. I., Moore, C. M. & Poulton, A. J. 2007 Phytoplankton productivity and community structure in the vicinity of the Crozet Plateau during austral summer 2004/2005. Deep Sea Res. II 54, 2020–2044. (doi:10.1016/j.dsr2.2007.06.010)
Siegenthaler, U. et al. 2005 Stable carbon cycle-climate relationship during the late Pleistocene. Science 310, 1313–1317. (doi:10.1126/science.1120130)
Smetacek, V. 1985 Role of sinking in diatom life-history cycles: ecological, evolutionary and geological significance. Mar. Biol. 84, 239–251. (doi:10.1007/BF00392493)
Smetacek, V. 1999 Diatoms and the ocean carbon cycle. Protist 150, 25–32.
Smetacek, V. 2002 The ocean's veil. Nature 419, 565. (doi:10.1038/419565a)
Smetacek, V. 2007 ¿Es el declive del krill antártico resultado del calentamiento global o del exterminio de las ballenas? In Impactos del calentamiento global sobre los ecosistemas polares, ed. Duarte, C. M. Bilbao, Spain: Fundación BBVA, pp. 45–81.
Smetacek, V. & Nicol, S. 2005 Polar ocean ecosystems in a changing world. Nature 437, 362–368. (doi:10.1038/nature04161)
Smetacek, V. & Passow, U. 1990 Spring bloom initiation and Sverdrup's critical-depth model. Limnol. Oceanogr. 35, 228–234.
Smetacek, V., Scharek, R. & Nöthig, E.-M. 1990 Seasonal and regional variation in the pelagial and its relationship to the life history cycle of krill. In Antarctic Ecosystems, eds. Kerry, K. & Hempel, G. Heidelberg, Germany: Springer, pp. 103–116.
Smetacek, V., Assmy, P. & Henjes, J. 2004 The role of grazing in structuring Southern Ocean pelagic ecosystems and biogeochemical cycles. Antarctic Science 16, 541–558. (doi:10.1017/S0954102004002317)
Spahni, R. et al. 2005 Atmospheric methane and nitrous oxide of the late Pleistocene from Antarctic ice cores. Science 310, 1317–1321. (doi:10.1126/science.1120132)
Stramma, L., Johnson, G. C., Sprintall, J. & Mohrholz, V. 2008 Expanding oxygen-minimum zones in the tropical oceans. Science 320, 655–658. (doi:10.1126/science.1153847)
Strzepek, R. F. & Harrison, P. J. 2004 Photosynthetic architecture differs in coastal and oceanic diatoms. Nature 431, 689–692. (doi:10.1038/nature02954)
Suzuki, K., Saito, H., Hinuma, A., Kiyosawa, H., Kuwata, A., Kawanobe, K., Saino, T. & Tsuda, A. 2006 Comparison of community structure and photosynthetic physiology of phytoplankton in two mesoscale iron enrichment experiments in the NW subarctic Pacific. In Proc. PICES-IFEP Workshop on In Situ Iron Enrichment Experiments in the Eastern and Western Subarctic Pacific.
Tovar-Sánchez, A., Duarte, C. M., Hernández-León, S. & Sañudo-Wilhelmy, S. A. 2007 Krill as a central node for iron cycling in the Southern Ocean. Geophys. Res. Lett. 34, L11601. (doi:10.1029/2006GL029096)
Tsuda, A. et al. 2003 A mesoscale iron enrichment in the western subarctic Pacific induces a large centric diatom bloom. Science 300, 958–961. (doi:10.1126/science.1082000)
Tsuda, A., Saito, H. & Sastri, A. R. 2006 Meso- and microzooplankton responses in the iron-enrichment experiments in the subarctic North Pacific (SEEDS, SERIES and SEEDS-II). In Proc. PICES-IFEP Workshop on In Situ Iron Enrichment Experiments in the Eastern and Western Subarctic Pacific.
Turner, D. & Owens, N. J. P. 1995 A biogeochemical study in the Bellingshausen Sea: overview of the STERNA expedition. Deep Sea Res. II 42, 907–932. (doi:10.1016/0967-0645(95)00056-V)
Turner, S. M., Harvey, M. J., Law, C. S., Nightingale, P. D. & Liss, P. S. 2004 Iron-induced changes in oceanic sulfur biogeochemistry. Geophys. Res. Lett. 31, L14307. (doi:10.1029/2004GL020296)
Tyson, R. V. 1995 Sedimentary Organic Matter. London: Chapman and Hall.
Walter, S., Peeken, I., Lochte, K., Webb, A. & Bange, H. W. 2005 Nitrous oxide measurements during EIFEX, the European Iron Fertilisation Experiment in the subpolar South Atlantic Ocean. Geophys. Res. Lett. 32, L23613. (doi:10.1029/2005GL024619)
Wefer, G. 1989 Particle flux in the ocean: present and past. In Dahlem Conference, eds. Berger, W. H., Smetacek, V. S. & Wefer, G. Chichester, UK: Wiley, pp. 139–154.
Zeebe, R. E. & Archer, D. 2005 Feasibility of ocean fertilization and its impact on future atmospheric CO2 levels. Geophys. Res. Lett. 32, L09703. (doi:10.1029/2005GL022449)
Part III Solar radiation management
10 Global temperature stabilization via controlled albedo enhancement of low-level maritime clouds

John Latham, Philip J. Rasch, Chih-Chieh (Jack) Chen, Laura Kettles, Alan Gadian, Andrew Gettelman, Hugh Morrison, Keith Bower and Tom Choularton

An assessment is made herein of the proposal that controlled global cooling sufficient to balance global warming resulting from increasing atmospheric CO2 concentrations might be achieved by seeding low-level, extensive maritime clouds with seawater particles that act as cloud condensation nuclei, thereby activating new droplets and increasing cloud albedo (and possibly longevity). This chapter focuses on scientific and meteorological aspects of the scheme. Associated technological issues are addressed in Chapter 11. Analytical calculations, cloud modelling and (particularly) GCM computations suggest that, if outstanding questions are satisfactorily resolved, the controllable, globally averaged negative forcing resulting from deployment of this scheme might be sufficient to balance the positive forcing associated with a doubling of CO2 concentration. This statement is supported quantitatively by recent observational evidence from three disparate sources. We conclude that this technique could thus be adequate to hold the Earth's temperature constant for many decades. More work – especially assessments of possible meteorological and climatological ramifications – is required on several components of the scheme, which possesses the advantages that (i) it is ecologically benign – the only raw materials being wind and seawater, (ii) the degree of cooling could be controlled, and (iii) if unforeseen adverse effects occur, the system could be immediately switched off, with the forcing returning to normal within a few days (although the response would take a much longer time).

Geo-Engineering Climate Change: Environmental Necessity or Pandora's Box?, eds. Brian Launder and Michael Thompson. Published by Cambridge University Press. © Cambridge University Press 2010.
10.1 Introduction

Atmospheric clouds exercise a significant influence on climate. They inhibit the passage through the atmosphere of both incoming short-wave solar radiation, some of which is reflected back into space from cloud tops, and long-wave radiation flowing outwards from the Earth's surface. The first of these effects produces a global cooling, the second a warming. On balance, clouds produce a cooling effect, corresponding to a globally averaged negative net forcing of approximately −13 W m−2 (Ramanathan et al. 1989). Since the estimated positive forcing resulting from a doubling of the atmospheric carbon dioxide (CO2) concentration (from the value – approximately 275 ppm – existing at the beginning of the industrial period) is approximately +3.7 W m−2 (Ramaswamy et al. 2001), it is clear that, in principle, deliberate modification of clouds to produce a cooling sufficient to balance global warming resulting from the burning of fossil fuels is feasible. This chapter presents and assesses a proposed scheme for stabilization of the Earth's global mean temperature (in the face of continually increasing atmospheric CO2 concentrations) by seeding clouds in the marine boundary layer (MBL) with seawater aerosol, in order to increase the cloud droplet number concentration and thus the cloud albedo (and possibly longevity), thereby producing a cooling. This chapter focuses attention on the physics and meteorology of the idea. Technological aspects are treated by Salter et al. (Chapter 11). Section 10.2 outlines the global cooling scheme and some cloud model studies of the sensitivity of cloud albedo (reflectivity) enhancement to the meteorological and cloud microphysical parameters involved. Section 10.3 presents some simple calculations designed to illustrate the potential viability of the technique. Section 10.4 presents global climate model (GCM) computations that provide a more rigorous quantitative assessment. Technological implications from the results of these computations are discussed in Section 10.5. A brief discussion of questions and concerns that would need to be satisfactorily examined before any justification would exist for the operational deployment of the technique is presented in Section 10.6. Section 10.7 provides a provisional quantitative assessment of the extent to which global temperature stabilization might be possible with this technique.
10.2 Principle and first assessment of the idea

Low-level, non-overlapped marine stratiform clouds cover about a quarter of the oceanic surface (Charlson et al. 1987) and characteristically possess albedos, A,
in the range 0.3–0.7 (Schwartz & Slingo 1996). They, therefore, make a significant (cooling) contribution to the radiative balance of the Earth. Latham (1990, 2002) proposed a possible technique for ameliorating global warming by controlled enhancement of the natural droplet number concentrations (N0) in such clouds, with a corresponding increase ΔA in their albedo (the first indirect or Twomey effect; Twomey 1977), and also possibly in their longevity (the second indirect or Albrecht effect; Albrecht 1989), thus producing cooling. N0 values in these clouds range typically from approximately 20 to 200 cm−3. The technique involves dissemination – at or close to the ocean surface – of monodisperse seawater (NaCl) droplets approximately 1 μm in size, which possess sufficiently large salt masses always to be activated – as cloud condensation nuclei (CCN) – to form ΔN additional droplets when (shrinking by evaporation in the subsaturated air en route) they rise into the cloud bases. The total droplet concentration N after seeding thus lies between ΔN and (N0 + ΔN), because some of the natural CCN which would be activated in the absence of seeding may not be activated in its presence, owing to the lower supersaturations that prevail. The central physics behind this scheme, which has been authoritatively treated in a considerable number of studies (e.g. Twomey 1977, 1991; Charlson et al. 1987; Albrecht 1989; Wigley 1989; Slingo 1990; Ackerman et al. 1993; Pincus & Baker 1994; Rosenfeld 2000; Stevens et al. 2005), is that an increase ΔN in droplet concentration causes the cloud albedo to increase because the overall droplet surface area is enhanced. It can also increase cloud longevity (tantamount to increasing cloudiness) because the growth of cloud droplets by coalescence to form drizzle or raindrops, which often initiates cloud dissipation, is impeded, since the droplets are smaller and the clouds correspondingly more stable. Possibly significant departures from this simple picture are outlined in Section 10.6. Calculations by the above-mentioned workers indicate that a doubling of the natural droplet concentration (i.e. to N = 2N0) in all such marine stratiform clouds (which corresponds to an increase ΔA of approximately 0.06 in their cloud-top albedo) would produce cooling sufficient roughly to balance the warming associated with CO2 doubling. Latham (1990, 2002) calculated that for a droplet diameter d = 0.8 μm (associated salt mass ms = 10−17 kg) the total (global) seawater volumetric dissemination rate dV/dt required to produce the necessary doubling of N in all suitable marine stratocumulus clouds is approximately 30 m3 s−1, which appears well within the range of modern technology. It is considered (e.g. Charlson et al. 1987) that most natural CCN over the oceans consist of ammonium sulphate particles formed from dimethyl sulphide produced at the ocean surface by planktonic algae. In order to ensure at least a doubling of N, we would need to add
at least 2N0 particles per unit volume. Ideally, their size distribution would be monodisperse, largely in order to avoid the production of ultra-giant nuclei (UGN) (Woodcock 1953; Johnson 1982; De Leeuw 1986), which could act to promote drizzle formation and thus cloud dissipation. The monodispersity of the added particles may also make the clouds more colloidally stable, thus inhibiting coalescence and associated drizzle formation. We point out that ship tracks are a consequence of inadvertent and uncontrolled albedo increase in such clouds, resulting from the addition of effective CCN in the exhausts from the ships, and that our proposed deliberate generation of efficient sea-salt CCN at the ocean surface, thereby (usually) enhancing N, is of course basically a version of a process that happens naturally, via the bursting of air bubbles produced by wave motion. However, except in conditions of high winds or in regions where other aerosol sources are weak, these sea-salt particles constitute only a small fraction of the CCN activated in marine stratocumulus (Latham & Smith 1990). A simplified version of the model of marine stratocumulus clouds developed by Bower et al. (1999) was used (Bower et al. 2006) to examine the sensitivity of albedo enhancement ΔA to the environmental aerosol characteristics, as well as to those of the seawater aerosol of salt mass ms and number concentration ΔN deliberately introduced into the clouds. Values of albedo change ΔA and total droplet number concentration N were calculated for a wide range of values of ms, ΔN and various other cloud parameters. Computations were made for aerosol characteristics pertaining to clean, intermediate and polluted air masses (Spectra A, B and C, respectively). Values of ΔA were calculated from the droplet number concentrations using the method of Schwartz & Slingo (1996). It was found that, for Spectrum B, values of ΔA and N are insensitive to ms over the range 10−17–10−14 kg. For Spectrum A, the insensitivity range is 10−18–10−15 kg, and the ΔA values are typically several times greater (for the same values of ΔN) than those for Spectrum B. For Spectrum C, the ΔA values are much lower than those for Spectra A and B. The above-mentioned threshold value of ΔA (0.06) was achieved for most parameter-value permutations for Spectrum A, a significant fraction for Spectrum B and scarcely any for Spectrum C. For all three aerosol spectra, the calculated values of ΔA and total droplet concentration N were found to be highly sensitive to the imposed additional aerosol concentrations ΔN. The relationship between ΔN and ΔA was found always to be strongly non-linear (e.g. Twomey 1991; Pincus & Baker 1994). These model computations provide provisional quantitative support for the physical viability of the mitigation scheme, as well as offering new insights (Section 10.5) into its technological requirements.
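Two of the figures quoted in this section can be checked with a few lines of arithmetic; the sketch below is ours, and the seawater density (1025 kg m−3) and salinity (35 g kg−1) are assumed values not given in the text.

```python
import math

d = 0.8e-6                      # droplet diameter at creation (m), as in the text
v_d = math.pi / 6 * d**3        # droplet volume (m3)
m_salt = v_d * 1025 * 0.035     # sea-salt mass per droplet (kg), assumed salinity
n_rate = 30.0 / v_d             # droplets/s implied by dV/dt = 30 m3/s

print(f"salt mass per droplet: {m_salt:.1e} kg")   # ~9.6e-18 kg, i.e. ~1e-17 kg
print(f"droplet production:    {n_rate:.1e} s^-1") # ~1.1e20 droplets per second
```

The first number recovers the ms = 10−17 kg quoted above for d = 0.8 μm; the second shows the enormous number flux (of order 10^20 droplets per second, globally) hidden in the modest-sounding 30 m3 s−1.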
10.3 Relationships between spraying rate, albedo change and negative forcing

The simple calculations presented in this section are designed to illustrate the relationships between the deliberately imposed increase in cloud droplet number concentration, ΔN, the associated increase in cloud albedo, ΔA, the resultant globally averaged negative forcing ΔF, and the required continuous seawater aerosol volumetric spray production rate dV/dt. These calculations also provide some indication as to whether or not our global temperature stabilization scheme is quantitatively feasible. For the purposes of this discussion, we assume that the only clouds deliberately seeded with seawater CCN are non-overlapped marine stratiform clouds. As discussed in item 4 of Section 10.6, there are still many unknowns in our characterization of aerosol–cloud interactions and aerosol indirect effects. However, theoretical calculations and global modelling both suggest that the first indirect effect generally dominates over the second, so the latter is disregarded in what follows. For similar reasons, we consider only short-wave radiative effects in this analysis. The average solar irradiance F (W m−2) received at the Earth's surface is

F = 0.25 F0 (1 − AP),   (10.1)
where F0 (= 1370 W m−2) is the solar flux at the top of the atmosphere and AP is the planetary albedo. Thus, an increase ΔAP in planetary albedo produces a forcing ΔF of

ΔF = −340 ΔAP.   (10.2)
We define f1 (= 0.7) as the fraction of the Earth's surface covered by ocean, f2 (= 0.25) as the fraction of the oceanic surface covered by non-overlapped marine stratiform clouds, and f3 as the fraction of oceanic stratiform cloud cover which is seeded. Thus, the average change ΔA in cloud albedo associated with a change ΔAP in planetary albedo is

ΔA = ΔAP/(f1 f2 f3) = −ΔF/(60 f3),   (10.3)
from which it follows that, if f3 = 1, to produce a globally averaged negative forcing of −3.7 W m−2, the required increases in planetary and cloud albedo are 0.011 and 0.062, respectively; the associated percentage changes in albedo are roughly 3.7 and 12 per cent. The cloud albedo increase resulting from seeding the clouds with seawater CCN to increase the droplet number concentration from its unseeded value N0 to N is
Table 10.1 Values of the negative forcing −ΔF (W m−2) derived from equation (10.5) for selected values of N/N0 (the ratio of the seeded to unseeded cloud droplet number concentration) and f3, the fraction of suitable clouds seeded

N/N0    f3 = 1.0    0.7    0.5    0.3    0.17
 2        3.1       2.2    1.6    0.94   0.53
 3        4.9       3.5    2.5    1.5    0.84
 4        6.2       4.4    3.1    1.9    1.1
 5        7.2       5.1    3.6    2.2    1.2
 7        8.8       6.1    4.4    2.6    1.5
10       10.4       7.3    5.2    3.1    1.8
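The table can be reproduced directly from equation (10.5); the short sketch below does so (our code, not part of the original analysis).

```python
import math

# Reproduce Table 10.1 from equation (10.5): -dF = 4.5 * f3 * ln(N/N0).
f3_values = (1.0, 0.7, 0.5, 0.3, 0.17)
print("N/N0 " + " ".join(f"f3={f3:5}" for f3 in f3_values))
for ratio in (2, 3, 4, 5, 7, 10):
    cells = " ".join(f"{4.5 * f3 * math.log(ratio):8.2f}" for f3 in f3_values)
    print(f"{ratio:4} {cells}")
# e.g. f3 = 1.0, N/N0 = 2 gives 3.12, matching the 3.1 W m-2 table entry.
```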
given (Schwartz & Slingo 1996) by

ΔA = 0.075 ln(N/N0)   (10.4)
and it follows from equations (10.3) and (10.4) that

−ΔF = 4.5 f3 ln(N/N0),   (10.5)

which may be rewritten as

(N/N0) = exp(−ΔF/4.5 f3).   (10.6)
It follows from equation (10.6) that if f3 = 1 (all suitable clouds seeded), the value of (N/N0) required to produce a negative forcing of −3.7 W m−2 is 2.3, in reasonable agreement with the estimates of Charlson et al. (1987) and Slingo (1990). Table 10.1 presents the values of globally averaged negative forcing, ΔF (W m−2), derived from equation (10.5) for a range of values of f3 and N/N0. We see that if the fraction f3 of suitable clouds that are seeded falls below approximately 0.3, it is not possible, for values of (N/N0) realistically achievable on a large scale (a rough estimate is (N/N0) < 10), for our scheme to produce a negative forcing of −3.7 W m−2. We also see the distinct non-linearity in the relationship between (N/N0) and ΔF. The volumetric spraying rate is

dV/dt = νd dn/dt = (π/6) d3 dn/dt,   (10.7)
where νd is the volume (m3 ) of a seawater droplet of diameter d (m) at creation and dn/dt (s−1 ) is the rate of spraying of seawater droplets.
We assume that in equilibrium the number of sprayed droplets residing in the atmosphere is constant, i.e. the deliberate creation rate of seawater droplets equals the loss rate. Thus,

dn/dt = (N − N0) AE H f1 f2 f3/(f4 τR) = N0 (N/N0 − 1) AE H f1 f2 f3/(f4 τR),   (10.8)
where AE (m2) is the surface area of the Earth; H (m) is the height over which the seawater droplets are distributed; f4 is the fraction of the sprayed droplets that are not lost at creation and do not move laterally away from regions of selected cloud cover; and τR (s) is the average residence time of the seawater aerosol in the atmosphere. Thus (from equations (10.6)–(10.8)), taking AE = 5.1 × 1014 m2, f1 = 0.7 and f2 = 0.25, we obtain

dV/dt = 4.6 × 1013 f3 d3 (H N0/f4 τR) [exp(−ΔF/4.5 f3) − 1].   (10.9)
Assuming that f3 = 1, f4 = 0.5, d = 0.8 μm = 8 × 10−7 m, H = 1000 m, N0 = 100 cm−3 = 108 m−3 and τR = 3 days = 2.6 × 105 s (M. C. Barth 2008, personal communication; D. H. Lenschow 2008, personal communication; M. H. Smith 2008, personal communication), it follows from equation (10.9) that, for a negative forcing ΔF = −3.7 W m−2, the required total volumetric seawater aerosol dissemination rate is dV/dt = 23 m3 s−1. Keeping all the above parameter values the same but seeding only half of the suitable clouds (f3 = 0.5) yields a value of dV/dt of approximately 37 m3 s−1.
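These two numbers follow directly from equation (10.9); the sketch below (our code) evaluates it with the parameter values just listed.

```python
import math

# Evaluate equation (10.9) with the parameter values listed above.
def spray_rate(dF, f3, f4=0.5, d=8e-7, H=1000.0, N0=1e8, tau_R=2.6e5):
    """Global seawater dissemination rate dV/dt (m3/s) for a forcing dF (W/m2)."""
    return (4.6e13 * f3 * d**3 * H * N0 / (f4 * tau_R)
            * (math.exp(-dF / (4.5 * f3)) - 1.0))

print(f"f3 = 1.0: {spray_rate(-3.7, 1.0):.1f} m3/s")  # ~23 m3/s
print(f"f3 = 0.5: {spray_rate(-3.7, 0.5):.1f} m3/s")  # ~38 m3/s, i.e. the
# 'approximately 37' quoted above; the small difference presumably reflects
# rounding in the constant 4.6e13.
```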
10.4 Global climate modelling computations

Global aspects of the cloud albedo enhancement scheme were examined using two separate models. The first of these was the HadGAM numerical model, which is the atmospheric component of the UK Hadley Centre Global Model, based on the Meteorological Office Unified Model (UM), v. 6.1. It is described in Johns et al. (2004) and contains the New Dynamics Core (Davies et al. 2005). It is run at N96L38 resolution, i.e. 1.25° latitude by 1.875° longitude with 38 vertical levels extending to over 39 km in height. N96 denotes a resolution of 96 two-grid-length waves, i.e. 192 grid points in longitude. It has a non-hydrostatic, fully compressible, deep atmosphere formulation and uses a terrain-following, height-based vertical coordinate. It also includes semi-Lagrangian advection of all prognostic variables except density, and employs the two-stream radiation scheme of Edwards & Slingo (1996). The aerosol species represented include sulphate, black carbon, biomass smoke and sea salt.
The convection scheme is based on the mass flux scheme of Gregory & Rowntree (1990) (but with major modifications), and the large-scale cloud scheme is that of Smith (1990). The HadGAM model was used to calculate five-year mean values of cloud-top droplet effective radius reff (μm), liquid water path, LWP (g m−2), and outgoing short-wave radiation flux Fsw (W m−2) at the top of the atmosphere (TOA). In the control run (no seeding) the globally averaged cloud droplet number concentration, N, was approximately 100 cm−3, and the model was then run again with N increased – in all regions of low-level maritime cloud (below approximately 3000 m, 700 hPa) – to 375 cm−3. Such a value of N should be readily achievable technologically, if our global temperature stabilization scheme were ever to be operationally deployed. The computed 5-year mean distributions of layer cloud effective radius reff (μm) and LWP (g m−2) for unseeded and seeded marine low-level clouds are displayed in Figures 10.1 and 10.2, respectively. They show that increasing the cloud droplet number concentration N from natural values to the seeded figure of N = 375 cm−3 leads to a general decrease in droplet size (Figure 10.1, the first indirect effect) and, through the consequent decrease in the efficiency of precipitation development, an increase in LWP (Figure 10.2, the second indirect effect). The changes in effective radius are clearly evident in the regions of persistent marine stratocumulus off the west coasts of Africa and North and South America, and also over much more extensive regions of the southern oceans. Changes in LWP in these same regions can be perceived but are less pronounced. Figure 10.3 reveals that the imposed increase in N has caused an overall significant negative change ΔF in radiative forcing, which would cause a cooling of the Earth's climate. The largest effects are apparent in the three regions of persistent marine stratocumulus off the west coasts of Africa and North and South America, mentioned earlier, which together cover approximately 3 per cent of the global surface. Lower but appreciable values of negative forcing can be seen throughout the much more extensive regions of the southern oceans. The 5-year mean globally averaged TOA negative forcing resulting from the marine low-level cloud seeding is calculated to be −8.0 ± 0.1 W m−2, more than twice that required to compensate for the 3.7 W m−2 warming associated with a doubling of atmospheric CO2 concentration. A similar calculation was performed in a developmental version of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). The simulations were performed at 1.9° × 2.5° latitude/longitude resolution (26 layers with a top near 40 km) using a newly developed microphysics parametrization (Gettelman et al. 2008; Morrison & Gettelman 2008). That parametrization uses a two-moment scheme predicting cloud mass and particle number for four
Figure 10.1 Five-year mean distributions of cloud-top effective radius reff (μm) in all regions of marine stratocumulus. (a) Control and (b) with N = 375 cm−3 in regions of low-level maritime cloud (see also colour plate).
classes of condensed water (small-particle liquid and ice, and precipitation-sized rain and snow). Three 5-year simulations were conducted. The first simulation (the control) calculated cloud drop number using the drop activation parametrization of Abdul-Razzak & Ghan (2005) with a functional dependence on aerosol
Figure 10.2 Five-year mean distributions of LWP (g m−2 ). (a) Control and (b) with N = 375 cm−3 in regions of low-level maritime cloud (see also colour plate).
type and concentration, and on resolved and turbulent dynamical fields. The other two simulations overrode the cloud droplet number concentrations (NC) below 850 hPa, prescribing them as 375 and 1000 cm−3, respectively, wherever clouds were found. The influence of the warm cloud seeding geo-engineering strategy is assessed by taking the difference between the seeding experiments and the control simulation.
Figure 10.3 Five-year mean difference ΔF (W m−2) in radiative forcing between the control simulation and that in which N = 375 cm−3 in regions of low-level maritime cloud (see also colour plate).
Figure 10.4 shows the top-of-atmosphere short-wave cloud forcing (SWCF) for the NCAR model control simulation (Figure 10.4a) and the difference in SWCF (ΔSWCF) between the experiments, in which the drop number was prescribed to be 375 and 1000 cm−3 below 850 hPa, and the control (Figure 10.4b,c, respectively, with Figure 10.4b showing a quantity similar to that shown in Figure 10.3 for the HadGAM model). The NCAR model shows the effect of the cloud seeding in some of the same regions. The change in SWCF is approximately half the amplitude of that seen in the HadGAM model in the marine stratus and trade cumulus regions. The HadGAM simulations prescribed the drop number to approximately 700 hPa, somewhat higher than the NCAR simulations, but the difference may also result from the many uncertainties in modelling cloud–aerosol interactions in global climate models. Unlike the HadGAM simulations, there is also an intriguing response in the mid-latitude storm tracks, and there are some patches of positive ΔSWCF evident in the simulations as well. Some of the areas of positive ΔSWCF in the simulation with more moderate seeding (to 375 cm−3; i.e. a weakening of the cloud forcing) occur downstream of regions strongly influenced by anthropogenic aerosols (e.g. downstream of China and the eastern USA). In our model, producing clouds with 375 drops per cm3 in these regions would actually constitute a reduction in NC. Those regions are not seen in the simulation where the drop number is increased to 1000 cm−3. Other regions where the SWCF increases slightly (by less than 6 W m−2) in the central Pacific (northern and southern hemispheres) are not common to the two simulations, and
Figure 10.4 Annual average of (a) short-wave cloud forcing (SWCF) of the control simulation, (b) short-wave cloud forcing difference (ΔSWCF) between a geo-engineering experiment in which the cloud drop concentration is set to 375 cm−3 and the control simulation, and (c) ΔSWCF between a geo-engineering experiment in which the cloud drop concentration is set to 1000 cm−3 and the control simulation. For the purpose of this study, we only plot results over the ocean surface (see also colour plate).
we believe that these regions are an artefact of the relative brevity of the simulations, and are indicative of interannual variability. Because the SWCF field exhibits significant spatial variation, it is clear that some geographical locations are more susceptible to cloud seeding than others.
Figure 10.5 Cumulative ΔSWCF (W m−2) against percentage of ocean area, based on the ranked orders of all grid cells over the ocean for the two geo-engineering experiments (N_C = 375 cm−3 and 1000 cm−3). The annual average (ANN) and the seasonal means (DJF: December–January–February; JJA: June–July–August) are plotted for both cases.
In other words, one may significantly reduce the cost of the warm cloud seeding geo-engineering strategy by selecting the locations where cloud seeding achieves the maximum amount of cooling. It is therefore important to identify these optimal locations. To achieve this goal, we first analysed the impact of warm cloud seeding by ranking the intensity of the response (ΔSWCF) in all grid cells over the ocean surface. We considered both the amplitude of the forcing change and the area occupied by each grid cell (varying with the cosine of latitude) in performing the ranking. The accumulated forcing change (ΔSWCF) based on the ranked orders of all grid cells over the ocean surface is presented in Figure 10.5. Ranking was performed on the monthly mean forcings for each month of the simulation and the results were then composited to produce Figure 10.5.
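A minimal sketch of this ranking procedure is given below. It is our reconstruction, not the NCAR analysis code: it assumes the per-cell ΔSWCF values and cell latitudes are already available as arrays, weights each cell by the cosine of its latitude, sorts from strongest to weakest cooling, and accumulates the forcing to give a curve of the kind shown in Figure 10.5.

```python
import numpy as np

def cumulative_swcf(d_swcf: np.ndarray, lat_deg: np.ndarray):
    """Rank ocean grid cells by area-weighted seeding response.

    d_swcf: change in short-wave cloud forcing (W m^-2) per ocean cell;
    lat_deg: cell-centre latitudes (degrees).
    Returns (% of ocean area covered, cumulative global-mean forcing)."""
    w = np.cos(np.radians(lat_deg))   # cell area varies with cos(latitude)
    w = w / w.sum()                   # normalized area fractions
    order = np.argsort(d_swcf)        # strongest cooling (most negative) first
    area_pct = 100.0 * np.cumsum(w[order])
    cum_forcing = np.cumsum(d_swcf[order] * w[order])
    return area_pct, cum_forcing
```

Applying this to each monthly mean and compositing, as described above, yields the seasonal curves.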
It is evident that the December–January–February (DJF) seasonal mean has the strongest response and the June–July–August (JJA) seasonal mean has the weakest response. The annual average (ANN) falls between the two seasonal means. Since the Sun is most intense in the southern hemisphere during DJF, we expect most of the important locations for seeding to reside in that hemisphere during that season, with the converse true during JJA. The stronger response to seeding during DJF for a given areal extent may be explained by the enhanced susceptibility of the more pristine clouds of the southern hemisphere. In the NCAR model, optimal cloud seeding over 25 per cent of the ocean surface might produce a net cooling close to 3.5 or 4 W m−2 in DJF if the cloud drop number concentration is 375 or 1000 cm−3, respectively. Following this same strategy, weaker cooling is expected in JJA (approximately 2.5 W m−2) in both geo-engineering experiments. The reasons for the forcing reaching a maximum for cloud fractions below 100 per cent (Figure 10.5) are still under investigation.

The corresponding optimal locations from this ranking are displayed in Figure 10.6 for the N_C = 375 cm−3 experiment and two choices of seeding area. The results indicate that the preferential locations for cloud seeding depend strongly on season. The optimal areas in the summer hemisphere occur first in marine stratus and shallow trade cumulus regions, and secondarily in mid-latitude storm track regions. Both regions would need to be seeded to reach forcing amplitudes that could balance that associated with a doubling of CO2.
10.5 Technological implications of the foregoing calculations

The calculations and computations presented in Sections 10.2 to 10.4 yield some significant implications, outlined below, with respect to technological aspects of the global temperature stabilization technique.

(1) The sensitivity studies (Section 10.2) show that the albedo changes ΔA are insensitive, over a wide range, to the values of salt mass m_s. It follows that the choice of disseminated droplet size can, to a considerable extent, be dictated by technological convenience. These studies also indicate that it would be optimal for albedo enhancement to confine our salt masses within the range 10^−17–10^−15 kg, corresponding to seawater droplets in the approximate size range 0.3–3.0 μm. This is because smaller particles may not be nucleated and larger ones could act as UGN (ultragiant nuclei) and thus perhaps promote drizzle onset and concomitant cloud dissipation. It therefore seems sensible to disseminate seawater droplets of diameter approximately 0.8 μm, thereby minimizing the required volumetric flow rate (a back-of-envelope check is sketched after this list).

(2) Monodispersity of the seawater aerosol, within the above-mentioned size range, has little impact on the values of ΔA. However, it remains desirable because it could enhance cloud stability and therefore longevity.
Figure 10.6 Optimal geographical locations for warm cloud seeding based on the N_C = 375 cm−3 experiment. (a) 5% of the ocean area in DJF, (b) 5% of the ocean area in JJA, (c) 25% of the ocean area in DJF and (d) 25% of the ocean area in JJA.
(3) The calculations presented earlier indicate that optimal seeding of all suitable maritime clouds may produce values of globally averaged negative forcing ΔF of at least −3.7 W m−2. If so (see discussion of uncertainties in Section 10.6), the areal fraction of suitable cloud cover seeded, f_3, required to maintain global temperature stabilization could – for a period of some decades – be appreciably lower than unity, thus rendering the practical problem of achieving adequate geographical dispersal of disseminated CCN less daunting.

(4) It follows from (3) that there exists, in principle, latitude to: (a) avoid seeding in regions where deleterious effects (such as rainfall reduction over adjacent land) are predicted; and (b) seed preferentially in unpolluted regions, where the albedo changes ΔA for a fixed value of N are a maximum.

(5) The high degree of seasonal variability in the optimal geographical distributions of suitable cloud (Section 10.4) underlines the desirability of a high degree of mobility in the seawater aerosol dissemination system.
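The back-of-envelope check referenced in item (1) is sketched here. The arithmetic is ours, with an assumed seawater salt mass fraction of 3.5 per cent: the salt residue of a 0.8 μm seawater drop lies near the lower end of the optimal 10^−17–10^−15 kg mass range, which is why that diameter minimizes the volume of water to be pumped.

```python
import math

# Salt residue left by one sprayed seawater drop (our arithmetic; the 3.5%
# salt mass fraction of seawater is an assumed standard value).
RHO_SEAWATER = 1025.0   # kg m^-3
SALT_FRACTION = 0.035   # assumed mass fraction of dissolved salt

def salt_mass(diameter_m: float) -> float:
    """Dry salt mass (kg) remaining after a seawater drop evaporates."""
    volume = math.pi / 6.0 * diameter_m ** 3
    return volume * RHO_SEAWATER * SALT_FRACTION

print(f"{salt_mass(0.8e-6):.1e} kg")   # ~1e-17 kg: the lower end of the
                                       # optimal 1e-17 to 1e-15 kg range
```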
10.6 Issues requiring further study

In addition to requiring further work on technological issues concerning the cloud albedo enhancement scheme (Salter et al., Chapter 11), we need to address some limitations in our understanding of important meteorological aspects, and also make a detailed assessment of possibly adverse ramifications of the deployment of the technique, for which there would be no justification unless these effects were found to be acceptable. Some of these issues were addressed by Latham (2002) and Bower et al. (2006), and so are not examined herein. Others are now outlined.

(1) It was assumed in the specimen calculations (Section 10.3) that approximately half of the seawater droplets disseminated near the ocean surface would be transported upwards by turbulent air motions to enter suitable clouds and form additional cloud droplets, i.e. f_4 = 0.5. In actuality, f_4 will probably vary considerably according to the meteorological situation. Thus, we need to obtain reliable estimates of f_4 for all situations of interest. Airborne measurements (Smith et al. 1993) and estimates based on bubble-bursting studies (Blanchard 1969) suggest that f_4 is greater than 0.1. D. H. Lenschow and M. H. Smith (2008, personal communications) suggest that the fraction will be close to 0.5. It appears that the calculated spraying rates (Section 10.3) are readily achievable technologically, and could easily be increased to accommodate any likely value of f_4.

(2) It may prove useful to examine the possibility of charging the seawater droplets and harnessing the Earth's electric field to help transport them to cloud base.

(3) If our technique were to be implemented, global changes in the distributions and magnitudes of ocean currents, temperature, rainfall and wind would result. Even if it were possible to seed clouds relatively evenly over the Earth's oceans, so that the effects of this type could be minimized, they would not be eliminated. Also, the technique would still alter the land–ocean temperature contrast, since the radiative forcing produced would be only over the oceans. In addition, we would be attempting to neutralize the warming effect of vertically distributed greenhouse gases with a surface-based cooling effect, which could have consequences such as changes in static stability, which would need careful evaluation. Thus, it is vital to engage in a prior assessment of associated climatological and meteorological ramifications, which might involve currently unforeseen feedback processes. It is important to establish the level of local cooling which would have significant effects on ocean currents, local meteorology and ecosystems. This will require a fully coupled ocean/atmosphere climate system model.

(4) R. Wood (personal communication) states that an important recently identified aspect of the aerosol–cloud–climate problem for low clouds is that the macrophysical properties of the clouds respond to changes in aerosol concentration in ways not foreseen at the time of formulation of the Albrecht effect (i.e. that reduction in warm rain production – resulting from increasing cloud droplet concentration and reduced droplet size – leads to thicker clouds). For marine stratocumulus clouds, recent studies with a large eddy
simulation (LES) model (Ackerman et al. 2004) and a simple mixed layer model (Wood 2007) show that the response of the cloud liquid water path on relatively short timescales (less than 1 day) is a balance between moistening of the MBL due to precipitation suppression, which tends to thicken the cloud, and drying by the increased entrainment associated with the extra vigour that a reduction in precipitation content brings to the MBL. Under some conditions the clouds thicken, and under others the clouds thin. Thus, it is unjustifiably simplistic to assume that adding CCN to the clouds will always brighten them according to the Twomey equation. Also, even without precipitation, LES studies (e.g. Wang et al. 2003; Xue & Feingold 2006) show that the enhanced water vapour transfer rates associated with smaller, more numerous droplets can lead to feedbacks on the dynamics that tend to offset, to some extent, the enhanced reflectivity due to the Twomey effect. These effects are either not treated or are poorly treated by GCM parametrizations of clouds and boundary-layer processes. It is clearly critical to an authoritative assessment of our scheme to conduct a full quantitative examination of them. IPCC (2007) has stressed the importance and current poor understanding of aerosol–cloud interactions.
Other refinements or extensions that need to be made to current work on the albedo enhancement idea include the following: inclusion of direct aerosol and long-wave radiative effects; examination of the lateral dispersion of aerosol from its dissemination sites; further estimation of the areal coverage of suitable maritime clouds; estimation of aerosol lifetimes – both within and outside of clouds – and the fraction of disseminated aerosol particles that enter suitable clouds. A rough comparison of the amounts of salt entering the MBL from natural processes and via seeding indicates that if the scheme were in full operation, i.e. spraying enough seawater to balance the warming associated with CO2 doubling, seeding would contribute less than 10 per cent of the total salt. In the first few decades of operation, the amount of disseminated salt would be several orders of magnitude less than that produced naturally.
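The salt-budget comparison in the preceding paragraph can be checked roughly as follows. The sketch is our arithmetic: the full-operation spray rate of approximately 45 m3 s−1 is taken from Chapter 11, while the natural sea-salt emission rate is an assumed literature-scale figure of order 3000 Tg yr−1, not a number given in this chapter.

```python
# Rough check of the salt-budget claim. NATURAL_SALT_TG_YR is an assumed
# order-of-magnitude value for the natural sea-salt flux into the MBL.
SPRAY_RATE_M3_S = 45.0        # full-operation seawater spray rate (Chapter 11)
RHO_SEAWATER = 1025.0         # kg m^-3
SALT_FRACTION = 0.035         # assumed salt mass fraction of seawater
NATURAL_SALT_TG_YR = 3300.0   # assumed natural sea-salt flux, Tg yr^-1

seconds_per_year = 365.25 * 24 * 3600
seeded_salt_tg_yr = (SPRAY_RATE_M3_S * RHO_SEAWATER * SALT_FRACTION
                     * seconds_per_year / 1e9)   # 1 Tg = 10^9 kg
print(f"seeded salt: {seeded_salt_tg_yr:.0f} Tg/yr "
      f"({100 * seeded_salt_tg_yr / NATURAL_SALT_TG_YR:.1f}% of natural)")
# -> ~51 Tg/yr, i.e. a few per cent of the assumed natural flux, consistent
#    with the 'less than 10 per cent' statement above.
```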
10.7 Discussion

It follows from the discussion in Section 10.6, particularly items 3 and 4, that although two separate sets of GCM computations (Section 10.4) agree in concluding that this cloud seeding scheme is in principle powerful enough to be important in global temperature stabilization, there are important, clearly defined gaps in our knowledge which force us to conclude that we cannot state categorically at this stage whether the technique is in fact capable of producing significant negative forcing. There are also currently unresolved technological issues (Salter et al., Chapter 11).
If it is found that the unresolved issues defined in Section 10.6 (especially item 4) do not yield the conclusion that the cloud albedo seeding technique is much weaker than is estimated from the GCM computations, we may conclude that it could stabilize the Earth's average temperature T_AV beyond the point at which the atmospheric CO2 concentration reaches 550 ppm, but probably not up to the 1000 ppm value. The corresponding amount of time for which the Earth's average temperature could be stabilized depends, of course, on the rate at which the CO2 concentration increases. Simple calculations show that if it continues to increase at the current level, and if the maximum amount of negative forcing that the scheme could produce is −3.7 W m−2, T_AV could be held constant for approximately a century. The required global seawater dissemination rate dV/dt (if f_3 = 1) would be approximately 0.14 m3 s−1 at the beginning of this period, increasing each year to a final value of approximately 23 m3 s−1.

Recent experimental studies of both the indirect and direct aerosol effects involving data from the MODIS and CERES satellites (Quaas et al. 2006, 2008) have led to a study by Quaas & Feichter (2008) of the quantitative viability of the global temperature stabilization technique examined in this chapter. They concluded that enhancement (via seeding) of the droplet number concentration in marine boundary-layer cloud to a uniform sustained value of 400 cm−3 over the world oceans (from 60◦ S to 60◦ N) would yield a short-wave negative forcing of −2.9 W m−2. They also found that the sensitivity of cloud droplet number concentration to a change in aerosol concentration is virtually always positive, with larger sensitivities over the oceans. These experimental results are clearly supportive of our proposed geo-engineering idea, as is the work of Platnick & Oreopoulos (2008) and Oreopoulos & Platnick (2008), which also involves MODIS satellite measurements.

Further encouraging support for the quantitative validity of our global temperature stabilization scheme is provided by the field research of Roberts et al. (2008) in which, for the first time, the enhancement of albedo was measured on a cloud-by-cloud basis, and linked to increasing aerosol concentrations by using multiple, autonomous, unmanned aerial vehicles to observe simultaneously the cloud microphysics, vertical aerosol distribution and associated solar radiative fluxes. In the presence of long-range transport of dust and anthropogenic pollution, the trade cumuli have higher droplet concentrations and are on average brighter, the observations indicating a higher sensitivity of radiative forcing by trade cumuli to increases in cloud droplet concentrations than has been reported hitherto. The aerosol–cloud forcing efficiency was as much as −60 W m−2 per 100 per cent cloud fraction for a doubling of droplet concentrations and associated increase in liquid water content; the accompanying direct top-of-the-atmosphere effect of this elevated aerosol layer was found to be −4.3 W m−2.
Our view regarding priorities for work in the near future is that we should focus attention on outstanding meteorological issues outlined earlier in this chapter, particularly in Section 10.6, as well as technological ones described in Chapter 11. At the same time, we should develop plans for executing a limited-area field experiment in which selected clouds are inoculated with seawater aerosol, and airborne, ship-borne and satellite measurements are made to establish, quantitatively, the concomitant microphysical and radiative differences between seeded and unseeded adjacent clouds and thus, hopefully, to determine whether or not this temperature stabilization scheme is viable. Such further field observational assessment of our technique is of major importance.

Advantages of this scheme, if deployed, are that (i) the amount of cooling could be controlled – by measuring cloud albedo from satellites and turning disseminators on or off (or up and down) remotely as required; (ii) if any unforeseen adverse effect occurred, the entire system could be switched off instantaneously, with cloud properties returning to normal within a few days; (iii) it is relatively benign ecologically, the only raw materials required being wind and seawater; and (iv) there exists flexibility to choose where local cooling occurs, since not all suitable clouds need to be seeded.

A further positive feature of the technique is revealed by comparing the power required to produce and disseminate the seawater CCN with that associated with the additional reflection of incoming sunlight. As determined in Chapter 11, approximately 1500 spray vessels would be required to produce a negative forcing of −3.7 W m−2. Each vessel would require approximately 150 kW of electrical energy to atomize and disseminate seawater at the necessary continuous rate (as well as to support navigation, controls, communications, etc.), so that the global power requirement is approximately 2.3 × 10^8 W. Ideally, this energy would be derived from the wind. The additional rate of loss of planetary energy, resulting from cloud seeding, required to balance the warming caused by CO2 doubling would be ΔF · A_E = −1.9 × 10^15 W, where A_E is the surface area of the Earth. Thus, the ratio of reflected power to required dissemination power is approximately 8 × 10^6. This extremely high 'efficiency' is largely a consequence of the fact that the energy required to increase the seawater droplet surface area by four or five orders of magnitude – from that existing on entry to the clouds to the surface area achieved when reflecting sunlight from cloud top – is provided by Nature.

Acknowledgements

We gratefully acknowledge NCAS for the use of HPCx computing resources, the UK Met Office for use of the HadGAM numerical model and EPSRC for providing funding for L. K. We also thank Mary Barth, Steve Ghan, Brian Hoskins, Andy
Jones, Don Lenschow, James Lovelock, Stephen Salter, Mike Smith, Tom Wigley, Lowell Wood and Rob Wood, who provided very helpful advice and comments during the course of this work. The National Center for Atmospheric Research is sponsored by the National Science Foundation.

References

Abdul-Razzak, H. & Ghan, S. J. 2005 Influence of slightly soluble organics on aerosol activation. J. Geophys. Res. 110, D06206. (doi:10.1029/2004JD005324)
Ackerman, A. S., Toon, O. B. & Hobbs, P. V. 1993 Dissipation of marine stratiform clouds and collapse of the marine boundary layer due to the depletion of cloud condensation nuclei by clouds. Science 262, 226–229. (doi:10.1126/science.262.5131.226)
Ackerman, A. S., Kirkpatrick, M. P., Stevens, D. E. & Toon, O. B. 2004 The impact of humidity above stratiform clouds on indirect aerosol climate forcing. Nature 432, 1014–1017. (doi:10.1038/nature03174)
Albrecht, B. A. 1989 Aerosols, cloud microphysics and fractional cloudiness. Science 245, 1227–1230. (doi:10.1126/science.245.4923.1227)
Blanchard, D. C. 1969 The oceanic production rate of cloud nuclei. J. Rech. Atmos. 4, 1–11.
Bower, K. N., Jones, A. & Choularton, T. W. 1999 A modelling study of aerosol processing by stratocumulus clouds and its impact on general circulation model parameterisations of cloud and aerosol. Atmos. Res. 50, 317–344. (doi:10.1016/S0169-8095(98)00100-8)
Bower, K. N., Choularton, T. W., Latham, J., Sahraei, J. & Salter, S. H. 2006 Computational assessment of a proposed technique for global warming mitigation via albedo-enhancement of marine stratocumulus clouds. Atmos. Res. 82, 328–336. (doi:10.1016/j.atmosres.2005.11.013)
Charlson, R. J., Lovelock, J. E., Andreae, M. O. & Warren, S. G. 1987 Oceanic phytoplankton, atmospheric sulphur, cloud albedo and climate. Nature 326, 655–661. (doi:10.1038/326655a0)
Davies, T., Cullen, M., Malcolm, A., Mawson, M., Staniforth, A., White, A. & Wood, N. 2005 A new dynamical core for the Met Office's global and regional modelling of the atmosphere. Q. J. R. Meteorol. Soc. 131, 1759–1782. (doi:10.1256/qj.04.101)
De Leeuw, G. 1986 Vertical profiles of giant particles close above the sea surface. Tellus B 38, 51–61.
Edwards, J. M. & Slingo, A. 1996 Studies with a flexible new radiation code. I: Choosing a configuration for a large-scale model. Q. J. R. Meteorol. Soc. 122, 689–719. (doi:10.1002/qj.49712253107)
Gettelman, A., Morrison, H. & Ghan, S. J. 2008 A new two-moment bulk stratiform cloud microphysics scheme in the Community Atmosphere Model, version 3 (CAM3). II: Single-column and global results. J. Climate 21, 3660–3679. (doi:10.1175/2008JCLI2116.1)
Gregory, D. & Rowntree, P. R. 1990 A mass flux convection scheme with representation of cloud ensemble characteristics and stability dependent closure. Mon. Weather Rev. 118, 1483–1506. (doi:10.1175/1520-0493(1990)1182.0.CO;2)
IPCC 2007 Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (eds. Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K. B., Tignor, M. & Miller, H. L.). Cambridge, UK: Cambridge University Press.
Johns, T. et al. 2004 HadGEM1: Model Description and Analysis of Preliminary Experiments for the IPCC Fourth Assessment Report, Hadley Centre Technical Note no. 55. Exeter, UK: Met Office. See http://www.metoffice.gov.uk/research/hadleycentre/pubs/HCTN.
Johnson, D. B. 1982 The role of giant and ultragiant aerosol particles in warm rain initiation. J. Atmos. Sci. 39, 448–460. (doi:10.1175/1520-0469(1982)0392.0.CO;2)
Latham, J. 1990 Control of global warming? Nature 347, 339–340. (doi:10.1038/347339b0)
Latham, J. 2002 Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Lett. 3, 52–58. (doi:10.1006/Asle.2002.0048)
Latham, J. & Smith, M. H. 1990 Effect on global warming of wind-dependent aerosol generation at the ocean surface. Nature 347, 372–373. (doi:10.1038/347372a0)
Morrison, H. & Gettelman, A. 2008 A new two-moment bulk stratiform cloud microphysics scheme in the Community Atmosphere Model, version 3 (CAM3). I: Description and numerical tests. J. Climate 21, 3642–3659. (doi:10.1175/2008JCLI2105.1)
Oreopoulos, L. & Platnick, S. 2008 Radiative susceptibility of cloudy atmospheres to droplet number perturbations. II: Global analysis from MODIS. J. Geophys. Res. 113, D14S21. (doi:10.1029/2007JD009655)
Pincus, R. & Baker, M. B. 1994 Effect of precipitation on the albedo susceptibility of clouds in the marine boundary layer. Nature 372, 250–252. (doi:10.1038/372250a0)
Platnick, S. & Oreopoulos, L. 2008 Radiative susceptibility of cloudy atmospheres to droplet number perturbations. I: Theoretical analysis and examples from MODIS. J. Geophys. Res. 113, D14S20. (doi:10.1029/2007JD009654)
Quaas, J. & Feichter, J. 2008 Climate change mitigation by seeding marine boundary layer clouds. Poster paper presented at the session 'Consequences of geo-engineering and mitigation as strategies for responding to anthropogenic greenhouse gas emissions' at the EGU General Assembly, Vienna, Austria, 2008.
Quaas, J., Boucher, O. & Lohmann, U. 2006 Constraining the total aerosol indirect effect in the LMDZ and ECHAM4 GCMs using MODIS satellite data. Atmos. Chem. Phys. 6, 947–955.
Quaas, J., Boucher, O., Bellouin, N. & Kinne, S. 2008 Satellite-based estimate of the combined direct and indirect aerosol climate forcing. J. Geophys. Res. 113, D05204. (doi:10.1029/2007JD008962)
Ramanathan, V., Cess, R., Harrison, E., Minnis, P., Barkstrom, B., Ahmad, E. & Hartmann, D. 1989 Cloud-radiative forcing and climate: results from the Earth Radiation Budget Experiment. Science 243, 57–63. (doi:10.1126/science.243.4887.57)
Ramaswamy, V., Boucher, O., Haigh, J., Hauglustaine, D., Haywood, J., Myhre, G., Nakajima, T., Shi, G. Y. & Solomon, S. 2001 Radiative forcing of climate change. In Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (eds. Houghton, J. T., Ding, Y., Griggs, D. J., Noguer, M., van der Linden, P. J., Dai, X., Maskell, K. & Johnson, C. A.). Cambridge, UK: Cambridge University Press, pp. 351–418.
Roberts, G. C., Ramana, M. V., Corrigan, C., Kim, D. & Ramanathan, V. 2008 Simultaneous observations of aerosol–cloud–albedo interactions with three stacked unmanned aerial vehicles. Proc. Natl Acad. Sci. USA 105, 7370–7375. (doi:10.1073/pnas.0710308105)
Rosenfeld, D. 2000 Suppression of rain and snow by urban and industrial air pollution. Science 287, 1793–1796. (doi:10.1126/science.287.5459.1793)
Schwartz, S. E. & Slingo, A. 1996 Clouds, chemistry, and climate. In Proc. NATO Advanced Research Workshop (eds. Crutzen, P. & Ramanathan, V.). Heidelberg, Germany: Springer, pp. 191–236.
Slingo, A. 1990 Sensitivity of the Earth's radiation budget to changes in low clouds. Nature 343, 49–51. (doi:10.1038/343049a0)
Smith, R. N. B. 1990 A scheme for predicting layer clouds and their water content in a general circulation model. Q. J. R. Meteorol. Soc. 116, 435–460. (doi:10.1002/qj.49711649210)
Smith, M. H., Park, P. M. & Consterdine, I. E. 1993 Marine aerosol concentrations and estimated fluxes over the sea. Q. J. R. Meteorol. Soc. 119, 809–824. (doi:10.1002/qj.49711951211)
Stevens, B., Vali, G., Comstock, K., Wood, R., VanZanten, M., Austin, P. H., Bretherton, C. S. & Lenschow, D. H. 2005 Pockets of open cells and drizzle in marine stratocumulus. Bull. Am. Meteorol. Soc. 86, 51–57. (doi:10.1175/BAMS-86-1-51)
Twomey, S. 1977 Influence of pollution on the short-wave albedo of clouds. J. Atmos. Sci. 34, 1149–1152. (doi:10.1175/1520-0469(1977)0342.0.CO;2)
Twomey, S. 1991 Aerosols, clouds and radiation. Atmos. Environ. 25A, 2435–2442.
Wang, S., Wang, Q. & Feingold, G. 2003 Turbulence, condensation, and liquid water transport in numerically simulated nonprecipitating stratocumulus clouds. J. Atmos. Sci. 60, 262–278. (doi:10.1175/1520-0469(2003)0602.0.CO;2)
Wigley, T. M. L. 1989 Possible climate change due to SO2-derived cloud condensation nuclei. Nature 339, 365–367. (doi:10.1038/339365a0)
Wood, R. 2007 Cancellation of aerosol indirect effects in marine stratocumulus through cloud thinning. J. Atmos. Sci. 64, 2657–2669. (doi:10.1175/JAS3942.1)
Woodcock, A. H. 1953 Salt nuclei in marine air as a function of altitude and wind force. J. Atmos. Sci. 10, 362–371. (doi:10.1175/1520-0469(1953)0102.0.CO;2)
Xue, H. & Feingold, G. 2006 Large-eddy simulations of trade wind cumuli: investigation of aerosol indirect effects. J. Atmos. Sci. 63, 1605–1622. (doi:10.1175/JAS3706.1)
11 Sea-going hardware for the cloud albedo method of reversing global warming

Stephen Salter, Graham Sortino and John Latham
Following the review by Latham et al. (Chapter 10) of a strategy to reduce insolation by exploiting the Twomey effect, the present chapter describes in outline the rationale and underlying engineering hardware that may bring the strategy from concept to operation. Wind-driven spray vessels will sail back and forth perpendicular to the local prevailing wind and release micron-sized drops of seawater into the turbulent boundary layer beneath marine stratocumulus clouds. The combination of wind and vessel movements will treat a large area of sky. When residues left after drop evaporation reach cloud level they will provide many new cloud condensation nuclei giving more but smaller drops and so will increase the cloud albedo to reflect solar energy back out to space. If the possible power increase of 3.7 W m−2 from double pre-industrial CO2 is divided by the 24-hour solar input of 340 W m−2, a global albedo increase of only 1.1 per cent will produce a sufficient offset. The method is not intended to make new clouds. It will just make existing clouds whiter. This chapter describes the design of 300-tonne ships powered by Flettner rotors rather than conventional sails. The vessels will drag turbines resembling oversized propellers through the water to provide the means for generating electrical energy. Some will be used for rotor spin, but most will be used to create spray by pumping 30 kg s−1 of carefully filtered water through banks of filters and then to micro-nozzles with piezoelectric excitation to vary drop diameter. The rotors offer a convenient housing for spray nozzles with fan assistance to help initial dispersion. The ratio of solar energy reflected by a drop at the top of a cloud to the energy needed to make the surface area of the nucleus
on which it has grown is many orders of magnitude and so the spray quantities needed to achieve sufficient global cooling are technically feasible.
11.1 Introduction

Other contributions to this volume by Caldeira & Wood (Chapter 13) and Rasch et al. (Chapter 12) examine the possibilities of injecting a sulphur aerosol into the upper layers of the stratosphere to increase albedo. Here, as proposed in Chapter 10 and in Latham et al. (1990, 2002) and Bower et al. (2006), a different strategy is developed that exploits the Twomey effect, a striking, if inadvertent, demonstration of which is provided in Figure 11.1. In this NASA satellite image, the long white streaks are caused by sulphates in the trails of exhausts from ship engines, which provide extra condensation nuclei for new drops. Since, in the scheme we propose, the aim is to increase the solar reflectivity of such low-level maritime clouds, and since a fine salt aerosol provides an admirable replacement for the sulphates whose effectiveness is evident in Figure 11.1, it seemed appropriate for the sprays to be dispersed from sea-going vessels (rather than, say, low-flying aircraft) and for the source of the sprays to be drawn from the ocean itself. Thus, the present chapter explores some of the design issues and concepts in using a fleet of wind-driven spray vessels to achieve the required albedo increase.

The first step in vessel design is to estimate the spray volumes needed. Cloud albedo A depends on cloud depth Z in metres, liquid water content L, usually expressed in ml m−3, and the concentration of cloud drops n, usually expressed as the number per ml of air. For engineering purposes the equations in Twomey's classic paper (1977), usefully discussed by Schwartz & Slingo (1996), can be condensed by the use of a variable K(Z, L, n). If

K(Z, L, n) ≡ 0.15 Z L^(2/3) n^(1/3),   (11.1)

then the albedo is

A(Z, L, n) = K(Z, L, n) / (K(Z, L, n) + 0.827).   (11.2)
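Equations (11.1) and (11.2) are simple enough to transcribe directly; the sketch below is ours, not the authors' code. One caveat on units: reproducing the spot value quoted later in the chapter (A = 0.495 at Z = 300 m, L = 0.3 ml m−3, n = 65 cm−3) requires L to be entered in kg m−3 (0.3 ml m−3 of liquid water is 3 × 10^−4 kg m−3), which we assume is the intended convention.

```python
# Direct transcription of equations (11.1) and (11.2) (a sketch, not the
# authors' code). Assumed units: Z in metres, n in cm^-3, L in kg m^-3.
def K(Z_m: float, L_kg_m3: float, n_cm3: float) -> float:
    """Equation (11.1): K = 0.15 Z L^(2/3) n^(1/3)."""
    return 0.15 * Z_m * L_kg_m3 ** (2.0 / 3.0) * n_cm3 ** (1.0 / 3.0)

def albedo(Z_m: float, L_kg_m3: float, n_cm3: float) -> float:
    """Equation (11.2): A = K / (K + 0.827)."""
    k = K(Z_m, L_kg_m3, n_cm3)
    return k / (k + 0.827)

print(albedo(300.0, 3e-4, 65.0))    # ~0.495 for a typical unseeded cloud
print(albedo(300.0, 3e-4, 375.0))   # a seeded cloud of the same depth is brighter
```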
Following Figure 2 of Schwartz & Slingo, this expression has been plotted in Figure 11.2 of this chapter against the cloud-droplet-number concentration for a typical liquid water content of 0.3 ml m−3 and for a wide range of cloud depths. The vertical black lines are the range of cloud drop concentrations for Pacific and
Figure 11.1 Demonstration of the Twomey effect. The photograph is a NASA satellite image of ship tracks over the Bay of Biscay. It was images such as this that triggered the Twomey work (see also colour plate).
[Figure 11.2 axes: albedo against drop concentration (cm−3) for cloud depths from 37.5 to 2400 m, with vertical lines marking the Bennartz (2007) Pacific minimum of 40 cm−3 and North Atlantic maximum of 89 cm−3.]
Figure 11.2 Cloud top reflectivity as a function of drop concentration for various cloud thicknesses and a liquid water content of 0.3 g m−3 . Adapted from Schwartz & Slingo (1996) with additions of the present range of concentrations for Pacific and Atlantic suggested by Bennartz (2007).
North Atlantic given by Bennartz (2007). Changing the liquid water content by factors of 2 either way shifts the curves slightly downwards, but because the slopes of the curves in the commonly occurring conditions are the same, this has little effect on the magnitude of the reflectivity change.

11.2 An atmospheric energy balance calculation

There is a vast amount of existing information on most of the parameters needed to calculate cooling as a function of spray rate, but it is distributed between many computers around the world and has been saved in different formats with different spatial resolutions and sampling rates, using different recovery software and different access protocols. Most of this information has now been collected, decoded, interpolated, unified and stored in a database as 6596 equal-area (7.72 × 10^10 m2) cells of a reduced Gaussian grid. This allows selective interrogation by an efficient parsing routine (Sortino 2006).

If there is to be double pre-industrial CO2 with no temperature change, then solar reflection needs to increase by approximately 3.7 W m−2, or 2000 terawatts globally. This is about the electrical output of 1.8 million nuclear power stations of 1100 MW each. The question is how many spray vessels with how much spray equipment placed where at which season will be needed? Calculations can be done separately for each of the equal-area cells. The greatest uncertainties concern the estimates of the present number of cloud condensation nuclei at various times and places, and the drop lifetimes. This is because it is the fractional change in drop numbers in clouds that drives the change in albedo.

To demonstrate a spot test of equation (11.2) for reasonably typical conditions, let the cloud depth Z be 300 m, with a liquid water content L of 0.3 ml m−3, and use n = 65 cm−3 for the average mid-ocean drop concentration from the range of values suggested by Bennartz (2007) to calculate an albedo of A(Z, L, n) = 0.495. The effect of injecting 30 kg s−1 of seawater as 0.8 μm drops but confining it to just one of the equal-area cells will be to increase the number of new nuclei per cell by 1.12 × 10^17 s−1. It will take some time (perhaps 2 hours) for turbulence to disperse the evaporated spray residues through the boundary layer, but the cleanliness of the mid-ocean air and the hydrophilic nature of the salty residue will make them very effective condensation nuclei. A large fraction of those that reach the high humidity at the cloud base will form newer but smaller drops with the same total liquid water content as before. It will take some further time before they wash out or coalesce with large drops. The lowest estimate for drop life is approximately 1 day, giving an increase of 9.67 × 10^21 drops in each cell to bring the total to 1.47 × 10^22. The depth of the marine boundary layer is often between 500 and 1500 m. If we can take the depth of a cell of the reduced Gaussian grid to be
1000 m, the new concentration of cloud drops will be 191 cm−3. This will make the new value of A(Z, L, n) = 0.584. The mean 24-hour equinoctial solar input at the equator is 440 W m−2, while at the latitude of Patagonia it is reduced to 240 W m−2. If spray sources can migrate with the seasons, a typical value of 340 W m−2 seems reasonable, even conservative. The resulting change of albedo will increase reflected power by 30.26 W m−2 or 2.33 TW over the 7.72 × 10^10 m2 area of one cell.

We cannot be sure that spray sources will always be under the right kind of cloud. The most conservative cooling estimate would be based on the assumption of completely random, non-intelligent deployment of spray vessels. This would reduce the 2.33 TW cooling by the fraction of cover of suitable low-level stratocumulus. This is given by Charlson et al. (1987) as 0.18 and would reduce reflected power from a single source of 30 kg s−1 to 420 GW. However, a lower concentration of nuclei over a wide area is more effective than a high one over a small area, and the lifetime of nuclei under clear skies should be much longer than in cloud. It may turn out to be better to release spray in air masses that are cloudless but are predicted to become cloudy after some dispersal has taken place.

These crude engineering lumped calculations should be performed with the actual values at a representative sample of times for every cell that has not been excluded on grounds of being downwind of land with dirty air, upwind of drought-stricken regions or too close to busy shipping routes. The wind speed data for each cell should be checked to ensure that there is enough input power since, as will be developed shortly, wind energy provides the principal source for driving the vessels and creating the spray. With an efficient generator, the 30 kg s−1 flow rate will be reached at 8 m s−1 wind speed. If the nucleus lifetime was the longest estimate of 5 days (Houghton 2004), this would bring the concentration up to levels found over land and lead to much reduced effectiveness. Cells will be placed in rank order to see how many are needed to achieve any target cooling and either how many vessels should be put in each cell or how many cells should be treated by one vessel. Vessel movements can be planned by looking at the best-cell list for the next month.

The equations used for Figure 11.2, together with lumped assumptions about what is in reality a wide spread of values, allow the approximate prediction of global cooling as a function of spray rate from purely randomly placed spray sources, as shown in Figure 11.3. The circle shows the approximate increase in positive forcing since the start of the industrial revolution. As the spraying rate is increased, the gain in reflected power evidently shows diminishing returns. But if these lumped assumptions are correct, the spray rate to cancel the 3.7 W m−2 effect of a doubling of pre-industrial CO2 is between 30 and 70 m3 s−1.
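The lumped spot test above can be carried through numerically. The sketch below is our reconstruction of that arithmetic, with the boundary-layer (cell) depth of 1000 m and the 1-day drop life taken from the text; it reproduces the quoted concentration of approximately 191 cm−3 and the cooling of approximately 2.3 TW per cell.

```python
import math

# Our sketch of the chapter's lumped spot-test arithmetic.
def albedo(Z_m, L_kg_m3, n_cm3):
    # Equations (11.1) and (11.2); L in kg m^-3 (0.3 ml m^-3 = 3e-4 kg m^-3)
    k = 0.15 * Z_m * L_kg_m3 ** (2.0 / 3.0) * n_cm3 ** (1.0 / 3.0)
    return k / (k + 0.827)

CELL_AREA = 7.72e10   # m^2, one reduced-Gaussian-grid cell
BL_DEPTH = 1000.0     # m, assumed boundary-layer (cell) depth
SPRAY = 30.0          # kg s^-1 of seawater from one source
DROP_D = 0.8e-6       # m, sprayed drop diameter
LIFETIME = 86400.0    # s, the lowest drop-life estimate (~1 day)
SOLAR_IN = 340.0      # W m^-2, typical 24-hour solar input

drop_mass = math.pi / 6.0 * DROP_D ** 3 * 1025.0     # kg per seawater drop
rate = SPRAY / drop_mass                             # ~1.12e17 new nuclei s^-1
added = rate * LIFETIME                              # ~9.67e21 per cell per day
background = 65.0e6 * CELL_AREA * BL_DEPTH           # 65 cm^-3 over the cell
n_new = (background + added) / (CELL_AREA * BL_DEPTH * 1e6)   # back to cm^-3
print(f"new drop concentration ~ {n_new:.0f} cm^-3")          # ~191 cm^-3

dA = albedo(300.0, 3e-4, n_new) - albedo(300.0, 3e-4, 65.0)   # ~0.09
extra = dA * SOLAR_IN
print(f"extra reflected power ~ {extra:.1f} W m^-2 "
      f"= {extra * CELL_AREA / 1e12:.2f} TW per cell")        # ~2.3 TW
```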
[Figure 11.3 plots global reflected power change (W m−2) against global spray rate (m3 s−1) for three initial drop concentrations: the Pacific minimum of 40 cm−3, a mean of 65 cm−3 and the Atlantic maximum of 89 cm−3. The assumptions table embedded in the figure lists: boundary-layer depth 1000 m; cloud depth 300 m (Schwartz & Slingo 1996); liquid water 0.3 ml m−3 (Schwartz & Slingo 1996); drop life 59 hours (Smith 1991); low (not high) cloud fraction 0.18 (Charlson et al. 1987); initial albedo 0.495; 24-hour power in 340 W m−2.]
Figure 11.3 Global cooling as a function of spray rate for the assumptions listed above, non-intelligent spraying and the range of initial nuclei concentration suggested by Bennartz (2007). The circle shows warming since the start of the industrial revolution. It could be reversed by spraying approximately 10 m3 s−1. The question mark is a guess for the effect of twice pre-industrial CO2. Assumptions obtained from Charlson et al. (1987), Schwartz & Slingo (1996) and Smith et al. (1991).
It is also useful to calculate the spray amount that would 'hold the fort' long enough for renewable energy technologies to be deployed by cancelling the annual increase in global warming, probably approximately 40 mW m−2. Cancelling this annual increase would require a spray rate of less than 150 kg s−1, even with non-intelligent positioning.

Suitable sites for spraying need plenty of incoming sunshine to give something to reflect. They must have a high fraction of low-level marine stratocumulus cloud. They should have few high clouds because these will reduce incoming energy and send the reflected energy down again. There should be reliable but not extreme winds to give spray vessels sufficient thrust. There should be a low density of shipping and icebergs. It helps to have a low initial density of cloud condensation nuclei because it is the fractional change that counts. This suggests sea areas distant from dirty or dusty land upwind. Owing to a possible anxiety over the effect of extra cloud condensation nuclei on rainfall, areas upwind of land with a drought problem should be avoided.

Figure 11.4 shows maps for four seasons indicating the suitability of different sea areas based on the combination of one possible set of the selection criteria. Clearly, seasonal migration of the spray vessels is desirable and the southern oceans are particularly suitable for treatment in the southern summer. The very best all-year sites are off the coasts of California, Peru and Namibia. Regions in which marine
Figure 11.4 Results of a parameter combination based on a set of selection criteria of sunshine, initial CCN concentration, cloud cover and wind speed for four quarters of 2001 from Sortino (2006). Red is best but yellow is fine. Seasonal migration is indicated. (a) January–March, (b) April–June, (c) July–September and (d) October–December. (See also colour plate.)
currents are flowing towards the Arctic are of special interest partly because cooling this water might contribute to preserving Arctic ice cover, which is itself a powerful reflector of solar energy, and partly because a reduction in the release rate of methane from the melting of Siberian permafrost might be achieved.
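The kind of parameter combination behind Figure 11.4 can be sketched as a simple per-cell scoring rule. The function below is purely illustrative: the weights, the wind window and the reference values are invented for this example (the real criteria are described by Sortino 2006), but it captures the logic that sunshine, low-cloud cover and workable winds raise a cell's suitability while a high background CCN count lowers it.

```python
# Purely illustrative scoring rule in the spirit of Figure 11.4; every
# weight and threshold below is an assumption made for this sketch.
def suitability(sunshine_w_m2, low_cloud_frac, ccn_cm3, wind_m_s):
    wind_ok = 1.0 if 5.0 <= wind_m_s <= 12.0 else 0.3   # reliable, not extreme
    pristine = 65.0 / max(ccn_cm3, 1.0)   # the fractional change in CCN counts
    return (sunshine_w_m2 / 340.0) * low_cloud_frac * pristine * wind_ok

# a pristine stratocumulus cell versus one downwind of polluted land
print(suitability(sunshine_w_m2=380.0, low_cloud_frac=0.6,
                  ccn_cm3=50.0, wind_m_s=8.0))    # high score
print(suitability(sunshine_w_m2=300.0, low_cloud_frac=0.3,
                  ccn_cm3=300.0, wind_m_s=8.0))   # low score
```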
11.3 Energy and propulsion

Energy is needed to make the spray. The proposed scheme will draw all the energy from the wind. Numbers of remotely controlled spray vessels will sail back and forth, perpendicular to the local prevailing wind. The motion through the water will drive underwater 'propellers' acting in reverse as turbines to generate electrical energy needed for spray production. Each unmanned spray vessel will have a global positioning system, a list of required positions and satellite communications to allow the list to be modified from time to time, allowing them to follow suitable cloud fields, migrate with the seasons and return to port for maintenance.

The problems of remotely operating and maintaining ropes, sails and reefing gear will be avoided if the vessels use Flettner rotors instead of sails. These are vertical
Figure 11.5 Anton Flettner’s first rotor ship, the Baden-Baden, crossed the Atlantic in 1926.
spinning cylinders that use the Magnus effect to produce forces perpendicular to the wind direction. Anton Flettner built two sea-going ships. The first, named Buckau, then renamed Baden-Baden (Figure 11.5), crossed the Atlantic in 1926 (Seufert & Seufert 1983). The rotors allow a sailing vessel to turn about its own axis, apply ‘brakes’ and go directly into reverse. They even allow self-reefing at a chosen wind speed. Flettner’s rotor system weighed only one-quarter of the conventional sailing rig which it replaced. The rotor ships could sail 20◦ closer to the wind than unconverted sister ships. The heeling moment on the rotor flattened out in high wind speeds and was less than the previous bare rigging. With a wind on her quarter, the ship would heel into the wind. The only disadvantage of these vessels is that they have to tack to move downwind. Energy has to be provided for electric motors to spin the rotors, but this was typically 5–10 per cent of the engine power for a conventional ship of the same thrust. (After the Atlantic crossing, Flettner obtained orders for six more. He built one, Barbara, but had the rest cancelled as a result of the 1929 depression.) Flettner used drums of steel and, later, aluminium. Today much lighter ones could be built with Kevlar or carbon-reinforced epoxy materials. His main problem seems to have been to find bearings capable of taking the large aerodynamic forces at quite high velocities despite the geometric distortions of heavily loaded structures. The development by SKF of geometrically tolerant rolling bearings will have removed many of the difficulties. A major wind-turbine
Figure 11.6 A collection of measured and calculated lift and drag coefficients for spinning cylinders.
manufacturer, Enercon, commissioned a Flettner rotor ship which was launched in August 2008. The launch design showed propulsion by four rotors, 4 m in diameter and 27 m tall.
11.4 Rotor lift and drag

The lift forces of a spinning cylinder are very much higher than those of a textile sail or an aircraft wing having the same projected area. Potential theory predicts that the lift per unit length of rotor should be 2π times the product of the surface speed of the rotor and the far-field wind speed. This means that, for a constant rotor speed, it will rise with the first power of wind speed rather than with the square. If rotor surface speed and wind speed are kept in proportion, square-law equations can be used (as in aircraft design) for comparison with wings and sails. The spin ratio (defined as local rotor surface speed over far-field wind speed in a frame moving with the vessel) acts like the angle of incidence of the aerofoil section of an aircraft wing. The lift coefficient from ideal potential theory, as used with the square of velocity in aircraft design, is shown as the heavy line of Figure 11.6. The open circles are from wind tunnel tests by Reid (1924) on a 115-mm diameter cylinder. He reported excessive vibration sufficient to stop the test at 3000 rpm and a tunnel speed of 10 m s−1, which would be a spin ratio of 1.79, but did not report results for spin ratios above 4.32.
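Two of the numbers above are easy to check (the arithmetic is ours): the heavy line of Figure 11.6 corresponds to an ideal lift coefficient of 2π times the spin ratio, and Reid's 3000 rpm on a 115-mm cylinder in a 10 m s−1 tunnel gives a spin ratio close to the quoted 1.79.

```python
import math

def spin_ratio(diameter_m: float, rpm: float, wind_m_s: float) -> float:
    """Rotor surface speed divided by far-field wind speed."""
    surface_speed = math.pi * diameter_m * rpm / 60.0
    return surface_speed / wind_m_s

def cl_ideal(spin: float) -> float:
    """Ideal potential-theory lift coefficient for a spinning cylinder."""
    return 2.0 * math.pi * spin

alpha = spin_ratio(0.115, 3000.0, 10.0)
print(f"Reid's spin ratio ~ {alpha:.2f}")   # ~1.81, matching the quoted 1.79
print(f"ideal C_L at that spin ~ {cl_ideal(alpha):.1f}")
```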
Figure 11.7 John Marples' Cloudia (a rebuilt Searunner 34), with Thom fences, on test at Fort Pierce, FL, February 2008. With a rotor drive power of 600 W, she could sail faster than the beam wind, stop, go into reverse and yaw 180◦ in either direction about her own axis. Funding for work on Cloudia was provided by the Discovery Channel and organized by Impossible Pictures. (© Discovery Channel.)
It is well known that part of the drag on an aircraft wing is due to the permanent tip vortex generated by the positive pressure on the under surface driving air to the negative pressure on the upper surface. The effect can be minimized by high aspect-ratio wings, such as those of the albatross, and by tip fins. It was for this reason that Flettner added discs to the tops of his rotors (see Figure 11.5). As a further design development, Thom (1934) experimented with multiple discs (or fences) and found that they produced very much higher lift coefficients and sometimes even negative drag coefficients. His data for disc diameters three times the rotor diameter, placed at intervals of 0.75 of the rotor diameter, are plotted as open triangles in Figure 11.6. The negative drag coefficients imply that some forward drive power is being taken from the rotor drive. Also plotted in Figure 11.6, with filled circles, are coefficients from a numerical simulation carried out by Mittal & Kumar (2003) for an infinitely long cylinder. The falling drag values, even going negative, are of interest and provide qualitative support for Thom's observations. All predictions agree quite well up to spin ratios of approximately 3, but diverge for higher values. A photograph of a sea-going yacht conversion by John Marples incorporating Flettner rotors with Thom fences is shown in Figure 11.7. An artist's impression of the final spray vessel is shown in Figure 11.8.
Figure 11.8 A conceptual Flettner spray vessel with Thom fences. The wind would be blowing from the reader's right-hand side, the rotor spin would be clockwise seen from above and the rotor thrust to the left. Vessels can also report sea and air temperatures, humidity, solar input, the direction and velocities of winds and currents, atmospheric pressure, visibility, cloud cover, plankton count, aerosol count, salinity and radio reception, and could even rescue yachtsmen in distress. (© J. MacNeill 2006.)
Figure 11.9 Numerical lift coefficients versus time (Mittal & Kumar 2003). (Courtesy of Journal of Fluid Mechanics.)
torque needed to spin his cylinders, but then made a mistake in scaling up the torque coefficient to the much higher Reynolds numbers needed for practical applications. This was spotted by Norwood (1991), who confirmed his own torque calculations with a model test. The patterns of the air flow associated with the vibrations of Mittal’s numerical predictions show clear vortex shedding as in the flow over a stalled aircraft wing. The vortex axes are parallel to the spin axis. Some aircraft designers put small vortex generators on the upper wing surfaces to induce pairs of vortices with axes parallel to the line of flight. These stabilize the air flow against separation. A single disc will centrifuge air outwards and lose all its kinetic energy. But perhaps closely packed discs with root fillets, as in Figure 11.8, may also induce pairs of vortices returning some of the kinetic energy of spin to the rotor core. The resistance to
buckling of the double curvature of the fillets on the discs will make them much stronger. 11.5 Spray generation Provided that salt residues are of sufficient size to achieve nucleation, it is the number of drops rather than the mass of spray which matters. The aim is a monodisperse spray with a diameter of 0.8 μm but with the option for some controlled diameter variation. Spinning discs, ultrasonic excitation of Faraday waves and colliding jets of high-pressure water/air solutions have all been studied. The final choice uses silicon micro-fabrication technology. A hexagonal array of 1483 submicron holes will be etched through an 8-μm layer of silicon to meet a 50-μm hole through the thickness of a 0.5-mm wafer. This will be repeated 1345 times within the area of a 3.2-mm hole in a Yokota YST130N stainless steel disc. This hole will be one of 499 spread across a 100-mm wafer to give nearly 109 micro-nozzles in each of the 18 wafers of a spray vessel. Ultrasonic excitation can be used to reduce drop size from the value predicted by Rayleigh (1878). The silicon will be protected by an oxide layer sealed with vapour-deposited Parylene. Energy losses are dominated by the viscosity through the 8-μm layer. The nozzle banks must be drenched in desalinated water when the system is idle. A weakness of the micro-nozzle approach is that particles much smaller than a nozzle can form an arch to clog it. Fortunately, the need to remove viruses from ground water for drinking purposes has produced a good selection of ultrafiltration products that can filter to a better level than is needed. Suppliers guarantee a life of 5 years provided that back-flushing can be done at the right intervals. Each rotor has a Grundfoss down-hole pump that feeds 17 m3 s−1 to a bank of eight filters with blister valves to allow any filter to be back-flushed. Norit X-flow filters have an excellent record for pre-filtration in reverse osmosis desalination plant (van Hoof et al. 1999). A trash-grid made from titanium mesh will prevent jelly fish and plastic bags from jamming the pump. If it is fed with a current of 90 A, it can also produce 2 ppm electrolytic chlorine to prevent biological growths. Electrical energy for spray and rotor drive will be generated by a pair of 2.4-m diameter axial-flow turbines on either side of the hull as shown in Figure 11.10. These are very much larger than any propellers needed for a vessel of this size but can act as propellers for 10 hours in windless conditions using energy from a bank of Toshiba SCiB batteries. The vessels will also carry a liquid-cooled version of the Zoche ZO 01A radial diesel aero engine to give trans-ocean range in emergency. The turbine rotation speed will be limited by cavitation to approximately 80 rpm. This is fast enough for the use of polyphase permanent-magnet rim generators built into the turbine ducts. Tiles of neodymium–boron magnets will be moved past wet
Figure 11.10 A cross-sectional view through rotor and turbines.
Sea-going hardware for the cloud albedo method of reversing global warming 243
printed-circuit pancake stator windings sealed in glass-flake epoxy Parylene. The axial thrust on the rotor is taken by a pair of 45◦ SKF spherical roller thrust bearings aft of the rotor and an SKF CARB bearing at the forward end.
11.6 Rotor design Rotor dimensions of 20 m height and 2.4 m core diameter are chosen to produce a thrust sufficient for full turbine power in a wind speed of 8 m s−1 and look very small compared with conventional sails. The difficult region is at the root of the internal mast where the design must resolve the many conflicts between r r r r r r
the transfer of large thrust forces and moments from the mast to the hull; the possible need to deal with wave impact; the need to send water to the spray-release system; the transmission of drive torque to the rotors at variable speeds in either direction; the pumping of a large volume of air through the rotors for spray dispersion; and the need to allow extension and deflection of the rotor and mast by some form of universal joint with radial and torsional stiffness but with the freedom for the rotor to distort in heave, roll and pitch.
The main thrust and the weight of the motor are taken by a pair of 45◦ spherical roller bearings placed slightly below half the height of the rotor. Forces due to uneven aerodynamic loads and roll acceleration are taken through a tri-link mechanism at the bottom of the rotor to a pair of angular-contact ball-bearings matched for low swash play. The tri-link allows the rotor to stretch and bend without loading the bearings. It turns out that the design is rigidity limited to stay within the acceptable angular movement of the spherical bearings rather than being strength limited. The drive motor, shown in Figure 11.11, has a 1-m hole and is built around outside of the mast. It is made with moving magnets and a pancake stator, very similar to that of the turbine. Spray will be released just above the upper bearing into an upward airstream of 12 m s−1 produced by fan blades just below the upper bearing. The fan blade angle has to change when rotor direction changes to allow the vessel to go about. The air passage at the bottom of the rotor is narrowed by a factor of 4, so that the Bernoulli pressure drop can provide suction for the rotor surface to prevent flow separation as shown by Prandtl in 1904 (see Braslow et al. 1951; Schlichting 1979, ch. 14). Motor drive torque is based on fan requirements and reverse engineering the quoted drive power values from Flettner ships which indicate torsional drag
Figure 11.11 A motor with a hole large enough to fit outside the mast solves the conflicts between rotor loads, torque drive and water pipes.
Figure 11.12 (a) Glauert’s torque coefficients against Reynolds number for spin ratios 2–6 and (b) against spin ratio for Reynolds numbers 0.5, 1 and 2 million. Flettner used a safety factor of 2 and so shall we. But if the suction idea works, we can hope for even lower drive torque requirements.
coefficients of approximately 0.006. This is above that predicted by Glauert (1957) and Padrino & Joseph (2006), as shown in Figure 11.12.
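The suction available from the narrowed air passage described above can be bounded with Bernoulli's equation. A minimal sketch, assuming incompressible flow and a standard air density; the 12 m s⁻¹ airstream and the 4:1 narrowing come from the text, everything else is our assumption:

```python
# Bernoulli estimate (ours) of the suction at the narrowed rotor-base passage.
rho_air = 1.2        # kg m^-3, assumed sea-level air density
v_duct = 12.0        # m s^-1, upward airstream (text)
narrowing = 4.0      # area reduction factor (text)

v_throat = v_duct * narrowing                       # continuity: A1*v1 = A2*v2
dp = 0.5 * rho_air * (v_throat**2 - v_duct**2)      # pressure drop at the throat
print(f"suction ~ {dp:.0f} Pa")                     # ~1300 Pa below ambient
```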
11.7 Project costs

A development programme has been planned to reduce technical uncertainties. These mainly involve the design of an efficient spray generator, a study of drop life and dispersion, a study of the present distribution of cloud condensation nuclei, a limited-area field experiment and rigorous meteorological modelling, especially of any adverse effects. Very few uncertainties will remain after the expenditure of the first £2 million over 2 years. It will need perhaps £25 million and a further 3 years to complete research and development of the reliable hardware for spray vessels, including the first fully instrumented, full-scale, crewed and sea-going prototype. Once there is experience of its operation, it will cost approximately £30 million for tooling, which will allow a large number of spray vessels to be built rapidly in the event of a global emergency.

The vessels are expected to have a displacement of 300 tonnes and a plant rating of 150 kW. Much industrial equipment costs approximately £1000 kW⁻¹. Medium-sized ships cost approximately £2000 tonne⁻¹ dead weight. Smaller ones may be more expensive. It seems likely that production spray vessels will cost between £1 and £2 million each.

Most small- and medium-sized yachts are now built with glass-reinforced plastic. The expected displacement of the present vessels may be above the size for which this material is suitable, and so the first prototype is being designed for steel
construction, which is convenient for subsequent modifications involving extra holes and welded attachments. Alternatively, an attractive material for mass production would be ferrocement, owing to its excellent long-term resistance to seawater. There is also a growing number of interesting plastic-reinforced cement materials that should be considered.

Filter manufacturers will guarantee a 5-year life and expect that most filters will last for 10 years. Enough is known about bearing loads to achieve long life, and the main unavoidable maintenance cost will be anti-fouling treatments. For ships moving continuously in cold water, anti-fouling intervals will be longer than for ones frequently moored in warm harbours. These figures suggest that eventually the maintenance cost will be low in comparison with the return on capital.

If world temperatures are to be kept steady with no carbon reduction, the working fleet would have to be increased by approximately 50 vessels a year, plus extra ones to replace any lost. If the assumptions used for Figure 11.3 are correct, the cancellation of the 3.7 W m⁻² associated with a doubling of pre-industrial CO2 will need a spray rate of approximately 45 m³ s⁻¹, and perhaps less with skilful vessel deployment. If 0.03 m³ s⁻¹ is the right design choice for one spray vessel, this could come from a working fleet of approximately 1500.
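The fleet sizing above follows from simple division, and the same numbers fix the drop production rate per vessel. A minimal sketch (the drop-flux line is our own illustration):

```python
# Fleet sizing and per-vessel drop flux from the figures in Section 11.7.
import math

total_spray = 45.0      # m^3 s^-1 to cancel 3.7 W m^-2 (text)
per_vessel = 0.03       # m^3 s^-1 design choice per vessel (text)
print(round(total_spray / per_vessel))       # 1500 vessels

d = 0.8e-6                                   # m, target drop diameter (text)
drop_volume = math.pi / 6.0 * d**3           # m^3 per spherical drop
print(f"{per_vessel / drop_volume:.1e} drops per second per vessel")  # ~1.1e17
```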
11.8 Conclusions

It is possible that a fleet of remotely controlled wind-driven spray vessels can exploit the Twomey effect to make useful reductions to the input of solar energy to the sea. It can also provide valuable meteorological and oceanographic data.

The Flettner rotor, perhaps with Thom fences, is an attractive alternative to sails owing to its high lift coefficients and high lift–drag ratios, but mainly for its easy control by computer and its convenience for housing spray plant producing an upward air flow.

Long-life ultrafiltration modules can deliver seawater with absolute filtration below 0.1 μm but with the salt still present. This will not clog 0.8-μm nozzles manufactured in silicon wafers using lithographic techniques from the micro-fabrication industry. So far, this seems to be the most energy-efficient spray system that can be devised. With stronger notch-tolerant materials for the nozzle grid, the energy requirement can approach the absolute minimum needed for the creation of new surface area against the force of surface tension.
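That surface-tension floor is easy to estimate. A minimal sketch, assuming a seawater surface tension of roughly 0.073 N m⁻¹ (our assumption) together with the 0.8-μm drop diameter and 0.03 m³ s⁻¹ spray rate quoted in this chapter:

```python
# Lower bound (ours) on spray power: energy to create new droplet surface.
gamma = 0.073        # N m^-1, assumed surface tension of seawater
d = 0.8e-6           # m, drop diameter (text)
spray_rate = 0.03    # m^3 s^-1 per vessel (text)

area_per_m3 = 6.0 / d                          # m^2 of droplet surface per m^3 of water
power_W = gamma * area_per_m3 * spray_rate     # minimum power against surface tension
print(f"{power_W / 1e3:.1f} kW")               # ~16 kW, cf. the 150-kW plant rating
```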
Predictions for global cooling power as a function of spray rate depend strongly on assumptions about the initial concentration of condensation nuclei and the lifetime of spray, but only weakly on cloud liquid–water content and depth. It is possible that 50 spray vessels costing approximately £1–2 million each could cancel the thermal effects of a 1-year increase in world CO2.

The immediate effect of albedo control is local cooling of the sea surface. It could be targeted at vulnerable regions, such as coral reefs and diminishing polar ice. It may reduce the frequency and severity of hurricanes and extend the sea area suitable for phytoplankton growth. However, the oceans are an effective thermal store, and currents are an efficient transport mechanism, so initial effects will eventually spread worldwide. Sea temperatures have a strong influence on world climate. Initially, the deployment of spray vessels in an attempt to replicate favourable historic oceanic temperature patterns will delay the need for perfect understanding of all the many complicated interactions.

Spray releases with control of time, place, quantity and drop diameter will give atmospheric physicists a rare chance to try controlled experiments and so help in the understanding of atmospheric aerosols that have such powerful effects on climate.

While a major effort should be put into the study of all possible side effects of keeping sea temperatures at present values (or other values of our choosing), many of the side effects appear to be benign and less dangerous than those of large, unbridled temperature rises. Unsuitable places can be avoided, and spraying can be stopped instantly, with all effects removed in a few days. Albedo control can do nothing for ocean acidity.

Acknowledgements

All geo-engineers have been inspired by the pioneering work of James Lovelock. Key suggestions about spray generation are due to Lowell Wood and Tom Stevenson. NASA and the ISCCP have been extremely generous with information and patient in explaining how we can use it. Their continued work on understanding and protecting the home planet has earned respect and admiration from many countries round the world. Funding for work on Cloudia was provided by the Discovery Channel and organized by Impossible Pictures. The National Center for Atmospheric Research is sponsored by the National Science Foundation. Any opinions, findings and conclusions or recommendations expressed in this chapter are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References

Bennartz, R. 2007 Global assessment of marine boundary layer cloud droplet number concentration from satellite. J. Geophys. Res. 112, D02201. (doi:10.1029/2006JD007547)
Bower, K., Choularton, T., Latham, J., Sahraei, J. & Salter, S. 2006 Computational assessment of a proposed technique for global warming mitigation via albedo-enhancement of marine stratocumulus clouds. Atmos. Res. 82, 328–336. (doi:10.1016/j.atmosres.2005.11.013)
Braslow, A. L., Burrows, D. L., Tetervin, N. & Visconti, F. 1951 Experimental and Theoretical Studies of Area Suction for the Control of the Laminar Boundary Layer, NACA Report no. 1025. Available from Cranfield University NACA archive.
Charlson, R. J., Lovelock, J. E., Andreae, M. O. & Warren, S. G. 1987 Oceanic phytoplankton, atmospheric sulphur and climate. Nature 326, 655–661. (doi:10.1038/326655a0)
Glauert, M. B. 1957 The flow past a rapidly rotating cylinder. Proc. R. Soc. A 242, 108–115. (doi:10.1098/rspa.1957.0157)
Houghton, J. 2004 Global Warming: The Complete Briefing. Cambridge, UK: Cambridge University Press.
Latham, J. 1990 Control of global warming. Nature 347, 339–340. (doi:10.1038/347339b0)
Latham, J. 2002 Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Lett. 3, 52. (doi:10.1006/asle.2002.0048)
Mittal, S. 2004 Three-dimensional instabilities in flow past a rotating cylinder. ASME J. Appl. Mech. 71, 89–95.
Mittal, S. & Kumar, B. 2003 Flow past a rotating cylinder. J. Fluid Mech. 476, 303–334. (doi:10.1017/S0022112002002938)
Norwood, J. 1991 Performance Prediction for 21st Century Multihull Sailing Yachts. London, UK: Amateur Yacht Research Society. See www.ayrs.org/
Padrino, J. C. & Joseph, D. D. 2006 Numerical study of the steady state uniform flow past a rotating cylinder. J. Fluid Mech. 557, 191–223. (doi:10.1017/S0022112006009682)
Rayleigh, L. 1878 On the instability of jets. Proc. Lond. Math. Soc. s1-10, 4–13. (doi:10.1112/plms/s1-10.1.4)
Reid, E. G. 1924 Tests of Rotating Cylinders: Technical Notes, NACA-TN-209. Available from Cranfield University NACA archive.
Schlichting, H. 1979 Boundary Layer Theory. New York: McGraw-Hill.
Schwartz, S. E. & Slingo, A. 1996 Enhanced shortwave radiative forcing due to anthropogenic aerosols. (Intelligible even to engineers.) In Clouds, Chemistry and Climate, eds. Crutzen, P. & Ramanathan, V. Heidelberg, Germany: Springer, pp. 191–236.
Seufert, W. & Seufert, S. 1983 Critics in a spin over Flettner's ship. New Sci. 10 March, 656–659.
Smith, M. H., Park, P. M. & Consterdine, I. E. 1991 North Atlantic aerosol remote concentrations measured at a Hebridean coastal site. Atmos. Environ. A 25, 547–555. (doi:10.1016/0960-1686(91)90051-8)
Sortino, G. 2006 A data resource for cloud cover simulation. MSc thesis, School of Informatics, University of Edinburgh.
Thom, A. 1934 Effects of Disks on the Air Forces on a Rotating Cylinder, Aeronautical Research Committee Reports and Memoranda no. 1623. Available from Cranfield University NACA archive.
Twomey, S. 1977 Influence of pollution on the short-wave albedo of clouds. J. Atmos. Sci. 34, 1149–1152. (doi:10.1175/1520-0469(1977)034<1149:TIOPOT>2.0.CO;2)
Van Hoof, S. C. J. M., Hashim, A. & Kordes, A. J. 1999 The effect of ultrafiltration as pretreatment to reverse osmosis in wastewater reuse and seawater desalination applications. Desalination 124, 231–242. (doi:10.1016/S0011-9164(99)00108-3)
12 An overview of geo-engineering of climate using stratospheric sulphate aerosols

Philip J. Rasch, Simone Tilmes, Richard P. Turco, Alan Robock, Luke Oman, Chih-Chieh (Jack) Chen, Georgiy L. Stenchikov and Rolando R. Garcia
We provide an overview of geo-engineering by stratospheric sulphate aerosols. The state of understanding about this topic as of early 2008 is reviewed, summarizing the past 30 years of work in the area, highlighting some very recent studies using climate models, and discussing methods used to deliver sulphur species to the stratosphere. The studies reviewed here suggest that sulphate aerosols can counteract the globally averaged temperature increase associated with increasing greenhouse gases, and reduce changes to some other components of the Earth system. There are likely to be remaining regional climate changes after geo-engineering, with some regions experiencing significant changes in temperature or precipitation. The aerosols also serve as surfaces for heterogeneous chemistry, resulting in increased ozone depletion. The delivery of sulphur species to the stratosphere in a way that will produce particles of the right size is shown to be a complex and potentially very difficult task. Two simple delivery scenarios are explored, but similar exercises will be needed for other suggested delivery mechanisms. While the introduction of the geo-engineering source of sulphate aerosol will perturb the sulphur cycle of the stratosphere significantly, it is a small perturbation to the total (stratosphere and troposphere) sulphur cycle. The geo-engineering source would thus be a small contributor to the total global source of 'acid rain' that could be compensated for through improved pollution control of anthropogenic tropospheric sources. Some areas of research remain unexplored. Although ozone may be depleted, with a consequent increase to solar ultraviolet-B (UVB) energy reaching the surface and a potential impact on health and biological populations, the aerosols will also scatter and attenuate this part of the energy spectrum, and this
may compensate the UVB enhancement associated with ozone depletion. The aerosol will also change the ratio of diffuse to direct energy reaching the surface, and this may influence ecosystems. The impact of geo-engineering on these components of the Earth system has not yet been studied. Representations for the formation, evolution and removal of aerosol and distribution of particle size are still very crude, and more work will be needed to gain confidence in our understanding of the deliberate production of this class of aerosols and their role in the climate system.
12.1 Introduction

The concept of 'geo-engineering' (the deliberate change of the Earth's climate by mankind; Keith 2000) has been considered at least as far back as the 1830s, with J. P. Espy's suggestion (Fleming 1990) of lighting huge fires that would stimulate convective updrafts and change rain intensity and frequency of occurrence. Geo-engineering has been considered for many reasons since then, ranging from making polar latitudes habitable to changing precipitation patterns.

There is increasing concern by scientists and society in general that energy system transformation is proceeding too slowly to avoid the risk of dangerous climate change from humankind's release of radiatively important atmospheric constituents (particularly CO2). The assessment by the Intergovernmental Panel on Climate Change (IPCC 2007a) shows that unambiguous indicators of human-induced climate change are increasingly evident, and there has been little societal response to the scientific consensus that reductions must take place soon to avoid large and undesirable impacts. To reduce carbon dioxide emissions soon enough to avoid such impacts requires a near-term revolutionary transformation of energy and transportation systems throughout the world (Hoffert et al. 1998). The size of the transformation, the lack of effective societal response and the inertia to changing our energy infrastructure motivate the exploration of other strategies to mitigate some of the planetary warming. For this reason, geo-engineering for the purpose of cooling the planet is receiving attention. A broad overview of geo-engineering can be found in the reviews of Keith (2000), WRMSR (2007) and the chapters in this volume. The geo-engineering paradigm is not without its own perils (Robock 2008). Some of the uncertainties and consequences of geo-engineering by stratospheric aerosols are discussed in this paper.

This study describes an approach to cooling the planet which goes back to the mid-1970s, when Budyko (1974) suggested that, if global warming ever became
a serious threat, we could counter it with aeroplane flights in the stratosphere, burning sulphur to make aerosols that would reflect sunlight away. The aerosols would increase the planetary albedo and cool the planet, ameliorating some (but, as discussed below, not all) of the effects of increasing CO2 concentrations. The aerosols are chosen/designed to reside in the stratosphere because it is remote, and they will have a much longer residence time there than tropospheric aerosols, which are rapidly scavenged. The longer lifetime for stratospheric aerosols means that fewer aerosols need be delivered per unit time to achieve a given aerosol burden, and that the aerosols will disperse and act to force the climate system over a larger area.

Sulphate aerosols are always found in the stratosphere. Low background concentrations arise due to transport from the troposphere of natural and anthropogenic sulphur-bearing compounds. Occasionally much higher concentrations arise from volcanic eruptions, resulting in a temporary cooling of the Earth system (Robock 2000), which disappears as the aerosol is flushed from the atmosphere. The volcanic injection of sulphate aerosol thus serves as a natural analogue to the geo-engineering aerosol. The analogy is not perfect because the volcanic aerosol is flushed within a few years, and the climate system does not respond in the same way as it would if the particles were continually replenished, as they would be in a geo-engineering effort. Perturbations to the system that might become evident with constant forcing disappear as the forcing disappears.

This study reviews the state of understanding about geo-engineering by sulphate aerosols as of early 2008. We review the published literature, introduce some new material and summarize some very recent results that are presented in detail in articles submitted at the time of writing. In our summary we also try to identify areas where more research is needed.

Since the paper by Budyko (1974), the ideas generated there have received occasional attention in discussions about geo-engineering (e.g. NAS92 1992; Turco 1995; Govindasamy & Caldeira 2000, 2003; Govindasamy et al. 2002; Crutzen 2006; Wigley 2006; Matthews & Caldeira 2007). There are also legal, moral, ethical, financial and international political issues associated with a manipulation of our environment. Commentaries (Bengtsson 2006; Cicerone 2006; Kiehl 2006; Lawrence 2006; MacCracken 2006) on Crutzen (2006) address some of these issues and remind us that this approach does not treat all the consequences of higher CO2 concentrations (such as ocean acidification; others are discussed in Robock 2008). Recently, climate modellers have begun efforts to provide more quantitative assessments of the complexities of geo-engineering by sulphate aerosols and the consequences for the climate system (Rasch et al. 2008; Tilmes et al. 2008, 2009; Robock et al. 2008).
Figure 12.1 A schematic of the processes that influence the life cycle of stratospheric aerosols (adapted with permission from SPARC 2006).
12.2 An overview of stratospheric aerosols in the Earth system

12.2.1 General considerations

Sulphate aerosols are an important component of the Earth system in the troposphere and stratosphere. Because sulphate aerosols play a critical role in the chemistry of the lower stratosphere and occasionally, following a volcanic eruption, in the radiative budget of the Earth by reducing the incoming solar energy reaching the Earth's surface, they have been studied for many years. A comprehensive discussion of the processes that govern the stratospheric sulphur cycle can be found in the recent assessment of stratospheric aerosols (SPARC 2006). Figure 12.1, taken from that report, indicates some of the processes that are important in that region.

Sulphate aerosols play additional roles in the troposphere (IPCC 2007a and references therein). As in the stratosphere, they act to reflect incoming solar energy (the 'aerosol direct effect'), but they also act as cloud condensation nuclei, influencing the size of cloud droplets and the persistence or lifetime of clouds (the 'aerosol indirect effect') and thus the reflectivity of clouds.
Figure 12.2 A very rough budget (approx. 1 digit of accuracy) for most of the major atmospheric sulphur species during volcanically quiescent situations, following Rasch et al. (2000), SPARC (2006) and Montzka et al. (2007). Numbers inside boxes indicate species burden in units of Tg S, and approximate lifetime against the strongest source or sink. Numbers beside arrows indicate net sources or sinks (transformation, transport, emission and deposition processes) in Tg S yr⁻¹.
Although our focus is on stratospheric aerosols, one cannot ignore the troposphere, and so we include a brief discussion of some aspects of the tropospheric sulphur cycle also. A very rough budget describing the sources, sinks and transformation pathways¹ during volcanically quiescent times is displayed in Figure 12.2. Sources, sinks and burdens for sulphur species are much larger in the troposphere than in the stratosphere. The sources of the aerosol precursors are natural and anthropogenic sulphur-bearing reduced gases (DMS, dimethyl sulphide; SO2, sulphur dioxide; H2S, hydrogen sulphide; OCS, carbonyl sulphide). These precursor gases are gradually oxidized (through both gaseous and aqueous reactions) to end products involving the sulphate anion (SO₄²⁻) in combination with various other cations.
¹ Sulphur emissions and burdens are frequently expressed in differing units. They are sometimes specified with respect to their molecular weight. Elsewhere they are specified according to the equivalent weight of sulphur. They may be readily converted by multiplying by the ratio of molecular weights of the species of interest. We use only units of S in this chapter, and have converted all references in other papers to these units. Also, in the stratosphere, we have assumed that the sulphate binds with water in a ratio of 75/25 H2SO4/water to form particles. Hence 3 Tg SO₄²⁻ = 2 Tg SO2 = 1 Tg S ≈ 4 Tg aerosol particles.
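The conversions in the footnote follow directly from molecular weights. A minimal sketch (our own illustration, not the authors' code):

```python
# Sulphur unit conversions from the footnote above (our illustration).
# Molecular weights in g/mol: S = 32, SO2 = 64, SO4 = 96.
M_S, M_SO2, M_SO4 = 32.0, 64.0, 96.0

def so2_to_s(tg_so2):
    """Tg SO2 -> Tg S, multiplying by the ratio of molecular weights."""
    return tg_so2 * M_S / M_SO2

def so4_to_s(tg_so4):
    """Tg SO4(2-) -> Tg S."""
    return tg_so4 * M_S / M_SO4

def s_to_aerosol(tg_s):
    """Tg S -> approx. Tg of 75/25 H2SO4/water particles (footnote's factor 4)."""
    return 4.0 * tg_s

print(so2_to_s(2.0), so4_to_s(3.0), s_to_aerosol(1.0))   # 1.0 1.0 4.0
```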
In the troposphere where there is sufficient ammonia, most of the aerosols exist in the form of mixtures of ammonium sulphate ((NH4)2SO4) and bisulphate ((NH4)HSO4). The stratospheric sulphur-bearing gases oxidize (primarily via reactions with the OH radical) to SO2, which is then further oxidized to gaseous H2SO4. Stratospheric sulphate aerosols exist in the form of mixtures of condensed sulphuric acid (H2SO4), water and, under some circumstances, hydrates with nitric acid (HNO3).

Although the OCS source is relatively small compared with other species, owing to its relative stability it is the dominant sulphur-bearing species in the atmosphere. Oxidation of OCS is a relatively small contributor to the radiatively active sulphate aerosol in the troposphere, but it plays a larger role in the stratosphere, where it contributes perhaps half the sulphur during volcanically quiescent conditions. Some sulphur also enters the stratosphere as SO2 and as sulphate aerosol particles. The reduced sulphur species oxidize there and form sulphuric acid gas. The H2SO4 vapour partial pressure in the stratosphere – almost always determined by photochemical reactions – is generally supersaturated, and typically highly supersaturated, over its binary H2O–H2SO4 solution droplets. The particles form and grow through vapour deposition, depending on the ambient temperature and concentrations of H2O. These aerosol particles are then transported by winds (as are their precursors). Above the lower stratosphere, the particles can evaporate, and in the gaseous form the sulphuric acid can be photolysed to SO2, which can be transported as a gas and may again oxidize and condense in some other part of the stratosphere. Vapour deposition is the main growth mechanism in the ambient stratosphere, and in volcanic clouds, over time.

Because sources and sinks of aerosols are so much stronger in the troposphere, the lifetime of sulphate aerosol particles in the troposphere is a few days, while that of stratospheric aerosol is a year or so. This explains the relatively smooth spatial distribution of sulphate aerosol and resultant aerosol forcing in the stratosphere, and the much smaller spatial scales associated with tropospheric aerosol. The net source of sulphur to the stratosphere is believed to be of the order of 0.1 Tg S yr⁻¹ during volcanically quiescent conditions.

A volcanic eruption completely alters the balance of terms in the stratosphere. For example, the eruption of Mount Pinatubo is believed to have injected approximately 10 Tg S (in the form of SO2) over a few days. This injection amount provides a source approximately 100 times that of all other sources over the year. The partial pressure of sulphuric acid gas consequently reaches much higher levels than those during background conditions. After an eruption, new particles are nucleated only in the densest parts of eruption clouds. These rapidly coagulate and disperse to concentration levels that do not aggregate significantly. Particle aggregation is controlled by Brownian coagulation (except perhaps under very high sulphur loadings). Coagulation mainly limits the number of particles, rather than the overall size of the particles, which depends more on the sulphur source strength (although, considering the overall sulphur mass balance, the two processes both contribute). The particles' growth is thus influenced by both vapour deposition and proximity to other particles.

The primary loss mechanism for sulphur species from the stratosphere is believed to be the sedimentation of the aerosol particles. Particle sedimentation is governed by Stokes' equation for drag, corrected to compensate for the fact that in the stratosphere at higher altitudes the mean free path between air molecules can far exceed the particle size, so particles fall more rapidly than they would otherwise. The aerosol particles settle out (larger particles settle faster), gradually entering the troposphere, where they are lost via wet and dry deposition processes.

Examples of the non-linear relationships between SO2 mass injection, particle size and visible optical depth as a function of time, assuming idealized dispersion, can be found in Pinto et al. (1998). These are detailed microphysical simulations, although in a one-dimensional model with specified dispersion. The rate of dilution of injected SO2 is critical owing to the highly non-linear response of particle growth and sedimentation rates within expanding plumes; particles have to be only 10 μm or less to fall rapidly, which greatly restricts the total suspended mass, optical depth and infrared effect. The mass limitation indicates that 10 times the mass injection (of, say, Pinatubo) might result in only a modestly larger visible optical depth after some months.

The life cycle of these particles is thus controlled by a complex interplay between meteorological fields (like wind, humidity and temperature), the local concentrations of the gaseous sulphur species, the concentration of the particles themselves and the size distribution of the particles. In volcanically quiescent conditions (often called background conditions), partial pressures of sulphur gases remain relatively low, and the particles are found to be quite small (Bauman et al. 2003), with a typical size distribution that can be described by a lognormal distribution with a dry mode radius, standard deviation and effective radius of 0.05/2.03/0.17 μm, respectively. After volcanic eruptions, when sulphur species concentrations get much higher, the particles grow much larger (Stenchikov et al. 1998). Rasch et al. (2008) used numbers for a size distribution 6–12 months after an eruption for the large volcanic-like distribution of 0.376/1.25/0.43 μm, following Stenchikov et al. (1998) and Collins et al. (2004); the consistency of these triples is checked in the sketch below. There is uncertainty in the estimates of these size distributions, and the volcanic aerosol standard deviation σLN was estimated to range from 1.3 to greater than 2 in Steele & Turco (1997).

When the particles are small, they primarily scatter in the solar part of the energy spectrum, and play no role in influencing the infrared (long-wave) part of the energy spectrum. Larger particles seen after an eruption scatter and absorb in the solar wavelengths, but also absorb in the infrared (Stenchikov et al. 1998). Thus small particles tend to scatter solar energy back to space. Large particles scatter less efficiently, and also trap some of the outgoing energy in the infrared. The size of the aerosol thus has a strong influence on the climate.
12.2.2 Geo-engineering considerations

To increase the mass and number of sulphate aerosols in the stratosphere, a new source must be introduced. Using Pinatubo as an analogue, Crutzen (2006) estimated that a source of 5 Tg S yr⁻¹ would be sufficient to balance the warming associated with a doubling of CO2. Wigley (2006) used an energy balance model to conclude that approximately 5 Tg S yr⁻¹ in combination with emission mitigation would suffice. These studies assumed that the long-term response of the climate system to a more gradual injection would be similar to the transient response to a Pinatubo-like transient injection. A more realistic exploration can be made in a climate system model (see Section 12.2.4).

Rasch et al. (2008) used a coupled climate system model to show that the amount of aerosol required to balance the warming is sensitive to particle size, and that non-linearities in the climate system mattered. Their model suggested that 1.5 Tg S yr⁻¹ might suffice to balance the greenhouse gases' warming if the particles looked like those during background conditions (unlikely, as will be seen in Section 12.2.3), and perhaps twice that would be required if the particles looked more like volcanic aerosols. Robock et al. (2008) used 1.5–5 Tg S yr⁻¹ in a similar study, assuming larger particle sizes (which, as will be seen in Section 12.2.3, is probably more realistic). They explored the consequences of injections in polar regions (where the aerosol would be more rapidly flushed from the stratosphere) and of tropical injections.

All of these studies suggest that a source 15–30 times that of the current non-volcanic sources of sulphur to the stratosphere would be needed to balance the warming associated with a doubling of CO2. It is important to note that, in spite of this very large perturbation to the stratospheric sulphur budget, it is a rather small perturbation to the total sulphur budget of the atmosphere. This suggests that the enhanced surface deposition (for example, 'acid rain') from a stratospheric geo-engineering aerosol would be small compared with that arising from tropospheric sources globally, although it could be important if it occurred in a region that normally experienced little deposition from other sources.

There are competing issues in identifying the optimal way to produce a geo-engineering aerosol. Since the ambient aerosol can be a primary scavenger of new particles and vapours, its very presence limits new particle formation. When the stratosphere is relatively clean, the H2SO4 supersaturation can build up, and
nucleation of new particles over time occurs more easily, with less scavenging of the new particles. Thus, the engineered layer itself becomes a limiting factor in the ongoing production of optically efficient aerosols.

Many of the earlier papers on geo-engineering with stratospheric aerosols have listed delivery systems that release sulphur in very concentrated regions, using artillery shells, high-flying jets, balloons, etc. These will release the sulphur in relatively small volumes of air. Partial pressures of sulphuric acid gas will get quite high, with consequences for particle growth and the lifetime of the aerosols (see Section 12.2.3 for more detail).

An alternative would be to use a precursor gas that is quite long-lived in the troposphere but oxidizes in the stratosphere, and then allow the Earth's natural transport mechanisms to deliver that gas to the stratosphere and diffuse it prior to oxidation. OCS might serve as a natural analogue to such a gas (Turco et al. 1980), although it is carcinogenic and a greenhouse gas. Current sources of OCS are 1–2 Tg S yr⁻¹ (Montzka et al. 2007). Perhaps 15 per cent of that is estimated to be of anthropogenic origin. Only approximately 0.03–0.05 Tg S yr⁻¹ is estimated to reach the tropopause and enter the stratosphere (Figure 12.2 and SPARC 2006). Residence times in the troposphere are estimated to be approximately 1–3 years, and much longer (3–10 years) in the stratosphere. Turco et al. (1980) speculated that if anthropogenic sources of OCS were to be increased by a factor of 10, then a substantial increase in sulphate aerosols would result. If we assume that lifetimes do not change (and this would require careful research in itself), then OCS concentrations would in fact need to be enhanced by a factor of 50 to produce a 1 Tg S yr⁻¹ source. It might also be possible to create a custom molecule that breaks down in the stratosphere and is not a carcinogen, but using less reactive species would produce a reservoir species that would require years to remove if society needs to stop production. Problems with this approach would be reminiscent of the climate impacts from the long-lived chlorofluorocarbons (CFCs), although lifetimes are shorter.

12.2.3 Aerosol injection scenarios

An issue that has been largely neglected in geo-engineering proposals to modify the stratospheric aerosol is the methodology for injecting aerosols or their precursors to create the desired reflective shield. As exemplified in Section 12.2.4 below, climate simulations to date have employed specified aerosol parameters, including size, composition and distribution, often with these parameters static in space and time. In this section, we consider transient effects associated with possible injection schemes that use
aircraft platforms, and estimate the microphysical and dynamical processes that are likely to occur close to the injection point in the highly concentrated injection stream. There are many interesting physical limitations to such injection schemes for vapours and aerosols, including a very high sensitivity to the induced nucleation rates (e.g. homogeneous nucleation) that would be very difficult to quantify within injection plumes.

Two rather conservative injection scenarios are evaluated; both assume a baseline emission equivalent to approximately 2.5 Tg S yr⁻¹ (which ultimately forms approx. 10 Tg of particles): (i) insertion of a primary aerosol, such as fine sulphate particles, using an injector mounted aboard an aircraft platform cruising in the lower stratosphere and (ii) sulphur-enhanced fuel additives employed to emit aerosol precursors in a jet engine exhaust stream. In each case injection is assumed to occur uniformly between 15 and 25 km, with the initial plumes distributed throughout this region to avoid hot spots. Attempts to concentrate the particles at lower altitudes, within thinner layers, or regionally – at high latitudes, for example – would tend to exacerbate problems in maintaining the engineered layer, by increasing the particle number density and thus increasing coagulation.

Our generic platform is a jet-fighter-sized aircraft carrying a payload of 10 metric tons of finely divided aerosol, or an equivalent precursor mass, to be distributed evenly over a 2500-km flight path during a 4-hour flight (while few aircraft are currently capable of sustained flight at stratospheric heights, platform design issues are neglected at this point). The initial plume cross section is taken to be 1 m², which is consistent with the dimensions of the platform. Note that, with these specifications, a total aerosol mass injection of 10 Tg of particles per year would call for 1 million flights, and would require several thousand aircraft operating continuously for the foreseeable future. To evaluate other scenarios or specifications, the results described below may be scaled to a proposed fleet or system.

Particle properties

The most optically efficient aerosol for climate modification would have sizes, Rp, of the order of 0.1 μm or somewhat less (here we use radius rather than diameter as the measure of particle size, and assume spherical, homogeneous particles at all times). Particles of this size have close to the maximum backscattering cross-section per unit mass; they are small enough to remain suspended in the rarefied stratospheric air for at least a year, and yet are large enough – and thus could be injected at low enough abundances – to maintain the desired concentration of dispersed aerosol against coagulation for perhaps months (although long-term coagulation and growth ultimately degrade the optical efficiency at the concentrations required; see below). As the size of the particles increases, the aerosol mass needed to maintain a fixed optical depth increases roughly as ~Rp, the local
mass sedimentation flux increases as ~Rp⁴, and the particle infrared absorptivity increases as ~Rp³ (e.g. Seinfeld & Pandis 1997). Accordingly, to achieve, and then stabilize, a specific net radiative forcing, similar to those discussed in Section 12.2.4 below, larger particle sizes imply increasingly greater mass injections, which in turn accelerate particle growth, further complicating the maintenance of the engineered layer.

This discussion assumes a monodispersed aerosol. However, an evolving aerosol, or one maintained in a steady state, exhibits significant size dispersion. Upper-tropospheric and stratospheric aerosols typically have a lognormal-like size distribution with dispersion σLN ~ 1.6–2.0 (ln σLN ~ 0.47–0.69). Such distributions require a greater total particle mass per target optical depth than a nearly monodispersed aerosol of the same mean particle size and number concentration. Accordingly, the mass injections estimated here should be increased by a factor of approximately 2, other things remaining equal (i.e. for σLN ~ 1.6–2.0, the mass multiplier is in the range of 1.6–2.6).

Aerosol microphysics

A bottleneck in producing an optically efficient, uniformly dispersed aerosol – assuming perfect disaggregation in the injector nozzles – results from coagulation during early plume evolution. For a delivery system with the specifications given above, for example, the initial concentration of plume particles of radius Rpo = 0.08 μm would be approximately 1 × 10⁹ cm⁻³, assuming sulphate-like particles with a density of 2 g cm⁻³. This initial concentration scales inversely with the plume cross-sectional area, flight distance, particle specific density and cube of the particle radius, and also scales directly with the mass payload. For example, if Rpo were 0.04 or 0.16 μm, the initial concentration would be approximately 1 × 10¹⁰ or 1 × 10⁸ cm⁻³, respectively, other conditions remaining constant.

For an injected aerosol plume, the initial coagulation time constant is

$t_{co} = \frac{2}{n_{po} K_{co}},$   (12.1)
where npo is the initial particle concentration and Kco is the self-coagulation kernel (cm³ s⁻¹) corresponding to the initial aerosol size. For Rpo ~ 0.1 μm, Kco ~ 3 × 10⁻⁹ cm³ s⁻¹ (e.g. Turco et al. 1979; Yu & Turco 2001). Hence, in the baseline injection scenario, tco ~ 0.07–7 s for Rpo ~ 0.04–0.16 μm, respectively. To assess the role of self-coagulation, these timescales must be compared with typical small-scale mixing rates in a stably stratified environment, as well as the forced mixing rates in a jet exhaust wake.
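The initial plume concentration and the coagulation timescales quoted above can be reproduced from the stated specifications. A minimal sketch (ours, not the authors' code; the 10-t payload, 2500-km path, 1-m² cross section, 2 g cm⁻³ density and kernel value are from the text):

```python
# Reproduce the initial plume numbers of Section 12.2.3 (our sketch).
import math

payload_g = 10e6                 # 10 metric tons in grams (text)
plume_vol_cm3 = 2.5e8 * 1e4      # 2500 km path (2.5e8 cm) x 1 m^2 (1e4 cm^2)
rho = 2.0                        # g cm^-3, sulphate-like particles (text)

def n_init(r_um):
    """Initial particle concentration (cm^-3) for particle radius in microns."""
    r_cm = r_um * 1e-4
    m_particle = rho * (4.0 / 3.0) * math.pi * r_cm**3   # grams per particle
    return payload_g / m_particle / plume_vol_cm3

K_co = 3e-9   # cm^3 s^-1, coagulation kernel near Rp ~ 0.1 um (text)
for r_um in (0.04, 0.08, 0.16):
    n0 = n_init(r_um)
    print(f"Rpo = {r_um} um: n0 ~ {n0:.1e} cm^-3, t_co ~ {2.0 / (n0 * K_co):.2f} s")
# Rpo = 0.08 um gives n0 ~ 9.3e8 cm^-3, i.e. 'approximately 1e9 cm^-3'.
# With this fixed kernel, t_co spans ~0.09-5.7 s; the quoted 0.07-7 s range
# reflects the kernel's own dependence on particle size.
```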
Turco & Yu (1997, 1998, 1999) derived analytical solutions of the aerosol continuity equation which describe the particle microphysics in an evolving plume. The solutions account for simultaneous particle coagulation and condensational growth under the influence of turbulent mixing, and address the scavenging of plume vapours and particles by the entrained background aerosol. A key factor – in addition to the previous specifications – is the growth, or dilution, rate of a plume volume element (or, equivalently, the plume cross-sectional area). The analytical approach incorporates arbitrary mixing rates through a unique dimensionless parameter that represents the maximum total number of particles that can be maintained in an expanding, coagulating volume element at any time. Turco & Yu (1998, 1999) show that these solutions can be generalized to yield time-dependent particle size distributions, and accurately reproduce numerical simulations from a comprehensive microphysical code. Although aerosol properties (concentration, size) normally vary across the plume cross section (e.g. Brown et al. 1996; Dürbeck & Gerz 1996), uniform mixing is assumed, and only the mean behaviour is considered.

Quiescent injection plumes

An otherwise passive (non-exhaust) injection system generally has limited turbulent energy, and mixing is controlled more decisively by local environmental conditions. If the quiescent plume is embedded within an aircraft wake, however, the turbulence created by the exhaust, and the wing vortices created at the wingtips, can have a major impact on near-field mixing rates (e.g. Schumann et al. 1998). For a quiescent plume, we adopt a linear cross-sectional growth model that represents small-scale turbulent mixing perpendicular to the plume axis (e.g. Justus & Mani 1979). Observations and theory lead to the following empirical representation for the plume volume:

$V(t)/V_0 = 1 + t/\tau_{mix},$   (12.2)
where V is the plume volume element of interest (equivalent to the cross-sectional area in the near field), V0 is its initial volume and τmix is the mixing timescale. For the situations of interest, we estimate 0.1 < τmix < 10 s. Following Turco & Yu (1999, eqn (73)), we find for a self-coagulating primary plume aerosol

$N_p(t)/N_{po} = \frac{1}{1 + f_m \ln(1 + f_c/f_m)},$   (12.3)
where Np is the total number of particles in the evolving plume volume element at time t, and Npo is the initial number. We also define the scaled time fc = t/τco and the scaled mixing rate fm = τmix/τco. The local particle concentration is np(t) = Np(t)/V(t). In Figure 12.3, predicted changes in particle number and size are illustrated as a function of the scaled time for a range of scaled mixing rates.
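Equation (12.3) is simple enough to evaluate directly; a short sketch (ours) reproducing the behaviour described in the text and plotted in Figure 12.3:

```python
# Evaluate eqn (12.3) directly (our sketch): the fraction of injected
# particles surviving self-coagulation in a diluting plume, for scaled
# time fc = t/tau_co and scaled mixing rate fm = tau_mix/tau_co.
import math

def surviving_fraction(fc, fm):
    """Np(t)/Npo from eqn (12.3)."""
    return 1.0 / (1.0 + fm * math.log(1.0 + fc / fm))

fc = 180.0 / 0.07          # 'a few minutes' in units of tau_co ~ 0.07 s
print(f"{surviving_fraction(fc, 140.0):.3f}")   # ~0.002: >90% loss at fm = 140
print(f"{surviving_fraction(fc, 0.014):.3f}")   # ~0.855: small loss at fm = 0.014
```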
Figure 12.3 Evolution of the total concentration of particles Np and the mass-mean particle radius Rp in an expanding injection plume. Both variables are scaled against their initial values in the starting plume. The time axis (fc = t/τco) is scaled in units of the coagulation time constant τco. Each solid line, corresponding to a fixed value of fm, gives the changes in Np and Rp for a specific mixing timescale τmix measured relative to the coagulation timescale τco, or fm = τmix/τco. The heavy dashed line shows the changes at the unit mixing time, for which fc = fm, when the plume cross-sectional area has roughly doubled; the longer the mixing timescale, the greater the reduction in particle abundance and particle radius.
The ranges of parameters introduced earlier result in an approximate range of 0.014 ≤ fm ≤ 140. At the lower end, prompt coagulation causes only a small reduction in the number of particles injected, while at the upper end reductions can exceed 90 per cent in the first few minutes. Particle self-coagulation in the plume extending over longer timescales further decreases the initial population – by a factor of 1000 after 1 month in the most stable situation assumed here, but by only some tens of per cent for highly energetic and turbulent initial plumes. The dashed line in Figure 12.3 shows the effect of coagulation at the 'unit mixing time', at which the plume volume has effectively doubled.

Clearly, prompt coagulation significantly limits the number of particles that can be injected into the ambient stratosphere when stable stratification constrains early mixing. Initial particle concentrations in the range of approximately 10¹⁰–10¹¹ cm⁻³ would be rapidly depleted, as seen by moving down the unit-mixing-time line in Figure 12.3 (further, 10¹¹ cm⁻³ of 0.08-μm sulphate particles would exceed the density of stratospheric air). A consequence of prompt coagulation is that it is increasingly difficult to compensate for plume coagulation (at a fixed mass injection rate) by reducing the starting particle size. Initial particle concentrations could simultaneously be reduced to offset coagulation, but the necessary additional flight activity would affect payload and/or infrastructure. It is also apparent that rapid mass injections
in the form of liquids or powders for the purpose of reducing flight times would lead to mass concentrations greatly exceeding those assumed above (generally