Handbook of Climate Change Mitigation and Adaptation (ISBN 9781461464310)


Handbook of Climate Change Mitigation and Adaptation. DOI 10.1007/978-1-4614-6431-0_1-2. © Springer Science+Business Media New York 2015

Introduction to Climate Change Mitigation
Maximilian Lackner (a, *), Wei-Yin Chen (b) and Toshio Suzuki (c)
(a) Institute of Chemical Engineering, Vienna University of Technology, Vienna, Austria
(b) Department of Chemical Engineering, University of Mississippi, University, MS, USA
(c) National Institute of Advanced Industrial Science and Technology (AIST), Nagoya, Japan

Abstract
Since the first edition of the Handbook, important new research findings on climate change have been gathered. The Handbook has also been extended to cover not only climate change mitigation but climate change adaptation as well, reflecting the growing number of initiatives to cope with the phenomenon. Instrumental records show a temperature increase of 0.5 °C over the last 100 years (Le Houérou 1996), with rather different regional patterns and trends (Folland et al. 1992). Over the last several million years, there have been warmer and colder periods on Earth, and the climate fluctuates for a variety of natural reasons, as data from tree rings, pollen, and ice core samples have shown. However, human activities have reached an extent at which they impact the globe in potentially catastrophic ways. This chapter is an introduction to climate change.

Climate Change
There has been a heated discussion on climate change in recent years, with a particular focus on global warming. Over the last several million years, there have been warmer and colder periods on Earth, and the climate fluctuates for a variety of natural reasons, as data from tree rings, pollen, and ice core samples have shown. For instance, in the Pleistocene, the geological epoch which lasted from about 2,588,000 to 11,700 years ago, the world saw repeated glaciations ("ice ages"). More recently, the "Little Ice Age" and the "Medieval Warm Period" occurred (IPCC). Several causes have been suggested, such as cyclical lows in solar radiation, heightened volcanic activity, changes in ocean circulation, and an inherent variability in global climate. On Mars, too, climate change was inferred from orbiting spacecraft images of fluvial landforms on its ancient surfaces and layered terrains in its polar regions (Haberle et al. 2012). Spin-axis/orbital variations, which are more pronounced on Mars than on Earth, are seen as the main reasons.
As to recent climate change on Earth, there is evidence that it is brought about by human activity and that its magnitude and effects are of strong concern. Instrumental recording of temperatures has been available for less than 200 years. Over the last 100 years, a temperature increase of 0.5 °C could be measured (Le Houérou 1996), with rather different regional patterns and trends (Folland et al. 1992). In Ehrlich (2000), Bruce D. Smith is quoted as saying, "The changes brought over the past 10,000 years as agricultural landscapes replaced wild plant and animal communities, while not so abrupt as those caused by the impact of an asteroid at the Cretaceous–Tertiary boundary some 65 Ma ago or so massive as those caused by advancing glacial ice in the Pleistocene, are nonetheless comparable to these other forces of global change." At the Earth Summit in Rio de Janeiro in 1992, over 159 countries signed the United Nations Framework Convention on Climate Change (FCCC, also called the "Climate Convention") in order to achieve "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system" (United Nations (UN) 1992).


Fig. 1 (a) Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012, from three data sets. Top panel: annual mean values. Bottom panel: decadal mean values including the estimate of uncertainty for one dataset (black). Anomalies are relative to the mean of 1961–1990. (b) Map of the observed surface temperature change from 1901 to 2012, derived from temperature trends determined by linear regression from one dataset (orange line in panel a). Trends have been calculated where data availability permits a robust estimate (i.e., only for grid boxes with greater than 70 % complete records and more than 20 % data availability in the first and last 10 % of the time period). Other areas are white. Grid boxes where the trend is significant at the 10 % level are indicated by a + sign (Source: IPCC 2013)


Fig. 2 Radiative forcing estimates in 2011 relative to 1750 and aggregated uncertainties for the main drivers of climate change. Values are global average radiative forcing (RF), partitioned according to the emitted compounds or processes that result in a combination of drivers. The best estimates of the net radiative forcing are shown as black diamonds with corresponding uncertainty intervals; the numerical values are provided on the right of the figure, together with the confidence level in the net forcing (VH very high, H high, M medium, L low, VL very low). Albedo forcing due to black carbon on snow and ice is included in the black carbon aerosol bar. Small forcings due to contrails (0.05 W/m2, including contrail-induced cirrus) and HFCs, PFCs, and SF6 (total 0.03 W/m2) are not shown. Concentration-based RFs for gases can be obtained by summing the like-coloured bars. Volcanic forcing is not included, as its episodic nature makes it difficult to compare to other forcing mechanisms. Total anthropogenic radiative forcing is provided for three different years relative to 1750 (Source: IPCC 2013)

In 2001, the Intergovernmental Panel on Climate Change (IPCC) wrote, "An increasing body of observations gives a collective picture of a warming world and other changes in the climate system. ... There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities" (IPCC 2007). In its fourth assessment report of 2007, the IPCC stated that human actions are "very likely" the cause of global warming. More specifically, there is a 90 % probability that the burning of fossil fuels and other anthropogenic factors such as deforestation and the use of certain chemicals have already led to an increase of 0.75 °C in average global temperatures over the last 100 years and that the increase in hurricane and tropical cyclone strength since 1970 also results from man-made climate change. In its fifth assessment report of 2013, the IPCC confirmed these findings: "Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased" (IPCC 2013). Figures 1 and 2 show some details of the IPCC's findings.


In Fig. 2, natural and man-made (anthropogenic) radiative forcings (RF) are depicted. RF, or climate forcing, expressed in W/m2, is a change in energy flux, i.e., the difference between incoming energy (sunlight) absorbed by Earth and outgoing energy (that radiated back into space). A positive forcing warms the system, while a negative forcing cools it. Anthropogenic CO2 emissions, which have been accumulating in the atmosphere at an increasing rate since the Industrial Revolution, were identified as the main driver.
The position of the IPCC has been adopted by several renowned scientific societies, and a consensus has emerged on the causes and partially on the consequences of climate change. The history of climate change science is reviewed in Miller et al. (2009). There are researchers who oppose the scientific mainstream's assessment of global warming (Linden 1993). However, the public seems to be unaware of the high degree of consensus that has been achieved in the scientific community, as elaborated in a 2009 World Bank report (Worldbank 2009). Antilla (2005) treats the mass media's coverage of the climate change discussion, with a focus on rhetoric that emphasizes uncertainty, controversy, and climate scepticism. Climate change skeptic films were found to have a strong influence on the general public's environmental concern (Greitemeyer 2013).
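A back-of-the-envelope sketch can tie the radiative forcing concept defined at the start of this passage to a temperature change. The Python sketch below assumes the widely used simplified expression RF = 5.35 ln(C/C0) W/m2 for CO2 and a climate sensitivity parameter of roughly 0.8 K per W/m2; both numbers are assumptions from the general climate literature, not values given in this chapter, and the result is a rough equilibrium estimate only.

import math

# Simplified CO2 forcing expression (assumed here; commonly attributed to Myhre et al. 1998).
def co2_radiative_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)   # W/m^2

LAMBDA_K_PER_WM2 = 0.8   # assumed climate sensitivity parameter: K of equilibrium warming per W/m^2

rf = co2_radiative_forcing(380.0)    # forcing for a rise from 280 to 380 ppm
dt = LAMBDA_K_PER_WM2 * rf           # implied equilibrium warming
print(f"RF = {rf:.2f} W/m^2, equilibrium dT = {dt:.1f} K (illustrative only)")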

The Greenhouse Effect
A greenhouse, also called a glass house, is a structure enclosed by glass or plastic which allows the penetration of radiation to warm it. Gases capable of absorbing the radiant energy are called greenhouse gases (GHG). Greenhouses are used to grow flowers, vegetables, fruits, and tobacco throughout the year in a warm, agreeable climate. On Earth, there is a phenomenon called the "natural greenhouse effect." Without this effect, which is chiefly based on water vapor in the atmosphere (Linden 2005) (i.e., clouds that trap infrared radiation), the average surface temperature on Earth would be 33 °C colder (Karl and Trenberth 2003). The natural greenhouse effect renders Earth habitable, since the temperature which would be expected from the thermal equilibrium of the irradiation from the sun and radiative losses into space (the radiation balance in the blackbody model) is approximately −18 °C. On the Moon, for instance, where there is hardly any atmosphere, extreme surface temperatures range from −233 °C to 133 °C (Winter 1967). On Venus, by contrast, the greenhouse effect in the dense CO2-laden atmosphere results in an average surface temperature in excess of 450 °C (Sonnabend et al. 2008; Zasova et al. 2007).
The current discussion about global warming and climate change is centered on the anthropogenic greenhouse effect. This is caused by the emission and accumulation of greenhouse gases in the atmosphere. These gases (water vapor, CO2, CH4, N2O, O3, and others) act by absorbing and emitting infrared radiation. The combustion of fossil fuels (oil, coal, and natural gas) has led mainly to an increase in the CO2 concentration in the atmosphere. Preindustrial levels of CO2 (i.e., before the start of the Industrial Revolution) were approximately 280 ppm, whereas today they are above 380 ppm, with an annual increase of approximately 2 ppm. According to the IPCC Special Report on Emission Scenarios (SRES) (IPCC 2010a), by the end of the twenty-first century the CO2 concentration could reach levels between 490 and 1,260 ppm, which are between 75 % and 350 % above the preindustrial level, respectively. CO2 is the most important anthropogenic greenhouse gas because of its comparatively high concentration in the atmosphere.
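The roughly −18 °C effective temperature quoted in this section follows from a simple radiation balance. The sketch below equates absorbed solar radiation with blackbody emission; the solar constant (about 1361 W/m2) and planetary albedo (about 0.3) are standard textbook values assumed here rather than figures taken from this chapter.

# Effective (no-greenhouse) temperature of Earth from the blackbody radiation balance:
# S * (1 - albedo) / 4 = sigma * T^4, solved for T.
SOLAR_CONSTANT = 1361.0   # W/m^2, assumed standard value
ALBEDO = 0.3              # planetary albedo, assumed standard value
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed_flux = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0   # averaged over the whole sphere
t_eff = (absorbed_flux / SIGMA) ** 0.25                 # equilibrium temperature in kelvin
print(f"T_eff = {t_eff:.0f} K = {t_eff - 273.15:.0f} degC")   # about 255 K, i.e. roughly -18 degC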


Fig. 3 Shares of global anthropogenic greenhouse gas emissions (Reprinted with permission from Quadrelli and Peterson 2007). Figure labels: Energy* 84 %; Industrial processes 5.5 %; Agriculture 8 %; Waste 2.5 %; CO2 95 %; CH4 4 %; N2O 1 %

The effect of other greenhouse-active gases depends on their molecular structure and their lifetime in the atmosphere, which can be expressed by their global warming potential (GWP). GWP is a relative measure of how much heat a greenhouse gas traps in the atmosphere. It compares the amount of heat trapped by a certain mass of the gas in question to the amount of heat trapped by the same mass of CO2. With a time horizon of 100 years, the GWP of CH4, N2O, and SF6 with respect to CO2 is 25, 298, and 22,800, respectively (IPCC 2010b). CO2 nevertheless has a much higher concentration than the other GHGs, and it is increasing at a higher rate due to the burning of fossil fuels. Thus, while the major emphasis of mitigation has been placed on CO2, efforts to mitigate CH4, N2O, and SF6 have also been active.
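These GWP values let an emission inventory be expressed as CO2 equivalents by simple multiplication and summation. A minimal Python sketch using the GWPs quoted above; the emission quantities themselves are hypothetical placeholders, not data from this chapter.

# 100-year GWPs relative to CO2, as quoted in the text (IPCC 2010b).
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298, "SF6": 22_800}

def co2_equivalent(emissions_tonnes):
    """Convert a dict of emissions (tonnes per gas) into total tonnes of CO2eq."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_tonnes.items())

inventory = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0, "SF6": 0.01}   # hypothetical inventory
print(f"{co2_equivalent(inventory):,.0f} t CO2eq")   # 1000 + 250 + 298 + 228 = 1,776 t CO2eq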

Anthropogenic Climate Change
The climate is governed by natural influences, yet human activities have an impact on it as well. The main impact that humans exert on the climate is via the emission of greenhouse gases. Deforestation is another example of an activity that influences the climate (McMichael et al. 2007). Figure 3 shows the share of greenhouse gas emissions from various sectors, taken from Quadrelli and Peterson (2007). The energy sector is the dominant source of GHG emissions. According to the International Energy Agency (IEA), if no action toward climate change mitigation is taken, global warming could reach an increase of up to 6 °C in average temperature (International Energy Association IEA 2009). Such a temperature rise could have devastating consequences on Earth, which are discussed briefly below.

Effects of Climate Change
Paleoclimatological data show that 100–200 Ma ago, almost all carbon was in the atmosphere as CO2, with global temperatures being 10 °C warmer and sea levels 50–100 m higher than today. Photosynthesis and CO2 uptake into the oceans took almost 200 Ma. Since the Industrial Revolution, i.e., during the last 200 years, this carbon has been put back into the atmosphere to a significant extent. This rate is roughly seven orders of magnitude faster, so there is a risk of a possible "runaway" greenhouse effect. Figure 4 shows the timescales of several different effects of climate change for the future. Due to the long lifetime of CO2 in the atmosphere, the effects of climate change until a new equilibrium has been reached will prove long term. A global temperature increase of 6 °C would be severe, so the IEA has developed a scenario which would limit the temperature increase to 2 °C (International Energy Association IEA 2009) to minimize the effects.


Fig. 4 Time scales of climate change effects based on a stabilization of CO2 concentration levels between 450 and 1,000 ppm after today's emissions (Reprinted with permission from Quadrelli and Peterson 2007). Figure labels: CO2 emissions peak: 0 to 100 years; CO2 stabilisation: 100 to 300 years; temperature stabilisation: a few centuries; sea-level rise due to thermal expansion: centuries to millennia; sea-level rise due to ice melting: several millennia

Sea level rise will be the most direct impact. Other impacts, including those on weather, flooding, biodiversity, water resources, and diseases, are discussed here.

Climate Change: What Will Change?
An overall higher temperature on Earth, depending on the magnitude of the effect and the rate at which it manifests itself, will change the sea level, local climatic conditions, and the proliferation of animal and plant species, to name but a few of the most obvious examples. The debate on the actual consequences of global warming is the most heated part of the climate change discussion. Apart from changes in the environment, there will be various impacts on human activity. One example is the threat to tourism revenue in winter ski resorts (Hoffmann et al. 2009) and on low-elevation tropical islands (Becken 2005). Insurance companies will need to devise completely new business models, to cite just one example of businesses being forced to react to climate change.

Impact of Climate Change Mitigation Actions
The purpose of climate change mitigation is to enact measures that limit the extent of climate change. Climate change mitigation can make a difference. In the IEA reference scenario (International Energy Association IEA 2009), the world is headed for a CO2 concentration in the atmosphere above 1,000 ppm, whereas that level is limited to 450 ppm in the proposed "mitigation action" scenario. In the first case, the global temperature increase will be 6 °C, whereas it is limited to 2 °C in the latter (International Energy Association IEA 2009). The Intergovernmental Panel on Climate Change has projected that the financial effect of compliance through trading within the Kyoto commitment period will be limited to between 0.1 % and 1.1 % of GDP. By comparison, the Stern report estimated that the cost of mitigating climate change would be 1 % of global GDP and that the costs of doing nothing would be 5–20 times higher (IPCC 2010b; Stern 2007).
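The quoted cost figures can be put side by side with a trivial calculation. The sketch below assumes a round global GDP figure purely for illustration; only the 1 % mitigation cost and the 5-20 times multiplier come from the text.

GLOBAL_GDP_TRILLION_USD = 75.0                    # assumed round figure, illustration only
mitigation = 0.01 * GLOBAL_GDP_TRILLION_USD       # Stern estimate: about 1 % of global GDP
inaction_low, inaction_high = 5 * mitigation, 20 * mitigation   # doing nothing: 5-20 times higher
print(f"mitigation ~{mitigation:.2f} trillion USD, inaction ~{inaction_low:.1f}-{inaction_high:.0f} trillion USD")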


Fig. 5 Conceptual framework for developing a climate change adaptation strategy. OUV, Outstanding Universal Value (each World Heritage (WH) site has one or more such OUVs; according to UNESCO, WH represents society's highest conservation designation) (Source: Perry 2015)

Climate Change Adaptation Versus Climate Change Mitigation
Individuals (Grothmann and Patt 2005), municipalities (Laukkonen et al. 2009; van Aalst et al. 2008), businesses (Hoffmann et al. 2009), and nations (Næss et al. 2005; Stringer et al. 2009) have started to adapt to ongoing and expected climate change. Climate change adaptation and climate change mitigation face similar barriers (Hamin and Gurran 2009). To best deal with the situation, there needs to be a balanced approach between climate change mitigation and climate change adaptation (Becken 2005; Laukkonen et al. 2009; Hamin and Gurran 2009). This will prove to be one of mankind's largest modern challenges. Figure 5 shows a conceptual framework for developing a climate change adaptation strategy. Details are presented in this Handbook.

Handbook of Climate Change Mitigation and Adaptation
Motivation
The struggle in mitigating climate change is not only to create a sustainable environment but also to build a sustainable economy through renewable energy resources. "Sustainability" has turned into a household phrase as people become increasingly aware of the severity and scope of future climate change. A survey of the current literature on climate change suggests that there is an urgent need for a comprehensive handbook introducing the mitigation of climate change to a broad audience. The burning of fossil fuels such as coal, oil, and gas and the clearing of forests have been identified as the major sources of greenhouse gas emissions. Reducing the 24 billion metric tons of carbon dioxide emissions per year generated from stationary and mobile sources is an enormous task that involves both technological challenges and monumental financial and societal costs, with benefits that will only surface decades later.


The Stern Report (2007) provided a detailed analysis of the economic impacts of climate change and the ethical grounds of policy responses for mitigation and adaptation. The decline in the supply of high-quality crude oil has further increased the urgency to identify alternative energy resources and to develop energy conversion technologies that are both environmentally sound and economically viable. Various routes for converting renewable energy have emerged, alongside energy conservation and energy-efficient technologies. The energy industry currently lacks an infrastructure that could completely replace fossil fuels in the near future. At the same time, energy consumption in developing countries like China and India is rapidly increasing as a result of their economic growth. It is generally recognized that the burning of fossil fuels will continue until an infrastructure for sustainable energy is established. Therefore, there is now a high demand for reducing greenhouse gas emissions from fossil fuel–based power plants. Adaptation is a pragmatic approach to dealing with the facts of climate change so that the life, property, and income of individuals can be protected. The pursuit of sustainable energy resources has become a complex issue across the globe. The Handbook of Climate Change Mitigation and Adaptation is a valuable resource for a wide audience who would like to quickly and comprehensively learn about the issues surrounding climate change mitigation.

Why This Book Is Needed
There is a mounting consensus that human behaviors are changing the global climate and that the consequences, if left unchecked, could be catastrophic. The fourth climate change report by the Intergovernmental Panel on Climate Change (IPCC 2007) provided the most detailed assessment ever of climate change's causes, impacts, and solutions. A consortium of experts from 13 US government science agencies, universities, and research institutions released the report Global Climate Change Impacts in the United States (2009), which verifies that global warming is primarily human induced and that climate changes are underway in the USA and are expected to worsen. From its causes and impacts to its solutions, the issues surrounding climate change involve multidisciplinary sciences and technologies. The complexity and scope of these issues warrant a single comprehensive survey of a broad array of topics, something which the Handbook of Climate Change Mitigation and Adaptation achieves by providing readers with the necessary background information on the mitigation of climate change. The handbook introduces the fundamental issues of climate change mitigation in independent chapters rather than directly giving the detailed advanced analysis presented by the IPCC and others. The handbook will therefore be an indispensable companion reference to the complex analysis presented in the IPCC reports. For instance, while the IPCC reports give large amounts of data concerning the impacts of different greenhouse gases, they contain little discussion of the science behind the analysis. Similarly, while the IPCC reports present large amounts of information concerning the impacts of different alternative energies, they rarely discuss the science behind the technology. There is currently no single comprehensive source that enables readers to learn the science and technology associated with climate change mitigation.

Audience of the Handbook

Since the handbook covers a wide range of topics, it will find broad use as a major reference book in environmental, industrial, and analytical chemistry. Scientists, engineers, and technical managers in the energy and environmental fields are expected to be the primary users. They are likely to have an undergraduate degree in science or engineering with an interest in understanding the science and technology used in addressing climate change and its mitigation.


Scope
This multivolume handbook offers a comprehensive collection of information on climate change and how to minimize its impact. The chapters in this handbook were written by internationally renowned experts from industry and academia. The purpose of this book is to provide the reader with an authoritative reference work toward the goal of understanding climate change, its effects, and the available mitigation and adaptation strategies with which it may be tackled:
• Scientific evidence of climate change and related societal issues
• The impact of climate change
• Energy conservation
• Alternative energy sources
• Advanced combustion techniques
• Advanced technologies
• Education and outreach

This handbook presents information on how climate change is intimately involved with two critical issues: available energy resources and environmental policy. Readers will learn that these issues cannot be viewed in isolation but are mediated by global economics, politics, and media attention. The focus of these presentations is current scientific and technological development, although societal impacts are not neglected.

References
Antilla L (2005) Climate of scepticism: US newspaper coverage of the science of climate change. Global Environ Change Part A 15(4):338–352
Becken S (2005) Harmonising climate change adaptation and mitigation: the case of tourist resorts in Fiji. Global Environ Change Part A 15(4):381–393
Ehrlich PR (2000) Human natures: genes, cultures and the human prospect. Island Press, Washington, DC. ISBN 978-1559637794
Folland CK, Karl TR, Nicholls N, Nyenzi BS, Parker DE, Vinnikov KYA (1992) Observed climate variability and change. In: Houghton JT, Callander BA, Varney SDK (eds) Climate change, the supplementary report to the IPCC scientific assessment. Cambridge University Press, Cambridge, pp 135–170
Greitemeyer T (2013) Beware of climate change skeptic films. J Environ Psychol 35:105–109
Grothmann T, Patt A (2005) Adaptive capacity and human cognition: the process of individual adaptation to climate change. Global Environ Change Part A 15(3):199–213
Haberle RM, Forget F, Head J, Kahre MA, Kreslavsky M, Owen SJ (2012) Summary of the Mars recent climate change workshop, NASA/Ames Research Center. Icarus 222(1):415–418
Hamin EM, Gurran N (2009) Urban form and climate change: balancing adaptation and mitigation in the U.S. and Australia. Habitat Int 33(3):238–245
Hoffmann VH, Sprengel DC, Ziegler A, Kolb M, Abegg B (2009) Determinants of corporate adaptation to climate change in winter tourism: an econometric analysis. Global Environ Change 19(2):256–264
Intergovernmental Panel on Climate Change (IPCC) (2007) IPCC fourth assessment report: climate change 2007 (AR4), vol 3. Cambridge University Press, Cambridge
International Energy Association IEA (2009) World energy outlook 2009. International Energy Association (IEA), Paris. ISBN 9789264061309


IPCC (2010a) Special report on emission scenarios (SRES). http://www.grida.no/climate/ipcc/emission/
IPCC (2010b) Intergovernmental Panel on Climate Change. http://www.ipcc.ch/
IPCC (2013) Climate change 2013: the physical science basis, summary for policymakers. http://www.ipcc.ch/report/ar5/wg1/
IPCC: IPCC third assessment report, chap 2.3.3: Was there a "Little Ice Age" and a "Medieval Warm Period"? http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/070.htm
Karl TR, Trenberth KE (2003) Modern global climate change. Science 302(5651):1719–1723
Laukkonen J, Blanco PK, Lenhart J, Keiner M, Cavric B, Kinuthia-Njenga C (2009) Combining climate change adaptation and mitigation measures at the local level. Habitat Int 33(3):287–292
Le Houérou HN (1996) Climate change, drought and desertification. J Arid Environ 34:133–185
Linden HR (1993) A dissenting view on global climate change. Electron J 6(6):62–69
Linden HR (2005) How to justify a pragmatic position on anthropogenic climate change. Ind Eng Chem Res 44(5):1209–1219
McMichael AJ, Powles JW, Butler CD, Uauy R (2007) Food, livestock production, energy, climate change, and health. Lancet 370:1253–1263
Miller FP, Vandome AF, McBrewster J (eds) (2009) History of climate change science. Alphascript, Mauritius. ISBN 978-6130229597
Næss LO, Bang G, Eriksen S, Vevatne J (2005) Institutional adaptation to climate change: flood responses at the municipal level in Norway. Global Environ Change Part A 15(2):125–138
Perry J (2015) Climate change adaptation in the world's best places: a wicked problem in need of immediate attention. Landscape and Urban Planning 133:1–11
Quadrelli R, Peterson S (2007) The energy-climate challenge: recent trends in CO2 emissions from fuel combustion. Energy Policy 35(11):5938–5952
Sonnabend G, Sornig M, Schieder R, Kostiuk T, Delgado J (2008) Temperatures in Venus upper atmosphere from mid-infrared heterodyne spectroscopy of CO2 around 10 μm wavelength. Planet Space Sci 56(10):1407–1413
Stern N (2007) The economics of climate change: the Stern review. Cambridge University Press, Cambridge. ISBN 978-0521700801
Stringer LC, Dyer JC, Reed MS, Dougill AJ, Twyman C, Mkwambisi D (2009) Adaptations to climate change, drought and desertification: local insights to enhance policy in southern Africa. Environ Sci Policy 12(7):748–765
United Nations (UN) (1992) United Nations framework convention on climate change. United Nations, Geneva
van Aalst MK, Cannon T, Burton I (2008) Community level adaptation to climate change: the potential role of participatory community risk assessment. Global Environ Change 18(1):165–179
Winter DF (1967) Transient radiative heat exchange at the surface of the moon. Icarus 6(1–3):229–235
Worldbank (2009) Attitudes toward climate change: findings from a multi-country poll. http://siteresources.worldbank.org/INTWDR2010/Resources/Background-report.pdf
Zasova LV, Ignatiev N, Khatuntsev I, Linkin V (2007) Structure of the Venus atmosphere. Planet Space Sci 55(12):1712–1728


Life Cycle Assessment of Greenhouse Gas Emissions
L. Reijnders

Contents
Introduction
What Is Life Cycle Assessment and How Does It Work?
Goal and Scope Definition
Inventory Analysis
Impact Assessment
Interpretation
Life Cycle Assessments Focusing on Greenhouse Gas Emissions or a Part Thereof
Simplified Life Cycle Assessments
Published Life Cycle Assessments
Main Findings from Life Cycle Studies of Greenhouse Gas Emissions
Energy Conversion Efficiency
Products Consuming Energy
Transport
Conventional and Unconventional Fossil Fuels
Green Energy Supply
Biofuels
Food
Chemicals
Polymeric Materials
Crop-Based Lubricants and Solvents
Recycling
Nanotechnology
Reduction of Life Cycle Greenhouse Gas Emissions
Future Directions
Change in Carbon Stocks of Recent Biogenic Origin
Indirect Effects
Uncertainty
Comprehensiveness of Dealing with Climate Warming
Consequential Life Cycle Assessment
Concluding Remarks
References

L. Reijnders (*)
Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, Amsterdam, The Netherlands
e-mail: [email protected]
© Springer Science+Business Media New York 2015
W.-Y. Chen et al. (eds.), Handbook of Climate Change Mitigation and Adaptation, DOI 10.1007/978-1-4614-6431-0_2-2



Abstract

Life cycle assessments of greenhouse gas emissions have been developed for analyzing products "from cradle to grave": from resource extraction to waste disposal. Life cycle assessment methodology has also been applied to economies, trade between countries, aspects of production, and waste management, including CO2 capture and sequestration. Life cycle assessments of greenhouse gas emissions are often part of wider environmental assessments, which also cover other environmental impacts. Such wider-ranging assessments allow for considering "trade-offs" between (reduction of) greenhouse gas emissions and other environmental impacts, and co-benefits of reduced greenhouse gas emissions. Databases exist which contain estimates of current greenhouse gas emissions linked to fossil fuel use and to many current agricultural and industrial activities. However, these databases are subject to substantial uncertainties in their emission estimates. Assessments of greenhouse gas emissions linked to new processes and products are subject to even greater data-linked uncertainty. Variability in the outcomes of life cycle assessments of greenhouse gas emissions may furthermore originate in different choices regarding functional units, system boundaries, time horizons, and the allocation of greenhouse gas emissions to outputs in multi-output processes. Life cycle assessments may be useful in the identification of life cycle stages that are major contributors to greenhouse gas emissions and of major reduction options, in the verification of alleged climate benefits, and in establishing major differences between competing products. They may also be helpful in the analysis and development of options, policies, and innovations aimed at mitigation of climate change. The main findings from available life cycle assessments of greenhouse gas emissions are summarized, offering guidance in mitigating climate change. Future directions in developing life cycle assessment and its application are indicated. These include better handling of indirect effects, of uncertainty, and of changes in carbon stocks of recent biogenic origin, as well as improved comprehensiveness in dealing with climate warming.

Introduction
This handbook is about climate change mitigation. In decision-making about climate change mitigation, questions about the proper choices regularly emerge. Is going for electric cars a good thing, when power production is largely coal based? Do the extra inputs in car production invalidate the energy efficiency gains of hybrid cars?


Should a company focus its greenhouse gas management on its own operations or on those of raw material suppliers? Is material recycling better or worse for climate change mitigation than incineration in the case of milk cartons? And what about biofuels: should their use be encouraged or not? For all these questions, assessment of the life cycle emission of greenhouse gases, or more generally the environmental burden, is important for giving proper answers.
Life cycle assessments may lead to counterintuitive results. This can be illustrated by the case of liquid biofuels (Hertwich 2009). It has been argued that biofuels are "climate neutral" (e.g., Sann et al. 2006; De Gorter and Just 2010). The CO2 which emerges from burning biofuels has recently been fixed by photosynthesis, so, it has been argued, there should be no net effect of burning biofuels on the atmospheric concentration of CO2. However, if one looks at the "seed-to-wheel" life cycle of biofuels, a different picture may emerge. Consider, e.g., corn ethanol used as a transport biofuel in the USA. In its actual production, there are substantial inputs of fossil fuels (Fargione et al. 2008; Searchinger et al. 2008). Corn cultivation also leads to emissions of the major greenhouse gas N2O (Crutzen et al. 2007). And corn cultivation is associated with changes in the carbon stocks of agroecosystems (Searchinger et al. 2008). Considering the life cycle emissions of greenhouse gases leads to the conclusion that bioethanol from US corn is far from "climate neutral" but is rather associated with larger greenhouse gas emissions than conventional gasoline (Searchinger et al. 2008; Reijnders and Huijbregts 2009). This clearly has implications for making good decisions about mitigating climate change linked to fuel choice (Hertwich 2009).
Against this background, this chapter will consider current life cycle assessment, with a focus on the life cycle emission of greenhouse gases. First, it will be discussed what life cycle assessment is and how it is done. It will appear that such assessment may give rise to substantial uncertainty. Notwithstanding such uncertainty, life cycle assessments can be helpful in making proper choices about climate change mitigation. To illustrate this, the main findings from available peer-reviewed life cycle assessments of greenhouse gas emissions will be summarized.

What Is Life Cycle Assessment and How Does It Work?
Life cycle assessment has been developed for analyzing current products from resource extraction to final waste disposal, or from cradle to grave. Apart from analyzing the status quo, life cycle assessments may also deal with changes in demand for, and supply of, products and with novel products. The latter type of assessment has been called consequential, as distinguished from the analysis of status quo products, which has been called attributional (Sanden and Kalström 2007; Frischknecht et al. 2009). The assessment of novel products has also occasionally been called prospective attributional (Hospido et al. 2010; Song and Lee 2010). Different data may be needed in attributional and consequential life cycle assessment. Whereas in attributional life cycle assessment one, e.g., uses electricity data reflecting current power production, in consequential life cycle assessment one needs data regarding changes in electricity supply.


For the short term, assessing a marginal change in capacity of current electricity supply may suffice to deal with changes in electricity supply. When the longer term is at stake, major changes in energy supply, including complex sets of energy supply technologies, should be assessed (Lund et al. 2010). When novel products go beyond existing components, materials, and processes, knowledge often partly or fully relates to the research and development stage or to a limited production stage. These stages reflect immature technologies. Comparing these with products of much more mature technologies may be unfair, as maturing technologies are optimized and tend to allow for better resource efficiency and a lower environmental impact (Wernet et al. 2010; Mohr et al. 2009). Also, novel products may be subject to currently uncommon environmental improvement options and may have to operate under conditions that diverge from those that are currently common (Sanden and Kalström 2007; Frischknecht et al. 2009). The latter conditions may, e.g., include constraints on resource availability which currently do not exist, new infrastructures, budget constraints, higher resource costs which are conducive to resource efficiency, and strict caps on greenhouse gas emissions. A solution to such divergence from "business as usual" may be found in assuming technological trajectories and/or constructing scenarios which include assumptions about the environmental performance of future mature technologies under particular conditions (Frischknecht et al. 2009; Mohr et al. 2009; Jorquera et al. 2010; Spatari et al. 2010). It should be realized that the assumptions involved lead to considerable uncertainty regarding the outcomes of consequential life cycle assessments, as these assumptions may be at variance with "real life" in the future.
Life cycle assessment is generally divided into four stages (Guinee 2002; Rebitzer et al. 2004):

– Goal and scope definition
– Inventory analysis
– Impact assessment
– Interpretation
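A minimal sketch of how these four stages could be chained in code may help fix the terminology; all names, flows, and factors below are illustrative inventions, not part of any LCA standard, tool, or database.

# Skeleton of the four LCA stages as plain functions (illustrative only).
def goal_and_scope():
    return {"functional_unit": "1 MWh of delivered electricity", "boundary": "cradle to grave"}

def inventory(scope):
    # In practice these flows would come from a database such as Ecoinvent.
    return {"CO2": 950.0, "CH4": 2.0}           # kg emitted per functional unit (hypothetical)

def impact_assessment(flows):
    gwp_100 = {"CO2": 1, "CH4": 34}             # illustrative 100-year GWPs
    return sum(mass * gwp_100[gas] for gas, mass in flows.items())

def interpretation(score, scope):
    return f"{score:,.0f} kg CO2eq per {scope['functional_unit']}"

scope = goal_and_scope()
print(interpretation(impact_assessment(inventory(scope)), scope))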

Goal and Scope Definition
In the goal and scope definition stage, the aim and the subject of the life cycle assessment are determined. This implies the establishment of "system boundaries" and usually the definition of a "functional unit." A functional unit is a quantitative description of the service performance of the product(s) under investigation. It may, for instance, be the production of a megawatt hour (MWh) of electricity. This allows for comparing different products having the same output: e.g., photovoltaic cells, a coal-fired power plant, a gas-fired power plant, and a wind turbine. It should be noted, though, that the functional unit may cover only a part of the service performance, because products may have special properties. For instance, in the case of power generation, the production of a MWh of electricity as a functional unit does not take account of the phenomenon that a coal-fired power plant is most suitable for base load and a gas-fired power plant for peak load.


In the goal and scope definition stage, a number of questions have to be answered. For instance, the life cycle of products usually includes a transport stage. As to transport, the question arises what to include in the assessment: production of the transport vehicle? Road building? Building storage facilities for products? Similarly, in the life cycle assessment of fishery products, questions arise such as: should one include the bycatch of fish which is currently discarded? The energy input in shipbuilding and ship maintenance? And/or the energy input in building harbor facilities?
In the goal and scope definition stage, one should also consider the matter of significant indirect effects of products. A well-known example thereof is the rebound effect in the case of more energy-efficient products with lowered costs of ownership. Such products may, for instance, be used more intensively and may lead to spending of the money saved by the energy-efficient product, which in turn may impact energy consumption and associated greenhouse gas emissions (Schipper and Grubb 2000; Thiesen et al. 2008; Greene 2011). Another case in point concerns biofuels from crops that currently serve as a source for food or feed. When carbohydrates or lipids from such crops are diverted to biofuel production, this diversion may give rise to additional food and/or feed production elsewhere, because demand for food and feed is highly inelastic (Searchinger et al. 2008). This, in turn, may have a substantial impact on estimated greenhouse gas emissions. Similarly, the use of waste fat for biodiesel production may have the indirect effect of reducing the amount of fat available for feed production, which in turn might lead to an increased use of virgin fat, which will impact land use and may thus change carbon stocks of recent biogenic origin. However, indirect effects of decisions about biofuels do not end with the consideration of indirect effects on land use. It may, for instance, be argued that not expanding biofuel production may increase dependency on mineral oil and that this may increase military activities to safeguard oil installations and shipping, and the associated emissions of greenhouse gases (De Gorter and Just 2010). Still another example of indirect effects regards wood products. These may have the indirect effect of substituting for non-wood products, and including such substitution has a significant effect on estimated greenhouse gas emissions (Sathre and O'Connor 2010). Decision-making about significant indirect effects is not straightforward. This has led some to the conclusion that including indirect effects is futile (e.g., De Gorter and Just 2010), whereas others have argued that including at least some indirect effects is conducive to good decision-making (e.g., Searchinger et al. 2008; Sathre and O'Connor 2010).
System boundaries refer to what is included in a life cycle assessment. In general, system boundaries are drawn between technical systems and the environment, between relevant and irrelevant processes, between significant and insignificant processes, and between technological systems. An example of the latter is, for instance, a boundary between the motorcar life cycle and the life cycle of the building in which the car is produced. The choice of system boundaries may have a substantial effect on the outcomes of life cycle assessments (also: Finnveden et al. 2009; Gandreault et al. 2010).
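The role of the functional unit introduced at the start of this section can be made concrete with a small sketch: once a common unit such as 1 MWh of delivered electricity is fixed, alternative products can be compared per unit of service. The emission factors below are placeholder values chosen only to show the mechanics, not data from this chapter or any database, and, as noted above, such a ranking still ignores properties the functional unit does not capture, such as base-load versus peak-load suitability.

# Life cycle greenhouse gas emissions per functional unit (1 MWh of electricity).
# Hypothetical placeholder factors; real values would come from an LCA database.
KG_CO2EQ_PER_MWH = {
    "coal-fired plant": 1000.0,
    "gas-fired plant": 500.0,
    "photovoltaics": 50.0,
    "wind turbine": 15.0,
}

for name, factor in sorted(KG_CO2EQ_PER_MWH.items(), key=lambda kv: kv[1]):
    print(f"{name:18s} {factor:7.1f} kg CO2eq per functional unit")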


Inventory Analysis
The inventory analysis gathers the necessary data for all processes involved in the product life cycle. This is a difficult matter when one is very specific about a product: for instance, the apples which I bought last Saturday in my local supermarket. However, databases have been developed, such as Ecoinvent (Frischknecht et al. 2005), the Chinese National Database (Gong et al. 2008), Spine (www.globalspine.com), JEMAI (Narita et al. 2004), and the European Reference Life Cycle Data System (ELCD 2008), which give estimates of resource extractions and emissions that are common in Europe, China, the USA, and Japan for specified processes (for instance, the production and use of phosphate fertilizer). Also, there are databases which extend to economic input–output analyses and give resource extraction and emission data at a higher level of aggregation than the process level (Tukker et al. 2006). A study by De Eicker et al. (2010), which also gives a fuller survey of available databases, suggests that among available databases the Ecoinvent database is preferable for relatively demanding LCA studies. If only greenhouse gas emissions are considered, the 2006 guidelines for national greenhouse gas inventories of the IPCC (Intergovernmental Panel on Climate Change; www.ipcc.ch/) were found to be useful (De Eicker et al. 2010).
Available databases do not always give the same emissions for the same functional units. For instance, according to a study by Fruergaard et al. (2009), data on the average emission of greenhouse gases linked to 1 kWh of electricity production in 25 EU countries varied between databases by up to 20 %. For similar estimates in the USA, an even greater between-database uncertainty (on average 40 %) was found (Weber et al. 2010). Though such uncertainties are substantial, they should not detract from using databases such as Ecoinvent, Spine, and JEMAI, if only because between-process differences often exceed the uncertainty. This may be illustrated by the geographical variability in greenhouse gas emissions linked to electricity production. For instance, country-specific average emissions of greenhouse gases per kWh of electricity in such databases vary by a factor of 160 (Fruergaard et al. 2009). For marginal emissions of greenhouse gases per kWh of electricity (which are used to assess changes in supply or demand, as needed for consequential life cycle assessment), variations were even larger: up to 400–750 times (Fruergaard et al. 2009).
In the inventory stage of life cycle assessments of greenhouse gas emissions, the focus is evidently on the latter emissions. In wider-ranging life cycle assessments, the inventory may comprise all extractions of resources and emissions of substances causally linked to the functional unit for each product under consideration, within the system boundaries that were established in the stage of goal and scope definition. Such wider-ranging life cycle assessments have a benefit over life cycle assessments which only focus on greenhouse gas emissions. First, they give a better picture of the overall environmental impact, for which life cycle greenhouse gas emissions may well be a poor indicator (Huijbregts et al. 2006, 2010; Laurent et al. 2010). Also, such wider-ranging LCAs allow for considering "trade-offs" between environmental impacts and the occurrence of co-benefits linked to reducing greenhouse gas emissions (Nishioka et al. 2006; Haines et al. 2009; Markandya et al. 2009; Chester and Horvath 2010; Walmsley and Godbold 2010).


For instance, Walmsley and Godbold (2010) concluded that stump harvesting for bioenergy may not only impact greenhouse gas emissions but may have the co-benefit of reducing fungal infections and may have negative co-impacts linked to erosion, nutrient depletion and loss, increased soil compaction, increased herbicide use, and loss of valuable habitat for a variety of (non-pest) species. Many current transport biofuels have larger life cycle greenhouse gas emissions than the fossil fuels which they replace but have the benefit that dependence on mineral oil is reduced (Reijnders and Huijbregts 2009). A large part of the impacts which go beyond climate change can be covered by standard wider-ranging LCAs. Aspects of environmental impact which are, apart from the emission of greenhouse gases, often covered by such wider-ranging life cycle assessments are summarized in Box 1. In evaluating buildings, the indoor environment may also be a matter to consider (Demou et al. 2009; Hellweg et al. 2009). New operationalizations of some of the aspects of environmental impact mentioned in Box 1 and additions to the list of Box 1 are under development. The latter include ecosystem services (Koellner and de Baan 2013) and the impacts of freshwater use (Boulay et al. 2011; Verones et al. 2013). Adding to the aspects often covered in wide-ranging LCAs, a proposal has been published for including in life cycle assessment the climate-relevant change in albedo, characterized in terms of CO2 equivalents (Munoz et al. 2010). An estimate of the contribution of black carbon emissions to climate change has also become available (IPCC Working Group I 2013).
In life cycle assessments, the problem arises that many production systems have more than one output. For instance, rapeseed processing not only leads to the output oil, which may be used for biodiesel production, but also to rapeseed cake, which may be used as feed. Similarly, mineral oil refinery processes may not only generate gasoline but also kerosene, heavy fuel oil, and bitumen, and biorefineries produce a variety of product outputs too (Brehmer et al. 2009). In the case of multi-output processes, extractions of resources and emissions have to be allocated to the different outputs. There are several ways to do so. Major ways to allocate are based on physical units (e.g., energy content or weight of outputs) or on monetary value (price). There may also be allocation on the basis of substitution. In the latter case, the environmental burden of a coproduct is established on the basis of another, similar product. Different kinds of allocation may lead to different outcomes of life cycle assessment (Reijnders and Huijbregts 2009; Finnveden et al. 2009; Fruergaard et al. 2009; Sayagh et al. 2009).
The usual outcome of the inventory analysis of a wide-ranging life cycle assessment is a list of all extractions of resources and emissions of substances causally linked to the functional unit for the product considered, commonly disregarding, apart from the case of nuisance, the place and time of the extractions and emissions.
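The allocation choices just described can be illustrated for a hypothetical two-output process, rapeseed crushing into oil and cake. All quantities, energy contents, and prices below are invented for illustration; only the allocation logic (splitting the process burden in proportion to mass, energy content, or monetary value) follows the text.

# Allocating the burden of one multi-output process to its co-products (hypothetical data).
PROCESS_EMISSIONS_KG_CO2EQ = 300.0   # per tonne of rapeseed crushed (invented figure)
outputs = {
    "oil":  {"mass": 400.0, "energy": 14_800.0, "value": 400.0},   # kg, MJ, EUR (all invented)
    "cake": {"mass": 600.0, "energy": 10_800.0, "value": 150.0},
}

def allocate(key):
    """Split the process emissions over the outputs in proportion to the chosen key."""
    total = sum(o[key] for o in outputs.values())
    return {name: PROCESS_EMISSIONS_KG_CO2EQ * o[key] / total for name, o in outputs.items()}

for key in ("mass", "energy", "value"):
    print(f"allocation by {key:6s}: {allocate(key)['oil']:6.1f} kg CO2eq assigned to the oil")

The spread between the three results shows why the choice of allocation method can shift the outcome of a life cycle assessment.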


Box 1: Aspects of Environmental Impact Which Are Often Considered in Wide Ranging Life Cycle Assessments

Resource depletion (abiotic, biotic)
Effect of land use on ecosystems and landscape
Desiccation
Impact on the ozone layer
Acidification
Photooxidant formation
Eutrophication or nitrification
Human toxicity
Ecotoxicity
Nuisance (odor, noise)
Radiation
Casualties
Waste heat
Water footprint

Impact Assessment
The next stage in life cycle assessment is impact assessment. This first involves a step called characterization. In this step, extractions of resources and emissions are aggregated for a number of impact categories. When only greenhouse gas emissions are considered, the aggregation aims at establishing the emission of greenhouse gases other than CO2 in terms of CO2 equivalents (CO2eq), which means that the emissions of greenhouse gases like N2O, CH4, and CF4 are recalculated in terms of CO2 emissions. To do so, one needs to choose a time horizon (e.g., 25 years, 100 years, 10^4 years), because the greenhouse effect of emitted greenhouse gases may differ depending on the time horizon chosen (see Table 1). The time-dependent differences in Table 1 reflect differences in the atmospheric fate of greenhouse gases. For instance, the removal of CH4 from the atmosphere is much faster than the removal of CO2 (Myrhe et al. 2013). In practice, a time horizon of 100 years is often chosen, and the global warming potentials (GWP) from the corresponding column of Table 1 are commonly used in life cycle assessments.
Table 1 considers only direct impacts or effects of the greenhouse gases. There are, however, also indirect impacts. For instance, the emission of CH4 may affect the presence of ozone, which is also a greenhouse gas. There have been proposals for including such indirect effects in global warming potentials. Using a 100-year time horizon and assuming the GWP of CO2 to be 1, Brakkee et al. (2008) proposed, for instance, a GWP for CH4 of 28 and, for non-methane volatile organic compounds, a GWP of 8. The latter have a direct GWP of 0. A number of estimated examples of global warming potentials calculated with and without indirect effects are given in Table 2.
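The sensitivity to the chosen time horizon can be shown with the values of Table 1. The sketch below aggregates the same hypothetical inventory with the 20-year and the 100-year GWPs; only the GWP factors come from the table, the inventory quantities are invented.

# GWPs from Table 1 (Myrhe et al. 2013): direct effects, including climate-carbon interactions.
GWP = {
    20:  {"CO2": 1, "CH4": 86, "N2O": 268},
    100: {"CO2": 1, "CH4": 34, "N2O": 298},
}

def total_co2eq(emissions_tonnes, horizon_years):
    factors = GWP[horizon_years]
    return sum(mass * factors[gas] for gas, mass in emissions_tonnes.items())

inventory = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}   # hypothetical inventory, tonnes per gas
for horizon in (20, 100):
    print(f"{horizon:3d}-year horizon: {total_co2eq(inventory, horizon):,.0f} t CO2eq")
# The CH4 contribution weighs much more heavily on the 20-year horizon (86 vs 34 per tonne).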


Table 1 Estimated global warming potentials (GWP) in CO2eq of CH4 and N2O for time horizons of 20 and 100 years as proposed by the Intergovernmental Panel on Climate Change (IPCC) (Myrhe et al. 2013). Apart from climate–carbon interactions, only direct effects are considered

Gas    GWP, 20-year horizon    GWP, 100-year horizon
CO2    1                       1
CH4    86                      34
N2O    268                     298
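As a minimal sketch of this characterization step, the snippet below aggregates a small, invented greenhouse gas inventory into CO2eq with the 100-year GWPs of Table 1 and then repeats the calculation with the 20-year values; the inventory amounts are hypothetical.

```python
# Minimal characterization sketch: aggregate a greenhouse gas inventory
# into CO2 equivalents with the GWPs of Table 1.
# The inventory quantities per functional unit are invented.

GWP_100 = {"CO2": 1.0, "CH4": 34.0, "N2O": 298.0}   # kg CO2eq per kg gas, 100-year horizon
GWP_20  = {"CO2": 1.0, "CH4": 86.0, "N2O": 268.0}   # kg CO2eq per kg gas, 20-year horizon

inventory_kg = {"CO2": 1200.0, "CH4": 3.5, "N2O": 0.4}  # hypothetical inventory

co2eq_100 = sum(amount * GWP_100[gas] for gas, amount in inventory_kg.items())
co2eq_20  = sum(amount * GWP_20[gas]  for gas, amount in inventory_kg.items())

print(f"100-year horizon: {co2eq_100:.0f} kg CO2eq per functional unit")
print(f"20-year horizon : {co2eq_20:.0f} kg CO2eq per functional unit")
```

Running the sketch shows how strongly the chosen time horizon affects the CO2eq total whenever CH4 is a relevant part of the inventory.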

Table 2 Estimated global warming potentials (GWP) with a time horizon of 100 years relative to the GWP of CO2 for a number of gases as calculated by Brakkee et al. (2008)

Gas                                               GWP, direct effect only    GWP, including indirect effects
CH4                                               18                         28
CO                                                0                          3
Non-methane volatile organic compounds (NMVOC)    0                          8
Chlorofluorocarbon (CFC) 11                       4,800                      3,300
Chlorofluorocarbon (CFC) 12                       11,000                     6,100
Chlorofluorocarbon (CFC) 113                      6,200                      4,700
CF4                                               6,100                      6,100
CO2                                               1                          1

Table 3 Global warming potentials in CO2eq for a number of gases. Column A: GWP assuming 70 % removal from the atmosphere (direct effect only) (Sekiya and Okamoto 2010). Column B: GWP as in Table 1 with a time horizon of 100 years as calculated by IPCC (Myrhe et al. 2013)

Gas                            A             B
CH4                            10.6          34
Chlorofluorocarbon (CFC) 11    2,249         5,350
CF4                            1,560,558     7,350
CO2                            1             1

One may note that Brakkee et al. (2008) give an estimate for the GWP of CH4 (direct effect only), which is different from the value in Table 1. Still another possibility is to calculate GWPs on the basis of a similar percentage of greenhouse gas remaining in, or lost from, the atmosphere. This is exemplified by Table 3, with values as calculated by Sekiya and Okamoto (2010). In the case of life cycle assessment of greenhouse gas emissions, calculating the emission in terms of CO2eq is where the impact assessment stage often ends, though there is also the option to quantify the impact in terms of damage to public


health (e.g., Haines et al. 2009), human health, and ecosystems (De Schryver et al. 2009) and in terms of negative effects on the economy (e.g., Stern 2006). Such damage-based characterizations facilitate weighing of trade-offs and co-benefits, when a variety of environmental impacts (cf. Box 1) are included in life cycle assessment. Having CO2eq emissions as an outcome of life cycle assessment is often sufficient to guide the selection of product life cycle options, policies, and innovations aimed at mitigation of climate change, because the emission of greenhouse gases is in a first approximation directly causally linked with environmental impact (climate change). Still, it should be noted that the temporal pattern of greenhouse gas emissions may affect the rate of climate change, which in turn is, e.g., a major determinant of impact on ecosystems. When the temporal pattern of the emissions is important, as, for instance, in the case of land use change or capital investments in production systems, it is possible to adapt life cycle assessment by including the estimated temporal pattern of greenhouse gas emissions linked to the object of life cycle assessment (cf. Reijnders and Huijbregts 2003; Kendall et al. 2009). Also, one may note that the effect of activities on climate may go beyond the emission of greenhouse gases. For instance, agricultural activities may change albedo, evaporation, and wind speed, which may in turn affect climate (Reijnders and Huijbregts 2009). Also, the greenhouse effect of air traffic may differ from what is expected solely on the basis of CO2, N2O, and CH4 emissions, because air traffic triggers formation of contrails and cirrus clouds (Lee et al. 2010a). A direct causal link between emission and impact for greenhouse gas emissions may be at variance with other environmental impact categories. For instance, lead emissions which do not lead to exceeding a no-effect level for exposure of organisms will have no direct environmental impact. Also, specificity as to time and place can be very important for other impacts than climate change caused by greenhouse gases, such as the impacts of the emissions of hazardous and acidifying substances (Schöpp et al. 1998; Hellweg et al. 2005; Potting and Hauschild 2005; Basset-Mens et al. 2006). It may be noted, however, that in such cases time and place specificity may be introduced by adaptation of life cycle assessment or combining life cycle assessment with other tools (e.g., Hellweg et al. 2005; Huijbregts et al. 2000; Rehr et al. 2010).

Interpretation The interpretation stage connects the outcome of the impact assessment to the real world. Much of the practical usefulness of life cycle assessments of greenhouse gas emissions in this respect depends upon the uncertainty of outcomes, which has a variety of sources (e.g., Finnveden et al. 2009; Huijbregts et al. 2001, 2003; Geisler et al. 2005; De Koning et al. 2010; Williams et al. 2009). These can be categorized as uncertainties due to choices, uncertainties due to modeling, and parameter


uncertainty (Huijbregts et al. 2001, 2003). Parameter uncertainty and uncertainty due to choice (e.g., regarding time horizon, type of allocation, system boundaries, and functional unit) would seem to be the most important types of uncertainty in the case of estimating life cycle greenhouse gas emissions. Uncertainty in the outcomes of life cycle assessments of greenhouse gas emissions partly depends on the reliability of input data (categorized as parameter uncertainty). As pointed out above, databases regarding fossil fuel use in industrialized countries such as the USA, China, and Japan and EU countries allow for substantial uncertainties in this respect (Sann et al. 2006; Fruergaard et al. 2009). Similar data regarding other countries tend to be still more uncertain. Greenhouse gas emissions linked to land use, N2O emissions, and animal husbandry are also characterized by a relatively large uncertainty (Reijnders and Huijbregts 2009; Röös et al. 2010). Additional variability in outcomes of life cycle assessments of greenhouse gas emissions may originate in different choices regarding system boundaries. This has, for instance, been shown by Christensen et al. (2009) and Gandreault et al. (2010), who analyzed life cycle greenhouse gas emissions of forestry products. They found that different assumptions about the boundary to the forestry industry and interactions between the forestry industry on one hand and on the other hand the energy industry and the recycled paper market might lead to substantial differences in outcomes of life cycle assessments. Choices regarding time horizons and the allocation of greenhouse gas emissions to outputs in multi-output processes may also have major consequences for such outcomes (Reijnders and Huijbregts 2009). Sensitivity analysis may be part of the interpretation stage and, for instance, consider the dependence on different assumptions regarding allocation and time horizon. Similarly, uncertainty analysis may be part of the interpretation stage. Several approaches to uncertainty analysis have been proposed, using Monte Carlo techniques (Huijbregts et al. 2003; Hertwich et al. 2000), matrix perturbation (Heijungs and Suh 2002), or Taylor series expansion (Hong et al. 2010). In practice, uncertainty analysis has been applied in a limited way. Also, in the interpretation stage, conclusions can be drawn. For instance, stages or elements of the product life cycle can be identified, which are linked to relatively high greenhouse gas emissions. These can be prioritized for emission reduction options and policies. Also, it may be established that, given a functional unit and specified assumptions, one product has lower greenhouse gas emissions (in CO2eq) than another. Examples of conclusions which can be drawn from life cycle assessments are given in section “Main Findings from Life Cycle Studies of Greenhouse Gas Emissions.” Though life cycle assessment has been developed for products, in practice the methodology has been applied more widely (cf. “Published Life Cycle Assessments”). To the extent that life cycle assessment methodology, which does not focus on products, essentially assesses parts of product life cycles (e.g., the nickel industry, waste incineration, and CO2 capture and sequestration), the usefulness of assessment may be similar to the assessment of products: one may find, prioritize, and validate emission reduction options.
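As a minimal sketch of the Monte Carlo approach mentioned above, the snippet below propagates assumed parameter uncertainty of three life cycle stages to the total; the stage values and ranges are invented, and a real study would sample from distributions fitted to inventory data.

```python
# Toy Monte Carlo propagation of parameter uncertainty for a life cycle
# greenhouse gas total. Stage contributions and ranges are hypothetical.
import random
import statistics

random.seed(1)

stages = {                      # (best estimate, +/- half-width) in kg CO2eq per functional unit
    "cultivation": (55.0, 20.0),
    "processing":  (30.0, 5.0),
    "transport":   (15.0, 5.0),
}

def sample_total():
    """Draw one total, treating each stage as uniformly distributed in its range."""
    return sum(random.uniform(mean - hw, mean + hw) for mean, hw in stages.values())

samples = sorted(sample_total() for _ in range(10_000))
low, median, high = samples[250], statistics.median(samples), samples[9750]  # approx. 2.5th / 97.5th percentiles
print(f"total: median {median:.1f} kg CO2eq, approx. 95 % interval [{low:.1f}, {high:.1f}]")
```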


Some of the applications of life cycle assessments, which go beyond products, give rise to additional problems. For instance, applying life cycle assessments to state economies and trade may give rise to double counting of emissions (Lenzen 2008). On the other hand, e.g., expansion of life cycle assessments to trade between states may give useful insights about the actual environmental impacts of imports and exports. This is a useful addition to climate regimes such as the Kyoto protocol, which focus on greenhouse gas emissions within state borders. Also, economy-wide LCAs may help in prioritizing product categories or economic sectors for policy development (Jansen and Thollier 2006).

Life Cycle Assessments Focusing on Greenhouse Gas Emissions or a Part Thereof
The emergence of climate change as a major environmental concern has led to a rapid increase in life cycle assessments focusing on the emission of greenhouse gases. However, it should be pointed out that there are also life cycle assessments which cover only a part of the greenhouse gases. In this context, one may note the growing popularity of “carbon footprinting” (e.g., De Koning et al. 2010; Barber 2009; Johnson 2008; Weber and Matthews 2008; Schmidt 2009). There is no generally agreed-upon definition of carbon footprinting. In practice, the focus of carbon footprinting is often on the emission of carbonaceous greenhouse gases, if the footprinting is not being “slimlined” to covering CO2 only (e.g., Schmidt 2009). Also, there is an increasing interest in life cycle assessments focusing on the cumulative input of fossil fuels, which in turn is closely related to the life cycle emission of the major greenhouse gas CO2 (Laurent et al. 2010; Nishioka et al. 2006). The focus on carbonaceous greenhouse gases may lead to outcomes which substantially deviate from overall greenhouse gas emissions. As several authors (Crutzen et al. 2007; Reijnders and Huijbregts 2009; Laurent et al. 2010; Nishioka et al. 2006) have pointed out, cumulative energy demand may be substantially at variance with overall environmental performance and life cycle emissions of greenhouse gases, in the case of agricultural commodities and in other cases in which life cycles impact land use. The same will hold in the case of a number of compounds, such as adipic acid, caprolactam, and nitric acid, when syntheses are used which generate N2O in a poorly controlled way (Fehnann 2000; Perez-Ramirez et al. 2003). Also, there can be a major divergence of “carbon footprinting” from overall life cycle greenhouse gas emissions when there are substantial emissions of halogenated greenhouse gases. The latter applies, e.g., to halogenated refrigerant use (Ciantar and Hadfield 2000), the use of halogenated blowing agents for the production of insulation (Johnson 2004), to primary aluminum production, which is associated with the emission of potent fluorinated greenhouse gases such as CF4 (Fehnann 2000; Weston 1996), and to circuit breakers using SF6 and magnesium foundries (Fehnann 2000; Harrison et al. 2010). In the following, only assessments will be used which give an estimate of all greenhouse gas emissions, recalculated as CO2eq emissions.


Simplified Life Cycle Assessments
Full life cycle assessments require extensive data acquisition, which tends to be laborious and time-consuming, and this may well be beyond what practice in industry and policy requires (Bala et al. 2010). This has led to the emergence of simplified tools for the life cycle assessment of greenhouse gas emissions, such as screening LCAs. These tend to focus on major causes of life cycle greenhouse gas emissions (“hotspots”) and are often useful in identifying and prioritizing emission reduction options (Andersson et al. 1998; Rebitzer and Buxmann 2005).
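A screening-style calculation can be sketched in a few lines: rank the stage contributions and flag the stages that jointly dominate the total. The stage names and values below are invented for illustration.

```python
# Sketch of hotspot screening: rank hypothetical life cycle stages by their
# contribution to the CO2eq total and flag the stages that together account
# for about 80 % of it. All numbers are illustrative.

contributions = {                      # kg CO2eq per functional unit (invented)
    "raw material extraction": 120.0,
    "manufacturing":           260.0,
    "transport":                40.0,
    "use stage":               530.0,
    "end of life":              50.0,
}

total = sum(contributions.values())
print(f"total: {total:.0f} kg CO2eq")

running = 0.0
for stage, value in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
    running += value
    flag = "HOTSPOT" if running <= 0.8 * total else ""
    print(f"{stage:<26} {value:6.1f} kg ({value / total:5.1%})  {flag}")
```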

Published Life Cycle Assessments
A wide variety of products has been the object of life cycle assessments of greenhouse gas emissions. Examples range from teddy bears to power generators, from pesticides to motorcars, from tomato ketchup to buildings, and from a cup of coffee to tablet e-newspapers. Products have not been the only objects of life cycle assessments of greenhouse gas emissions. Life cycle assessment has also been used for state economies, trade between countries, branches of industry, industrial symbiosis, aspects of production and product technologies, networks, soil and groundwater remediation, and waste management options, including CO2 capture and sequestration.

Main Findings from Life Cycle Studies of Greenhouse Gas Emissions
Though, as pointed out in section “Goal and Scope Definition,” there are substantial uncertainties in assessments of life cycle greenhouse gas emissions, some outcomes of such assessments are robust to such an extent that they provide a sufficiently firm basis for conclusions. The latter are summarized here, assuming a time horizon of 100 years, using the values for global warming potentials as given by IPCC (Myrhe et al. 2013) (see Table 1), and focusing on direct effects only, unless indicated otherwise. After this summary, options for life cycle greenhouse gas emission reduction which commonly emerge from life cycle assessments will be briefly discussed.

Energy Conversion Efficiency
Improvements in efficiency of the conversion of primary energy to energy services, including reduction of heat loss, often lead to lower life cycle greenhouse gas emissions for energy services (e.g., Erlandsson et al. 1997; Citherlet et al. 2000; Nakamura and Kondo 2006; Citherlet and Defaux 2007;


Boyd et al. 2009) when only direct effects are considered. There are some exceptions. Phase change materials, which may be used in buildings to improve energy conversion efficiency, have been shown to not significantly reduce the life cycle greenhouse gas emission of buildings in a Mediterranean climate (De Gracia et al. 2010). Electric heat pumps, though generally giving rise to lower life cycle greenhouse gas emissions for space heating, may increase life cycle greenhouse gas emissions when electricity generation is coal based (Saner et al. 2010). Also, the III/V solar cells, which contain, e.g., In (indium) and Ga (gallium) and have higher conversion efficiencies for solar energy into electricity than Si (silicium)-based photovoltaic cells, do not appear to have lower life cycle greenhouse gas emissions per kWh than multicrystalline Si solar cells (Mohr et al. 2009). Noteworthy is the potential for indirect effects linked to improvements of energy efficiency. As noted before, when improvements in energy conversion lead to lower costs of ownership, there may be a rebound effect on energy use, because money linked to such lower costs tends to be spent on increased use of the product or elsewhere, which in turn entails additional energy consumption and emission of greenhouse gases (Schipper and Grubb 2000; Thiesen et al. 2008; Greene 2011). Lower costs may also be conducive to economic growth (Thiesen et al. 2008). When only microeconomic effects of improved energy efficiency are considered, life cycle greenhouse gas emissions tend to be still lowered, though less so than when only the effect of energy efficiency by itself is considered (Schipper and Grubb 2000; Greene 2011). Including economy-wide rebound effects in life cycle assessments of improved energy conversion efficiency has as yet no firm empirical basis (Thiesen et al. 2008).
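The size of such a rebound effect can be illustrated with a toy calculation; the engineering saving, the 25 % rebound rate, and the emission factor below are assumptions, not empirical estimates.

```python
# Toy sketch of a (microeconomic) rebound effect: part of the engineering
# energy saving is taken back as additional use. All figures are assumed.

engineering_saving_kwh = 1000.0    # annual saving if behaviour did not change (assumed)
rebound_rate = 0.25                # share of the saving taken back (assumed)
emission_factor = 0.4              # kg CO2eq per kWh of electricity (assumed)

realized_saving_kwh = engineering_saving_kwh * (1.0 - rebound_rate)
print(f"engineering saving: {engineering_saving_kwh * emission_factor:.0f} kg CO2eq per year")
print(f"realized saving   : {realized_saving_kwh * emission_factor:.0f} kg CO2eq per year")
```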

Products Consuming Energy
Life cycle greenhouse gas emissions of products which consume energy are often dominated by emissions during the use stage of the life cycle, when shares of fossil fuels in the production and consumption stages are similar (Nakamura and Kondo 2006; Boyd et al. 2009, 2010; Finkbeiner et al. 2006; Kofoworola and Gheewala 2008; Yung et al. 2008; Cullen and Allwood 2009; Duan et al. 2009; Ortiz et al. 2010; Rossello-Batle et al. 2010). There are exceptions, however, such as, for instance, a personal computer for limited household use (Choi et al. 2006), mobile phones (Andrae and Andersen 2010), and very energy-efficient dwellings (Citherlet and Defaux 2007). The latter illustrates a more general point. To the extent that energy conversion efficiency in the use stage improves, energy embodied in the product (e.g., Kakudate et al. 2002; Blengini and di Carlo 2010) and in the case of transport also energy embodied in infrastructure (e.g., Frederici et al. 2009) often become a more important factor in life cycle greenhouse gas emissions. It may be noted, though, that there are exceptions as to the growing importance of energy embodied in the product, such as CMOS chips for personal computers and other electronics (Boyd et al. 2009, 2010).


Transport
At continental distances in the order of

0). In the maximization the Nash assumption is employed, i.e., the welfare-maximizing country supposes that its choices do not affect the behavior of the other countries, i.e., it takes X̃_i to be exogenous. From the first-order conditions for the welfare maximum, we get

MRS_i = (\partial U_i / \partial X) / (\partial U_i / \partial y_i) = c.    (4)

Consequently, it is optimal for the individual country to provide climate protection up to the level where the marginal rate of substitution (left-hand side of Eq. 4) between public good and private good becomes


equal to the unit price ratio (right-hand side of Eq. 4) between public and private good (i.e., c/1). To put it plainly, when deciding about allocating its income between private goods and climate protection, a country fares best when it invests in climate protection until the benefit from spending another dollar on the climate compared to the benefits from spending another dollar on private goods is equal to the relative costs of buying the two goods. While this provision level is optimal from an individual country’s point of view, it is not optimal from a global perspective. Global welfare could be raised by deviating from the provision levels associated with condition (4). In order to illustrate this, global welfare is maximized in a next step. It seems reasonable to assume that global welfare is a function of the individual countries’ welfare levels. The global welfare level attainable from the consumption of private goods and climate protection is, however, restricted by the aggregate income that the countries can spend on private goods and climate protection. Thus the global welfare maximization problem reads

\max_{y_1, \ldots, y_n, X} U(U_1(y_1, X), U_2(y_2, X), \ldots, U_n(y_n, X))    (5)

s.t.

\sum_{i=1}^{n} y_i + cX = \sum_{i=1}^{n} I_i.    (6)

Let us – for simplicity – assume that each individual country’s welfare has an equal weight with respect to global welfare, i.e., U(U_1(y_1, X), U_2(y_2, X), \ldots, U_n(y_n, X)) = U_1(y_1, X) + U_2(y_2, X) + \ldots + U_n(y_n, X). Then, optimization yields the so-called Samuelson condition (see Samuelson 1954, 1955)

\sum_{i=1}^{n} MRS_i = \sum_{i=1}^{n} (\partial U_i / \partial X) / (\partial U_i / \partial y_i) = c.    (7)

Therefore, in order to maximize global welfare, an individual country should provide climate protection up to a level where the sum of all countries’ marginal rates of substitution between public and private good becomes equal to the unit price ratio between public and private good. Such outcomes, where no country can improve its welfare without harming another one, are called Pareto optima. Condition (7) deviates from condition (4), since – without international coordination – an individual country would only take into account its own marginal rate of substitution between public and private goods (i.e., its own benefits from the two goods) when deciding about its climate protection efforts, while Pareto efficiency requires that countries also take into account spillovers exerted on other countries (i.e., the global benefits generated by its climate protection efforts). Therefore also the other countries’ marginal rates of substitution between the public and private good have to be included in the efficiency condition. On a national level, efficient public good provision can be enforced by the government, but on the global scale there is no central coercive authority which can enforce an efficient global climate protection level. Therefore, the only option is for countries to voluntarily negotiate a climate protection agreement in order to get closer to the globally efficient protection level.
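The gap between conditions (4) and (7) can be made concrete with a small numerical sketch. The logarithmic utility function, the income, the cost parameter, and the preference weight used below are assumptions chosen only for illustration; they are not part of the chapter beyond the general structure described above.

```python
# Numerical illustration of conditions (4) and (7) for n identical countries
# with an assumed utility U_i = ln(y_i) + a*ln(X) and budget y_i + c*x_i = I.
# All parameter values are invented.
import math

n, income, c, a = 10, 100.0, 1.0, 0.5

# Condition (4), MRS_i = c, solved in a symmetric (Nash) equilibrium:
x_nash = a * income / (c * (a + n))          # each country's contribution
X_nash = n * x_nash

# Condition (7), sum of MRS_i = c (Samuelson), for the same countries:
X_pareto = n * a * income / (c * (1.0 + a))

def utility(own_x, others_total):
    return math.log(income - c * own_x) + a * math.log(own_x + others_total)

# Check the Nash property: no single country gains by deviating from x_nash.
others = X_nash - x_nash
candidates = [x_nash * f for f in (0.5, 0.9, 1.0, 1.1, 2.0)]
best = max(utility(x, others) for x in candidates)
assert abs(best - utility(x_nash, others)) < 1e-9

print(f"Nash provision   X = {X_nash:6.2f}")
print(f"Pareto provision X = {X_pareto:6.2f}  (factor {X_pareto / X_nash:.1f} higher)")
```

With these invented numbers the efficient provision level is several times the Nash level, which is the under-provision problem described above.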


B’s strategy no participation

participation

no participation

0, −1

6, −3

participation

−4, 7

5, 6

A’s strategy

Fig. 4 Prisoner’s dilemma game

International Negotiations in Normal Form Games
Such international negotiations on climate change can be comfortably depicted in a game-theoretical setting. Regularly, such negotiations are described as a prisoner’s dilemma game which captures the free-rider incentives associated with the provision of public goods. A normal form game in the shape of a prisoner’s dilemma (PD) situation with two agents or countries is presented in Fig. 4. Both considered countries have the choice between “participation” in an international climate protection agreement (or climate protection efforts) and “no participation” in international climate protection efforts. In the matrix, the numbers in front of the commas represent the payoffs for country A, while the numbers behind the commas stand for the payoffs received by country B. In the prisoner’s dilemma case of Fig. 4, the dominant strategy for each agent is to choose “no participation” in an international climate protection agreement (a dominant strategy is a strategy which always yields the highest payoff for the agent choosing this strategy, regardless of the choice of the opponents; for a more detailed discussion of these and related game theoretic concepts, see, e.g., Fudenberg and Tirole (1991)). This outcome is the so-called Nash equilibrium where no country has anything to gain by changing only its own strategy unilaterally. While this equilibrium is stable, the payoffs of countries A and B are merely 0 and −1, respectively. However, “From an economic viewpoint an ideal state of cooperation has two features: It is a Pareto-optimum and it is stable” (Buchholz and Peters 2003, p. 82). The Nash equilibrium in the depicted PD situation is of course not Pareto optimal. Both agents would obtain a higher payoff if they both participated in the international agreement. Alternatively, a “Chicken” game setting can be employed in order to illustrate the negotiation situation. Lipnowski and Maital (1983) provide an analysis of voluntary provision of a pure public good in general as the game of Chicken. In fact, a Chicken game tends to describe international negotiations on the provision of the specific public good “climate protection” in a more adequate way than the prisoner’s dilemma game (see Carraro and Siniscalco 1993). The case of a Chicken game, which belongs to the group of coordination games, is depicted in Fig. 5. In contrast to the PD situation, there exists no dominant strategy. There are a couple of papers investigating the differences between the two games, PD and Chicken. Ecchia and Mariotti (1998) investigate coalition formation in international environmental agreements and compare different versions of the two game types using simple three-country examples. In their paper, Rapoport and Chammah (1966) stress the difference between both games with respect to the attractiveness of retaliation decisions. Snyder (1971) examines differences in the logic and social implications of PD and Chicken games in the context of international politics. Lipman (1986) and Hauert and Doebeli (2004) analyze how the evolution of cooperation differs in the two games. Rabin (1993), Rübbelke (2011), and Pittel and Rübbelke (2013) investigate fairness in these settings.
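A brute-force check of the payoff matrix in Fig. 4 confirms the argument: only mutual non-participation survives as a pure-strategy Nash equilibrium, even though mutual participation would make both countries better off. The sketch below simply enumerates unilateral deviations.

```python
# Sketch: enumerate pure-strategy Nash equilibria of the 2x2 game in Fig. 4.
# Payoffs are (country A, country B) as in the figure.

strategies = ("no participation", "participation")
payoffs = {                                   # (A's strategy, B's strategy) -> (A, B)
    ("no participation", "no participation"): (0, -1),
    ("no participation", "participation"):    (6, -3),
    ("participation",    "no participation"): (-4, 7),
    ("participation",    "participation"):    (5, 6),
}

def is_nash(a_strat, b_strat):
    """True if neither country can gain by a unilateral change of strategy."""
    a_pay, b_pay = payoffs[(a_strat, b_strat)]
    a_ok = all(payoffs[(alt, b_strat)][0] <= a_pay for alt in strategies)
    b_ok = all(payoffs[(a_strat, alt)][1] <= b_pay for alt in strategies)
    return a_ok and b_ok

for cell, (pa, pb) in payoffs.items():
    if is_nash(*cell):
        print(f"Nash equilibrium: {cell} with payoffs ({pa}, {pb})")
# -> only mutual "no participation", although mutual participation pays (5, 6)
```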


B’s strategy no participation

participation

no participation

−6, −6

6, −3

1 –p

participation

−3, 6

3, 3

p

1– q

q

A’s strategy

Fig. 5 Chicken game

R€ubbelke (2012) depict negotiations on climate change in (3  3) matrices in which they integrate both Chicken and PD settings simultaneously. Hence, in their study they allow for a broader range of choices for the involved countries. The main difference between both games, i.e., between PD and Chicken games, is that the agents in the PD situation obtain the lowest payoffs when they play unilateral “participation,” while in the Chicken game, they face the lowest payoffs if they mutually play “no participation.” This outcome is the reason why the Chicken game is said to represent international negotiations better: in case of mutual non-participation, the whole world is threatened by a global warming catastrophe. This catastrophe can be prevented in the best way by means of mutual cooperation in international climate protection. However, if the other agent refuses to cooperate, then unilateral participation in international climate protection efforts would be the best choice since this is the only remaining way to prevent the global warming catastrophe. Yet, if the other agent provides climate protection (and thus chooses “participation”), it would be best to choose “no participation” and thus to take a free ride. Each agent hopes that the other agent provides climate protection, such that he himself can take a free ride in climate protection. As can be observed from Fig. 5, there exist multiple Nash equilibria, which are associated with pure and mixed strategies. The Nash equilibria in connection with pure strategies prevail where the payoffs (3,6) and (6,3) arise. Given possible uncertainties regarding the countries’ behavior, mixed strategies become germane. Agents form probabilities about the other agent’s behavior. Country A assesses the likelihood with which country B will participate (q) or not participate (1  q) and vice versa for country B (p and 1  p). In order to determine the mixed strategies in the Chicken game situation in Fig. 5, the likelihood q (resp. p) of participation by country B (country A) has to be calculated that makes country A (country B) indifferent between playing “participation” and “no participation.” Probability q is determined by calculating the level of q, for which the expected payoffs of both strategies of A (“participation” and “no participation”) are equal. This is the case if 3ð1  qÞ þ 3q ¼ 6ð1  qÞ þ 6q:

(8)

The left-hand side represents A’s expected payoff from participation, and the right-hand side reflects A’s expected payoff from defection. Analogously p can be determined from solving 3ð1  pÞ þ 3p ¼ 6ð1  pÞ þ 6p

(9)


for p. In this case, the mixed-strategy equilibrium requires

q = p = 1/2.    (10)

If country A or country B is uncertain whether the other country participates or defects, then it should cooperate (participate) provided it expects the antagonist to play “participation” with a probability of less than ½.
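The indifference calculation of Eqs. 8–10 can be reproduced mechanically; the sketch below solves the linear indifference condition for the payoffs of Fig. 5 and recovers q = 1/2.

```python
# Sketch: mixed-strategy indifference in the Chicken game of Fig. 5.
# q is B's participation probability that leaves A indifferent (Eq. 8);
# by symmetry the same computation yields p (Eq. 9).

a_no_part = (-6, 6)   # A's payoffs from "no participation" vs (B no part, B part)
a_part = (-3, 3)      # A's payoffs from "participation"    vs (B no part, B part)

def expected(payoffs, q):
    """Expected payoff when B does not participate with prob. 1-q and participates with prob. q."""
    return payoffs[0] * (1 - q) + payoffs[1] * q

# Solve expected(a_part, q) == expected(a_no_part, q), a linear equation in q:
# q = (a_part[0] - a_no_part[0]) / ((a_part[0] - a_no_part[0]) + (a_no_part[1] - a_part[1]))
num = a_part[0] - a_no_part[0]
den = num + (a_no_part[1] - a_part[1])
q = num / den

print(f"indifference probability q = {q}")   # 0.5, as in Eq. 10
assert abs(expected(a_part, q) - expected(a_no_part, q)) < 1e-12
```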

Integration of Ancillary Benefits into the Negotiations
Climate policies regularly generate side effects. Afforestation and reforestation, for example, do not only mitigate CO2-induced global warming by sequestering carbon; these measures also increase the habitat for endangered species. Furthermore, forests can serve as recreational areas and reduce soil erosion. As Ojea, Nunes, and Loureiro (2010) stress, forests’ “provision of goods and services plays an important role in the overall health of the planet and is of fundamental importance to human economy and welfare.” Furthermore, Sandler and Sargent (1995, p. 160) point out that tropical forests provide a bequest value which the current generation derives from passing on the forests to future generations. Concerning the case of Brazil, Fearnside (2001, p. 180) stresses: “The environmental and social impacts of mitigation options such as large hydropower projects, mega-plantations or nuclear energy, contrast with the “ancillary” benefits of forest maintenance.” An overview of studies assessing the co-effects of afforestation is provided by Elbakidze and McCarl (2007, p. 565). Similarly, side effects arise from the implementation of more efficient technologies, the reduction of road traffic, and the substitution of carbon-intensive fuels. Ancillary or secondary benefits induced by these CO2-emission-reducing activities accrue, for example, when the emissions of other pollutants like particulate matter are reduced simultaneously (see Fig. 6).

Fig. 6 Climate policy generating primary and ancillary benefits: climate policy (e.g., a carbon tax) triggers GHG abatement measures, which deliver climate protection with primary (climate-protection-related) benefits as well as a reduction in local air pollution with ancillary benefits; see Rübbelke (2002, p. 36)

There are a number of terms which convey the idea of ancillary or secondary benefits. Others are co-benefits and spillover benefits (see IPCC 2001). The main difference is the relative emphasis given to the climate change mitigation benefits versus the other benefits (Markandya and Rübbelke 2004, p. 489). In fuel combustion processes, CO2 emissions are accompanied by emissions of, e.g., NOX, SO2, N2O, and others. Therefore, fuel combustion reductions do not only cause a decrease in CO2 emissions but also diminish the emissions of other pollutants. In general, positive health effects of air pollution reduction that accompany climate protection measures are assessed to represent the most important category of secondary benefits. (However, Aunan et al. (2003, p. 289) annotate that “some particulate air pollution has a cooling effect on the atmosphere, reducing it may exacerbate global warming.”) Further negative


impacts of air pollution, like accelerated surface corrosion, weathering of materials, and impaired visibility, are mitigated by fuel combustion reductions, too. Road traffic mitigation does not only produce ancillary benefits by reducing the emission of air pollutants; it is also accompanied by lower noise levels and reduced frequency of accidents, less traffic congestion, and less road surface damage. While primary benefits accrue globally from the prevention of climate change-induced damages, ancillary benefits are mostly local or regional (IPCC 1996, p. 217; Pearce 1992, p. 5). They represent domestic public goods for individual countries. (However, regarding the abatement of the greenhouse gases chlorofluorocarbons (CFCs), the ancillary effect of ozone layer protection and the respective ancillary benefits can be enjoyed globally.) Local air pollution mitigation generated by climate policy, for example, can be exclusively enjoyed by the protecting country. Therefore, ancillary effects can be considered to be private to the host country of a climate policy. Consequently, they differ from climate protection-related primary benefits which exhibit global publicness. Global damages arise, e.g., in the form of droughts caused by global warming. Rübbelke and Vögele (2011, 2013) recently analyzed the effects of such droughts on the power sector. Several studies ascertaining the level of ancillary benefits found that such benefits even represent a multiple of climate protection-related primary benefits, as Pearce (2000, p. 523) illustrates in an overview. In the next stage, ancillary benefits will be explicitly introduced into our normal form game. It will be taken into account that ancillary benefits are enjoyed (mainly) privately by the host country of the climate protection activity. Ancillary benefits arise regardless of the behavior of the antagonist. In Fig. 7, ancillary benefits (AB_A, AB_B) are explicitly included into the matrix of the Chicken game, where it is assumed that AB_A < AB_B. Analogously to the procedure concerning the Chicken game situation without ancillary benefits, the mixed strategies can be investigated here. Again, probability q is determined by identifying the level of q where the expected payoffs of both strategies of A (“participation” and “no participation”) balance. This is the case if

(−3 + AB_A)(1 − q) + (3 + AB_A)q = −6(1 − q) + 6q.    (11)

Analogously, p can be specified:

(−3 + AB_B)(1 − p) + (3 + AB_B)p = −6(1 − p) + 6p.    (12)

                               B: no participation (1 − q)    B: participation (q)
A: no participation (1 − p)    −6, −6                         6, −3 + AB_B
A: participation (p)           −3 + AB_A, 6                   3 + AB_A, 3 + AB_B

Fig. 7 Chicken game with ancillary benefits (the first number in each cell is country A’s payoff, the second country B’s)

From Eqs. 11 and 12, q and p can be derived. Scientific studies largely assess that there are especially important co-benefits of local/regional air pollution reduction in developing countries; an overview of a


selection of studies investigating ancillary benefits in developing countries can be found in Appendix 1. Neglecting potential differences in the primary benefits and supposing that A represents the group of industrialized countries, while B represents the developing world, we obtain:

q = 1/2 + AB_A/6 < p = 1/2 + AB_B/6.    (13)

If country A (resp. country B) is uncertain whether the antagonist participates or defects, then it should participate provided it expects the antagonist to play “participation” with a probability of less than 1/2 + AB_A/6 (resp. 1/2 + AB_B/6). Comparison of Eqs. 10 and 13 shows that q and p rise due to the inclusion of ancillary benefits into the analysis. Consequently, for the Chicken game example illustrated above, it is found that taking ancillary benefits into account will increase the likelihood of cooperative behavior in international negotiations on climate change. According to Eq. 13, the inclusion of ancillary benefits into the reasoning brings about, in particular, an increase in the likelihood that developing countries will participate in international climate protection efforts (for a more general analysis of the influence of ancillary benefits in international negotiations on climate change, see Pittel and Rübbelke 2008). Consequently, these results confirm Halsnæs and Olhoff (2005, p. 2324) who stress that “the inclusion of local benefits in developing countries in GHG emission reduction efforts will [...] create stronger incentives for the countries to participate in international climate change policies.” Yet, in their analysis of qualitative and strategic implications associated with ancillary benefits, Finus and Rübbelke (2013) find a more moderate influence of co-benefits on the participation in international climate agreements and on the success of these treaties in welfare terms. They employ a setting of noncooperative coalition formation in the context of climate change. According to their results, ancillary benefits will not raise the likelihood of an efficient global agreement on climate change to come about, although ancillary benefits provide additional incentives to protect the climate. The rationale behind this result is that countries taking the private ancillary benefits to a greater extent into account will undertake more emission reduction, irrespective of an international agreement. However, if we consider the high local/regional pollution levels in developing countries, it remains at least highly disputable whether developing countries conduct efficient local/regional environmental policies. Hence, the commitment in an international climate protection agreement will most likely help to raise the efficiency in local/regional environmental protection in these countries. Consequently, ancillary benefits – although not being the major impetus for immediate action – may take the role of a catalyst to climate policy (rather than that of a direct driver). Joining international climate protection efforts may become politically more feasible for developing countries (like China and India) which face serious local/regional pollution problems, when ancillary benefits are included in the political reasoning.
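A small sketch shows how the thresholds of Eq. 13 move with the size of the ancillary benefits; the AB values below are hypothetical and only meant to illustrate the comparative statics.

```python
# Sketch: how ancillary benefits shift the participation thresholds of Eq. 13
# in the Chicken game of Fig. 7. The AB values are hypothetical.

def indifference_prob(ab):
    """Opponent-participation probability at which a country with ancillary
    benefit ab is indifferent (Fig. 7 payoffs: -3+ab / 3+ab if it participates,
    -6 / 6 if it does not):
    (-3+ab)*(1-x) + (3+ab)*x = -6*(1-x) + 6*x  ->  x = 1/2 + ab/6."""
    return 0.5 + ab / 6.0

for ab_a, ab_b in [(0.0, 0.0), (0.5, 1.5), (1.0, 2.5)]:
    q, p = indifference_prob(ab_a), indifference_prob(ab_b)
    print(f"AB_A = {ab_a:.1f}, AB_B = {ab_b:.1f}  ->  q = {q:.2f}, p = {p:.2f}")

# Larger ancillary benefits raise the threshold below which participation is
# the better reply, i.e., they make cooperative behaviour more likely (Eq. 13).
```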

Price Ducks: An Approach to Break the Deadlock?
Due to the inefficiency of the Kyoto Protocol scheme, which is a quantity duck since it stipulates emission-reduction quantity targets, there arose an intense discussion about general alternatives to such quantity ducks (which are more than just technology-focused climate policy partnerships like the APP). Nordhaus (2006, p. 31) points out: “Unless there is a dramatic breakthrough or a new design the Protocol threatens to be seen as a monument to institutional overreach.”
• Price-influencing international climate protection schemes have been proposed by Nordhaus (2006) as a proper successor of the quantity approach of the Kyoto type. “This is essentially a dynamic Pigovian pollution tax for a global public good” (Nordhaus 2006, p. 32). An international carbon tax scheme where no international emission limits are dictated is considered to have several significant advantages


• •





over the Kyoto mechanism. This scheme could also contain side payments in order to motivate countries to participate: “Additionally, poor countries might receive transfers to encourage early participation,” Nordhaus (2006, p. 32). Such a scheme is a price duck, because via the taxes, the prices of polluting activities are increased, such that there are additional incentives to mitigate the level of such polluting activities. In contrast to taxing polluting activities in order to protect the climate, of course, prices can be influenced by subsidizing climate-protecting activities (e.g., energy-efficient appliances or carbon sequestration measures could be subsidized). The subsidy will reduce the effective price of climateprotecting activities, and hence the agents receiving the subsidy will raise their provision level of climate protection. Recently, Altemeyer-Bartscher, R€ ubbelke, and Sheshinski (2010) elaborated Nordhaus’ proposal of an international carbon tax scheme. They analyze how individual countries or regions could negotiate the design of such a tax scheme in a decentralized way. In the scheme they suggest countries offer side payments to their opponents that are conditional on the level of the environmental tax rates implemented in the transfer-receiving opponent country. As can be shown, such a side-payment scheme might yield the first-best optimal tax policy and hence an efficient global climate protection regime. The scheme does not require the coercive power of a central global authority as the individual countries implement carbon taxes voluntarily. Altemeyer-Bartscher, Markandya, and R€ ubbelke (2014) investigate the effects of ancillary benefits on the outcomes of this scheme. Other price-influencing schemes which work in a similar way and do not require an international coercive authority are matching schemes which were first developed by Guttman (1978, 1987). Danziger and Schnytzer (1991) provide a general formulation of Guttman’s matching idea which allows for income effects, nonidentical players, and nonsymmetric equilibria. Guttman’s matching approach has been applied to the sphere of international environmental agreements by R€ ubbelke (2006) and Boadway, Song, and Tremblay (2007, 2011).

Guttman’s basic scheme consists of two stages. Each agent i’s contribution xi to the public good can be written as: n X xi ¼ ai þ bi aj

ðj 6¼ iÞ;

(14)

j¼1

where ai is the agent’s unconditional or flat contribution to the public good (in our case “climate protection”) and bi is his matching rate, which he provides for each unit of flat public good contributions n X by other agents. Therefore, the agent’s matching contribution is bi aj ðj 6¼ iÞ. The unit costs of the goods j¼1

are supposed to be equal to unity. The budget constraint of the agent in the shape of the income restriction is: yi þ ai þ bi

n X aj ¼ I i

ðj 6¼ iÞ:

(15)

j¼1

I_i is again the monetary income of the considered agent i. In the first stage of the game, each agent makes a decision on the level of the matching rates he wants to offer to the other agents. It could be assumed that this decision is stipulated in an international agreement


on matching rates, where all negotiating agents or decision makers – as representatives of their nations – agree on the matching rates their countries will provide (see Rübbelke 2006). All the agents’ actions in both stages of the game are guided by welfare-maximizing behavior, i.e., the agents aim to maximize their individual countries’ welfare as represented by the function in Eq. 2. In the second stage, all agents will make decisions about their flat contributions. Total public good contribution of all agents then becomes equal to

X = \sum_{i=1}^{n} \left( a_i + b_i \sum_{j=1, j \neq i}^{n} a_j \right).    (16)

Given the matching rates of the other agents, the considered agent will contribute flat contributions to the public good up to the level where the marginal rate of substitution between public and private good is equal to the effective price of the public good, i.e., where

MRS_i = 1 / (1 + \sum_{j=1, j \neq i}^{n} b_j).    (17)

The decline in the effective price, from unity to the level specified on the right-hand side of Eq. 17, induces an increase in the private provision of the public good. Comparison of the right-hand sides of Eq. 4 (for which it is assumed that c = 1) and of Eq. 17 shows that in the matching scheme the considered agent or country faces a decline in the effective price of the public good “climate protection” as long as at least one other agent provides a positive matching rate b_j. As Bergstrom (1989) illustrates, there are indeed incentives to announce positive matching rates. Consequently, the matching scheme has a price-influencing effect (similar to that of a subsidy) which the quantity targets stipulated by the Kyoto Protocol do not exert. Due to the decline in the effective price, the agent tends to raise the level of his public good provision. Put differently, within the matching scheme, individual countries manipulate (via their matching commitments) the effective price of climate protection from other countries’ point of view in order to influence these opponent countries to raise their public good provision levels. And as Boadway, Song, and Tremblay (2007, p. 682) point out: “the notion that countries might attempt to influence other countries’ contributions by preemptive matching commitments is not far-fetched in light of recent examples of disaster relief or international campaigns to combat the effects of infectious diseases.” In the case of identical agents, summing Eq. 17 over all i generates

\sum_{i=1}^{n} MRS_i = n / (1 + (n − 1) b_j),  j \neq i.    (18)

Hence, a Pareto optimum is attainable if each agent chooses b_i = 1. As Buchholz, Cornes, and Rübbelke (2009) demonstrate, matching may work better with a large number of agents/countries than with a small number, which is an important result given that international negotiations involve many countries.
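The price-lowering effect of matching, and the special role of a matching rate of one, can be illustrated numerically. The number of countries below is an arbitrary example value, and the code simply evaluates the right-hand sides of Eqs. 17 and 18 for identical agents with c = 1.

```python
# Sketch of the price effect of Guttman-style matching (Eqs. 14-18): a matching
# rate b offered by the other countries lowers the effective price of own flat
# contributions from 1 to 1/(1 + sum of the others' matching rates).

n = 20  # number of countries (illustrative)

def effective_price(matching_rates_of_others):
    """Effective price of the public good faced by one country (right-hand side of Eq. 17)."""
    return 1.0 / (1.0 + sum(matching_rates_of_others))

for b in (0.0, 0.5, 1.0):
    price = effective_price([b] * (n - 1))
    # With identical agents, Eq. 18 gives sum of MRS = n * effective price;
    # the Samuelson condition (with c = 1) requires this product to equal 1.
    print(f"b = {b:.1f}: effective price = {price:.3f}, n * effective price = {n * price:.2f}")
```

Only for b = 1 does n times the effective price equal one, i.e., only then is the Samuelson condition met, in line with the conclusion above.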


Future Directions
The Kyoto Protocol has been an inefficient agreement, although its flexible mechanisms (CDM, Joint Implementation, ETS) helped to mitigate this inefficiency. Efficiency would require that the cheapest GHG abatement options are exploited first, which is not generally the case under the Kyoto Protocol. Furthermore, the emissions of large greenhouse gas emitters in the industrialized world, e.g., Russia and the USA, are not restricted under the protocol in the second commitment period. The immense threat of global warming necessitates an improved global climate protection regime, since otherwise the world might experience dramatic and life-threatening consequences. Among the possible negative effects are the melting of glaciers, a decline in crop yields (especially in Africa), rising sea levels, sudden shifts in regional weather patterns, and an increase in worldwide deaths from malnutrition and heat stress (Stern 2007, Chapter 3). An improved future international climate protection regime has to organize climate protection more effectively, and it has to stipulate significant GHG emission reductions for all major polluters. Developing countries like China and India belong to the group of major emitter countries. Consequently, if international climate policy is to succeed in combating global warming, developing countries will also have to commit to emission reductions under an international agreement. Since there is no global coercive authority which could compel countries to conduct efficient climate protection in the future, mutual voluntary negotiations are the only means by which international coordination in climate protection can be accomplished. Put differently, “international treaties have to rely on voluntary participation and must be designed in a self-enforcing way” (Eyckmans and Finus 2007, p. 74). Yet, international easy- or free-rider incentives which are due to the global public good property of climate protection make the agreement on such an international treaty a difficult task. Another way to protect the global climate, which deviates from the Kyoto concept of stipulating GHG emission-reduction quantities, is the negotiation of international price-influencing regimes. These regimes manipulate effective prices via taxes, subsidies, or matching grants in order to influence the behavior of individual countries in such a way that globally efficient climate protection levels are reached. An international carbon tax, as suggested by Nordhaus (2006), might indeed yield a more efficient outcome, but due to the lack of will in the political arena to launch such a tax, it might be more promising to base the future global climate protection architecture on the already established structures associated with the Kyoto scheme. Yet, the advantages of price ducks like matching schemes are remarkable, and international price-influencing concepts like the global carbon tax or matching schemes should not be dismissed lightly. Private ancillary benefits may take the role of a catalyst to climate policy rather than a direct driver to international climate negotiations. Joining international climate protection efforts may become politically more feasible for developing countries (like China and India) which face serious local/regional pollution problems when ancillary benefits are included in the political reasoning.
Not only co-effects in terms of reduced local/regional air pollution are relevant but also co-benefits in the shape of, e.g., economic development, energy security, and employment.

Appendix 1
See Table 1


Table 1 Ancillary benefit studies regarding developing countries Study Aunan et al. (2003)

Country China

Pollutants (local/ regional) PM, SO2, TSP

Aunan et al. (2004)

China

SO2, particles

Aunan et al. (2007) Bussolo and O’Connor (2001) Cao (2004)

China India China

NOX, TSP NOX, particulates, SO2 SO2, TSP

Cao et al. (2008)

China

NOX, particulates, SO2

Chen et al. (2007)

China

Cifuentes et al. (2000) Cifuentes et al. (2001)

Chile Brazil, Chile, Mexico

CO, PM, NOX, SO2 Ozone, particulates

Dadi et al. (2000) Dessus and O’Connor (2003)

China Chile

Dhakal (2003)

Nepal

Eskeland and Xie (1998)

Chile, Mexico

Garbaccio et al. (2000) Garg (2011)

China India

SO2 CO, lead, NO2, ozone, PM, SO2 CO, HC, NOX, SO2, particles, lead NOX, particulates, SO2, VOCs PM, SO2 PM10

Gielen and Chen (2001)

China

NOX, SO2

Ho and Nielsen (2007) Kan et al. (2004) Larson et al. (2003)

China China China

SO2, TSP Particulates SO2

Li (2006) Markandya et al. (2009)

Thailand China, India

Particulates Particles

McKinley et al. (2005)

Mexico

CO, HC, NOX, particulates, SO2

Model/approach Comparison of studies that comprise a bottom-up study, a semi-bottom-up study, and a top-down study using a CGE model Analysis and comparison of six different CO2-abating options CGE model CGE model Technology assessment, sensitivity to discount rate Integrated modeling approach combining a top-down recursive dynamic CGE model with a bottom-up electricity sector model Comparison of partial and general equilibrium MARKAL models No economic modeling Development of scenarios that estimate the cumulative public health impacts of reducing GHG emissions Linear programming model CGE model Analysis of long-range energy system scenarios Technology and cost-curve assessment CGE model Health impacts (mortality and morbidity) quantified for different socioeconomic groups in Delhi MARKAL, technology assessment, and alternative policy scenarios CGE model Shanghai MARKAL model MARKAL of energy sector; base vs. advanced technology scenarios for controlling CO2 and SO2 Dynamic recursive CGE model Use of the POLES and GAINS models as well as of a model to estimate the effect of PM2.5 on mortality on the basis of the WHO’s comparative risk assessment methodology Analysis of five pollution control options in Mexico City (continued)



Table 1 (continued) Study Mestl et al. (2005) Morgenstern et al. (2004)

Country China China

Pollutants (local/ regional) PM, SO2 SO2

O’Connor et al. (2003) Peng (2000)

China China

NOX, SO2, TSP Particulates, SO2

Rive and R€ ubbelke (2010)

China

Shrestha et al. (2007)

Thailand

SO2, development benefits NOX, SO2

Smith and Haigler (2008)

China

Van Vuuren et al. (2003) Vennemo et al. (2006)

China China

SO2 SO2, TSP

Wang and Smith (1999a, b) West et al. (2004)

China Mexico

Zheng et al. (2011)

China

Particulates, SO2 CO, HC, NOX, particulates, SO2 SO2

Model/approach Project-by-project analysis Survey of recent banning of coal burning in small boilers in downtown area of Taiyuan CGE model RAINS-Asia for local and GTAP for economy-wide effects CGE model Four scenarios, use of end-use-based Asia-Pacific Integrated Assessment Model (AIM/Enduse) Sample calculations regarding interventions in the household energy sector Simulation model Synthesis of a significant body of research on co-benefits of climate policy in China No economic modeling Linear programming model Using a panel of 29 Chinese provinces over the period 1995–2007, application of panel cointegration techniques

References Altemeyer-Bartscher M, R€ ubbelke DTG, Sheshinski E (2010) Environmental protection and the private provision of international public goods. Economica 77:775–784 Altemeyer-Bartscher M, Markandya A, R€ ubbelke DTG (2014) International side-payments to improve global public good provision when transfers are refinanced through a tax on local and global externalities. Int Econ J 28:71–93 APP (2008) Asia-Pacific Partnership on clean development and climate. Department of State Publication # 11468 (Brochure) Aunan K, Fang J, Mestl HE, O’Connor D, Seip HM, Vennemo H, Zhai F (2003) Co-benefits of CO2reducing policies in China – a matter of scale? Int J Global Environ Issues 3:287–304 Aunan K, Fang J, Vennemo H, Oye K, Seip HM (2004) Co-benefits of climate policy lessons learned from a study in Shanxi, China. Energy Policy 32:567–581 Aunan K, Berntsen T, O’Connor D, Hindman Persson T, Vennemo H, Zhai F (2007) Benefits and costs to China of a climate policy. Environ Dev Econ 12:471–497 Bauer A (1993) Der Treibhauseffekt. J.C.B Mohr, T€ ubingen Bergstrom T (1989) Puzzles – love and spaghetti, the opportunity cost of virtue. J Econ Perspect 3:165–173



Boadway R, Song Z, Tremblay J-F (2007) Commitment and matching contributions to public goods. J Public Econ 91:1664–1683 Boadway R, Song Z, Tremblay J-F (2011) The efficiency of voluntary pollution abatement when countries can commit. Eur J Polit Econ 27:352–68 Buchholz W, Peters W (2003) International environmental agreements reconsidered – stability of coalitions in a one-shot game. In: Marsiliani L, Rauscher M, Withagen C (eds) Environmental policy in an international perspective. Kluwer Academic, Dordrecht Buchholz W, Cornes RC, R€ ubbelke DTG (2009) Existence and warr neutrality for matching equilibria in a public good economy: an aggregative game approach, CESifo working paper no. 2884, Munich Bussolo M, O’Connor D (2001) Clearing the air in India: the economics of climate policy with ancillary benefits, working paper no. 182, OECD Development Centre, Paris Cao J (2004) Options for mitigating greenhouse gas emissions in Guiyang, China: a cost-ancillary benefit analysis, 2004-RR2. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore Cao J, Ho MS, Jorgenson DW (2008) “Co-benefits” of greenhouse gas mitigation policies in China – an integrated top-down and bottom-up modeling analysis, Environment for development discussion paper series, DP 08–10 Carraro C, Siniscalco D (1993) Strategies for the international protection of the environment. J Public Econ 52:309–328 Chen W, Wu Z, He J, Gao P, Xu S (2007) Carbon emission control strategies for China: a comparative study with partial and general equilibrium versions of the China MARKAL model. Energy 32:59–72 Cifuentes LA, Sauma E, Jorquera H, Soto F (2000) Preliminary estimation of the potential ancillary benefits for Chile. In: OECD (ed) Ancillary benefits and costs of greenhouse gas mitigation. OECD, Paris, pp 237–261 Cifuentes L, Borja-Aburto VH, Gouveia N, Thurston G, Davis DL (2001) Assessing the health benefits of Urban air pollution reductions associated with climate change mitigation (2000–2020): Santiago, São Paulo, México City, and New York City. Environ Health Perspect 109:419–425 Cornes RC, Sandler T (1996) The theory of externalities, public goods and club goods. Cambridge University Press, Cambridge Dadi Z, Yingyi S, Yuan G, Chandler W, Logan J (2000) Developing countries and global climate change: electric power options in China. Pew Center on Global Climate Change, Arlington Danziger L, Schnytzer A (1991) Implementing the Lindahl voluntary-exchange mechanism. Eur J Polit Econ 7:55–64 Dessai S, Michaelowa A (2001) Burden sharing and cohesion countries in European climate policy: the Portuguese example. Clim Pol 1:327–341 Dessai S, Schipper EL (2003) The Marrakech Accords to the Kyoto Protocol: analysis and future prospects. Glob Environ Chang 13:149–153 Dessus S, O’Connor D (2003) Climate policy without tears: CGE-based ancillary benefits estimates for Chile. Environ Resour Econ 25:287–317 Dhakal S (2003) Implications of transportation policies on energy and environment in Kathmandu Valley, Nepal. Energy Policy 31:1493–1507 Dickinson RE, Cicerone RJ (1986) Future global warming from atmospheric trace gases. Nature 319:109–115 Dijkstra B, R€ ubbelke DTG (2013) Group rewards and individual sanctions in environmental policy. Resour Energy Econ 35:38–59 EC (2000) Communication from the Commission to the Council and the European Parliament on EU policies and measures to reduce greenhouse gas emissions: towards a European Climate Change Programme (ECCP), COM(2000) 88 final, Brussels Page 23 of 27


Ecchia G, Mariotti M (1998) Coalition formation in international agreements and the role of institutions. Eur Econ Rev 42:573–582 Edenhofer O, Knopf B, Luderer G, Steckel J, Bruckner T (2010) More heat than light? On the economics of decarbonisation. In: John KD, R€ ubbelke DTG (eds) Sustainable energy. Routledge, London/New York Elbakidze L, McCarl BA (2007) Sequestration offsets versus direct emission reductions: consideration of environmental co-effects. Ecol Econ 60:564–571 Enquete-Kommission (1990) Schutz der Erde – Eine Bestandsaufnahme mit Vorschl€agen zu einer neuen Energiepolitik, Dritter Bericht der Enquete-Kommission “Vorsorge zum Schutz der Erdatmosph€are“ des 11. Deutschen Bundestages, Teilband 1, Bonn Enquete-Kommission (1995) Mehr Zukunft f€ ur die Erde – Nachhaltige Energiepolitik f€ ur dauerhaften Klimaschutz, Schlußbericht der Enquete-Kommission “Schutz der Erdatmosph€are“ des 12. Deutschen Bundestages, Bonn Eskeland GS, Xie J (1998) Acting globally while thinking locally: is the global environment protected by transport emission control programs? J Appl Econ 1:385–411 Eyckmans J, Finus M (2007) Measures to enhance the success of global climate treaties. Int Environ Agreements 7:73–97 Fearnside PM (2001) Saving tropical forests as a global warming countermeasure: an issue that divides the environmental movement. Ecol Econ 39:167–184 Finus M, R€ ubbelke DTG (2013) Public good provision and ancillary benefits: the case of climate agreements. Environ Resour Econ 56:211–226 Fudenberg D, Tirole J (1991) Game theory. MIT Press, Cambridge Garbaccio RF, Ho MS, Jorgenson DW (2000) The health benefits of controlling carbon emissions in China. In: OECD (ed) Ancillary benefits and costs of greenhouse gas mitigation. OECD, Paris, pp 343–376 Garg A (2011) Pro-equity effects of ancillary benefits of climate change policies: a case study of human health impacts of outdoor air pollution in New Delhi. World Dev 39:1002–1025 Gielen D, Chen C (2001) The CO2 emission reduction benefits of Chinese energy policies and environmental policies: a case study for Shanghai, period 1995–2020. Ecol Econ 39:257–270 Glachant M, de Muizon G (2006) Climate change agreements in the UK: a successful policy experience? In: Morgenstern RA, Pizer WD (eds) Reality check: the nature and performance of voluntary environmental programs in the United States, Europe and Japan. Resources for the Future, Washington, DC, pp 64–85 Gupta J (2010) A history of international climate change policy. WIREs Clim Change 1:636–653 Guttman JM (1978) Understanding collective action: matching behavior. Am Econ Rev 68:251–255 Guttman JM (1987) A non-cournot model of voluntary collective action. Economica 54:1–19 Halsnæs K, Olhoff A (2005) International markets for greenhouse gas emission reduction policies – possibilities for integrating developing countries. Energy Policy 33:2313–2325 Hauert C, Doebeli M (2004) Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428:643–646 Heggelund GM, Buan IF (2009) China in the Asia–Pacific partnership: consequences for UN climate change mitigation efforts? Int Environ Agreements 9:301–317 Ho MS, Nielsen CP (2007) Clearing the air: the health and economic damages of air pollution in China. MIT Press, London Houghton JT (1997) Global warming: the complete briefing. Cambridge University Press, Cambridge IPCC (1996) Climate change 1995 – the science of climate change. Cambridge University Press, Cambridge Page 24 of 27

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_4-2 # Springer Science+Business Media New York 2015

IPCC (2001) Climate change 2001 – mitigation. Cambridge University Press, Cambridge IPCC (2007) Climate change 2007: synthesis report. Cambridge University Press, Cambridge IPCC (2013a) Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T. F., D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P. M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA IPCC (2013b) Climate change: the physical science basis, summary for policymakers. Cambridge University Press. http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WGIAR5_SPM_brochure_en.pdf Kan H, Chen B, Chen C, Fu Q, Chen M (2004) An evaluation of public health impact of ambient air pollution under various energy scenarios in Shanghai, China. Atmos Environ 38:95–102 Karlsson-Vinkhuyzen SI, van Asselt H (2009) Introduction: exploring and explaining the Asia-Pacific partnership on clean development and climate. Int Environ Agreements 9:195–211 Lal R (2004) Soil carbon sequestration impacts on global change and food security. Science 304:1623–1627 Laroui F, Tellegen E, Tourilova K (2004) Joint implementation in energy between the EU and Russia: outlook and potential. Energy Policy 32:899–914 Larson ED, Wu Z, DeLaquil P, Chen W, Gao P (2003) Future implications of China’s energy-technology choices. Energy Policy 31:1189–1204 Lawrence P (2009) Australian climate policy and the Asia Pacific partnership on clean development and climate (APP). From howard to rudd: continuity or change? Int Environ Agreements 9:281–299 Li JC (2006) A multi-period analysis of a carbon tax including local health feedback: an application to Thailand. Environ Dev Econ 11:317–342 Lipman BL (1986) Cooperation among egoists in prisoners’ dilemma and chicken games. Public Choice 51:315–331 Lipnowski I, Maital S (1983) Voluntary provision of a pure public good as the game of “chicken”. J Public Econ 20:381–386 Markandya A, R€ ubbelke DTG (2004) Ancillary benefits of climate policy. Jahrb€ ucher f€ur Nationalökonomie und Statistik 224:488–503 Markandya A, Armstrong BG, Hales S, Chiabai A, Criqui P, Mima S, Tonne C, Wilkinson P (2009) Public health benefits of strategies to reduce greenhouse-gas emissions: low-carbon electricity generation. Lancet 374:2006–2015 McGee J, Taplin R (2006) The Asia–Pacific partnership on clean development and climate: a complement or competitor to the Kyoto Protocol? Glob Chang Peace Secur 18:173–192 McGee J, Taplin R (2009) The role of the Asia Pacific partnership in discursive contestation of the international climate regime. Int Environ Agreements 9:213–238 McKinley G, Zuk M, Höjer M, Avalos M, González I, Iniestra R, Laguna I, Martínez MA, Osnaya P, Reynales LM, Valdés R, Martínez J (2005) Quantification of local and global benefits from air pollution control in Mexico city. Environ Sci Technol 39:1954–1961 Mestl HES, Aunan K, Fang J, Seip HM, Skjelvik JM, Vennemo H (2005) Cleaner production as climate investment: integrated assessment in Taiyuan city, China. J Clean Prod 13:57–70 Molina MJ, Rowland FS (1974) Stratospheric sink for chlorofluoromethanes: chlorine atom-catalysed destruction of ozone. Nature 249:810–812 Morgenstern R, Krupnick A, Zhang X (2004) The ancillary carbon benefits of SO2 reductions from a small-boiler policy in Taiyuan, PRC. J Environ Dev 13:140–155 Nordhaus WD (1998) Is the Kyoto Protocol a dead duck? 
Are there any live ducks around? Comparison of alternative global tradable emission regimes, preliminary version of the paper presented at the

Page 25 of 27

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_4-2 # Springer Science+Business Media New York 2015

Snowmass workshop on architectural issues in the design of climate change policy instruments and institutions, Yale University, New Haven Nordhaus WD (2006) After Kyoto: alternative mechanisms to control global warming. Am Econ Rev 96:31–34 O’Connor D, Zhai F, Aunan K, Berntsen T, Vennemo H (2003) Agricultural and human health impacts of climate policy in China: a general equilibrium analysis with special reference to Guangdong, technical papers no. 206, OECD Ojea E, Nunes PALD, Loureiro ML (2010) Mapping biodiversity indicators and assessing biodiversity values in global forests. Environ Resour Econ 47:329–347 Pearce D (1992) Secondary benefits of greenhouse gas control, CSERGE working paper no. 92-12, London Pearce D (2000) Policy framework for the ancillary benefits of climate change policies. In: OECD (ed) Ancillary benefits and costs of greenhouse gas mitigation. OECD, Paris, pp 517–560 Peng CY (2000) Integrating local, regional and global assessment in China’s air pollution control policy, CIES working paper no. 23 Pezzey JCV, Jotzo F, Quiggin J (2008) Fiddling while carbon burns: why climate policy needs pervasive emission pricing as well as technology promotion. Aust J Agric Resour Econ 52:97–110 Pickering J, R€ubbelke DTG (2014) International cooperation on adaptation. In: Markandya A, Galarraga I, de Sainz Murieta E (eds) Routledge handbook of the economics of climate change adaptation. Routledge, Oxon/New York Pittel K, R€ ubbelke DTG (2008) Climate policy and ancillary benefits – a survey and integration into the modelling of international negotiations on climate change. Ecol Econ 68:210–220 Pittel K, R€ ubbelke DTG (2012) Transitions in the negotiations on climate change: from prisoners’ dilemma to chicken and beyond. Int Environ Agreements 12:23–39 Pittel K, R€ ubbelke D (2013) International climate finance and its influence on fairness and policy. World Econ 36:419–436 Rabin M (1993) Incorporating fairness into game theory and economics. Am Econ Rev 83:1281–1302 Rapoport A, Chammah AM (1966) The game of chicken. Am Behav Sci 10:10–28 Rive N, R€ubbelke DTG (2010) International environmental policy and poverty alleviation. Rev World Econ 146:515–543 R€ ubbelke DTG (2002) International climate policy to combat global warming – an analysis of the ancillary benefits of reducing carbon emissions. Edward Elgar, Cheltenham/Northampton R€ubbelke DTG (2006) An analysis of an international environmental matching agreement. Environ Econ Policy Stud 8:1–31 R€ubbelke DTG (2011) International support of climate change policies in developing countries: strategic, moral and fairness aspects. Ecol Econ 70:1470–80 R€ubbelke DTG, Vögele S (2011) Impacts of climate change on European critical infrastructures: the case of the power sector. Environ Sci Pol 14:53–63 R€ ubbelke DTG, Vögele S (2013) Short-term distributional consequences of climate change impacts on the power sector: who gains and who loses? Clim Chang 116:191–206 Samuelson PA (1954) The pure theory of public expenditure. Review of Economics and Statistics 36:387–389 Samuelson PA (1955) Diagrammatic exposition of a theory of public expenditure. Review of Economics and Statistics 37:350–356 Sandler T (1997) Global challenges – an approach to environmental, political, and economic problems. Cambridge University Press, Cambridge

Page 26 of 27

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_4-2 # Springer Science+Business Media New York 2015

Sandler T, Sargent K (1995) Management of transnational commons: coordination, publicness, and treaty formation. Land Econ 71:145–162 Shrestha RM, Malla S, Liyanage MH (2007) Scenario-based analyses of energy system development and its environmental implications in Thailand. Energy Policy 35:3179–3193 Smith B (1998) Ethics of Du Pont’s CFC strategy 1975–1995. J Bus Ethics 17:557–568 Smith KR, Haigler E (2008) Co-benefits of climate mitigation and health protection in energy systems: scoping methods. Annu Rev Public Health 29:11–25 Smith S, Swierzbinski J (2007) Assessing the performance of the UK Emissions Trading Scheme. Environ Resour Econ 37:131–158 Smith SJ, Wigley TML (2000) Global warming potentials: 1. Climatic implications of emissions reductions. Clim Chang 44:445–457 Snyder GH (1971) “Prisoner’s dilemma” and “chicken” models in international politics. Int Stud Q 15:66–103 Stern N (2007) The economics of climate change – the stern review. Cambridge University Press, Cambridge UNFCCC (2011) Decision 1/CP.17: establishment of an Ad Hoc Working Group on the Durban platform for enhanced action. United Nations Framework Convention on Climate Change, Bonn UNFCCC (2014a) Distribution of expected CERs from registered projects by Host Party. https://cdm. unfccc.int/Statistics/Public/files/201407/ExpRed_reg_byHost.pdf. Last viewed 19 Aug 2014 UNFCCC (2014b) Glossary: CDM terms. https://cdm.unfccc.int/Reference/Guidclarif/glos_CDM.pdf. Last viewed 19 Aug 2014 Van Vuuren DP, Fengqi Z, de Vries B, Kejun J, Graveland C, Yun L (2003) Energy and emission scenarios for China in the 21st century – exploration of baseline development and mitigation options. Energy Policy 31:369–387 Vennemo H, Aunan K, Jinghua F, Holtedahl P, Tao H, Seip HM (2006) Domestic environmental benefits of China’s energy-related CDM potential. Clim Chang 75:215–239 Wang X, Smith KR (1999a) Near-term health benefits of greenhouse gas reductions: a proposed assessment method and application in two energy sectors of China, WHO/PHE/99.1. World Health Organization, Geneva Wang X, Smith KR (1999b) Secondary benefits of greenhouse gas control: health impacts in China. Environ Sci Technol 33:3056–3061 WCED (1987) Our common future. Oxford University Press, Oxford West JJ, Osnaya P, Laguna I, Martínez J, Fernández A (2004) Co-control of Urban air pollutants and greenhouse gases in Mexico city. Environ Sci Technol 38:3474–3481 Yamin F, Depledge J (2004) The international climate change regime: a guide to rules, institutions and procedures. Cambridge University Press, Cambridge Zhang ZX (2006) Towards an effective implementation of CDM projects in China. Energy Policy 34:3691–3701 Zheng X, Zhang L, Yu Y, Lin S (2011) On the nexus of SO2 and CO2 emissions in China: the ancillary benefits of CO2 emission reductions. Reg Environ Chang 11:883–891

Page 27 of 27

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_5-2 # Springer Science+Business Media New York 2015

Ethics and Environmental Policy
David J. Rutherford* and Eric Thomas Weber
Department of Public Policy Leadership, University of Mississippi, University, MS, USA
*Email: [email protected]

Abstract
This chapter offers a survey of important factors for the consideration of the moral obligations involved in confronting the challenges of climate change. The first step is to identify as carefully as possible what is known about climate change science, predictions, concerns, models, and both mitigation and adaptation efforts. While the present volume is focused primarily on the mitigation side of reactions to climate change, these mitigation efforts ought to be planned in part with reference to what options and actions are available, likely, and desirable for adaptation. Section “Understanding Climate Change,” therefore, provides an overview of the current understanding of climate change with careful definitions of terminology and concepts along with the presentation of the increasingly strong evidence that validates growing concern about climate change and its probable consequences. Section “Uncertainties and Moral Obligations Despite Them” addresses the kinds of uncertainty at issue when it comes to climate science. The fact that there are uncertainties involved in the understanding of climate change will be shown to be consistent with there being moral obligations to address climate change, obligations that include expanding the knowledge of the subject, developing plans for a variety of possible adaptation needs, and studying further the various options for mitigation and their myriad costs. Section “Traditions and New Developments in Environmental Ethics” covers a number of moral considerations for climate change mitigation, opening with an examination of the traditional approaches to environmental ethics and then presenting three pressing areas of concern for mitigation efforts: differential levels of responsibility for action that affects the whole globe, the dangers of causing greater harm than is resolved, and the motivating force of diminishing and increasingly expensive fossil fuels that will necessitate and likely speed up innovation in energy production and consumption that will be required for human beings to survive once fossil fuels are exhausted.

Introduction

Few subjects are as complex and as frequently oversimplified as climate change. After big snowfalls in winters past, news outlets have featured various observers of these local events, who dismiss the idea of global warming with statements such as “so much for the global warming theory” (LaHay 2000). On the other hand, climate scientists note that Earth’s average temperature has risen over time, and as a result, they predict increases in temperature extremes and in the evaporation of water that, in turn, lead to an expectation of increased snowfall in some years. Problems of understanding and misunderstanding such as these are important causes of confusion in discussions about climate change, and those problems and that confusion combined with the complexity of the issues at stake add considerable challenge to addressing the topic of focus in this chapter: the ethics of climate change mitigation. This chapter will argue that despite limitations to knowledge about the complexities of the climate system, certain efforts must be undertaken to prepare for and address the developments in climate change. The science on the subject is growing increasingly compelling, showing that there is need to work toward


mitigating the causal forces that are bringing about climate change along with preparing adaptations to changes in climate, some of which have already begun (Walther et al. 2002). Furthermore, the existence of uncertainties with respect to climate science calls for more study of the subject of climate change, with greater collaboration than is already at work. Calling for further study of the subject, however, does not imply the postponement of all or any particular measure of precaution and potential action. This chapter will examine the current knowledge about climate change as well as the moral dimensions at issue in both seeking to minimize those changes and working to prepare for the changes and their effects. When the term “mitigation” arises in this chapter, it is important to keep in mind a consistent meaning. To mitigate something generally means to make it less harsh and less severe, but in relation to climate change, mitigation carries a more precise meaning. The term refers to human actions taken to reduce the forces that are believed responsible for the increase of the average temperature of the Earth. The primary concern with climate change is the increase of global average temperature, and mitigation is aimed at decreasing the rate of growth of this global temperature and stabilizing it or even decreasing it should it rise too high. Mitigation is sometimes referred to as abatement. Generally, the idea of abatement is either to reduce the rate of growth that is or will likely be problematic or to actually reverse the trend and reduce global average temperature. In contrast to mitigation, a second category of response to climate change is to find ways of adapting life to new conditions, the method of adaptation. Adaptation refers to adjustments made in response to changing climates that moderate harm or exploit beneficial opportunities (Intergovernmental Panel on Climate Change 2007a). The interesting issue that arises in focusing on climate change mitigation – the efforts to decrease the causal forces of rising global temperatures – is that subtle changes in temperature might be the kind to which some or even many people will be able to adapt relatively easily. For instance, if people live on coastal lands that are increasingly inundated, there are ways of reclaiming land from water or places to which people can move in adaptation to the climate changes. Other adaptations might include systems of planned agricultural crop changes prepared to avoid problems that could arise in growing food for the world’s increasing population. An important consideration about adaptation is that while humans may be able to change and adjust to changing climates, natural ecosystems and habitats may not, a point that will also be addressed in this chapter. There are certainly reasons to worry about sudden, great changes, but more gradual and less severe changes raise a host of ethical issues. For instance, it is reasonable to ask whether a farmer has the moral right to grow a certain crop. If so, then it may be that people have a responsibility to avoid changing the climate. Belief in such a right, however, could be considered highly controversial. What if farmers could reasonably expect some help in adapting the crops that they raise to new conditions? This idea would lessen the moral concern over the ability to grow a certain crop in a particular region, and thus a matter of adaptation would have bearing on the moral dimensions of climate change mitigation. 
It is likely that the best solution to address the ill effects of climate change will require a combination of mitigation and adaptation strategies. A central claim of this chapter, therefore, is that the ethics of climate change mitigation must not be considered in isolation from the options available for adaptation. Of the two, however, the more controversial, morally speaking, are abatement efforts or mitigation. This is because when climate conditions change, there will be no choice for people but to adapt to new circumstances if presented with serious challenges for survival, at least until humans are able to exert control in a desirable way on the trends in global climate. But abatement efforts, on the other hand, require sacrifices early, before certainty exists about the exact nature and extent of the problems to come and whom the problems, benefits, and mitigating efforts will most affect and how.

Accompanying the problem of complexity that exists in climate change is a necessary challenge of uncertainty. The approach of addressing change through adaptive measures can be started early and is also possible as some more gradual changes occur, such as in the evacuation of islands that slowly disappear under the rising level of the sea. Other problems, however, are predicted to occur swiftly, such as in the


potential disruption of the ocean conveyor, a “major threshold phenomenon” that could bring “significant climatic consequences,” such as severe droughts (Gardiner 2004, pp. 562–563). The problem of knowledge, of the limits to human abilities to identify where suffering or benefits will occur, under what form, by which mechanisms, implies that preventive adaptations may be impossible in the face of sudden changes in global climates. Furthermore, if there existed no idea of changes that might occur, this limited knowledge might render the effects of changing conditions less troubling morally speaking. But the fact is that today many scientists have devised models that suggest potential outcomes of climate change and so undercut the option of ignorant dismissal or avoidance of moral obligation. Limited knowledge about climate change first and foremost calls for increasing the knowledge and study of the subject, but it also demands consideration of the kinds of problems that can be expected, weighed against the anticipated costs of alleviating the worst of the threats. This chapter will offer a survey of a number of important factors for the consideration of the moral obligations involved in confronting the challenges of climate change. The first step is to identify as carefully as possible what is known about climate change science, predictions, concerns, models, and both mitigation and adaptation efforts. While the present volume is focused primarily on the mitigation side of reactions to climate change, these mitigation efforts ought to be planned in part with reference to what options and actions are available, likely, and desirable for adaptation. Section “Understanding Climate Change,” therefore, provides an overview of current understanding of climate change with careful definitions of terminology and concepts along with the presentation of the increasingly strong evidence that validates growing concern about climate change and its probable consequences. Next, section “Uncertainties and Moral Obligations Despite Them” will address the kinds of uncertainty at issue when it comes to climate science. The fact that there are uncertainties involved in human understanding of climate change will be shown to be consistent with there being moral obligations to address climate change. As mentioned above, these are obligations to know more than is currently known, to develop plans for a variety of possible adaptation needs, and to study further the various options for mitigation and their myriad costs. Plus, Gardiner (2004) presented a convincing case for the weighing of options that concludes in accepting the consequences of a small decrease in GNP from setting limits on global greenhouse gas emissions. Gardiner’s argument is compelling even in the face of uncertainty. After all, the uncertainties involved in climate change resemble uncertainties that motivate moral precaution in so many other spheres of human conduct. Finally, section “Traditions and New Developments in Environmental Ethics” covers a number of moral considerations for climate change mitigation. 
This section opens with an examination of the traditional approaches to environmental ethics and then presents three pressing areas of concern for mitigation efforts: differential levels of responsibility for action that affects the whole globe, the dangers of causing greater harm than is resolved (with geoengineering efforts, among others), and the motivating forces of diminishing and increasingly expensive fossil fuels that will necessitate and likely speed up innovation in energy production and consumption that will be required for human beings to survive once fossil fuels are exhausted.

Understanding Climate Change

Given the complexity of addressing global climate change, it is crucial to clarify the meaning of a number of key terms, forces, and strategies for mitigation, so this first section will begin with a description of central terms and concepts at issue. The section then covers perceptions and methods for describing climate change because ideologies and affective influences on discourse about climate change can be used to mislead the public about the nature and the state of climate science. After that, the section examines the state of scientific knowledge and the predictions that the scientific community has presented about the


future of climate change. This is important in order to grasp the extent of concern that world leaders and publics ought to feel about the future of the world’s climates. Finally, this section will close with a brief description of the various proposals that have been considered for mitigating climate change.

Terminology and Concepts

Uncertainty, confusion, and misunderstanding result from poorly or ambiguously defined terminology and concepts, and this is especially the case with the topic of climate change. Climate change is complex and often elicits heated and impassioned public discourse. To reduce such problems, this section provides definitions for terms and concepts that are essential for both an explanation of what is known about climate change and for consideration of the broader topic of ethics and climate change mitigation. Some of these definitions are contested, and in such cases, the preferred definitions presented here will be contrasted with other definitions found in the literature, along with provision of an explanation for the selections made.

Weather and Climate
The term “weather” refers to short-term atmospheric conditions occurring in a specific time and place and identified by the sum of selected defining variables that can include temperature, precipitation, humidity, cloudiness, air pressure, wind (velocity and direction), storminess, and more. Weather is measured and reported at the scale of moments, hours, days, and weeks. Climate, on the other hand, is defined (in a narrow sense) as the aggregate of day-to-day weather conditions that have been averaged over longer periods of time such as a month, a season, a year, decades, or thousands to millions of years. Climate is a statistical description that includes not just the average or mean values of the relevant variables but also the variability of those values and the extremes (McKnight and Hess 2000; Intergovernmental Panel on Climate Change 2007b).

The Climate System
Understanding climate entails more than consideration of just the aggregated day-to-day weather conditions averaged over longer periods of time. Those average atmospheric conditions operate within the wider context of what is called the climate system that includes not just the atmosphere but also the hydrosphere, the cryosphere, the Earth’s land surface, and the biosphere.
• The atmosphere is a mixture of gasses that lie in a relatively thin envelope that surrounds the Earth and is held in place by gravity. The atmosphere also contains suspended liquid and solid particles that “can vary considerably in type and concentration and from time to time and place to place” (Kemp 2004, p. 37). On average, 50 % of the atmospheric mass lies between sea level and 5.6 km (3.48 miles or 18,372 ft) of altitude. To highlight how thin this is, consider that the peak of Mt. McKinley in Alaska is 6.19 km (20,320 ft) above sea level and, as a result, the density of air is less than 50 % of that available at sea level or that the peak of Mt. Everest at 8.85 km (29,029 ft) has less than 32 % of the air density that is available at sea level. Commercial jet airliners generally fly at about 10.5 km (35,000 ft) above sea level, and humans would lapse into unconsciousness very quickly if cabin pressure were to decrease suddenly at this altitude (Strahler and Strahler 1978).
• The hydrosphere consists of liquid surface water such as the ocean, seas, lakes, and rivers, along with groundwater, soil water, and, importantly, water vapor in the atmosphere.
• The cryosphere consists of all snow, ice (glaciers and ice sheets), and frozen ground (including permafrost) that lie on and beneath the surface of the Earth.
• Earth’s land surface consists of the naturally occurring rock and soil along with the structures (buildings, roads, etc.) that humans have constructed.


• The biosphere consists of all living organisms, both plant and animal, on land, in fresh water, and in the ocean, including derived dead organic matter such as litter, soil organic matter, and ocean detritus.

The climate system functions by means of complex interactions among these five components in which flows and fluxes of energy and matter take place through myriad processes such as radiation, convection, evaporation, transpiration, chemical exchanges, and many more (Climate Change 2007c). Given this complexity, climate science is an interdisciplinary endeavor that necessarily involves the interactions and contributions of a wide range of the physical sciences such as physics, chemistry, biology, ecology, oceanography, and the atmospheric sciences. Moreover, because human existence involves interactions with climate, the social sciences such as psychology, political science, and sociology also play important roles in human understanding. In addition, climate operates over time and space, so the synthesizing disciplines of history and geography have much to contribute as well. Furthermore, as shown later in this chapter, the humanities contribute to the understanding of the social dimensions of climate systems when it comes to considering the moral implications of various situations and actions in response to climate change.

Climate Change
The most recent definition of climate change developed by the Intergovernmental Panel on Climate Change (IPCC) will be used in this chapter:

Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer (Climate Change 2007c, p. 78; see also USCCSP (United States Climate Change Science Program) 2007).
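To make the statistical character of this definition concrete, the sketch below compares the mean annual temperature of two 30-year periods with a standard two-sample test; the series are synthetic and the 30-year window length is chosen only for illustration, not prescribed by the definition.

```python
# Sketch: detecting a change in the mean of an annual-temperature series, in the
# spirit of the IPCC definition above (identification "by using statistical tests").
# The data are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
early = 14.0 + 0.2 * rng.standard_normal(30)  # hypothetical earlier 30-year period (deg C)
late = 14.5 + 0.2 * rng.standard_normal(30)   # hypothetical later 30-year period (deg C)

# Welch's t-test: is the difference in means large relative to the variability?
t_stat, p_value = stats.ttest_ind(late, early, equal_var=False)
print(f"change in mean: {late.mean() - early.mean():+.2f} C, p-value: {p_value:.3g}")
# A very small p-value indicates a shift in the mean that persists beyond
# year-to-year variability, which is the sense of "climate change" used here.
```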

Importantly, this definition is solely descriptive and includes no reference to causation, particularly no indication of the extent to which any changes in climate result from natural or human (anthropogenic) causes. Other definitions of climate change include causation, such as the United Nations Framework Convention on Climate Change: “Climate change” means a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods (UNFCCC (United Nations Framework Convention on Climate Change) 1992, p. 3).

The first definition was selected for use in this chapter because it focuses on identifying and describing observed changes in climate and specifically refrains from assigning causation to either natural or anthropogenic processes. As a result, it draws attention to the distinction between two aspects of inquiry: (1) questions related to the presence, extent, and direction of changes in climate and (2) questions about causation of any observed changes, especially determinations of natural or anthropogenic causes. Views about (2) are often disconnected from questions about presence, extent, and direction of change and also tend to generate more contentious debate, especially in public and political discourse. As means to reduce contention, it is helpful to make the clear distinction between these two aspects of inquiry, and such clarity is especially important in this chapter, considering issues of ethics, mitigation, and adaptation. Additionally, and importantly, the selected definition implies no specific type of change(s) but instead fosters recognition that changes can occur in all manner of the variables that constitute climate such as temperature, precipitation, humidity, cloud cover, etc. (this point is further elaborated below with respect to the terms “climate change” and “global warming”). An additional reason to clarify the difference between (1) and (2) is that consideration of (1) generally engenders less controversy, while the task of determining who should act in addressing any needs that arise from climate change will depend in part on how one addresses issue (2). As such, (2) is not to be


ignored in addressing the ethics of climate change, but after untangling (1) from (2), the problems to be addressed can be recognized for what they are more easily.

Climate Variability
Most definitions of climate variability found in the literature differ little from the above definitions of climate change. For example, as defined in the Synthesis Report for the IPCC Fourth Assessment (Climate Change 2007c, pp. 78–79), the two terms actually seem synonymous in that they both refer to changes occurring on timescales of multiple decades or longer and they both allow for natural and anthropogenic causes. Other definitions of climate variability retain the focus on timescales of multiple decades or longer but limit climate variability to only natural causes (Batterbee and Binney 2008; Climate Research Program 2010). In this chapter, however, the term will refer to something different from either of these uses. The term “climate variability” is used in this chapter in recognition that the long-term, statistical averages of the variables that define climates can contain substantial variation around the mean. Droughts, rainy periods, El Niño events, etc., occur in time periods of a year to as much as three decades within climates that are considered to be stable as well as within climates that are experiencing changes in the longer term. This variability is different from extreme weather events such as floods and heat waves that occur on timescales of hours, days, and weeks, and it is also different from the long-term climate changes that occur on scales that span multiple decades to millions of years (which have already been defined above as “climate change”).

The reasons to differentiate climate variability from climate change in this way are twofold. First, climate variability can generate considerable “noise” in the data that can lead to erroneous conclusions about climate change. For example, Fig. 1 shows two levels of variability – interannual and multidecadal – that are present in the observed global temperature record that extends from 1880 to 2009. Interannual variability (variability from year to year) is as much as 0.3 °C (0.54 °F), a range that could be expressed as 1 year with a very hot summer and a mild winter followed by a second year with a mild summer and a very cold winter. The conditions present in either of these years could lead people to make poor judgments about climate. In particular, the long-term warming trend that the graph shows occurring across the full 129-year period is sometimes dismissed because people generally give greater weight in decision making and opinion formation to immediate affective sensory input over cognitive consideration of statistics (Weber 2010) (more will be said below about human decision making that is affect based compared to a basis on statistical description).

[Fig. 1 appears here: annual-mean and 5-year-mean global land-ocean temperature index, 1880–2009; vertical axis: temperature anomaly (°C)]

Fig. 1 A line plot of the global land-ocean temperature index from 1880 to 2009, with the base period 1951–1980. The dotted black line is the annual mean and the solid black line is the 5-year mean. The gray bars show uncertainty estimates (GISS (Goddard Institute for Space Studies) 2010a)
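As a concrete illustration of how the 5-year mean in Fig. 1 smooths out interannual variability, the following sketch applies a centered 5-year running mean to a short series of annual anomalies; the anomaly values are invented for illustration and are not taken from the GISTEMP record.

```python
# Sketch: smoothing annual temperature anomalies with a centered 5-year running
# mean, as in Fig. 1, to separate interannual variability from the longer trend.
# The anomaly values below are invented for illustration only.
import numpy as np

years = np.arange(2000, 2010)
annual_anomaly = np.array([0.40, 0.55, 0.62, 0.61, 0.53, 0.67, 0.62, 0.65, 0.53, 0.63])

window = 5  # the first and last two years have no centered 5-year value
running_mean = np.convolve(annual_anomaly, np.ones(window) / window, mode="valid")

for year, value in zip(years[window // 2 : -(window // 2)], running_mean):
    print(year, round(float(value), 3))
```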


The variability over several decades is exhibited in Fig. 1 for the time period 1940–1980, which shows a plateau within the longer-term, 129-year warming trend. During this shorter time period, media reports and even a few researchers erroneously forecast “global cooling” based on the observational record at the time that included inadequate and uncertain data from years earlier than this time period and, obviously, no data beyond 1980 (de Blij 2005, p. 85).

The second important reason for distinguishing between climate variability and climate change in the way defined in this chapter is related to dynamic equilibrium in ecosystems. Dynamic equilibrium results as ecosystems adapt to dynamic, ongoing forces that are not so extreme as to produce catastrophic changes. This dynamic equilibrium occurs because the change forces are not dramatic enough (or they cancel each other out), so that relative stability in the ecosystem can be perpetuated as the organisms (plants and animals) and the physical environment respond with adjustments that are within their adaptive capacities. In general, ecosystem adaptive capacity is not exceeded (and dynamic equilibrium is maintained) as a result of climate variability as defined here, but climate change, on the other hand, often exceeds this capacity and leads to fundamental alterations of the ecosystems. Such fundamental alterations occurring in natural ecosystems include processes such as species extinction, changes in community compositions, changes in ecological interactions, changes in geographical distributions, etc. Fundamental alterations can also occur within ecosystems upon which humans depend, leading to such changes as increases/decreases in agricultural productivity and the availability of water, changes in storm patterns, etc. (Intergovernmental Panel on Climate Change 2007a). These effects on both natural and human ecosystems will be discussed in more detail in what follows, but the important point here is that climate variability rarely produces such fundamental alterations, whereas climate change frequently can.

Global Warming and Global Average Temperature
Global warming is defined as an increase in the average temperature of Earth’s surface (NASA (National Aeronautics and Space Administration) 2007). As Fig. 1 illustrates, this average surface temperature has increased by 0.75 °C ± 0.3 °C (1.35 °F ± 0.54 °F) between 1880 and 2009. While this change might seem small, the paleoclimate record demonstrates that even “mild heating can have dramatic consequences” such as advancing or retreating glaciers, sea level changes, and changes in precipitation patterns that can all force considerable changes in human activity and push natural ecosystems beyond dynamic equilibrium (Hansen 2009). The graph in Fig. 1 comes from NASA’s Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP) database which contains temperature observations from land and sea from 1880 to the present (GISS (Goddard Institute for Space Studies) 2010b). It is one of the three such large databases of Earth surface atmospheric observations that all begin in the mid- to late nineteenth century and extend to the present.
The National Oceanic and Atmospheric Administration (NOAA) maintains the second database that is titled the Global Historical Climatology Network (GHCN), and while this database contains observations from land stations only, it includes precipitation and air pressure data as well as temperature (National Climatic Data Center 2008). The third database is abbreviated HadCRUT3, which reflects the source of the dataset being a collaborative project of the Met Office Hadley Center of the UK National Weather Service (“Had”) and the Climatic Research Unit (“CRU”) at the University of East Anglia. The Hadley Center provides marine surface temperature data, and the Climatic Research Unit provides the land surface temperature data. These three databases are not completely independent because they share some of the same observation stations, but nevertheless, some differences in the raw data exist, and the three centers work independently using different approaches to the compilation and analysis done on the datasets. As such, the comparisons of results from the different databases allow for verification. Considerable consistency is apparent across the databases, especially in the overall trend of global warming since 1880. The different centers “work independently and use


different methods in the way they collect and process data to calculate the global average temperature. Despite this, the results of each are similar from month to month and year to year, and there is definite agreement on temperature trends from decade to decade. Most importantly, they all agree that global average temperature has increased over the past century and this warming has been particularly rapid since the 1970s” (Stott 2011). Figure 2 shows the temperature record for each of the three datasets superimposed upon one another, and the consistency among them is clear. In addition, research has been done to identify and quantify uncertainty in the data, and good estimates of the uncertainty indicate that the data are valid. As one such study stated: Since the mid twentieth century, the uncertainties in global and hemispheric mean temperatures are small, and the temperature increase greatly exceeds its uncertainty. In earlier periods the uncertainties are larger, but the temperature increase over the twentieth century is still significantly larger than its uncertainty (Brohan et al. 2006, p. 1).
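To make the point of this quotation concrete, the following sketch fits a straight-line trend to a short series of annual anomalies and compares the estimated warming with its standard error; the values are invented, and ordinary least squares is used here only as the simplest possible method, not as the error model of Brohan et al. (2006).

```python
# Sketch: estimating a warming trend and its uncertainty from annual anomalies
# with ordinary least squares. The anomaly values are invented for illustration;
# real analyses use far more elaborate error models.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2010)
anomalies = 0.018 * (years - years[0]) + 0.1 + 0.1 * rng.standard_normal(years.size)

coeffs, cov = np.polyfit(years, anomalies, deg=1, cov=True)
trend_per_decade = coeffs[0] * 10
stderr_per_decade = np.sqrt(cov[0, 0]) * 10
print(f"trend: {trend_per_decade:+.3f} +/- {stderr_per_decade:.3f} C per decade")
# If the estimated trend is several times larger than its standard error, the
# warming "greatly exceeds its uncertainty" in the sense of the quotation above.
```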

The temperature records shown in Fig. 2 for each of the three centers are developed as each center uses its dataset to calculate a “global average temperature,” both for the past and for monthly updates, and it is these values that are displayed on the graphs in the figure. While these calculations are done differently at the three centers, all three use the following general procedure. First, they expend considerable efforts to obtain the most accurate data possible and define the uncertainty that remains in those data. Then, the monthly average temperature value for each reporting station is converted into what is called an “anomaly.” The anomaly of each reporting station is calculated by subtracting from the monthly average value the average value that the station has maintained over some relatively long-term “base period” (e.g., the HadCRUT3 uses the period 1961–1990 as its base period). The reason for using anomalies is stated as follows:

For example, if the 1961–1990 average September temperature for Edinburgh in Scotland is 12 °C and the recorded average temperature for that month in 2009 is 13 °C, the difference of 1 °C is the anomaly and this would be used in the calculation of the global average (Stott 2011).

One of the main reasons for using anomalies is that they remain fairly constant over large areas. So, for example, an anomaly in Edinburgh is likely to be the same as the anomaly further north in Fort William and at the top of Ben Nevis, the UK’s highest mountain. This is even though there may be large differences in absolute temperature at each of these locations.
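A minimal sketch of the anomaly calculation just described, using the Edinburgh example from the text; the base-period series below is an invented stand-in constructed so that its mean is 12 °C.

```python
# Sketch: a station's monthly temperature anomaly relative to a 1961-1990 base
# period, following the Edinburgh example in the text. The base-period values
# are an invented stand-in whose mean is 12 C.
import numpy as np

base_period_september = np.full(30, 12.0)  # stand-in for the 1961-1990 September means (deg C)
observed_september_2009 = 13.0             # recorded September 2009 mean (deg C)

baseline = base_period_september.mean()
anomaly = observed_september_2009 - baseline
print(f"anomaly: {anomaly:+.1f} C")        # +1.0 C, as in the example
```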

[Fig. 2 appears here: annual temperature anomalies (°C) relative to 1961–1990 for the HadCRUT3, NCDC, and GISS records, 1850 to the present]

Fig. 2 Correlation between the three global average temperature records. All three datasets show clear correlation and a marked warming trend, particularly over the past three decades. The HadCRUT3 graph shows uncertainty bands which tighten up considerably after 1945 (WMO (World Meteorological Organization) 2010)
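The kind of cross-check summarized in Fig. 2 can be illustrated in a few lines; the three short series below are invented stand-ins for annual anomalies from the three centers, not actual HadCRUT3, NCDC, or GISS values.

```python
# Sketch: correlating annual anomalies from different centers to confirm that
# they tell the same story, as in Fig. 2. The series are invented stand-ins.
import numpy as np

hadcrut = np.array([0.40, 0.44, 0.47, 0.45, 0.54, 0.55, 0.59])
ncdc = np.array([0.42, 0.45, 0.46, 0.47, 0.55, 0.54, 0.60])
giss = np.array([0.41, 0.43, 0.48, 0.46, 0.53, 0.56, 0.61])

for name, series in (("NCDC", ncdc), ("GISS", giss)):
    r = np.corrcoef(hadcrut, series)[0, 1]
    print(f"HadCRUT3 vs {name}: r = {r:.3f}")
```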


The anomaly method also helps to avoid biases. For example, if actual temperatures were used and information from an Arctic observation station was missing for that month, it would mean the global temperature record would seem warmer. Using anomalies means missing data such as this will not bias the temperature record (Stott 2011; see National Climatic Data Center 2010a for additional explanation of the calculation and use of anomalies as used for the National Climate Data Center’s GHCN system). Even though using anomalies produces the most accurate record of Earth’s global average temperature, it is still interesting to calculate one single absolute “global average temperature.” Using the GHCN dataset (National Climatic Data Center 2010b), the average value for the last 10 years, the warmest decade on record (GISS (Goddard Institute for Space Studies) 2010a; Atmospheric Administration 2009; WMO (World Meteorological Organization) 2009), produces a global average temperature for planet Earth of 14.4 °C or 58 °F.

Climate Forcing and Climate Feedback
Climate forcing refers to the processes that produce changes in the climate. The word force is generally defined as “strength or energy that is exerted or brought to bear [and that often] causes motion or change” (Merriam-Webster 2003). With respect to Earth’s climate system, a variety of forces cause climates to change. These are called “climate forcings,” and they are all related to Earth’s “energy balance,” that is, the balance between incoming energy from the Sun and outgoing energy from the Earth. The forcings can be internal or external. “Internal forcings” occur within the climate system and include processes such as changes in atmospheric composition or changes in ice cover that cause different rates of absorption/reflection of solar radiation. “External forcings” originate from outside the climate system and include processes such as changes in Earth’s orbit around the Sun and volcanic eruptions. Forcings can be naturally occurring, such as those resulting from solar activity or volcanic eruptions, or anthropogenic in origin, for example, the emission of greenhouse gases or deforestation (Intergovernmental Panel on Climate Change 2007a, p. 9).

A feedback is defined as a change that occurs within the climate system in response to a forcing mechanism. A feedback is called “positive” when it augments or intensifies the effects of the forcing mechanism or “negative” when it diminishes or reduces the effects caused by that original forcing mechanism (Intergovernmental Panel on Climate Change 2007a, p. 875). Forcing and feedback mechanisms often interact in complex ways that make it difficult to decipher the processes and dynamics of climate change. This difficulty also frequently frustrates policymakers, the media, and the public, and it can result in the dissemination of misinformation, both intentional and unintentional, into the public discourse. One example of this relates to the relationship between carbon dioxide (CO2) and temperature. While it is relatively easy to understand that increasing concentrations of atmospheric CO2 can increase the naturally occurring greenhouse effect thereby causing global warming, confusion and misinformation result when research brings to light a climate record in which changes in the atmospheric CO2 level lag behind changes in temperature by 800–1,000 years.
The legitimate question arises as to how it could be possible that CO2 causes global warming if the rise in temperature occurs before the increase in the atmospheric concentration of CO2. While the question is legitimate, unfortunately, some who are disposed to doubt claims of global warming neither seek answers to the question nor pursue additional investigation. Instead, they simply assert the premise that because CO2 lags temperature, it cannot possibly be the cause of global warming. However, a more objective review of the scientific literature emphasizes the importance of distinguishing between forcings and feedbacks. The initial, external forcing that begins the temperature changes observed in the climate record stems from fluctuations in the orbital relations between the Sun and Earth, and these fluctuations produce rather small changes in the amount of solar radiation reaching Earth (Hays et al. 1976). This relatively weak forcing action causes small temperature changes that are then amplified by other processes (Lorius


et al. 1990). One such amplifying process that appears to be quite significant occurs because ocean temperature changes also change the ocean’s capacity to retain soluble CO2. As this capacity changes, it causes CO2 to either be released from the oceans into the atmosphere (during times of warming temperatures) or removed from the atmosphere and dissolved into the oceans (during times of cooling temperatures). Consequently, CO2 operates in these situations as a positive feedback mechanism that augments the temperature change. In other words, it enhances the greenhouse effect and amplifies temperature increases during times of warming and reduces the greenhouse effect and reinforces temperature decreases during times of cooling (Martin et al. 2005). Careful analysis therefore suggests that a climate record which shows CO2 operating as a feedback mechanism neither negates nor renders less likely the potential that CO2 could operate as an initial forcing mechanism as well. Considering that the atmospheric concentration of CO2 has increased by 25 % in the last 50 years (Atmospheric Administration 2010), it is entirely possible that this increasing CO2 concentration is functioning as the forcing agent for contemporary global warming. Simply put, it is a false premise to claim that CO2 could not be causing contemporary global warming because CO2 has been observed to lag behind temperature changes in the past. This false premise has been lampooned by the analogous statement that “Chickens do not lay eggs, because they have been observed to hatch from them” (Bruno 2009).

Global Warming Versus Climate Change
The terms “global warming” and “climate change” have been defined above, and those definitions will not be repeated here. But it is important to emphasize the difference between the two terms and the significance of exercising precision in use of them. While “global warming” is a useful way to refer to the increase of global average temperature that strong scientific evidence shows has occurred over the last 130 years (Fig. 2), for some people, the term carries the automatic connotation that human activity is the cause of this observed temperature increase. As stated earlier, a clear distinction should be made between questions that, on the one hand, relate to the changes in climate, if any, that are occurring and, on the other hand, the causes of any identified changes, specifically, naturally occurring or anthropogenic. Because the term “global warming” carries the more polemical and politicized connotation, it poses a higher probability of conflating the two questions than does the term “climate change” which has not yet attracted such politicized interpretations. Consequently, in general, the term “climate change” is preferable. A second deficiency with the term “global warming” is the one-dimensional and totalizing change that it implies. Although the average temperature of planet Earth is increasing, the temperature change that any particular place on the Earth might experience could be cooling instead of warming, or perhaps that place might be experiencing no change in temperature at all. But the term “global warming” is easily, and perhaps most naturally, understood to mean that all places on the Earth will experience warming. Moreover, even if the term is explained, it does not readily lend itself to the broader understanding that although the global average temperature is increasing, it is not necessarily the case that temperature is increasing at any given place on Earth.
The term “climate change,” on the other hand, does not imply this uniform nature of change and thus possesses greater capacity to communicate the potential for different changes occurring in different places and regions. In addition, the term “global warming” implies a narrow view of the nature of changes that can occur in the climate system, namely, an exclusive focus on temperature. But the possible changes to climate are not restricted to just the climate variable of temperature, and the observed increase in global average temperature has been associated with changes in a range of other climate variables that include precipitation amounts, timing and patterns, cloudiness, humidity, wind direction and velocity, storminess, and more. While the term “global warming” places the focus on temperature, the term “climate change” offers a much richer capacity to incorporate these other types of changes as well and, as a result, is generally emerging as the preferred term.


Thresholds and Tipping Points
The term “threshold” in ecology and environmental science means “a fixed value at which an abrupt change in the behavior of a system is observed” (Park 2008, p. 450). In climate science, the term “climate threshold” means the point at which some forcing of the climate system “triggers a significant climatic or environmental event which is considered unalterable, or recoverable only on very long time-scales, such as widespread bleaching of corals or a collapse of oceanic circulation systems” (Intergovernmental Panel on Climate Change 2007a, p. 872). Substantial research indicates that climate changes are prone to such thresholds, or “tipping points,” at which climate on a global scale or climates at regional scales can suddenly experience major change (Committee on Abrupt Climate Change 2002; Lenton et al. 2008). A wide number of complex systems exhibit similar threshold events – financial markets, ecosystems, and even epileptic seizures and asthma attacks – in which the system seems stable right up until the time when the sudden change occurs (Scheffer et al. 2009). Research has provided general ideas on where these thresholds or tipping points might operate with respect to climate – the loss of Arctic sea ice or Antarctic ice shelves, the release of methane into the atmosphere from the melting of Siberian permafrost, or the disruption of the “oceanic conveyor belt” – but this knowledge is rudimentary at best. Scheffer and colleagues (2009) report tentative efforts to identify “early warning signs” that precede threshold events, and with respect to climate, they state that “flickering,” “rapid alterations,” or increased weather and climate “variability” seem to have preceded sudden changes observed in the climate record. But at present, predicting these climatic thresholds is vague at best. One of the authors explained the idea of thresholds and the uncertainty about them in an interview with Time magazine, “Managing the environment is like driving [on] a foggy road at night by a cliff. . . . You know it’s there, but you don’t know where exactly” (Walsh 2009).

Defining and Communicating Uncertainty
Clearly, climate science contains uncertainties that are endemic to the data sources used, to the understanding of processes involved, and to predictions of future trends, impacts, and outcomes. Consequently, it is essential to accompany any study of climate change with careful, explicit, and candid assessments of the levels of certainty or confidence associated with the findings or claims made. Indeed, reports or studies are suspect if they fail to include such information and/or if they make unequivocal statements about “proving” their points. To some extent, the same can be said about commentaries, news reports, or various information sources. While the politicized environment in which climate change is debated might encourage strong and definite affirmations, such statements can prove counterproductive if they are perceived or exposed as exaggerated (Weber 2010; Hodder and Martin 2009). Numerous approaches exist for defining and communicating uncertainty, and this brief discussion here does not attempt a comprehensive overview. Instead, it focuses on the approach that the IPCC has developed for its assessment reports.
The main function of the IPCC is to “assess the state of our understanding and to judge the confidence with which we can make projections of climate change, its impacts, and costs and efficacy of options,” but in its first and second assessments (1990 and 1995, respectively), the IPCC gave inadequate attention to “systematizing the process of reaching collective judgments about uncertainties and levels of confidence or standardizing the terms used to convey uncertainties and levels of confidence to the decision-maker audience” (Moss 2006, p. 5, emphasis added). Consequently, the IPCC conducted a comprehensive project to rectify these inadequacies (Moss and Schneider 2000; Manning et al. 2004), and the result was the following system for defining and communicating uncertainties in the Fourth Assessment Report published in 2007. The first step is to present a general summary of the state of knowledge related to the topic being presented. This summary should include (1) the amount of evidence available in support of the findings and (2) the degree of consensus among experts on the interpretation of the evidence (Climate Change 2005).

Figure 3 plots increasing amounts of evidence (theory, observations, models) against an increasing level of agreement or consensus, yielding four qualitative categories: speculative (low agreement, limited evidence), competing explanations (low agreement, much evidence), established but incomplete (high agreement, limited evidence), and well established (high agreement, much evidence).

Fig. 3 Conceptual framework for assessing the current level of understanding (Moss 2006; Climate Change 2005)

Figure 3 illustrates how these two factors form interacting continua that produce qualitative categories. The IPCC guidance notes for addressing uncertainty (Climate Change 2005, p. 3, emphasis in original) state that in cases where the level of knowledge is determined to be "high agreement, much evidence, or where otherwise appropriate," additional information about uncertainty should be provided through specification of a level of confidence scale and a likelihood scale. The level of confidence scale addresses the degree of certainty that the results are correct, while the likelihood scale specifies the probability that an occurrence or outcome is taking place or will take place. The IPCC guidelines state that the level of confidence scale "can be used to characterize uncertainty that is based on expert judgment as to the correctness of a model, an analysis or a statement. The last two terms in the scale should be reserved for areas of major concern that need to be considered from a risk or opportunity perspective, and the reason for their use should be carefully explained" (Climate Change 2005, p. 4). Table 1 shows the scale. The likelihood scale is used to refer to "a probabilistic assessment of some well defined outcome having occurred or occurring in the future" (Climate Change 2005, p. 4).

Adaptation and Mitigation

The terms "adaptation" and "mitigation" were briefly discussed in the introduction of this chapter, but the more detailed definitions and explanations in Table 2 outline important distinctions that will be helpful for the sections of the chapter that follow.

Perceptions, Communication, and Language of Climate Change

Moser (2010, p. 33) writes that "a number of challenging traits make climate change a tough issue to engage with," and she implies that something in the nature of climate change itself makes it more challenging for people to perceive and communicate about than many other, even related, issues (environmental, hazards, health). She lists the following characteristics of climate change that produce this substantial challenge:
• Invisible causes: Greenhouse gases are not visible and have no direct or immediate health implications. The same is true for other forcing agents such as Earth/Sun relations.
• Distant impacts: Impacts lack immediacy, being distant in both time and geography.
• Insulation of modern humans from their environment: This diminishes the perception of any changes in the climate or their significance.
• Delayed or absent gratification for taking action: Action taken today is not likely to reduce global average temperature within the lifetime of the person taking the action.
• Lack of recognition by humans of their technological power: This produces disbelief that humans have the capacity to alter the global climate.


Table 1 Scales of uncertainty used in the IPCC Fourth Assessment Report, 2007. None of these values is derived from statistical tests; they are based on expert judgment

Qualitatively calibrated levels of confidence (Climate Change 2005), with the degree of confidence in being correct:
Very high confidence: at least 9 out of 10 chances of being correct
High confidence: about 8 out of 10 chances of being correct
Medium confidence: about 5 out of 10 chances of being correct
Low confidence: about 2 out of 10 chances of being correct
Very low confidence: less than 1 out of 10 chances of being correct

Likelihood scale (Intergovernmental Panel on Climate Change 2007b), with the likelihood of the occurrence or outcome:
Virtually certain: >99 %
Extremely likely: >95 %
Very likely: >90 %
Likely: >66 %
More likely than not: >50 %
About as likely as not: 33–66 %
Unlikely: <33 %
Very unlikely: <10 %
Extremely unlikely: <5 %
Exceptionally unlikely: <1 %
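Purely as an illustration (this helper is not part of the IPCC guidance; the function name and structure are hypothetical), the likelihood scale in Table 1 can be encoded as a simple lookup from a probability estimate to the corresponding AR4 phrase:

```python
# Hypothetical helper: map a probability estimate to the AR4 likelihood
# terminology reproduced in Table 1. The AR4 bands nest (anything that is
# "very likely" is also "likely"); this helper returns the most specific term.

AR4_LIKELIHOOD = [          # (lower probability bound, term)
    (0.99, "virtually certain"),
    (0.95, "extremely likely"),
    (0.90, "very likely"),
    (0.66, "likely"),
    (0.50, "more likely than not"),
    (0.33, "about as likely as not"),
    (0.10, "unlikely"),
    (0.05, "very unlikely"),
    (0.01, "extremely unlikely"),
    (0.00, "exceptionally unlikely"),
]


def likelihood_term(p: float) -> str:
    """Return the AR4 likelihood phrase for a probability p in [0, 1]."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    for lower, term in AR4_LIKELIHOOD:
        if p > lower or lower == 0.0:
            return term
    return AR4_LIKELIHOOD[-1][1]


print(likelihood_term(0.97))   # -> "extremely likely"
print(likelihood_term(0.40))   # -> "about as likely as not"
```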

Table 1 (continued) summarizes further design features of the Chinese carbon market pilots (including Beijing, Tianjin, Shanghai, Shenzhen, Guangdong, and Hubei): allocation methods (grandfathering, benchmarking, and comprehensive approaches), participation and reporting thresholds, reserves for new entrants, and compliance obligations in cases of capacity change, closure, or relocation. For example, installations whose activity changes by more than 5 ktCO2/year or 20 % may request an allowance adjustment, and new projects above certain emission thresholds must purchase quotas prior to operation. Sources: Zhong (2014), Wu et al. (2014), Quemin and Wang (2014)


Table 2 Economic structure of the seven carbon market pilot regions (% share of GDP, 2012)

Pilot       Primary sector   Secondary sector   Tertiary sector   Energy mix (coal)
Beijing     0.9 %            24 %               75.1 %            43 %
Chongqing   8.6 %            55 %               36.4 %            50 %
Guangdong   5 %              50 %               45 %              22 %
Hubei       13.4 %           48.7 %             37.9 %            72.5 %
Shanghai    0.7 %            42.1 %             57.2 %            30 %
Shenzhen    0.1 %            47.5 %             52.4 %            59 %
Tianjin     1.6 %            52.4 %             46.0 %            71 %

Sources: PMR (2014), Liu and Xu (2012), UNDP China and Institute for Urban and Environmental Studies, CASS (2013)

Fig. 13 Prices for Chinese ETSs (RMB per metric ton CO2), July 2013–October 2014, shown for the Shenzhen, Shanghai, Guangdong, Beijing, Tianjin, Hubei, and Chongqing pilots (Source: Bifera 2014)

intensity per unit of Industrial Added Value (the industrial component of gross domestic product, GDP) by 32 % below 2010 levels by 2016, while keeping their absolute annual emissions growth to less than 10 %, with 2013 as the baseline (Song and Lei 2014).

Despite the variation among them, the seven pilots also share many fundamental features. All pilots include both indirect and direct emissions of carbon dioxide (ICAP 2014b). Most pilots use grandfathering as the principal method by which to allocate initial allowances (PMR 2014). Nearly all pilots distribute allowances to entities mandated to participate in the cap-and-trade system free of charge at the beginning of a compliance year. (In Shenzhen and Guangdong, however, a small number of allowances are also allocated via fixed-price sale or auction (King and Wood Mallesons 2014).) The majority of pilots allow offsets, which may or may not include CCERs and other offset types (Hubei, for example, includes forest offsets from within the province; Chongqing is also considering doing so). Finally, most pilots set their cap based on a minimum quantitative level of carbon emissions; the exception is Shenzhen, which bases its cap on a set of criteria.

Carbon trading transactions reached approximately USD 140 million by September 2014 (Carbon Eight Group 2014). Every pilot region has its own carbon exchange, and membership in the exchange is a prerequisite for trading. Allowances are tradable only on the regional exchanges. During the first year in which trading took place, price volatility was a feature of most of the pilot regions. Prices also varied considerably from one region to the next, not surprisingly, given the variation in design and economic structures among the pilot markets (see Fig. 13). Trading volumes have also been quite low. As one study points out, Shenzhen, which has been the most active of the pilot markets, traded just 4 % of the total


allowances available in its market during its first compliance year (Munnings et al. 2014). Pilots have been experimenting with ways to boost liquidity. To date, Chinese authorities have prohibited futures contracts in carbon trading out of concern that they would invite destabilizing speculation in its financial markets. However, Guangdong, Tianjin, and Hubei have allowed some investors to trade permits with entities bound by emissions limits. Shanghai allows registered institutional investors to trade permits; Shenzhen plans to allow foreign investors to do so, reportedly allowing trading in foreign currency (Chen and Reklev 2014b).

China ETS Phase II

The announcement by a senior climate policy official from the NDRC in late August 2014 that China would launch a national carbon market by 2016, with regulations for a national market to be sent to the State Council for approval by the end of the year, was an unequivocal commitment by China's central authorities to scale up carbon market development (Chen and Reklev 2014b). Launching a national carbon market would be an ambitious undertaking for even the most developed economy; implementing cap-and-trade on a nationwide scale in a transitional economy of the size and complexity of China's requires authorities to tackle numerous challenges. They must not only arrive at a functional design but also construct the institutions necessary to create a national market for buying and selling carbon. Doing so requires substantial numbers of technically capable, trained personnel along with regulatory institutions that can set emissions caps, support an emissions trading registry, monitor trading, and enforce compliance. Pilots have taken on these challenges at the local level. However, as will be discussed below, the development of regional schemes has also revealed the challenges of designing an effective market in a political economy in which transparency is limited. For a market to function, an accurate accounting of carbon emissions must be made in order for legitimate transactions to take place. China's official data collection systems are highly opaque, a feature that must change for cap-and-trade to work. Specifically, a national MRV system capable of inspiring confidence in trade in an intangible commodity must be developed (Kong and Freeman 2013). In short, on the institutional front, as China's proposal for market readiness observes, what is required is a "reliable statistical system, effective program management system and necessary laws and/or regulations" (PMR 2013). The latter includes the passing by the National People's Congress of a national environmental law that defines carbon as a commodity and explicitly enables enforcement of compliance by regulated firms (Munnings et al. 2014). In addition to institutional development and implementation, the central government must also determine which specific sectors will be covered by the national carbon market, with an eye to future emissions trends, mitigation potential, and other factors such as international linkages (PMR 2013). China has already published monitoring and reporting guidelines at the national level, covering ten sectors: power generation, power transmission and distribution, aviation, cement, ceramics, flat glass, electrolytic aluminum, magnesium smelting, chemicals, and iron and steel.
Among the considerations to be addressed are the development of policies to mitigate potential constraints on firm competitiveness and leakage from cap-and-trade, ways of encouraging liquidity without excessive risk to China's fragile financial system, and management of potential new entrants to ensure that increased participation does not add to carbon emissions (Munnings et al. 2014).

China ETS Challenges and Opportunities Ahead

China's bottom-up approach to carbon market development offers numerous lessons for the NDRC as it moves forward. However, the differences among the protocols established for measuring emissions in the pilots alone reflect a heterogeneity that will pose challenges to future efforts at harmonization. The seven pilots applied different rules for monitoring, reporting, and verifying emissions; a national market, however, requires a single set of enforceable procedures (Kong and Freeman 2013). Chinese


authorities, led by the NDRC, are in the process of drafting a National Climate Change law that could provide a legal foundation for a national trading system. The NDRC has also published guidelines for some industries to date, but a national registry for greenhouse gas emissions is still under development (Song and Lei 2014). Moreover, for a cap-and-trade system to function, China must develop a transparent system for reporting and collecting greenhouse gas emissions data from industrial sources. In addition, China's lack of a well-developed legal system means that compliance by individual firms is heavily dependent on administrative enforcement, which in turn relies on the capacity and will of local authorities. Currently, local officials' (cadres') promotion opportunities are closely linked to economic growth. China's central authorities will have to complete the retooling of China's "cadre evaluation system" to increase the effectiveness of local implementation as they move ahead with legal development in the country.

Other key systems structuring China's economy also require reform and development for a national cap-and-trade system in China to function effectively. First, reforms are needed in how China manages power pricing. Currently, centrally determined price caps on electricity prevent power producers from passing on the cost of carbon to consumers. This explains why local pilots exclude the power sector or limit coverage to implied (i.e., emissions divided by activity) rather than direct emissions from power consumption. To fully bring the power sector – among the largest sources of carbon emissions in China – into the carbon trading system, difficult national policy changes in this area will be required (Kong and Freeman 2013). Second, China's financial system remains undeveloped and fragile. Concerned about risk, China's NDRC took futures trading off the table of options for the local carbon trading pilots' design. However, most experts see trading in derivative products as necessary for China's carbon market to have the liquidity to be an effective tool in reducing the cost of emission cuts (Song and Lei 2014). China's authorities are actively pushing reforms in the financial sector that will bring it into line with more mature economies; however, this process is a delicate one that will take time. Finally, national tools must be developed to mitigate the potential for carbon leakage. This requires the ability to assess the risks of leakage accurately so that provisions can be made for regulated enterprises subject to this risk – something the European cap-and-trade system does through rebates in the form of allocations (Munnings et al. 2014). These are just some of the tasks ahead for China as it develops carbon trading on a national scale. Thus, while China's pilot markets mark significant progress toward the development of cap-and-trade, the country still has a long way to go to build an effective national carbon trading system.

US Carbon Trading Programs

While the US played a key role in introducing emissions trading through its ETP and acid rain regulatory programs, and also introduced the market-oriented approach into the Kyoto Protocol, the Bush Administration's withdrawal from the Kyoto process in early 2001 led to a significantly diminished role for the country. European countries, initially quite skeptical about emissions trading, assumed the lead with the launch of the EU ETS in January 2005.
The US did not pursue national carbon trading during the Bush administration, but expectations grew as the 2008 elections approached, because all three major candidates – Hillary Clinton and Barack Obama on the Democratic side and John McCain on the Republican one – espoused support for cap-and-trade legislation during the Presidential campaign. The build-up to the Copenhagen meeting thus assumed that the US would rejoin international efforts, and perhaps link its own national carbon market to ongoing EU ETS and Kyoto Protocol efforts. Such enthusiasm was enhanced when the American Clean Energy and Security Act (ACES) passed the US House of Representatives less than 6 months after Obama's inauguration in January 2009. It contained an allowance-based program that required a 17 % CO2 reduction by 2020 (from a 2005 base year) and an


83 % reduction by 2050. Often referred to as the Waxman-Markey bill (after its two principal sponsors), ACES provided for the use of international offsets and also included an allowance price floor. Very similar legislation, entitled the American Power Act (APA), was submitted to the US Senate by Senators Kerry and Lieberman in May 2010 – but a special election in the State of Massachusetts earlier that year meant that the Democrats no longer had a "filibuster-proof" Senate (i.e., a Senate able to pass legislation over the objections of Republicans). The overwhelming Republican victory in the mid-term elections later in 2010 ensured that such cap-and-trade legislation would not be enacted at the national level, and that party's efforts have since focused on rolling back existing environmental legislation (and US EPA's budget) rather than passing new mandates. Prospects for new emissions trading legislation thus appear quite bleak; as one recent article in Foreign Policy noted: "Congress will never pass cap-and-trade, at least until Miami starts flooding" (Galbraith 2014). Despite such problems, market-oriented GHG control efforts have continued at the state level (in California), at the regional level (in the Northeast's Regional Greenhouse Gas Initiative [RGGI]), and even at the national level, through previous legislation initially designed for CAC regulation. These three levels of programs in the US are described below.

California's Emissions Trading Program

California's cap-and-trade program is a result of the California Global Warming Solutions Act of 2006 (AB 32), which required the state's Air Resources Board to develop regulations and market mechanisms to cut the state's GHG emissions back to 1990 levels by 2020 – a reduction of approximately 25 %. Its emissions trading program is thus part of a larger regulatory effort (including a Low Carbon Fuel Standard as well as other energy efficiency standards) to achieve that target. The market-oriented program went into effect in January 2012, with compliance obligations beginning 1 year later. The first two compliance years focus solely on the electricity and industrial sectors, but the program will expand after that to include transportation and heating fuels (see Fig. 14). It is thus the first multisector carbon trading plan in the US and, given its emissions coverage, is second in size only to the EU ETS.

Fig. 14 California’s GHG Cap compared with BAU projections (Source: Center for Climate and Energy Solutions 2014; adapted from CARB 2010) Page 29 of 45


The market covers the same six pollutants as the first commitment period of the Kyoto Protocol, as well as NF3 and other fluorinated gases. It covers approximately 350 businesses (with 600 facilities) and has been designed to link with similar trading programs in other states and regions. The California market has several notable features, including both cost containment and market flexibility mechanisms. There is an auction floor price (starting at $10 per allowance in 2012 and rising at 5 % above inflation annually) and a strategic reserve (rising from 1 % to 7 % over time, with higher tiered prices similarly rising at 5 % above inflation). There are thus both floor and ceiling mechanisms in place to contain prices (as long as there are sufficient allowances in the reserve). There are three compliance periods: a 2-year period (2013–2014), followed by two 3-year periods (2015–2017 and 2018–2020). At the end of every year, a source must provide allowances and offsets to cover 30 % of its previous year's emissions. Then, at the end of each compliance period, it must provide the remaining allowances and offsets. This provides sources with the ability to cover any annual variation in product output. If the source does not do so and is not in compliance, then four allowances must be surrendered for every ton not covered within the compliance period. Offsets are allowed in the California program but were initially restricted to US emission reduction projects of four targeted types: forestry, urban forestry, dairy digesters, and the destruction of ozone-depleting substances. A linkage with Quebec's emissions trading scheme began in January 2014, and linkages with other systems are ultimately expected as well.
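To make the surrender schedule just described concrete, the following minimal sketch (a hypothetical function and hypothetical emissions figures, not an official compliance calculator) computes the annual 30 % installments, the true-up due at the end of the compliance period, and the 4-for-1 obligation for any shortfall:

```python
# Hypothetical sketch of the California surrender arithmetic described above.
# Each 30 % installment is surrendered in the year following the emissions;
# the balance is trued up at the end of the compliance period.

def compliance_obligations(annual_emissions_t, shortfall_t=0.0):
    """annual_emissions_t: emissions (t CO2e) for each year of one compliance
    period. Returns (annual installments, end-of-period true-up, penalty)."""
    installments = [0.30 * e for e in annual_emissions_t]
    true_up = sum(annual_emissions_t) - sum(installments)
    penalty = 4.0 * shortfall_t          # 4 allowances per ton not covered
    return installments, true_up, penalty


annual, true_up, penalty = compliance_obligations([100_000, 95_000, 90_000])
print(annual)    # [30000.0, 28500.0, 27000.0]
print(true_up)   # 199500.0 allowances/offsets due at the end of the period
print(penalty)   # 0.0
```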
Regional Greenhouse Gas Initiative

The Regional Greenhouse Gas Initiative (RGGI) was the first regulatory US cap-and-trade scheme addressing GHGs. It was designed to reduce CO2 emissions from power plants in ten Northeastern US states – although this was subsequently reduced to nine states when the Republican Governor of New Jersey withdrew his state from the program in 2011. RGGI is a regional program, but it is implemented through legislation adopted by each individual state. A "Model Rule" was drafted in 2006 and finalized in 2008, with requirements for individual facilities (i.e., fossil-fueled power plants greater than 25 MW generating capacity) beginning on January 1, 2009. RGGI initially sought to cap CO2 emissions at a steady rate through 2014 and then reduce them annually by 2.5 %, thus achieving a 10 % reduction one decade later. A significant fuel shift toward natural gas at power plants in the region, however, coupled with lower electricity demand and increased levels of both nuclear power and renewables, led to an overallocation of allowances. Prices reflected that, and the clearing price for allowances at RGGI auctions was often less than $2. RGGI's target was revised when New Jersey left and was then significantly changed as a result of a 2012 Program Review. The new cap called for a reduction of 45 % by 2020 (from 2005 levels), with a 2.5 % reduction occurring annually from the revised 2014 cap levels. This new Model Rule also introduced other provisions, including a Cost Containment Reserve (CCR) and an interim compliance period requiring sources to hold specific allowance levels in time periods before final compliance dates (Bifera 2013).

Most of the allowances in RGGI are sold through auctions, and the collected funds are dedicated to energy efficiency, renewable and clean energy, and bill support for low-income energy consumers. RGGI allows offsets to achieve compliance, but only from five categories: (1) landfill methane capture and destruction; (2) reduction in emissions of sulfur hexafluoride (SF6) in the electric power sector; (3) sequestration of carbon due to US forest projects (reforestation, improved forest management, avoided conversion) or afforestation (for CT and NY only); (4) reduction or avoidance of CO2 emissions from natural gas, oil, or propane end-use combustion due to end-use energy efficiency in the building sector; and (5) avoided methane emissions from agricultural manure management operations (RGGI n.d.). Despite the significant drop in target levels in 2014, Fig. 15 shows that the actual emissions in recent years were not significantly above the new cap (i.e., 92 million short tons in 2012, just above the 91 million ton target in 2014).


Fig. 15 Regional Greenhouse Gas Initiative CO2 emissions cap vs. actual emissions (Source: EIA 2014)

The cap will tighten in coming years, however, and it is not clear that the fuel shifts and other downward trends evident in recent years will continue. Thus, it is anticipated that the RGGI cap could become more binding in the future (EIA 2014).

The US EPA's Clean Power Plan

President George W. Bush promised to address CO2 emissions during the 2000 Presidential campaign but reneged shortly after taking office. In 2003, his Administration's EPA overturned a previous Clinton Administration decision and declared that it did not have the authority to regulate CO2 under the Clean Air Act – and further noted that it would refrain from doing so even if it did have the authority. The State of Massachusetts and others filed suit against EPA for its failure to act, a suit that was decided in their favor in 2007 by the US Supreme Court. The Court ruled that EPA did have such authority, but the law required EPA to determine whether or not such emissions could reasonably be anticipated to endanger public health or welfare. In 2009, under the Obama Administration, US EPA issued such an "Endangerment Finding" and proceeded to issue new standards for light-, medium-, and heavy-duty vehicles in the following years. The Agency also proposed GHG standards for new power plants in 2012 and then revised and reproposed them in September 2013. On June 2, 2014, it proposed standards for existing power plants under a program called the Clean Power Plan. Utility emissions are the largest source of carbon pollution in the US, accounting for roughly one-third of all domestic GHGs (EPA 2014b). The Clean Power Plan tackled this in two ways: (1) it set state-specific goals, based on achieving a given level of carbon intensity in each state by 2030, which would have the effect of reducing CO2 emissions from the power sector by 30 % (from a 2005 base); and (2) the EPA provided guidelines for the states on how they might achieve such goals. Under the Clean Power Plan, states would have until June 2016 to submit plans to achieve these goals, with the possibility of a 1-year extension – or 2 years if states join together in a multistate plan. The states were also required to make "reasonable progress" in achieving such goals by 2020. Section 111(d) of the Clean Air Act requires US EPA to issue "standards of performance" reflecting the "best system of emission reduction" (BSER), and the Agency has used four "building blocks" of BSER to set the state-by-state goals: (1) heat rate improvements; (2) dispatch changes among affected units (e.g., from coal to natural gas units); (3) expanded low- or zero-carbon generation (e.g., renewables and nuclear); and (4) use of demand-side energy efficiency, thereby reducing generation requirements. US EPA has offered the states considerable flexibility in determining how they might meet their goals. They are able, for example, to:


• Look broadly across the power sector for strategies that get reductions
• Invest in existing energy efficiency programs – or create new ones
• Consider market trends toward improved energy efficiency and a greater reliance on lower-emitting power sources
• Expand renewable energy generation capacity
• Tap into investments already being made to upgrade aging infrastructure
• Integrate their plans into existing power sector planning processes
• Design plans that use innovative, cost-effective regulatory strategies
• Develop a state-only plan or collaborate with each other to develop plans on a multistate basis (USEPA 2014c)

Note that these last two options allow individual states to team up with other states if they choose – and also to employ market-based mechanisms to achieve their goals. Not only would this allow them to accomplish their reductions in the most cost-efficient manner, they would also get an extension on the time required to develop such an approach. The Clean Air Act of 1970 is a piece of legislation now almost 45 years old, and its principal architecture was developed within the CAC framework. It was never intended to tackle a problem as complicated and comprehensive as GHG control. The failure of the political system to pass legislation (such as ACES or APA) means that it must now serve as the foundation for such control, given that the problem is real (as indicated in the Endangerment Finding) and that the courts have indicated that US EPA has the authority (and, indeed, the responsibility) to address it. The US EPA has developed a creative regulatory approach that will allow states to utilize emissions trading if they so choose – and to do so on a multistate basis. This plan will surely be modified in response to public comment and must also survive the inevitable lawsuits when it is promulgated. Opponents have already attacked the Plan, based upon media reports that environmentalists played a key role in its development (Davenport 2014; Chait 2014). The final 111(d) rule is due to be released in June 2015, and while states must begin to make reductions by 2020, full compliance with the CO2 emission performance level in the state plan must be achieved no later than 2030.

Voluntary Carbon Market

In addition to the "compliance" markets discussed above, a corollary, voluntary market has developed that provides carbon trading opportunities for companies, individuals, and other entities not subject to mandatory limitations but still wishing to offset their GHG emissions. As the name implies, the voluntary carbon market includes all carbon offset trades that are not required by regulation. Over the past several years, this market has not only provided an opportunity for consumers to alleviate their carbon footprint but has also provided an alternative source of carbon finance. The instrument of trading is called a Voluntary Emission Reduction (VER), although it should be noted that some market participants consider this acronym to mean "Verified Emission Reduction." While still very much smaller than the compliance market …

Table 7 Actual energy resource (AER) and optimal energy resource (OER) mixes for Ireland's electricity sector in 2005 and 2010, giving each resource's share (%), energy supplied (TJ/year), and emissions (Mt CO2(e)/year). The 2005 OER mix totals 213,485 TJ/year with 14.70 Mt CO2(e)/year (= KL2005); the 2010 mixes total 244,175 TJ/year, with the projected AER emitting 16.53 Mt CO2(e)/year (> KL2010) and the OER 16.29 Mt CO2(e)/year (= KL2010)

1. Fossil fuel: natural gas (NG) 40.09 %, coal (C) 27.77 %, oil (O) 15.16 %, and peat (P) 10.02 %
2. Electricity: imported electricity (IE) from Scotland 3.45 %
3. Renewable energy sources (RESs): landfill gas, biomass, and other biogas 0.57 %, hydro 1.06 %, and wind 1.88 %

In the short-to-medium term, Ireland's electricity sector requires a well-designed optimal energy resource (OER) mix that satisfies both the energy needs and the emission limit. Renewable energy source electricity (RES-E) has its disadvantages, such as high cost, limited public acceptability, inherent intermittency/variability, lack of predictability, and poor reliability. Thus, only the absolute minimum amount of RES-E should be employed in the OER mix for the sector.

Application of CEPA. The basis of the approach is the construction of composite curves for both the demand and the supply. These composite curves are then manipulated and shifted depending on the desired objectives. Crilly and Zhelev applied CEPA to the electricity sector using data from the Sustainable Energy Authority of Ireland (SEAI), which was set up by the Irish government as its national energy authority. The data for the actual energy resource (AER) mix in 2005 are shown in Table 7. The energy demand (consumption) and resource (supply) composite curves (CCs) before shifting are plotted in Fig. 29. More specifically, the figure relates the amount of CO2 or CO2(equivalent) emitted per unit time to the amount of energy supplied per unit time, so the slope of any line segment is the amount of CO2 per unit energy, i.e., the emission factor. The resource composite curve is constructed by plotting cumulatively the quantity of electricity generated from the several fuel resources against the total emissions from those resources. The emission factor (EF), i.e., the amount of emissions produced per unit of electricity in t CO2(e)/TJ, for each energy resource is also provided in Table 7. The fuel source with the lowest emission factor is plotted first, followed by the next lowest, and so on. In this resource composite curve, the renewable energy sources are plotted first, followed by natural gas, oil, coal, peat, and imported electricity. The slope of each line segment is equal to the emission factor of the corresponding energy resource.
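As a minimal sketch of this construction (the function is illustrative, and the energy quantities and emission factors below only approximate the Table 7 values), the resource composite curve can be built by sorting the resources by emission factor and accumulating energy and emissions:

```python
# Illustrative construction of a resource composite curve: resources sorted
# by emission factor (t CO2(e)/TJ) and accumulated as (energy, emissions)
# vertices; the slope of each segment equals that resource's emission factor.

def composite_curve(resources):
    """resources: list of (name, energy_TJ_per_year, ef_tCO2e_per_TJ).
    Returns the cumulative (energy, emissions) vertices, cleanest first."""
    points = [(0.0, 0.0)]
    for _name, energy, ef in sorted(resources, key=lambda r: r[2]):
        last_energy, last_emissions = points[-1]
        points.append((last_energy + energy, last_emissions + energy * ef))
    return points


# Approximate 2005 Irish mix (shares and emission factors are illustrative).
vertices = composite_curve([
    ("renewables",            7_494,   0.0),
    ("natural gas",          85_578,  56.8),
    ("oil",                  32_364,  75.7),
    ("coal",                 59_285,  94.6),
    ("peat",                 21_391, 116.0),
    ("imported electricity",  7_373, 130.0),
])
print(vertices[-1])   # total energy (TJ/year) and total emissions (t CO2(e)/year)
```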

In Fig. 29, the energy resource CC ends at (213,485 TJ/year, 16.30 Mt CO2(e) produced per year), whereas the energy demand CC ends at (213,485 TJ/year, 14.70 Mt CO2(e) per year, the Kyoto limit).
Fig. 29 CEPA applied to Ireland’s electricity sector over 2005: before shifting of energy resource CC (Crilly and Zhelev 2008)

CEPA for Ireland’s electricity sector over 2005 18 CO2 emission pinch point at (213, 485 TJ/year, 14.70 Mt CO2(e) Kyoto Limit/year)

CO2(e) Produced & Kyoto Limit (Mt CO2(e) /Year)

16 14

Excess IE energy not required and emission avoided

12 Peat 10 Coal

8

Energy demand curve

6

Excess peat energy not required and emission avoided

Shifted energy resources curve Oil

4 Renewable

2

Natural gas 0

0

50,000

100,000

150,000

200,000

250,000

Energy Resource & Energy Demand (TJ/Year) Energy Demand Curve

Energy Resources Curve

Fig. 30 CEPA applied to Ireland’s electricity sector over 2005: after shifting of energy resource CC (Crilly and Zhelev 2008)

All emission factors are expressed as carbon equivalent and include all relevant greenhouse gases. Under the Kyoto Protocol, Ireland was permitted to increase its overall annual GHG emissions during 2008–2012 by no more than 13 % above the 1990 baseline of 55.75 Mt CO2(e). Thus, Ireland's environmental protection agency determined a leveled-out Kyoto limit (KL) of 62.99 Mt CO2(e) for each year between 2008 and 2012. By interpolation, the Kyoto limit KL2005 for 2005 is 61.78 Mt CO2(e). Because the electricity sector had a 23.79 % share of the actual overall GHG emissions of 69.63 Mt CO2(e) in 2005, this sector was allocated the same percentage of KL2005, which equates to 14.70 Mt CO2(e). This value is the vertical ordinate of the top end of the energy demand curve shown in Figs. 29 and 30. The energy demand composite curve is constructed using the same method as the energy resource composite curve. It is assumed that the emissions from the various demand sectors are proportional to electricity usage, which therefore produces a straight line from the origin to the end of the demand composite curve. The horizontal ordinate of the top end of the energy demand curve and of the energy resource composite curve should share the same value, because the demand (or consumption) must match the resource (or supply) in any given year.


The slope of the demand line is known as the grid emission factor (GEF), which is simply the average emission factor for the entire system. In this case, the EF for the energy demand curve is 69.0 t CO2(e)/TJ. In Fig. 29, it can be seen that the top end of the resource curve lies above the top end of the energy demand curve, which shows that the AER mix led to more emissions than the permitted KL for the electricity sector. Thus, the energy resource composite curve needs to be shifted horizontally to the right to eliminate the excess emissions.

Figure 30 shows a shifted energy resource composite curve that meets the Kyoto limit for the sector. The energy resource CC is shifted horizontally to the right until it intersects the top end of the energy demand CC; this intersection is the CO2 emission pinch point. At this pinch point, the energy resources not only provide a total of 213,485 TJ of energy per year (meeting the annual energy demand) but also release 14.70 Mt CO2(e) of emissions (meeting the Kyoto limit on emissions). The amount by which the resource CC has been shifted then becomes the minimum amount of renewable energy that needs to be added in order to meet the emission target. The overhang of the resource CC to the right of the pinch point represents the amount and type of energy resources that need to be substituted by renewable energy. In this case, the renewable energy portion of the energy resource CC increases, the portion of imported electricity is totally substituted, and the portion of electricity from peat generation decreases, as illustrated in Fig. 30. By increasing the energy resources with low emission factors and decreasing the energy resources with high emission factors, the shifting procedure achieves the desired objective, namely that the emissions produced by the resources equal the Kyoto limit of the demand. Meanwhile, the other objective, using the minimum amount of renewable energy given its disadvantages, is also achieved by this horizontal shift procedure. Each of the line segments of the shifted energy resource CC is measured off in order to obtain the optimal energy resource (OER) mix for 2005, which is also the optimal energy resource allocation scheme for the sector. The corresponding emissions produced by each of these optimal amounts are also measured. All of the measured data are listed in Table 7.

Further adaptations to CEPA. Crilly and Zhelev made a forecasting adaptation to the CEPA methodology, which is briefly introduced here. If the optimal energy resource (OER) mix in the future can be predicted, then the sector's policymakers can use this information to guide the sector's future development plan. For example, in the near future, Ireland will close old and inefficient power plants and build new power generation plants. Advance knowledge of the future OER mix will be particularly useful for policymakers in deciding which form of power generation plant should be constructed. As long as the future actual energy resource (AER) mix is available, the future OER mix can be obtained using the same CEPA procedure described in the preceding section. The future AER mix can be projected using an energy model linked with a macroeconomic model, together with key forecast parameters such as GDP growth, population growth, and fuel prices.
In 2006, the Sustainable Energy Authority of Ireland (SEAI), Ireland's national energy authority, published the projected AER mix for the electricity sector in 2010, which is shown in Table 7. Crilly and Zhelev used that information to forecast the OER mix that the sector should have in 2010; their forecast is also listed in Table 7. According to those data, the 2010 OER mix requires the input of RESs to rise from 7.2 % of the AER mix to 8.2 %. This important information gave the relevant policymakers and stakeholders, at the time of the analysis in 2007, roughly 3 years of advance notice to make up for the forecast shortfall of renewable energy in 2010.
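The horizontal shifting (targeting) step described above can be sketched in the same spirit. The routine below is a simplified illustration rather than Crilly and Zhelev's implementation: it assumes the added renewables have a zero emission factor and reuses the approximate mix from the earlier sketch. Applied to a projected future AER mix, the same routine would support the forecasting adaptation just described.

```python
# Simplified CEPA targeting: find the minimum amount of zero-emission
# renewable energy (TJ/year) that must displace the highest-emission
# resources so that total emissions meet the limit. Illustrative values only.

def minimum_renewable_addition(resources, emission_limit_t):
    """resources: list of (name, energy_TJ_per_year, ef_tCO2e_per_TJ)."""
    excess = sum(energy * ef for _n, energy, ef in resources) - emission_limit_t
    if excess <= 0:
        return 0.0, []                      # the mix already meets the limit
    shift, displaced = 0.0, []
    # Displace resources starting from the dirty end of the composite curve.
    for name, energy, ef in sorted(resources, key=lambda r: r[2], reverse=True):
        if excess <= 0:
            break
        if ef <= 0:
            continue                        # zero-emission resources cannot help
        removed = min(energy, excess / ef)
        shift += removed
        excess -= removed * ef
        displaced.append((name, round(removed)))
    return shift, displaced


mix_2005 = [("renewables", 7_494, 0.0), ("natural gas", 85_578, 56.8),
            ("oil", 32_364, 75.7), ("coal", 59_285, 94.6),
            ("peat", 21_391, 116.0), ("imported electricity", 7_373, 130.0)]
added, displaced = minimum_renewable_addition(mix_2005, emission_limit_t=14.70e6)
print(round(added), displaced)   # imported electricity fully, peat partly displaced
```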


Future Trends

Heat integration is one branch of process integration technologies. In the authors' view, there are several directions that can be considered potentially promising for the future of process integration. Process integration, especially its newer developments, has not been used as widely as it could be, and a wider range of applications is likely to be seen. Still, much work remains to be done on integrating heat-integrated networks with separation systems and reactor designs, and on the consideration of operational issues as well. Heat integration is closely related to mass integration by nature. Although extensions of pinch analysis to the mass integration field, such as water pinch and hydrogen pinch, have already been applied successfully in industry, systematic methods in this area are still in development. Utilizing advanced optimization techniques to solve process integration problems is very promising. With the advancement of computer technology, a new generation of more powerful software tools for process integration may emerge. Compared to process simulation software, which is relatively mature, process integration software is in its infancy. Process integration problems are generally complex tasks at considerable scale and involve comprehensive interactions. The development of powerful commercial software for process integration is instrumental for its wider application. Climate change has recently become a major focus of industry and government. Pinch analysis has been extended to address emissions and energy footprint problems in order to meet environmental goals under technical and economic constraints simultaneously. Several methodological (graphical and numerical) approaches have been developed to handle problems such as energy allocation, segregated targeting, and retrofit planning. Meanwhile, similar approaches for considering energy, land, and water footprint issues in energy and biofuel systems have been developed. Given the increasing concern about climate change, more methodologies and applications are expected in this area.

Conclusion

Heat integration is a family of methodologies that can be used to improve energy efficiency, reduce energy consumption, and minimize GHG emissions. Pinch analysis can be considered the foundation of heat integration. It can identify the maximum heat recovery and the minimum external utility needs for a system before any detailed design. As a powerful tool, pinch analysis has extended its application to many other fields, such as waste reduction, wastewater treatment, refinery hydrogen management, and emission targeting. In addition to the total annualized cost, HEN design must always consider operability and controllability issues as well. During operation, various disturbances of temperatures and heat capacity flow rates are always present. The disturbance propagation and control (DP&C) model-embedded HEN design approach can estimate the disturbance propagation and reject severe disturbances through bypass design. This method can generate an optimal design solution satisfying both economic and control objectives, thereby ensuring high energy efficiency and low emissions. The novel carbon emission pinch analysis (CEPA) methodology, developed from traditional pinch analysis, can identify the minimum quantity of low-carbon-emission energy resources needed to meet both the emission limit and the energy requirement, as well as the optimal energy allocation scheme, for a regional or national energy sector. It can provide invaluable information for decision-makers and stakeholders.


Acknowledgments

This work is supported in part by the National Science Foundation under Grant Nos. 0737104, 0736739, and 0731066.

References

Ciric AR, Floudas CA (1990) A comprehensive optimization model of the heat exchanger network retrofit problem. Heat Recovery Syst CHP 10(4):407–422
Crilly D, Zhelev T (2008) Emissions targeting and planning: an application of CO2 emissions pinch analysis (CEPA) to the Irish electricity generation sector. Energy 33(10):1498–1507
Dhole VR, Linnhoff B (1993a) Total site targets for fuel, co-generation, emissions and cooling. Comput Chem Eng 17:S101–S109
Dhole VR, Linnhoff B (1993b) Distillation column targets. Comput Chem Eng 17(5–6):549–560
El-Halwagi MM, Gabriel F, Harell D (2003) Rigorous graphical targeting for resource conservation via material recycle/reuse networks. Ind Eng Chem Res 42(19):4319–4328
Elliott TR, Luyben WL (1995) Capacity-based economic approach for the quantitative assessment of process controllability during the conceptual design stage. Ind Eng Chem Res 34(11):3907–3915
Floudas CA (1995) Nonlinear and mixed-integer optimization. Oxford University Press, Oxford
Floudas CA, Grossmann IE (1986) Synthesis of flexible heat exchanger networks for multiperiod operation. Comput Chem Eng 10(2):153–168
Foo DCY, Tan RR, Ng DKS (2008) Carbon and footprint-constrained energy planning using cascade analysis technique. Energy 33(10):1480–1488
Furman KC, Sahinidis NV (2002) A critical review and annotated bibliography for heat exchanger network synthesis in the 20th century. Ind Eng Chem Res 41:2335–2370
Huang YL, Fan LT (1992) Distributed strategy for integration of process design and control: a knowledge engineering approach to the incorporation of controllability into heat exchanger network synthesis. Int J Comput Chem Eng 16(5):496–522
Klemes J et al (1997) Targeting and design methodology for reduction of fuel, power and CO2 on total sites. Appl Therm Eng 17(8–10):993–1003
Kotjabasakis E, Linnhoff B (1986) Sensitivity tables for the design of flexible process (I) – how much contingency in heat exchanger networks is cost-effective. Chem Eng Res Des 64:197–211
Lee SC et al (2009) Extended pinch targeting techniques for carbon-constrained energy sector planning. Appl Energy 86(1):60–67
Linnhoff March (1998) Introduction to pinch technology. Linnhoff March, Cheshire
Linnhoff B, Dhole VR (1993) Targeting for CO2 emissions for total sites. Chem Eng Technol 16(4):252–259
Linnhoff B et al (1994) A user guide on process integration for the efficient use of energy, 2nd edn. IChemE, Rugby
Lou HH, Huang YL (2002) Rapid prediction of disturbance propagation in a non-sharp ternary separation system. J Chin Inst Chem Eng 33(1):87–94
Matsuda K et al (2009) Applying heat integration total site based pinch technology to a large industrial area in Japan to further improve performance of highly efficient process plants. Energy 34(10):1687–1692
McAvoy TJ (1987) Integration of process design and process control. In: McGee HA, Liu YA Jr, Epperly WR (eds) Recent development in chemical process and plant design. Wiley, New York, p 186


Natural Resources Canada (2003) Pinch analysis: for the efficient use of energy, water and hydrogen. CANMET Energy Technology Center of Natural Resources, Canada
Papalexandri KP, Pistikopoulos EN (1994) Synthesis and retrofit design of operable heat exchanger networks: 1. Flexibility and structural controllability aspects. Ind Eng Chem Res 33:1718–1737
Papoulias SA, Grossmann IE (1983) A structural optimization approach in process synthesis-II. Heat recovery networks. Comput Chem Eng 7:707–721
Perry S, Klemeš J, Bulatov I (2008) Integrating waste and renewable energy to reduce the CFP of locally integrated energy sectors. Energy 33:1489–1497
Rossiter AP (1995) Waste minimization through process design. McGraw-Hill, New York
Seider WD, Seader JD, Lewin DR (2003) Product and process design principles: synthesis, analysis, and evaluation, 2nd edn. Wiley, New York
Tan RR, Foo DCY (2007) Pinch analysis approach to carbon-constrained energy sector planning. Energy 32(8):1422–1429
Towler GP et al (1996) Refinery hydrogen management: cost analysis of chemically-integrated facilities. Ind Eng Chem Res 35:2378–2388
Uzturk D, Akman U (1997) Centralized and decentralized control of retrofit heat-exchanger networks. Comput Chem Eng 21:S373–S378
Wang YP, Smith R (1994) Wastewater minimisation. Chem Eng Sci 49:981–1006
Yan QZ, Yang YH, Huang YL (2001) Cost-effective bypass design of highly controllable heat exchanger networks. AIChE J 47:2253–2276
Yan QZ, Xiao J, Huang YL (2006) Synthesis of highly controllable heat integration systems. J Chin Inst Chem Eng 37(5):457–465
Yang YH, Gong JP, Huang YL (1996) A simplified system model for rapid evaluation of disturbance propagation through a heat exchanger network. Ind Eng Chem Res 35:4550–4558
Yang YH, Lou HH, Huang YL (2000) Steady state disturbance propagation modelling of heat integrated distillation processes. Chem Eng Res Des 78(2):245–254
Yang YH, Huang YL, Lou HH (2005) A structural disturbance propagation model for the conceptual design of highly controllable heat-integrated reaction systems. Chem Eng Commun 192(8):1096–1115
Yee TF, Grossmann IE (1990) Simultaneous optimization models for heat integration-II. Heat exchanger network synthesis. Comput Chem Eng 14:1165–1184
Yee TF, Grossmann IE, Kravanja Z (1990) Simultaneous optimization models for heat integration-I. Area and energy targeting and modeling of multi-stream exchangers. Comput Chem Eng 14:1151–1164
Zhelev TK (2005) On the integrated management of industrial resources incorporating finances. J Cleaner Prod 13(5):469–474


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_22-2 # Springer Science+Business Media New York 2015

Modern Power Plant Control for Energy Conservation, Efficiency Increase, and Financial Benefit
Pal Szentannai*
Department of Energy Engineering, Budapest University of Technology and Economics, Budapest, Hungary

Abstract

Process control takes place in all power plants. The main task of all automatic controllers is to assure optimal values of their controlled variables under all circumstances. The quality of operation of these controllers evidently has a crucial effect on the operation of the entire power plant. Whether a power plant – based on either renewable resources or fossil fuels – is operated in a highly effective way or in a rather resource-consuming one is evidently of very high importance with regard to emissions and other ecological aspects. This is the reason for discussing, in this chapter, possible ways of increasing the level of control quality in power plants. An overview is given at the beginning of the ways and tools that advanced control methods offer – in case of their more intensive application in power plants – for protecting the environment and for mitigating climate change. This is followed by a concise but goal-oriented introduction to the most relevant control methods, together with an evaluation of their applicability in power plants. Because the way toward obtaining the environmental benefits offered by advanced control methods is not a trivial one, some considerations, aspects, and hints are given on this issue in the next part. A few successful power plant applications are introduced afterward, and the main current development directions are outlined at the very end of this chapter.

Keywords

Model-based control; Optimum control; MPC; Fuzzy; Neural network; Proposed configuration

Nomenclature

A(q): Polynomial in the ARX model
a1…a5: Free parameters of the cost function
B1(q): Polynomial in the two-input ARX model
B2(q): Polynomial in the two-input ARX model
b1…b5: Free parameters of the cost function
CCO (mol/m3): Molar concentration of CO in the flue gas
CNO (mol/m3): Molar concentration of NO in the flue gas
e: Control error
e(t): Equation error of the ARX model
K: Cost function
q: Time shift operator
r: Air distribution: ratio of primary air to total air
r: Reference signal (set point)
ts: Time
u: Control signal (process input)
V̇A (m3/s): Total air flow
V̇P (m3/s): Primary air flow
V̇S (m3/s): Secondary air flow
y: Controlled variable
yM: Controlled variable modeled
ϑ, T (K): Bed temperature

*Email: [email protected]

Introduction

The control method used almost exclusively in power plants today is the PID (proportional-integral-derivative (Evans 1954)) algorithm. The well-known, intuitive effects of its three parameters, the easy and uniform methods for setting them, and its repeatedly proven, stable operation assure its widespread success in many industrial branches, including the energy industry (Åström and Hägglund 1995; Datta et al. 2000; Visioli 2006; O'Dwyer 2009; Smith 2009; Yu 2006). Besides these clear advantages, the PID controller does have its limitations (which will be discussed later in this chapter), and, in parallel, modern control theory offers a wide range of advanced control methods. The basic ideas of the most important such methods will be briefly introduced in this chapter, together with conclusions on the specific aspect of their applicability in power plants. These introductions will be extended with practical hints regarding their realization in new or existing power plants of any type, and some practical examples will be introduced as well.

The problem discussed in this chapter is a rather unusual one: no compromise must be made between economic and ecological interests, because the benefits of applying advanced control methods in power plants serve both at the same time. It is evident that increasing efficiency or reducing the resource-consuming manner of operation (referring to any sort of fuel, water, air, or even valuable components placed under decreased thermal stress) serves both of those goals in parallel. In spite of the limited number of advanced control applications in power plants, the published results show clear, numerically expressible benefits, an overview of which will also be given in this chapter. The total number of industrial applications of advanced control techniques has increased rapidly worldwide, but the distribution of these applications among industry branches is considerably unequal. While the chemical industry alone had more than 7,000 running applications of the most popular solution (Model Predictive Control, MPC) in 2005, the number of similar applications in power plants at that time was definitely below 100 (Dittmar and Pfeiffer 2006). Also interesting is the dynamic rate of increase of these applications in the chemical industry: their number has doubled practically every 5 years since 1995. The goal of this chapter is to encourage operators and owners of power plants, together with decision makers, to apply advanced control methods in power plants as well, in order to contribute to both global climate change mitigation and local financial benefit. To build a basis, some elementary ideas and the notational practice of control theory are outlined here for those readers who are unfamiliar with this area.


Fig. 1 Basic elements of a closed loop control system – introduction of the notation used throughout the discussion of advanced control methods. Each variable may represent several physical variables joined as a multidimensional vector variable

The central element of a control system is always the process (or plant, P) to be controlled, as shown in Fig. 1. The process is acted upon through its input signal u (plant input or control signal), and its response is its output signal y (controlled variable). The process is often affected by disturbances (d) as well, which may be either measurable or unmeasurable. In classical control theory, all of the above signals are considered scalars, but throughout this chapter they are treated as vector variables, without any extra markings such as boldface or underlined letters. This means that the discussion also covers systems with multiple input and multiple output signals. In most cases, several signals (several real measuring points) are handled jointly as components of one multidimensional vector variable, as in linear algebra. The process to be controlled is generally not an entire system (e.g., a whole power plant or a boiler) but rather a subprocess of it. In some books, papers, and theoretical discussions, the boundaries of the process and its lists of inputs and outputs are considered predefined characteristics of the system. A distinctly different approach is followed throughout this chapter: the theoretical and practical considerations involved in drawing the boundaries of the process P are a key to successful control, and a high level of knowledge of both power engineering and control science is required in this essential step. Another important element of a control system is the controller (C) itself. In the classical approach, its input is the control error (e), which is the deviation between the controlled variable (y) and the reference signal (or set point, r). In some advanced control methods, both the controlled variable and the reference signal are considered, not only their actual difference. In the case of multi-output processes, the reference signals must of course also be multidimensional vector variables. The goal of controller design is to set the internal behavior of the controller C so that the process output y keeps or follows the value prescribed by the reference signal r. Throughout this design procedure, the transient behavior of the process must also be considered. The goal of control theory as a science is to develop such design procedures for many different plant types. Modern control theory has produced a large body of very useful theoretical results; however, these results are still rarely utilized in power plants. Their possible applications and their environmental (and also financial) benefits are discussed throughout the rest of this chapter.
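To make the notation concrete, the following minimal sketch (in Python) simulates one closed loop in discrete time: a hypothetical first-order process P is driven by a textbook discrete PID controller C so that its output y follows the reference r. All numerical values (process gain, time constant, controller settings) are illustrative assumptions and are not taken from the chapter.

# Minimal closed-loop sketch using the chapter's notation (r, e, u, y, d).
# The process model and all tuning values are illustrative assumptions.

dt = 1.0                      # sampling time [s]
K, tau = 2.0, 20.0            # hypothetical first-order process: gain, time constant
Kp, Ki, Kd = 1.5, 0.08, 0.5   # hypothetical PID settings

y = 0.0                       # controlled variable
integral, e_prev = 0.0, 0.0

for k in range(200):
    r = 1.0                        # reference signal (set point)
    d = 0.2 if k > 120 else 0.0    # step disturbance acting on the process
    e = r - y                      # control error
    integral += e * dt
    derivative = (e - e_prev) / dt
    u = Kp * e + Ki * integral + Kd * derivative   # PID control signal
    e_prev = e
    # First-order process response (explicit Euler step): tau*dy/dt = -y + K*(u + d)
    y += dt / tau * (-y + K * (u + d))

print("final control error:", r - y)

Running the sketch shows y approaching r and the loop counteracting the disturbance d introduced at step 120.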

Environmental Benefits Offered by Advanced Control Methods in Power Plants

Energy conservation and increased efficiency of power plants are important goals to be considered in their basic design. But how can these goals be supported by the real-time controllers? The next figure shows just one example: better control may keep the superheated steam temperature of a thermal power plant within a narrower band (Fig. 2).


Fig. 2 Environmental benefit from applying advanced control. Narrower band of fluctuation allows higher average live steam temperature, which directly results in higher plant efficiency

This reduced fluctuation in turn allows a higher set point of the same temperature, since the properties of the steel used determine the maximum permissible steam temperature. A higher average live steam temperature directly increases the efficiency of the plant, which means a direct decrease in fuel consumption. As a further consequence, the amount of emitted pollutants (including CO2) is significantly decreased while the same amount of electricity and heat is produced. This positive effect applies not only to fossil-fueled power plants but equally to biomass-fired and other plants. Similarly, an increased efficiency of wind turbines, photovoltaic power plants, or hydroelectric power stations reduces the energy demand that has to be met from fossil resources.
This simple example covered only one obvious case of a direct environmental benefit in steady-state operation. Modern control techniques offer a much wider range of areas where direct environmental and economic benefits can be expected. The most important such benefits are:

• Reaching higher efficiency in steady states (which directly results in lower fuel consumption and emission, as introduced in the example above)
• Making load changes smoother and less resource consuming (by considering and limiting thermal stresses, which in turn increases lifetimes)
• Making start-up periods faster (which directly results in fuel savings)
• Improving the security of supply by making the power plant more flexible in the energy market (which increases the potential of thermal power plants to compensate the uneven supply of wind farms)

Besides the steam temperature control discussed in the above example, a number of further control tasks exist in power plants. An excellent overall summary of their specific goals and classical solutions can be found in Klefenz (1986, 1991). The basic components of power plants are nowadays often extended with additional subprocesses in order to fulfill specific or newly introduced requirements. These subprocesses bring their own control tasks in most cases, such as the minimization of ammonia slip in the flue gas of DeNOx facilities. It is important to emphasize that the advanced control techniques discussed in this chapter can be applied to all of the abovementioned groups of power plant control tasks, and in all cases similar direct economic and environmental benefits can be expected thanks to their higher level of intelligence.
What is the secret behind advanced control techniques that allows them to offer such benefits? Let us answer this question using one of the most frequently used techniques, model predictive control (MPC), as an example. Its most important properties are as follows:


• Its control actions are based on future values calculated by an integrated process model.
• It can inherently consider constraints, e.g., allowed operating ranges, actuator positions, and rate limits.
• Multivariable control is handled naturally, allowing an integrated compensation of cross effects.

This chapter and its approach are by no means directed against the traditional PID (proportional-integral-derivative) controller. There are good reasons for the worldwide, cross-industry success and high proliferation of the PID technique. The PID controller has played a nearly exclusive role in all branches of industry from the nineteenth century onward, and it will keep this role in the future as well. However, it is equally obvious that the PID technique has its limitations. The most important cases in which its efficient application is strongly limited, together with a few power plant examples, are the following:

• MIMO (multi-input, multi-output) systems with significant couplings (e.g., heat and power control of turbogenerator groups)
• Strongly nonlinear processes (e.g., engines and turbines)
• Time-variant processes (e.g., waste incinerators)
• Cases where better control performance is required

The examples given in brackets could be extended with a great number of further cases from the power generation industry, which makes it evident that power plants are typical applications where the use of advanced controllers is definitely advisable.

Introduction of the Advanced Control Methods with the Highest Potential in Power Plants

Which are the most important advanced control methods? What are their basic ideas? In which cases are they advantageous, and where are their limits? These questions are discussed in this section, specifically from the viewpoint of their possible applications in power plants. Figure 3 gives a schematic overview of those advanced control methods that appear to have the highest potential for application in power plants or have already proven successful in the energy industry. This figure will be used as a road map throughout this section. It will be seen, after studying the basic ideas of these methods, that most of them use process models to reach a better control quality. A wide variety of model structures, depths, and approaches is available, and their presence seems to be a general characteristic of advanced control methods. The reason for this is easily understood and can be summarized as follows: the better the process is known by its controller, the higher the control quality that can be expected.

Fig. 3 The most important advanced control methods from the aspect of their applicability in power plants: soft sensors, gain scheduling for nonlinear processes, multimode control, and loop decoupling (advanced extensions to classical control); MPC (Model Predictive Control) and its most often used subset, DMC (Dynamic Matrix Control); fuzzy and neural network control ("intelligent" methods)


Accordingly, the efforts invested in process modeling and simulation are of the highest importance. The same holds for the person who intends to design the control system of an entire energy technology such as a power plant: deep knowledge of the power plant, its subprocesses, its thermal, chemical, and practical engineering aspects, its operating environment, etc., is required for realizing a successful, high-quality (advanced) control system. A mathematical description of the selected process may not be enough, since the very procedure of drawing the borders of the subprocesses requires all of the above theoretical and practical knowledge and experience, and this initial step is crucial for later success. The basics of the most important advanced control methods are discussed in the next subsections. Their detailed theoretical analysis (e.g., stability issues) is not a goal of the current chapter, since these aspects are discussed in detail for numerous particular cases in the original research articles and textbooks. Special care is taken, however, to discuss their possible roles, advantages, and limitations with respect to applications in power plants aimed at environmental benefits and financial results.

Soft Sensor

In some cases, a significant difficulty in building effective control loops is the lack of a measured variable that characterizes the actual state of the process well. A wide variety of theoretical and practical reasons may cause this situation: a significant time delay between the core process and its measurable output signal, a signal that is very difficult or expensive to measure accurately, a signal burdened with significant noise or other inaccuracies, and so on. A soft sensor may be a good solution in these cases. Its basic idea (see Fig. 4) is to measure other, easily accessible process variables that are strongly related to the required one and to deduce the latter from the measurements. A model is always used for this deduction. Some special cases of the soft-sensor approach are known in the literature under their own names: the Kalman filter is a broad set of tools for cases where the measured data contain significant noise and other inaccuracies, while the Smith predictor gives a very interesting theoretical solution for processes with pure time delays. Of significant technical relevance in this field is so-called state-optimal control. This advanced technique was applied successfully in several power plants within classical control environments in the 1990s, mostly for controlling the superheated steam temperature; this process is characterized by a significant time delay that also depends on the actual plant load, yet modeling this SISO (single-input, single-output) system is not too difficult. Determining the actual rate of combustion in a boiler can be mentioned as a further example, because an accurate measurement on the steam or hot water side indicates any change significantly later than the primary processes that caused it.

Fig. 4 A soft sensor is practically a model (P2,M) of a subprocess (P2). The calculated version (yM) of an unmeasurable process variable (y) can be used for control on the basis of the measurement of another "primary" variable (y1)
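As an illustration of this idea, the following sketch (a hypothetical example, not the scheme used in any of the cited plants) deduces an estimate yM of a slow, delayed measurement from a fast "primary" signal y1 by running a simple first-order model P2,M in parallel with the process, following the notation of Fig. 4:

# Hypothetical soft-sensor sketch: y1 is a fast, easily measured primary
# variable; the quantity of interest y responds to it only slowly. A simple
# first-order model P2,M run in parallel provides the estimate yM that a
# controller can use without waiting for the delayed measurement.

dt = 1.0              # sampling time [s]
K2, tau2 = 0.8, 60.0  # assumed gain and time constant of subprocess P2

def soft_sensor_step(yM, y1):
    """One update of the model P2,M (explicit Euler integration)."""
    return yM + dt / tau2 * (-yM + K2 * y1)

yM = 0.0
for k in range(300):
    y1 = 1.0 if k >= 10 else 0.0   # measured primary variable (step change)
    yM = soft_sensor_step(yM, y1)  # model-based estimate of the delayed variable

print("estimated yM after 300 s:", round(yM, 3))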

Fig. 5 Basic ideas of gain schedule (left) and multimode control (right)

Fig. 6 Inserting a well-designed decoupler (D) between controllers (C1, C2) and process (P) results in a virtual process having no internal cross couplings. This virtual process can be controlled by means of independent, one-dimensional controllers designed according to any (e.g., classical) control design methods

Gain Schedule and Multimode Control

Gain scheduling and multimode control are practical extensions of all linear controller design methods toward nonlinear processes. The idea behind both is to switch among a number of predefined control configurations depending on the actual operating point. The first step in designing such a control system is to identify an appropriate scheduling variable, which in most power plant applications can be the plant load signal. Thereafter, a set of operating points is chosen within the whole range of the scheduling variable, and any (advanced or classical) controller design method is applied to each of them. During online operation, one control configuration is always active, selected according to the actual value of the scheduling variable. The only difference between the two methods is that in gain scheduling only the parameter settings of an otherwise unchanged controller are updated according to the scheduling variable, whereas in multimode control the whole controller is exchanged, as shown in Fig. 5. An evident advantage of these approaches is that well-known linear control methods can also be used for nonlinear processes. However, only mild nonlinearities can be handled in this way, because otherwise the frequent switches between the active controllers or controller parameters would lead to unpredictable behavior. This points to a drawback of the method: the switches may result in unsmooth operation. Regarding their applicability in power plants, both methods can be applied effectively, because the main nonlinearities in these applications can easily be characterized by the plant load as the scheduling variable. The nonlinearity caused by the varying load is in most cases exactly in the range where a single, unchanged linear controller can no longer be used effectively, but it still allows the application of these simple methods. An advantage of gain scheduling is its simplicity, while multimode control can also be applied in cases where different control algorithms are necessary at different operating points.
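A minimal gain-scheduling sketch is given below (Python). The load break points and the PI parameter sets are purely illustrative assumptions; in a real plant they would come from linearized models or tuning experiments at the chosen operating points.

# Hypothetical gain-scheduling sketch: PI parameters are selected from a table
# according to the scheduling variable (here: plant load in percent).

PARAMETER_SETS = [
    # (load range [%],  Kp,   Ti [s])  -- illustrative values only
    ((0, 40),           3.0,  120.0),
    ((40, 70),          2.0,  90.0),
    ((70, 101),         1.2,  60.0),
]

def select_parameters(load):
    """Return the PI parameter set valid for the current plant load."""
    for (low, high), kp, ti in PARAMETER_SETS:
        if low <= load < high:
            return kp, ti
    raise ValueError("load outside the scheduled range")

def pi_step(e, integral, load, dt=1.0):
    """One PI controller update with gain-scheduled parameters."""
    kp, ti = select_parameters(load)
    integral += e * dt
    u = kp * (e + integral / ti)
    return u, integral

# Example: the same control error is handled differently at 30 % and 90 % load.
integral = 0.0
print(pi_step(e=1.0, integral=integral, load=30.0)[0])
print(pi_step(e=1.0, integral=integral, load=90.0)[0])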


Fig. 7 A priori requirements of the MPC method: a process model y = f(u, x, . . .), a set of constraints, a cost function Q·(r − y)² + R·u² → min, and the future values of the reference signal r(n·tn)


Loop Decoupling

In many practical cases, the control loops are not really independent of each other. This can be observed very often in power plants, when a change in one control loop affects another. The reason is, of course, that because of the strong couplings inside the entire process (dashed lines in Fig. 6), it cannot be treated as a set of independent one-dimensional (SISO) subsystems. It is in reality a coupled multidimensional process, which should be handled with methods developed for multidimensional control problems, since the methods and tools developed for the one-dimensional case (e.g., the PID controller) cannot solve multidimensional problems satisfactorily. Control engineers often try to smooth out the most disturbing cross effects with various empirical tools. However, a relatively simple overall theoretical solution exists for reducing the multidimensional problem to a set of one-dimensional problems, and these one-dimensional control tasks can then be solved by common (advanced or classical) controller design methods. A so-called decoupler (D in Fig. 6) is designed and applied according to well-known, relatively simple design procedures, the details of which are not discussed here. The goal of such a decoupler is to create a virtual process whose inputs are the inputs of the decoupler and whose outputs are the outputs of the real process. The control loops of this resulting virtual process are already independent of each other. Loop decoupling can also be realized easily in the existing control system of an existing power plant: most DCS (digital control system) software allows the insertion of extra multiplier blocks, from which the decoupler can be built up in most cases. Their actual values are determined off-line by well-known standard procedures, which also require a process model. The controllers C1 and C2 are designed afterward, also off-line, by considering the dynamic characteristics of the resulting virtual process decoupler + process (whose inputs are v1 and v2 and whose outputs are y1 and y2 in Fig. 6).
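One common recipe, given here only as a hedged sketch and not necessarily the procedure the authors have in mind, is static (steady-state) decoupling: from the steady-state gain matrix G of a coupled 2×2 process, a decoupler D = G⁻¹·diag(G) is computed off-line so that the virtual process G·D becomes diagonal. The gain values below are illustrative assumptions.

import numpy as np

# Hypothetical 2x2 steady-state gain matrix of the coupled process P:
# rows = outputs (y1, y2), columns = inputs (u1, u2); off-diagonal terms
# represent the cross couplings (dashed lines in Fig. 6).
G = np.array([[2.0, 0.6],
              [0.4, 1.5]])

# Static decoupler: D = G^-1 * diag(G), determined off-line from the model.
D = np.linalg.inv(G) @ np.diag(np.diag(G))

# The virtual process seen by the controllers is G @ D, which is diagonal at
# steady state, so C1 and C2 can be designed independently.
print("virtual process gains:\n", np.round(G @ D, 6))

# Online use: the controller outputs v = (v1, v2) are mapped to the real
# plant inputs u = (u1, u2) by the multiplier blocks of the decoupler.
v = np.array([1.0, 0.5])
u = D @ v
print("plant inputs u:", np.round(u, 4))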

Model Predictive Control

The control method with the highest potential for industrial applications (including power plants) and with the highest number of successfully running industrial realizations is model predictive control (MPC). It is a complete control method in its own right and cannot be considered a simple extension of the classical ones. It is model based and fully handles multidimensional processes as well. A further practical advantage of MPC for industrial realization is its inherent capability to handle constraints such as maximum and minimum flow rates, valve positions, and other technological prescriptions. Model predictive control has several variants and development directions; its common basic idea is summarized below, with special regard to its power plant applications for energy conservation and efficiency increase. The prerequisites of this control method are a process model, a set of constraints, a cost function, and the future values of the reference signal up to a certain horizon, as shown in Fig. 7.
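Written out, one common finite-horizon formulation of this optimization problem (a generic textbook form, not necessarily the exact variant used in the cited applications) is:

J = \sum_{i=1}^{N} \left( r(k+i) - y(k+i\,|\,k) \right)^{\top} Q \left( r(k+i) - y(k+i\,|\,k) \right) + \sum_{i=0}^{N-1} u(k+i)^{\top} R \, u(k+i) \;\rightarrow\; \min_{u(k),\ldots,u(k+N-1)}

\text{subject to}\quad y(k+i\,|\,k) = f\!\left(u, x, \ldots\right), \qquad u_{\min} \le u(k+i) \le u_{\max}, \qquad y_{\min} \le y(k+i\,|\,k) \le y_{\max}

Here N is the prediction horizon, y(k+i|k) denotes the prediction made with the process model at time k, and Q and R are the weighting matrices introduced with Fig. 7.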


Fig. 8 Way of operation of model predictive control. The method inherently handles multidimensional processes and time-varying reference signals as well; however, for better visibility, the much simpler single-input, single-output case is shown here. The inherent consideration of constraints is also not visible in this figure

A process model can theoretically be used in any programmed form. In practical applications, empirical models (black-box models) are often used because they can be generated relatively easily with available identification procedures based purely on input and output measurements. Nevertheless, physical modeling (or at least the use of semiempirical models) is advisable, because a deep understanding of the controlled process, represented in such a model, is a great help in controlling it successfully. Processes for which identification-based empirical models are practically unusable are those characterized by long-term accumulation (storage) behavior; this deserves emphasis because it is a frequent case in power plant processes. The long-term fuel and bed material accumulation in fluidized bed combustors (FBC) is a typical example, and grate firing and some other power plant processes show very similar characteristics. The mathematical procedure of MPC inherently handles constraints as well, which are given as relational operators on any available variables of the model or the control structure. This characteristic makes MPC a very practice-oriented method for power plant applications, as discussed above. An interesting use of this property is the inclusion of technological constraints (e.g., thermal stresses) that cannot be considered directly by most other control methods. A very clear formulation of the goal of the control is the cost function (or target function), which weights two opposing interests: a very low control error can be achieved at the expense of very intensive actuator operation and vice versa. In Fig. 7, Q and R are the weightings, which are matrices in the general, multidimensional case. They represent the relative importance of these two aspects, where the matrix elements refer to the individual physical control errors (differences between set points and measured outputs) and actuator activities. The reference signal can be either a constant set point or a function of time. Because the basic version of MPC is a discrete-time one, the future values of the reference signal should be available at the time steps n·tn. The task the controller has to execute online at each time step is to solve this quadratic optimization problem with constraints. Because the problem has been well known for a long time, numerous effective solver algorithms are available in the mathematical literature; they are adapted and used in model predictive controllers, whose operational procedure is the following. In each time step, the optimization problem formulated above is solved numerically, and its result is the optimal future time series of the control signal u (Fig. 8). Not the whole time series but only its first element is applied to the system, because in the next time step the same optimization procedure delivers a newer, updated control signal. In this way, the actually applied control signal also considers the latest measured process data, which acts as an effective safeguard against the model inaccuracies that are inevitably present.

Fig. 9 A fuzzy membership function for the fuzzy set described by the human expression "youth" (left). Membership functions can also be used in classical set theory, but their borders are "crisp" (right)

In summary, MPC offers excellent properties that can also be utilized well in power plants. An increasing number of applications in power plants of any type would therefore be a very effective instrument for energy conservation, efficiency increase, and emission reduction. Realizing such applications, however, requires expert knowledge covering both power engineering and advanced control engineering.
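The receding-horizon procedure described above can be condensed into a few lines. The following sketch assumes a deliberately simple scalar linear model and handles the input limits only by clipping the unconstrained solution, so it illustrates the principle rather than a full constrained MPC implementation; all numerical values are assumptions.

import numpy as np

# Minimal receding-horizon (MPC) sketch for a scalar linear model
# y(k+1) = a*y(k) + b*u(k). All numbers are illustrative assumptions.
a, b = 0.9, 0.1          # assumed process model
N = 10                   # prediction horizon
q, rho = 1.0, 0.1        # weights Q and R of the cost function (cf. Fig. 7)
u_min, u_max = -1.0, 1.0 # actuator limits

# Prediction matrices: y_pred = F*y0 + Phi*U over the horizon.
F = np.array([a ** (i + 1) for i in range(N)])
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Phi[i, j] = a ** (i - j) * b

def mpc_move(y0, r_future):
    """Solve the unconstrained quadratic problem analytically, then clip the
    result to the actuator limits (a simplification; a full MPC would solve
    the constrained QP directly). Only the first move is returned."""
    H = Phi.T @ (q * Phi) + rho * np.eye(N)
    g = Phi.T @ (q * (r_future - F * y0))
    U = np.linalg.solve(H, g)
    return float(np.clip(U[0], u_min, u_max))

# Receding-horizon simulation: re-optimize at every step, apply first move only.
y, r = 0.0, np.ones(N)
for k in range(50):
    u = mpc_move(y, r)
    y = a * y + b * u        # plant response (here identical to the model)

print("controlled variable after 50 steps:", round(y, 3))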

Dynamic Matrix Control

A rather simple version of model predictive control (MPC) is dynamic matrix control (DMC). Simplicity here means a procedure with significantly lower online computational demand, which is an advantage for applications in power plants. This early version of MPC can use the process model only in a predefined simple form, the so-called dynamic matrix. A drawback of this simplicity is, of course, the higher inaccuracy of the model in most cases. As a further difference from the basic MPC approach, DMC does not handle constraints fully, which may be either an advantage or a disadvantage depending on the specific application.
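The dynamic matrix itself is built from measured step-response coefficients of the process. The sketch below (with invented coefficients for a hypothetical process) shows how such a matrix is assembled and used to predict the output over a horizon from planned control moves:

import numpy as np

# Hypothetical step-response coefficients s_i of the process (output change
# i steps after a unit step in the input), e.g. obtained from a plant test.
s = np.array([0.0, 0.1, 0.25, 0.45, 0.6, 0.72, 0.8, 0.86, 0.9, 0.93])

P, M = 8, 3   # prediction horizon and number of future control moves

# Dynamic matrix A: element (i, j) is s[i - j + 1] for j <= i, i.e. the effect
# of the j-th future control move on the i-th predicted output.
A = np.zeros((P, M))
for i in range(P):
    for j in range(M):
        if i >= j:
            A[i, j] = s[i - j + 1]

# Predicted output deviation for a hypothetical sequence of control moves du:
du = np.array([0.5, 0.2, 0.0])
dy_pred = A @ du
print("predicted output deviations over the horizon:", np.round(dy_pred, 3))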

Fuzzy Control

Both fuzzy control and neural network control came out of artificial intelligence research, which is why they are often called intelligent control methods (although this naming does not imply any ranking relative to other advanced techniques). Fuzzy logic is an alternative branch of set theory. In classical set theory, a point either belongs to a set or definitely does not. In fuzzy set theory, a membership function ranging from 0 to 1 is used instead. Human thinking appears to be much closer to the latter approach, since nobody could clearly define the borders of the set "youth." The unsharp borders of this set are indicated in Fig. 9 as an example of fuzzy membership functions. In fuzzy control, all measured data are classified into fuzzy sets; this initial step is called fuzzification. A given measured value may belong to several sets at the same time. Following the basic idea of fuzzy logic introduced above, several sets are defined by their membership functions, such as "very low," "low," "medium," etc., and these membership functions usually overlap. The next step no longer deals with exact measured data; it uses only the fuzzified states (such as "pressure is low"). In this second step, decisions are made according to rules implemented during the design of the fuzzy controller. These rules are rather simple, for example, "IF pressure is low THEN set discharge valve position to somewhat open." The final step in fuzzy control is called defuzzification. Output values are formed from the resulting decisions by means of output membership functions, in such a way that the parallel decisions are weighted by the membership values from which they resulted. The whole procedure is indicated in Fig. 10 in a simplified manner. Regarding its usability in power plants, fuzzy control can be characterized by the following advantages (+) and disadvantages (−):

• + Easy incorporation of human/expert knowledge, because the representation of the operational requirements is very close to human thinking.
• + Low-cost realization is possible, because fuzzification and defuzzification may be realized with low-cost sensors and actuators, respectively, and decision making requires relatively little computational capacity.
• − Unsmooth output signals may result from the discretized way of operation of the decision-making procedure.
• − The overall stability of the control system can rarely be guaranteed because of the heuristic setup of the controller.

Fig. 10 Internal structure of the fuzzy controller. Extension toward multivariable and dynamic control is possible
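As a hedged illustration of these three steps (fuzzification, rule evaluation, defuzzification), the following toy controller maps a single pressure measurement to a valve position. The membership functions, rules, and output values are all invented for the example and are not taken from any of the cited plants.

# Toy fuzzy controller: fuzzification, rule evaluation, and a weighted-average
# defuzzification. All membership functions and rules are invented examples.

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_pressure(p):
    """Degrees of membership of the measured pressure in three fuzzy sets."""
    return {
        "low":    tri(p, 0.0, 2.0, 5.0),
        "medium": tri(p, 3.0, 5.0, 7.0),
        "high":   tri(p, 5.0, 8.0, 10.0),
    }

# Rule base: IF pressure is <set> THEN discharge valve position is <value>.
VALVE_FOR = {"low": 0.2, "medium": 0.5, "high": 0.9}   # crisp output levels

def fuzzy_valve_position(p):
    memberships = fuzzify_pressure(p)
    num = sum(mu * VALVE_FOR[name] for name, mu in memberships.items())
    den = sum(memberships.values())
    return num / den if den > 0 else 0.5   # weighted-average defuzzification

for p in (1.5, 4.0, 6.0, 8.5):
    print(f"pressure {p:4.1f} bar -> valve position {fuzzy_valve_position(p):.2f}")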


Neural Network

Neurons are the basic elements of the nervous system. Many of them work in parallel, and the interactions between them determine the behavior of the entire system. The junctions where these interactions take place are called synapses, and the strength with which signals are transferred from one neuron to another through a given synapse can change during the normal biological learning process. One neuron may receive several input signals from others, but it generates only one output signal. This general and simplified description is the basis of artificial neural networks (NN), which can also be used as controllers (Fig. 11). A very important characteristic of neural networks is their ability to learn. In practice, this means procedures for finding the optimal set of weighting factors wi and additive constants b so that, for a given set of inputs, the network produces the desired set of outputs. Several search procedures are known; they also depend on the actual form of the neuron output function f. Applying this theoretical background to process control can take a number of different forms. If, for example, the neural network learns the inverse behavior of the process, then applying the desired process output to the network input makes its output the process input necessary to reach that desired process output. Beyond this conceptually simple application, many further successful industrial applications are known. Combinations of fuzzy control and neural networks are also applied, and both of these "intelligent" methods are often used as value-added extensions to other control solutions.

Fig. 11 One artificial neuron. In a layer of a neural network, several neurons act in parallel, and a network consists of several layers. wi represents the weighting factors and b an additive bias; their actual values are the result of the learning process
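A minimal sketch of one such neuron with a learning loop is given below (Python with NumPy). It fits the weights wi and the additive term b by plain gradient descent on invented training data; real applications would use larger networks and more data, so this is only meant to make the notation of Fig. 11 concrete.

import numpy as np

# One artificial neuron: output = f(w1*in1 + ... + wn*inn + b), cf. Fig. 11.
# Training data are invented; the neuron learns a simple static input-output map.

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))            # two input signals
y_target = 0.7 * X[:, 0] - 0.3 * X[:, 1] + 0.1       # assumed "process" behavior

w = np.zeros(2)    # weighting factors w_i
b = 0.0            # additive term b
f = lambda s: s    # neuron output function (linear here; often a sigmoid)

learning_rate = 0.1
for epoch in range(500):                  # the "learning process"
    y_out = f(X @ w + b)
    error = y_out - y_target
    w -= learning_rate * (X.T @ error) / len(X)   # gradient-descent updates
    b -= learning_rate * error.mean()

print("learned weights:", np.round(w, 3), "learned bias:", round(b, 3))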

Proposed Ways of Introducing Advanced Control into Power Plants

The introduction of advanced control methods offers a number of ecological and economic benefits, as discussed above; however, the way to implement them is not obvious (Szentannai 2010). One must be aware of the special requirements of modern control techniques compared with those of the traditional, PID-based ones. As a general and strongly simplified observation, modern techniques are based on more detailed calculations, which is why they require significantly higher computational capacity. Computers of such performance have become commercially available, low-cost standard equipment in recent years, and many people use hardware of that capacity in everyday life. However, the reliability of these computers is definitely below the level expected in power plants. Moreover, most industrial control systems were designed for lower computational capacities only. A further problem arises from the fact that only a few digital control systems (DCSs) are equipped with the standard software tools required to program an advanced control application. In this situation, one must distinguish between two different cases: application in a new power plant, or application in an existing power plant equipped with traditional controllers. The first case seems easier, since the new control system of a new power plant can be designed according to the special needs of the selected advanced control method. This means, first of all, the appropriate selection of the hardware and software structure of the DCS: a system capable of running these control methods should be installed. Despite this theoretically simple and straightforward route, a more conservative approach is proposed here for realizing the benefits of advanced control strategies in new power plants. During the construction of a new power plant, it may be more advantageous and secure to program and use traditional control loops during commissioning of the whole power plant technology and to set up advanced controllers only in a second phase, after stable and secure operation has been reached. In most cases, commissioning is in any case followed by a longer period of fine-tuning of the entire power generation technology.


This period should also be used for setting up, fine-tuning, and testing the final, modern control system in the form of software changes within the unchanged DCS. This approach also allows a final comparison between traditional and advanced control. In the second case, an existing power plant is running with its complete, proven, and stable traditional control system, and the purpose of introducing an advanced control technique is to achieve and utilize the benefits outlined above. In doing so, one should not forget that the existing stable operation is of much greater practical importance than any advantage the new controller may offer. In other words, the benefits of introducing the advanced controller must be achieved in such a way that the stability provided by the existing control system is by no means lost. A good practice is to retain the existing control system as a supervisor above the new one. The supervisor should stay idle as long as the difference between the outputs of the existing and the advanced controllers remains below a given threshold. This limit may be increased stepwise by the control engineer after appropriate periods of reliable operation of the new control technique, allowing an increasingly effective utilization of its benefits. Another question in this case is the choice of hardware on which to run the new control algorithm. Since the existing control system is generally not capable of doing this, an external platform is required. A rather general configuration is proposed in Fig. 12, which indicates both the hardware and software structure, together with the necessary communication pathway. This scheme should be considered a typical arrangement only and must be adapted to the actual environment in each particular case. Positioners, for example, are in many cases realized outside the DCS (digital control system), sometimes as distributed local devices. Some device boundaries must then be adjusted; however, even in such a case, no change is proposed to the general concept of keeping the positioners outside the advanced controller. As shown in Fig. 12, the high-level parts (the traditional, one-dimensional PID controllers) of the existing control loops are replaced by the advanced controller, but the replaced elements become effective again if and when the new control outputs show an unlikely degree of deviation from those of the original control system. This proposed method of implementation assures a secure way of realizing the benefits of advanced control techniques.

Fig. 12 Proposed typical hardware and software configuration for applying advanced control in a power plant originally equipped with a traditional control system. Benefits of the modern control will be utilized while the proven and secure operation of the original control system will be retained (thick lines, existing system; thin lines, advanced control extension)


This scheme may also be applicable to a new power plant, the only difference being that the communication channel between old and new hardware can be omitted, since the capacity of the DCS can be chosen so that it also satisfies the higher computational needs of an integrated implementation of the advanced algorithm. This is possible nowadays without any remarkable surcharge on the DCS price.
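The supervisory scheme described above can be expressed very compactly. The following sketch (with hypothetical threshold values and signals, not code from any cited installation) keeps the output of the traditional controller as a fallback and passes the advanced controller's output to the plant only while the two agree within the configured limit:

# Supervisory switching between the existing (traditional) controller and the
# new advanced controller, as proposed in the text. Values are illustrative.

threshold = 0.05   # allowed difference between the two control outputs;
                   # increased stepwise after periods of reliable operation

def supervised_output(u_traditional, u_advanced):
    """Use the advanced controller only while it stays close to the proven one."""
    if abs(u_advanced - u_traditional) <= threshold:
        return u_advanced            # benefits of the advanced controller
    return u_traditional             # fall back to the existing, proven loop

# Example: the second sample deviates too much, so the supervisor falls back.
for u_trad, u_adv in [(0.50, 0.53), (0.50, 0.60)]:
    print(u_trad, u_adv, "->", supervised_output(u_trad, u_adv))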

Successful Applications of Advanced Control in Power Plants

Applying the latest results of control theory in power plants is not only a theoretical possibility. A number of applications are known from the literature, and some of them are introduced in this section. An interesting general characteristic of these applications is that they serve ecological goals not only by increasing plant efficiency (which directly results in energy conservation and reduced total flue gas emission, including CO2); most of them also bring further environmental benefits. After the literature overview, a case study is given in which not only the basic idea and the results are shown but also the complete solution in detail. For those who want to go deeper or to get a broader overview of advanced control applications in different types of power plants, a recently published book can be recommended (Szentannai 2010).

Some Published Applications

Ruusunen (2010) applied the soft-sensor technique to two grate-fired combustors burning solid biomass (wood chips, wood pellets, and fuel peat) of 30 and 300 kW thermal capacity. His goal was to compensate the combustion power fluctuations present in these small-scale biomass-fired boilers due to inhomogeneous fuel quality and uneven feeding. Stable and accurate combustion power is critical for maintaining low emissions and stable operating conditions. With the model-based approach, changes in fuel power could be compensated by the controller before they affected the heat output of the boiler, enabling continuous and delay-free monitoring of disturbances. As inputs of the soft sensor, several temperature measurements were used, whose locations were found to be critical. Operational experience showed that with the applied advanced control strategy the standard deviations of the heat output and of CO were reduced by 40 %. A 25 % reduction of the CO concentration itself was also measured during the test period, and the fluctuation of the oxygen concentration was reduced by 45 % at the same time. The increase in boiler efficiency is also very attractive: 1–2.4 percentage points.

Havlena and Pachner (2010) reported a successful application of multivariable Model Predictive Control (MPC). Their goal was to improve the stability of key process variables, the effectiveness of limestone use, and the boiler combustion efficiency while respecting emission limits. Two circulating fluidized bed (CFB) boilers were originally operated with a standard PID control strategy. As the fluidized bed combustion process shows strong interactions between process variables, standard PID control did not fully meet the operational requirements. The boilers are fueled by a mixture of coal and coke, and the nominal steam production is 310 t/h each. A very simple, semiempirical (gray-box) model was set up, including the long-term storage characteristics of this combustor type. The bed temperatures were originally kept manually between 860 °C and 900 °C by the operators. After the introduction of the model-based advanced control technique, these temperatures are automatically maintained, with a standard deviation below 1 °C, at a reference value that is optimal for in situ SO2 removal. The SO2 emissions, originally only monitored, are now controlled and held within a very narrow band.

A rebound effect of more than 100 % is called "backfire" (Sorrell 2009). Simply put, energy efficiency makes energy services cheaper, so demand tends to increase; this responsiveness is called "elasticity of demand." A more economical car might tempt its owner to drive faster and further, thus partially offsetting the potential energy savings. A car producer may decide to install more electronic devices for increased driver comfort in a car that has been made more fuel efficient thanks to lightweight construction materials and a better engine. The extent of the rebound effect depends on the elasticity of demand, which tends to be stronger for consumers than for industrial plants (Sorrell 2009). William Stanley Jevons studied the rebound effect during the industrial revolution (Sorrell 2009). In his 1865 book The Coal Question (Jevons 2008), he pondered the question of whether efficiency measures would really lower actual coal consumption, based on empirical evidence that after efficiency improvements in steam engines and in steel production, actual energy consumption had soared.
For more information, see Saunders (1992) and Herring and Sorrell (2009).
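As a small numerical illustration of the rebound effect (with invented numbers), the realized saving equals the engineering saving scaled by (1 − rebound); a rebound above 1 (i.e., above 100 %) turns the saving into a net increase, i.e., backfire:

# Illustrative rebound-effect arithmetic (all numbers are invented examples).

def realized_saving(engineering_saving, rebound):
    """Energy actually saved after behavioral/economic responses.
    rebound = 0.0 means no rebound, 1.0 means 100 % (all savings eaten up),
    values above 1.0 correspond to 'backfire'."""
    return engineering_saving * (1.0 - rebound)

expected = 100.0   # kWh saved per year according to the engineering estimate
for rebound in (0.0, 0.3, 1.0, 1.2):
    print(f"rebound {rebound:.0%}: realized saving {realized_saving(expected, rebound):6.1f} kWh")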

Energy Intensity

Intensity is an ambiguous term. In physics, it is power per unit area [W/m²], a time-averaged energy flux. In heat transfer, intensity commonly denotes the radiant heat flux per unit area per unit solid angle [W·m⁻²·sr⁻¹]. Here, energy intensity is an economic concept, a measure of the energy efficiency of a nation's economy. It is calculated as primary energy consumption per unit of GDP or value added, measured in [MJ/$] or [toe/$]. The energy intensity of a country is influenced by many factors, for instance the climate. Economic productivity and standards of living contribute, as do the energy efficiency of buildings and appliances, traffic patterns (public transportation vs. individual cars), and the way energy is produced (EIA 2015). Energy intensity can hence be used as a surrogate for aggregate energy efficiency. Countries differ strongly in energy intensity, and within countries there are marked differences among regions. In the USA, a state with superior energy efficiency performance is California, which has established leadership in, e.g., per capita energy consumption (Rosenfeld 2008; Vine et al. 2006). The energy efficiency of different countries is assessed in Utlu and Hepbasli (2007). The term "energy intensity" can also be applied to a production process as a synonym for specific energy consumption, based on quantity [kg] or value added [$] or [€]; see also section "Energy-Intensive Industries."


Table 3 Emission intensities (Source: Bilek et al. 2008). The ratio of H/C is 4 in natural gas, which is higher than in oil and especially coal, leading to lower CO2 emissions per kWh

Fuel/resource        Electric [g(CO2-eq)/kWhe]
Coal                 863–1,175
Oil                  893
Natural gas          587–751
Nuclear power (U)    60–65
Hydroelectricity     15
Photovoltaics        106
Wind power           21

Emission Intensity (Carbon Intensity)

Another concept is the emission intensity. It is the average emission rate of a given pollutant from a given source, related to the intensity of a specific activity, e.g., grams of CO2 per MJ of energy produced [g/MJ]. The term emission intensity is often used interchangeably with "carbon intensity" and "emission factor" in the climate change discussion. Other greenhouse gases and pollutants can be considered too, by calculating CO2 equivalents (CO2-eq). Table 3 provides an overview of emission intensities, compiled from Bilek et al. (2008). The subscripts in Table 3 stand for "thermal" and "electric." In combined heat and power (CHP, cogeneration), both heat and power are produced from one combustion process, boosting overall efficiency (see later).
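Using the electricity-related values of Table 3, the average emission intensity of a generation mix is simply the generation-weighted mean. The sketch below uses mid-range values from the table and an invented mix purely for illustration:

# Average emission intensity of an electricity mix, using representative
# values from Table 3 (g CO2-eq per kWh_e); the mix shares are invented.

EMISSION_INTENSITY = {   # g(CO2-eq)/kWh_e, mid-range values from Table 3
    "coal": 1019, "natural gas": 669, "nuclear": 62,
    "hydro": 15, "photovoltaics": 106, "wind": 21,
}

mix = {"coal": 0.40, "natural gas": 0.25, "nuclear": 0.15,
       "hydro": 0.10, "photovoltaics": 0.05, "wind": 0.05}

average = sum(share * EMISSION_INTENSITY[src] for src, share in mix.items())
print(f"average emission intensity of the mix: {average:.0f} g CO2-eq/kWh")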

Historical Development of Energy Efficiency

A proverb says, "Things that cost nothing have little value." In this sense, as long as easy access to energy is available, there are few incentives to use it wisely. History offers several lessons here. Visitors to Greek islands will see testimony of one such unsustainable practice exercised centuries ago, namely chopping down trees to build ships without reforestation. There are countless other examples of unsustainable acts related to resource and energy efficiency in the past, some of which have even led to the extinction of a local human population (Bologna and Flores 2008). The global oil crises of the 1970s triggered several large-scale measures for energy efficiency, e.g., the creation of the DoE (Department of Energy) in the USA. In the following decade, when crude oil prices went down again, the motivation to focus on energy efficiency declined in many areas. The industrial sector has improved its energy efficiency continuously over the last 30 years, partly in order to reduce variable production costs and to improve competitive advantage (one also has to take into account that a significant part of energy-intensive production facilities was transferred to low-labor-cost countries in, e.g., Asia). Economic growth, a trend toward increased personal mobility, larger homes, and the use of more and more appliances, among other factors, has led to a steady increase of absolute energy demand in most industrialized countries. At the same time, the overall energy intensities in the USA declined as follows between 1980 and 2005 (Granade et al. 2009):


Fig. 2 Energy efficiency trends of fossil fuel combustion in the EU27 (Reprinted with permission from Elsevier from Graus and Worrell (2009))

Residential sector    11 %
Commercial sector     21 %
Industrial sector     42 %

While per capita energy consumption in the USA grew by 1.3 % per year from 1977 to 2007, which corresponds to an overall increase of roughly 50 %, it remained almost constant in California. In the EU, the average efficiency of gas-fired power plants increased from 34 % in 1990 to 50 % in 2005 and is expected to reach 54 % by 2015 (Graus and Worrell 2009). For coal-fired power plants, the efficiency, also based on the lower heating value, went up from 34 % in 1990 to 38 % in 2005 and is expected to reach 40 % by 2015. These trends are visualized in Fig. 2. As the developed world built up its industry, specific energy consumption was improved continuously. Yet the largest share of historic and current global emissions comes from developed countries. Many people now fear that other countries, as they race through their development, might emit "their share," i.e., large amounts of pollutants, into the atmosphere. China, for instance, was able to maintain economic growth of more than 9 % per year from 1980 to 2000, while energy demand increased by only 3.9 % per year (Lin 2007). This shows that energy demand does not necessarily have to outpace economic growth during the early stages of industrialization and development (Lin 2007). A word of caution: many scientific publications, as well as public opinion, take a decreasing energy intensity over time for granted. This is often merely an assumption that needs to be proven. In Le Pen and Sévi (2010), the authors conclude that many national energy efficiency trends are stochastic in nature; see Fig. 3. In Schipper et al. (2005), historic developments and future trends of energy efficiency are discussed. Megatrends (Naisbitt 1985) will also have an impact on energy efficiency, and how they are perceived can differ strongly (Atilla Oner et al. 2007). In general, there have been strong improvements in certain areas with respect to energy efficiency, some of which, though, were countered by rebound effects.
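The compound-growth arithmetic behind such statements is easy to check, as in the short sketch below (which only re-evaluates the growth rates quoted above; the underlying data are from the cited sources):

# Compound-growth checks for the figures quoted above.

us_growth = 1.013 ** (2007 - 1977)          # 1.3 % per year over 30 years
print(f"US per capita consumption factor 1977-2007: {us_growth:.2f}")  # ~1.47

# China 1980-2000: GDP growth > 9 %/a, energy demand growth 3.9 %/a
gdp = 1.09 ** 20
energy = 1.039 ** 20
print(f"energy intensity change 1980-2000: {energy / gdp:.2f}")  # well below 1

The first line confirms that 1.3 % per year over 30 years yields roughly a 50 % increase, and the second shows that under the quoted rates China's energy intensity fell to well under half its 1980 value.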


Fig. 3 Stochastic movement of energy consumption. Left: oil consumption per unit of GDP for OECD countries from 1965 to 2005. Right: same data for non-OECD countries (Reprinted with permission from Elsevier from Le Pen and Sévi (2010))

Assessing Energy Efficiency Improvements

Energy efficiency improvements can be achieved by technological progress or by changes in behavior, and they can be measured. For a correct assessment, however, the following factors have to be taken into account:

• Erosion of part of the improvements by the rebound effect (see above)
• Comparability of data (same year, same boundary conditions)
• Selection of a proper baseline

The baseline for measuring energy efficiency is of utmost importance to avoid wrong conclusions. This is elaborated below with an example from the transportation industry, viz., the fuel consumption of aircraft over time. Figure 4 shows a data compilation of how the fuel efficiency of commercial aircraft improved over the last decades.

Fig. 4 Fuel efficiency of commercial aircraft over the last 50 years (engine fuel consumption and aircraft fuel burn per seat, in % of the Comet 4 base, plotted against the year of model introduction). See text for details. Reprinted with permission (Source: IPCC 2000)

Fig. 5 IPCC graph with additional data (Reprinted with permission from Peeters et al. (2005))

Taking the Comet 4 as a baseline, fuel consumption was reduced by 70 % in modern aircraft. Approximately 40 % of the improvement is attributed to engine efficiency and 30 % to airframe efficiency (IPCC 2000). The de Havilland Comet was the world's first commercial jet airliner (Davies and Birtles 1999). Figure 4 was taken from an IPCC report. The IPCC (Intergovernmental Panel on Climate Change) is a renowned scientific intergovernmental body established to evaluate the risk of climate change caused by human activity (Intergovernmental Panel on Climate Change (IPCC) 2015). It was awarded the 2007 Nobel Peace Prize together with Al Gore. In Peeters et al. (2005), the authors argue that the pre-jet era was ignored in the above IPCC discussion and that the Comet 4 is an unsuitable baseline. From the conclusions of that report (Peeters et al. 2005):

The later piston-powered airliners were at least twice as fuel-efficient as the first jet-powered airliners; If, for example, the last piston-engine aircraft of the mid-fifties are compared with a typical turbojet aircraft of today, the conclusion is that the fuel efficiency per available seat-kilometre has not improved. ... The last piston-powered aircraft appear to have had the same energy efficiency per available seat-kilometre as average modern jet aircraft. The most modern jet aircraft (such as the B777-200 or B737-800) are slightly more efficient per available seat-kilometre.

The findings from this study are depicted in Fig. 5. As can be seen there, slight changes in the assumptions lead to strong deviations in the results. This has to be borne in mind when assessing and comparing energy efficiency studies presented by various interest groups.
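The sensitivity to the baseline can be reproduced with the numbers quoted above: modern jets burn about 30 % of the Comet 4's fuel per seat, while the last piston airliners burned at most about half of it, or, following Peeters et al., roughly as much as modern jets. The index values in the sketch are therefore rough readings implied by the text, not data from the cited studies:

# Baseline sensitivity of the "efficiency improvement" figure. The index
# values (fuel burn per seat, Comet 4 = 100) are rough readings implied by
# the text above, not data from IPCC (2000) or Peeters et al. (2005).

comet4 = 100.0
modern_jet = 30.0          # "reduced by 70 %" relative to the Comet 4
piston_estimates = {
    "piston at half the Comet 4": 50.0,   # "at least twice as fuel-efficient"
    "piston equal to modern jets": 30.0,  # Peeters et al. reading
}

print("improvement vs. Comet 4 baseline: %.0f %%" % (100 * (1 - modern_jet / comet4)))
for label, piston in piston_estimates.items():
    print("improvement vs. %s: %.0f %%" % (label, 100 * (1 - modern_jet / piston)))

Depending on the baseline, the same modern aircraft appears 70 %, 40 %, or 0 % more efficient.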

Innovation and New Technologies for Energy Efficiency

In order to increase energy efficiency, innovation (Christensen et al. 2001) is needed. Through innovation, any of the following energy efficiency improvements can be achieved:


Fig. 6 Simplified Grassmann diagram for the production of nitric acid (process chain: natural gas → ammonia → nitric acid, with power and air inputs, steam credits, and internal and external losses) (Hinderink et al. 1999) (Reproduced by permission of the Royal Society of Chemistry (RSC))

• Carrying out the same task or process with less energy
• Utilizing the same amount of energy to produce more output or higher value
• Redefining the task or process so that the new way consumes less energy

Innovation can take place in incremental steps or in a disruptive way, for instance when a new technology is developed. The electric light bulb, condemned as energy inefficient today, was one such disruptive innovation and has been around for more than a century. In order to innovate, engineers and researchers might be tempted to build ever more knowledge in their own area of expertise and to innovate as much as possible within their own fields. This strategy has proven successful – take the famous Bell Labs (Gehani 2003) as an example. Fifty years ago, Bell Labs generated every new technology that the telephone business needed, and the telephone business, in turn, used all of Bell Labs' innovations. Bell Labs were virtually unbeatable. However, the rules of innovation have changed over time. Bell Labs invented the transistor, which is clearly one of their greatest discoveries; however, they did not recognize its value and gave it away for little money. The transistor thereafter became extremely successful, but its main use was not in the telecommunications industry. On the other hand, the very innovation that revolutionized the telecommunications industry – the fiber-optic cable – was developed outside that industry. This phenomenon has been observed in many industries over the last 50 years (Drucker 2003): the major innovations with the biggest impact on an industry are not likely to come out of the industry itself but are rather "born" in a different area. The significance of this for energy efficiency is as follows: energy efficiency can be improved in many ways. In a passenger car, for instance, an advanced engine, lightweight plastic components instead of steel, or tires causing less rolling friction all serve the same final purpose of energy efficiency. Innovation takes time until its full potential is realized, though. In Lund (2006), the market penetration rates of new energy technologies were studied. It is concluded there that the time for a takeover of market share from 1 % to 50 % varies from less than 10 years to 70 years, with takeover times below 25 years being associated with end-use technologies. Long investment cycles render the energy production industry inert to change.


Typical Energy Efficiencies

The energy efficiency of photosynthesis is on the order of 1 %, with a fraction of approximately 0.2 % being stored as biomass. Sugarcane exhibits peak storage efficiencies of up to 8 % (Hall and Rao 1999). The first steam engines, designed as external combustion engines, had efficiencies of the same order of magnitude. To visualize the energy balance, i.e., the energy efficiency, of a process or machine, a Sankey diagram can be used. For exergy, Grassmann diagrams (Hinderink et al. 1999) are deployed (though both terms are sometimes used interchangeably in the literature). An example of a Grassmann diagram for nitric acid production is shown in Fig. 6 (Hinderink et al. 1999). The Grassmann diagram can be seen as an energy flow diagram, visually explaining which fraction of the total initial energy ends up in the final product. In order to obtain typical energy efficiencies, or reference energy efficiencies, a benchmark is deployed. The benchmark in energy efficiency is given by the state of the art and by so-called BAT (best available technology) values. However, BAT values are often difficult to obtain, as corporations tend to keep them secret and patents do not always provide full disclosure. The energy efficiency and carbon intensity of a given process depend on the system boundaries that are considered and on the energy path. For instance, whether the electricity for a hybrid car has been produced in a coal-fired power plant or by solar cells will heavily impact the overall efficiency (see also section "Life Cycle Assessment (LCA)"). Actual efficiencies depend on a large number of factors, such as the condition of a given system or appliance; examples are the load of an engine, the maintenance of motors, and usage patterns. This is obvious to every car owner who wants to reach the "official" fuel consumption of his or her car. When energy efficiency potentials are presented in the literature, one has to be careful not to overestimate or mix up the various potentials, which are:

• Technical potential
• Economic potential
• Maximum achievable potential (considering factors such as demographics, market conditions, and regulatory factors)
• Realistic achievable potential (taking historic data into account)

People adapt to change at different rates. Take popular technologies as an example: even for microwave ovens and mobile phones, it took 10–15 years to achieve market penetration. Therefore, the realistically achievable potential is never equal to the full technical potential. Also, the effort needed to obtain a large part of any potential saving increases along the way. For the energy efficiencies of various technologies, processes, and appliances, the reader is referred to the respective chapters of this handbook and to the specialized, referenced literature.

Benchmarking of Energy Efficiency
There are no useful reference data for absolute energy efficiency from a thermodynamic or theoretical point of view. Rather, one can only compare a given process or technology route, device, or method to other solutions in the lab or in the field, so that the best available technology (BAT) or state of the art can be determined empirically. Such a benchmarking exercise focused on energy efficiency will yield interesting results. In Phylipsen et al. (2002), for instance, it was found that the energy consumption of
the steelmaking plants in several countries was 25–70 % above that of the best plant. In the cement industry, the average consumption was 2–50 % higher than the energy efficiency of the very best plant. Benchmarking can be used by operators of industrial plants to compare their energy efficiency, and ultimately their competitiveness, to that of their competitors. Consumers can use relative indications of energy efficiency, such as the Energy Star® label, to easily spot energy-efficient appliances as a guide for purchase decisions. It needs to be mentioned that comparing like with like is crucial. If, for instance, steelmaking plants in two countries are to be compared, sectoral differences must be taken into account (Phylipsen et al. 2002) (if, e.g., there is plenty of secondary steel available, energy efficiency will "automatically" be higher). Also, regional differences in feedstock quality (Worrell et al. 2000a) or climatic conditions will affect the energy efficiency of a given plant. More information on reliable reference data for energy efficiency comparisons on a national level can be found in Doukas et al. (2008). In mature industries, energy efficiency differences from plant to plant are not expected to be very large, because improvements tend to be incremental. Generally, there is a lack of energy efficiency benchmark standards for industry at large and for factories in various sectors (Yang 2010), secrecy and antitrust legislation being important impeding factors. Corporate benchmarks exist in some companies that operate multiple plants or sites. Several consultants carry out benchmarking studies in various industries, e.g., Solomon Associates for steam crackers, Phillip Townsend Associates for polymerization plants, Plant Services International for ammonia and urea plants, and PDC (Process Design Center) for more than 50 petrochemical processing plants (The International Energy Association in Collaboration with CEFIC 2007), to cite a few examples. These benchmarks present generalized and anonymized data with which the energy efficiency and competitiveness of one's own plant can be compared to the industry average.

Energy Efficiency World Records
A world record in energy efficiency of a car was set in 2005 at 5,134 km per liter of gasoline equivalent, achieved with a vehicle operating on a hydrogen-powered PEMFC (polymer electrolyte membrane fuel cell) (Santin 2005) during the Shell Eco-Marathon. The competition challenges students around the world to design, build, and drive the most energy-efficient car and has three annual events, in Asia, America, and Europe. On the website of the competition (Shell Eco Marathon 2015), additional records on energy efficiency are highlighted, e.g., an equivalent of 3,771 km with 1 l of fuel for a combustion engine-powered car in 2009 (5 years earlier, the record was 3,410 km). These figures, as impressive as they are irrelevant for current practical road transportation, show that there is plenty of potential left to increase energy efficiency, even beyond current imagination.

Some Not-So-Energy-Efficient Inventions and Practices
Here are some examples of low-energy-efficiency appliances and habits, most of which may soon astonish people that they ever existed in our times:
• Incandescent light bulbs
• Huge private cars such as SUVs with single occupancy
• Standby function on electrical appliances in households
• Patio heaters to warm open areas outside the house
• Melting snow in cities such as New York City to dispose of it
• Flaring of hydrocarbons in petroleum refineries
• Room temperature regulation by opening and closing a window, while keeping the heater switched on
• Water ring pumps to produce an industrial vacuum


In a typical household, appliances on standby use up 10 % of the total amount of electricity consumed. This is equivalent to 400–500 kWh annually, virtually wasted with no energy service rendered.

Barriers to Energy Efficiency
There is no doubt about the fact that energy efficiency offers cost-effective energy savings. However, the full potential has barely been tapped. There are several barriers, associated with financial limitations, uncertainty, and other factors. They can also be classified as structural and behavioral and related to availability (Granade et al. 2009). Though businesses and households are responsible for implementing most energy efficiency investments, it is up to their governments to provide the right framework conditions to catalyze investments in energy efficiency by offering tax incentives, education, or other facilitation. One reason why the potential for energy efficiency has not yet been realized to its full extent is the fact that high upfront investments are often necessary, whereas the savings accrue incrementally over the subsequent years (Granade et al. 2009). Also, the energy efficiency improvement potentials are highly fragmented (Granade et al. 2009). Apart from low awareness, the difficulty of measuring energy efficiency improvements in several areas contributes to slow progress. Barriers to energy efficiency are discussed in Granade et al. (2009), alongside the following potential actions to break down these barriers:
• Information and education
• Incentives and financing
• Codes and standards
Experience shows that consumers are particularly hostile toward funding energy efficiency measures, compared to businesses, even if the economics are reasonable. They apply hyperbolic discounting, meaning that immediate value is weighted significantly more heavily than future value. Barriers toward energy efficiency improvements in industrial settings are reviewed in Schleich (2009). Another interesting question is the durability of energy efficiency measures, which was studied in Climate Action Team (2015); the results are given in Table 4. The percentages in Table 4 reflect the portion of the first-year energy savings that remains throughout the full lifetime of the studied energy efficiency measures. A distinction was made between measures focused on saving electrical energy and measures to save fuel.

Table 4 Estimated persistence of energy efficiency measures (Source: Climate Action Team 2015)

Years following implementation (installation) | Remaining impact, electricity-related measures (%) | Remaining impact, fuel-related measures (%)
1  | 99.69 | 100
2  | 95.97 | 99.46
3  | 89.59 | 98.51
4  | 85.14 | 97.84
5  | 84.02 | 97.11
6  | 78.32 | 89.75
7  | 78.22 | 89.75
8  | 78.22 | 89.75
9  | 74.58 | 89.70
10 | 66.73 | 87.45

It can be seen that already after a few years,
considerable losses from the initial gains are encountered, which can be explained by various factors depending on the efficiency measure. "Hard-wired" energy efficiency initiatives will generally last longer than those based on behavioral changes (see also below). An example of how energy efficiency can stagnate if the economic and organizational conditions are not in favor of it, such as prevailing low electricity prices, is shown for the Swedish building industry in Nässén and Holmberg (2005) and Nässén et al. (2008). Aspects of financing energy efficiency, another prominent barrier, are outlined in Taylor et al. (2008), Lee et al. (2003), Jechoutek and Lamech (1995), and Clark (2001). Barriers to energy efficiency in general are reviewed in Sorrell et al. (2004).
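As a worked illustration of the persistence figures in Table 4, the short Python sketch below applies the tabulated electricity-related persistence factors to an assumed first-year saving; the 1,000 kWh figure is hypothetical and serves only to show how the cumulative saving erodes over time.

```python
# Apply the electricity-related persistence factors of Table 4 (in %) to an
# assumed first-year saving of 1,000 kWh (hypothetical value for illustration).
persistence_pct = [99.69, 95.97, 89.59, 85.14, 84.02,
                   78.32, 78.22, 78.22, 74.58, 66.73]
first_year_saving_kwh = 1000.0

yearly = [first_year_saving_kwh * p / 100.0 for p in persistence_pct]

for year, saving in enumerate(yearly, start=1):
    print(f"Year {year:2d}: {saving:7.1f} kWh still saved")

print(f"Cumulative saving over 10 years: {sum(yearly):.0f} kWh "
      f"(vs. {10 * first_year_saving_kwh:.0f} kWh if the saving persisted fully)")
```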

Levels of Energy Efficiency: From Process to Behavior
Energy efficiency can be achieved by various means. A product can be manufactured in a way that energy is used efficiently, either during its production or during its use. A process can be energy efficient by itself, or it can produce energy-efficient outcomes. The same applies for services. Here are some examples of more and less efficient products and processes:
• Office lighting by compact fluorescent lights/LED versus traditional incandescent light bulbs
• Modern compact passenger car versus older, mid-sized model
• Cement production by the dry process versus the wet process
• Air separation by pressure swing adsorption versus air separation by cryogenic air cooling and fractionated distillation
• Steel manufacture from scrap metal versus ore
It is desirable to have efficient equipment and processes in place. However, these can be operated in very inefficient ways. The magnitude of loss in energy efficiency by "bad" operation can be as large as the difference between competing processes and equipment items (Moore 2005). Some examples of these "bad" operation aspects are:
• Excessive speeding with a car, which strongly increases fuel consumption/km
• Neglected maintenance on insulation of window frames in a private home
• Keeping office lights on overnight when they are not needed
• Operating plant utilities at full capacity during idle production times
• Not repairing leakages on compressed air pipelines

In contrast to the installation of new, more energy-efficient equipment, or the design of a more energy-efficient process, operation thereof requires constant attention (compare also the table above, showing the stunning erosion of energy efficiency gains over a few years' time). By continuously working on a mindset toward energy efficiency, for instance, by having employees turn off idle equipment and by fostering continuous improvement, even small, individual savings can add up. In Moore (2005), some aspects of why operators in control rooms do not always give utmost importance to energy efficiency are listed:
• Lack of urgency, little incentive to value long-term performance versus the short term
• Preference for steady-state operation versus short-term optimization efforts
• Comfort, trading economy against less effort
• Individual work history and anecdotes making risk perception highly personal
• Different levels of skills and knowledge
• Instinct to preserve assets rather than maximize their utilization
• Little effect of administrative control measures alone
• Focus drift due to distraction
The most economic mode of operation of a plant in the process industries, for instance, is not always the most convenient one (Moore 2005). This will lead operators to at least partly refrain from energy efficiency optimization. Such "human factors" can be improved by considering the usability of processes and equipment. Whereas the usefulness of a man-made tool or installation is related to user satisfaction, the term usability denotes the ease with which it can be deployed. In general, usability can be defined as a measure of the ease with which a system can be learned or used; its safety, effectiveness, and efficiency; and the attitude of its users toward it (Jordan et al. 1996). In Nachreiner et al. (2006) and Nishitani et al. (2000), two examples of the successful application of usability and usability engineering in process control systems and industrial plants are given.

Energy Efficiency Investments
As energy-efficient technologies often have higher initial investment costs than older, less advanced ones, economic considerations will determine the extent to which energy efficiency is considered for new investments and for retrofits alike. The TCO (total cost of ownership) approach will, in many cases, clearly favor energy-efficient, but typically more expensive, installations. Unless it is supported by a sound business case of yearly energy bill savings, investing in "the right technology" will be easier during the construction of a new building or factory than when one applies for funds, corporate and federal alike, later on. In industry, one can distinguish between:
• Pure capacity investments
• Pure energy efficiency investments
• Hybrid capacity and energy efficiency investments
Common appraisal methods for investment projects in industry are:
• Payback period
• Net present value (NPV)
• Internal rate of return (IRR): the discount rate at which NPV = 0
• Strategic fit

Approval can be based on an evaluation of several of these parameters, on a ranking, or on fulfilling a certain cutoff criterion. To test the validity of the profitability calculation of such a project, a sensitivity analysis can be carried out by varying the most important parameters. Monte Carlo simulation enhances the quality of such analyses (Lackner 2007). Real options (Rugman and Li 2005) can also be used. While debottlenecking investments, which increase production capacities, usually have short payback periods and high IRRs, often exceeding 50 %, energy efficiency investments sometimes cannot make it over the 10 % hurdle. If the funds for investment projects are limited, naturally those with higher IRR will be preferred. Energy efficiency investments can, however, be carried out at a lower IRR than a corporation's normal hurdle rate, because the associated risk is generally lower than for a capacity investment (energy savings can be predicted more reliably).
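The appraisal methods listed above can be illustrated with a small Python sketch. The cash flows (an upfront investment of 100,000 followed by yearly energy cost savings of 25,000 over 8 years) and the 10 % discount rate are hypothetical values, and the IRR is found by simple bisection rather than a financial library.

```python
# Simple appraisal of a hypothetical energy efficiency investment:
# -100,000 upfront, +25,000 energy cost savings per year for 8 years.
cash_flows = [-100_000] + [25_000] * 8  # year 0, then years 1..8
discount_rate = 0.10                    # assumed hurdle/discount rate

def npv(rate, flows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (NPV changes sign between lo and hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(flows):
    """First year in which the cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for year, cf in enumerate(flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

print(f"NPV at {discount_rate:.0%}: {npv(discount_rate, cash_flows):,.0f}")
print(f"IRR: {irr(cash_flows):.1%}")
print(f"Simple payback: {payback_period(cash_flows)} years")
```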



Often, when "selling" an energy efficiency project in a corporation, one is better off avoiding the term "energy" and describing potential projects as "efficiency" or "productivity" improvement projects when presenting them to decision makers. Energy has a different importance for various sectors. Those industries which are energy intensive will suffer more from high and volatile energy prices than those incurring only a small percentage of their costs from energy bills. It is estimated that out of the total global economic activity (according to the International Monetary Fund (IMF), US$77.609 trillion (GDP) or US$106.998 trillion (purchasing power parity, PPP) for 2014), 40 % comes from companies where energy plays a strategic role (McKinsey & Company, Inc. 2009). The sectors concerned are transportation, building and construction, energy-intensive industries, engineering, IT (information technology), and the energy industry. For companies in these sectors, energy can have a direct or indirect effect, i.e., on their own production costs or on the acceptance of their products. On the other hand, there are industries, such as education, retail, insurance, and healthcare, which do not depend as much on energy competitiveness.

Introducing Energy Efficiency Programs
It is estimated that most organizations have a potential for 10–20 % energy efficiency improvement, which will materialize in the bottom line. In order to improve energy efficiency in a company or another larger institution, an energy survey or an energy audit can be a first step to map out the saving potential. More information on such energy audits can be found in Sustainable Energy Ireland (SEI) (2015) and Carbon Trust (2015). They consist of data collection ("hard facts" such as electricity consumption, as well as interviews on common practices) and internal and external benchmarking. There is currently a lack of qualified energy auditing staff (Yang 2010). Checklists can help to uncover inefficiencies in processes and equipment. In the EU, Directive 2012/27/EU of 25 October 2012 on energy efficiency has introduced compulsory energy audits for large corporations in an attempt to foster energy consumption reduction. Using off-peak electricity is an option to shrink the electricity bill. How to manage energy efficiency in a corporation is described in Russell (2009). The extent to which agreements foster energy efficiency is analyzed in Rietbergen et al. (2002).

Combustion
Combustion plays a critical role in energy efficiency considerations, as approx. 80 % of global primary energy is produced by combustion processes. Combustion processes also have the single largest human influence on climate, accounting for 80 % of anthropogenic greenhouse gas emissions (Quadrelli and Peterson 2007). Fuels can be fossil or renewable (biomass), and they can be gaseous, liquid, or solid. Combustion is used in power plants for electricity and heat production, in transportation, and in other areas (see sections below for details). Figure 7 shows the global trend in CO2 emissions over the last 140 years (source: Quadrelli and Peterson 2007). As can be seen from Fig. 7, the increase in anthropogenic, combustion-derived CO2 emissions has been almost exponential. For the impact on climate change, not only the efficiency of a combustion process itself but also the emissions generated during fuel production and transportation have to be considered. For instance, for every kg of mined coal, 1.2–16.5 g of the greenhouse gas methane (GWP = 21) are emitted (Office of Energy Efficiency, Natural Resources Canada 2002).
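The figures quoted above can be combined into a small back-of-the-envelope calculation: the Python sketch below converts the coal-mine methane into CO2-equivalent emissions using the GWP of 21 given in the text. The 1 t of coal is an arbitrary illustrative quantity.

```python
# CO2-equivalent of coal-mine methane, using the figures quoted in the text:
# 1.2-16.5 g CH4 per kg of mined coal and a GWP of 21 for methane.
gwp_ch4 = 21
ch4_per_kg_coal = (1.2e-3, 16.5e-3)   # kg CH4 per kg coal (low, high)
coal_mass_kg = 1000.0                 # illustrative quantity: 1 t of mined coal

for label, ch4 in zip(("low", "high"), ch4_per_kg_coal):
    co2e_kg = coal_mass_kg * ch4 * gwp_ch4
    print(f"{label} estimate: {co2e_kg:.1f} kg CO2-equivalent per tonne of coal mined")
```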


Fig. 7 Trend in CO2 emissions from fossil fuel combustion, 1870–2010, in gigatons of CO2 (Source: Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, US Department of Energy, Oak Ridge, TN, USA; reprinted with permission from International Energy Agency (2014))

Combustion can be carried out in furnaces (see section "Power Plants and Electricity Production" below) and boilers, in internal and external combustion engines, and in gas turbines (Pilavachi 2000; Boyce 2006; Farzaneh-Gord and Deymi-Dashtebayaz 2009). Pyrolysis and gasification are closely related thermochemical conversion processes. They can be used to obtain gaseous or liquid fuels from biomass or coal in conjunction with Fischer-Tropsch (van Vliet et al. 2009; Prins et al. 2004) or other synthesis processes. Due to the removal of moisture and ash and the effect of deoxygenation, liquid hydrocarbons derived from biomass have roughly three times the energy density of the raw biomass and are hence more advantageous for transportation and storage (Demirbas et al. 2000). See also chapter "▶ Integrated Gasification Combined Cycle (IGCC)" in this handbook. Heat recovery from flue gases is a particularly effective energy efficiency measure. For steam systems, for instance, 1 % of fuel can be saved for every 25 °C reduction in exhaust gas temperature (Galitsky 2008). In Quadrelli and Peterson (2007), recent trends in CO2 emissions from fuel combustion are reviewed. For combustion in general, see Lackner et al. (2010) and Lackner et al. (2013).
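The rule of thumb quoted above, roughly 1 % fuel saving per 25 °C reduction in exhaust gas temperature, can be turned into a quick estimator. The sketch below is a linear approximation only, and the example temperatures are hypothetical.

```python
# Rough fuel-saving estimate from flue gas heat recovery, using the
# rule of thumb quoted in the text: ~1 % fuel saved per 25 degC reduction.
def fuel_saving_percent(t_before_c, t_after_c, pct_per_25c=1.0):
    """Linear rule-of-thumb estimate; valid only for moderate temperature reductions."""
    delta_t = t_before_c - t_after_c
    return max(delta_t, 0.0) / 25.0 * pct_per_25c

# Hypothetical example: cooling the exhaust of a steam boiler from 230 degC to 130 degC.
print(f"Estimated fuel saving: {fuel_saving_percent(230, 130):.1f} %")
```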

Power Plants and Electricity Production
Electricity makes up 12 % of humankind's total energy use, a fraction that is expected to rise to 34 % by 2025 (Ibrahim et al. 2008). Energy efficiency in electricity production can be defined as the energy content of the produced electricity divided by the primary energy input, with reference to the lower heating value (Graus and Worrell 2009). The lower heating value (LHV, or net calorific value) assumes that the water formed in combustion remains as vapor. In cogeneration, the overall efficiency can be increased, because the (by-product and formerly wasted) heat is used. Cogeneration is also dubbed CHP (combined heat and power). Power production is carried out by (large) public power and CHP plants and by so-called autoproducers. The latter are users such as chemical factories which produce their own power and heat. In the EU, autoproducers account for 8 % of the total power generation (Graus and Worrell 2009). Electricity-only production plants have an efficiency of around 30–40 %, whereas combined heat and power (CHP, cogeneration) yields up to 90 % overall (Office of Energy Efficiency, Natural Resources Canada 2002). For the installed base of CHP, see CHP Installation Database (2015).
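The benefit of cogeneration can be made concrete with a small fuel-balance comparison. The assumed efficiencies below (38 % electrical for a condensing power plant, 90 % for a heat-only boiler, and 35 % electrical plus 50 % thermal for the CHP unit) are illustrative values, not figures for a specific plant.

```python
# Primary fuel needed to supply 1 MWh of electricity and 1 MWh of heat:
# separate production (power plant + boiler) vs. combined heat and power (CHP).
elec_demand_mwh, heat_demand_mwh = 1.0, 1.0

# Assumed (illustrative) efficiencies
eta_power_plant = 0.38                 # electricity-only plant, based on fuel LHV
eta_boiler = 0.90                      # heat-only boiler
eta_chp_el, eta_chp_th = 0.35, 0.50    # CHP: electrical and thermal efficiency

fuel_separate = elec_demand_mwh / eta_power_plant + heat_demand_mwh / eta_boiler
# CHP sized on the electricity demand; its co-produced heat covers the heat demand here.
fuel_chp = elec_demand_mwh / eta_chp_el

print(f"Separate production: {fuel_separate:.2f} MWh fuel")
print(f"CHP:                 {fuel_chp:.2f} MWh fuel")
print(f"Fuel saving:         {(1 - fuel_chp / fuel_separate):.0%}")
```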



In the EU, the energy efficiencies for coal-fired power production range from 28 % (Slovak Republic) to 43 % (Denmark). On a global scale, the spread for oil-fired power plants ranges from an efficiency of 23 % for the Czech Republic to 46 % for Japan (Graus and Worrell 2009). The efficiency of a given power plant is dependent on its age: as one would intuitively expect, the younger a plant, the higher its energy efficiency was found to be (Graus and Worrell 2009). These findings are in line with another study (Phylipsen et al. 2002), which revealed that the least energy-efficient plants are not always located in developing countries. Apart from the age of a plant, its fuel mix, size, and load account for the big differences in efficiencies mentioned above (see also section "Cross-Cutting Technologies" below). State-of-the-art power plants based on coal and gas have energy efficiencies of 46 % and 60 %, respectively (Graus and Worrell 2009). It is estimated that the replacement of inefficient coal-fired power plants by more efficient coal- or gas-fired ones, particularly in China and in the USA, could reduce global CO2 emissions by 5 % (IEA 2009). In Canada in 1988, according to the Canadian Industry Program for Energy Conservation (CIPEC), the average CO2 emissions in electricity production were 0.22 t/MWh, with a spread from 0.01 in Quebec to 0.91 in Alberta (Office of Energy Efficiency, Natural Resources Canada 2002). Demand side management (DSM) can help to level peak electricity demand (Loughran and Kulick 2004). This will become even more important as more renewable energy plants (wind, solar) are installed, where electricity production and consumption rarely coincide. Energy is increasingly being produced from waste. Methane can be extracted from landfills for power production in gas engines. Waste incineration uses the energy content of waste and converts it to a low-volume, inert residue. While previously the focus of waste incineration plants was on low-emission combustion to get rid of the waste, today the energy efficiency of these plants has become important, too. In Bujak (2009), an incineration plant for medical waste is presented. It is equipped with a heat recovery system and can extract 660–800 kW of usable energy from 100 kg/h of medical waste with an energy efficiency between 47 % and 62 %. New and innovative pyrolysis and gasification technologies for energy-efficient waste incineration are presented in Malkov (2004). In Dijkgraaf and Vollebergh (2004), waste incineration is compared to landfilling, and in Cherubini et al. (2009), a life cycle assessment (LCA) (Guineé 2002) of waste management strategies is performed.

Energy Transmission and Distribution
Today, electricity production is centralized, with large power plants being coupled to a complex distribution network. Energy transmission and distribution cannot be performed in a totally loss-free way (leaving aside superconductivity, where the electrical resistance is exactly zero). In Europe, transmission and distribution losses typically amount to 4–10 % and hence reduce the overall efficiency of power supply by several percentage points (Graus and Worrell 2009). For the USA, the EIA estimates that national electricity transmission and distribution losses are approx. 6 % (FAQ and US Energy Information Administration). In India, losses are estimated at 32 %, which is significantly above the global average of 15 % (Joshi and Pathak 2014). Transporting the fuel to end users is more cost effective yet also consumes substantial amounts of energy (see sections below). Natural gas, for instance, is pumped across long distances, because placing a gas power station next to the gas field and transmitting the electricity and heat would result in a considerably lower overall efficiency than compressing and moving the gas through pipelines. Globally, Russia is the largest producer and transporter of natural gas. Methane emissions from the Russian natural gas long-distance network are estimated at approximately 0.6 % of the natural gas delivered (Lechtenböhmer et al. 2007).


Fig. 8 Average daily power consumption in France, normalized, over the 24 h of the day, showing current, smoothed, and reduced consumption curves (Reprinted with permission from Elsevier from Ibrahim et al. (2008)). Peak demand occurs in the morning and afternoon, with the lowest demand in the early morning hours

Energy Storage
The need for more and cleaner energy leads to an increase in distributed generation (DG) and renewable energy sources (RES) (Hadjipaschalis et al. 2009). Since sources like wind power are neither as reliable nor as easy to adjust to demand fluctuations as conventional power plants, they can be coupled with energy storage systems. Power demand by (end) users fluctuates strongly; typically, the lowest consumption during a 24-h period is nearly half the peak demand (compare Fig. 8). Today, with a mainly centralized electricity production scheme, there is only a small storage capacity available, amounting to approx. 90 GW or 2.6 % of the total generation capacity of 3,400 GW (Ibrahim et al. 2008). With DG and RES on the increase, it is expected that energy storage, more specifically electrical energy storage, will gain significance on a local (small) and regional (large) level. Energy can be stored in various ways, for instance, as:
• Potential energy: pumped hydro storage (PHS, i.e., pumping water up into a reservoir so that it can later drive a turbine) or compressed air energy storage systems (CAES, i.e., compressing gas in a cavern or cylinder)
• Kinetic energy: accelerating a flywheel
• Chemical energy: batteries (Rydh and Sandén 2005), fuel cells (H2)
• Thermal energy: use of sensible or latent heat (Ibrahim et al. 2008), e.g., of NaOH
Lead batteries are well known for the storage of energy; however, they are heavy and ill-suited to high cycling rates. Rydh and Sandén (2005) discuss the energy efficiency of batteries. In Ibrahim et al. (2008) and Hadjipaschalis et al. (2009), an overview of current and future energy storage technologies is given. They differ in their maturity, target use (e.g., portable or fixed, long- or short-term storage), specific power (power density) [W/kg], specific energy (energy density) [Wh/kg], lifetime (number of cycles), self-discharge rate, and costs per installed kWh. Hydrogen storage options are reviewed in Hirscher and Hirose (2010). In Ibrahim et al. (2008), the energy efficiencies of various energy storage technologies are compared. An interesting option for electrical energy storage is power to gas (P2G, PtG) (Gahleitner 2013). Energy storage (and conversion) is always associated with losses.
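Since every storage step involves losses, a simple round-trip calculation is a useful first check when comparing storage technologies. The charge, discharge, and self-discharge figures in the sketch below are hypothetical placeholders, not data for any specific technology.

```python
# Round-trip efficiency of an electrical energy storage system (illustrative numbers).
eta_charge = 0.90                # assumed efficiency when charging
eta_discharge = 0.92             # assumed efficiency when discharging
self_discharge_per_day = 0.002   # assumed fraction of stored energy lost per day
storage_days = 7                 # how long the energy is held before use

energy_in_kwh = 100.0
stored = energy_in_kwh * eta_charge
stored *= (1 - self_discharge_per_day) ** storage_days
energy_out_kwh = stored * eta_discharge

print(f"Energy recovered: {energy_out_kwh:.1f} kWh of {energy_in_kwh:.0f} kWh "
      f"({energy_out_kwh / energy_in_kwh:.0%} round-trip efficiency)")
```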



Life Cycle Assessment (LCA)
Life cycle assessment (LCA) (Guineé 2002), also called life cycle analysis, is a holistic view on a product or service. As the name implies, all steps from its raw material production, manufacturing, transportation, distribution, use, and disposal are considered to determine the overall effect that a given product has on the environment. LCA is rooted in the ISO 14001 environmental management system standard, more specifically in ISO 14040, 14041, 14042, and 14043 (ISO 2015). The ISO standard for energy management is ISO 50001. Variants of life cycle analysis are:
• Cradle-to-grave analysis (full life span)
• Cradle-to-cradle analysis (including recycling)
• Cradle-to-gate analysis (partial process)
• Gate-to-gate analysis (one step)
• Well-to-wheel analysis (used in the automotive industry; see below)
• Wire-to-water efficiency (used for pumps; see later)

Eco-balance is a synonymous expression for LCA. An illustrative example of the value of LCA is the use of plastic materials for insulation purposes: within 4 months of use, the energy savings can equal the energy needed for production, with a service life of over 50 years (The International Energy Association in Collaboration with CEFIC 2007). In transportation, LCA is typically done as well-to-wheel (WtW) analysis, which is an overall fuel efficiency calculation (there are also standard LCA studies for cars, ranging from production to use and disposal). WtW efficiency, detailed in Braungart et al. (2007), van Vliet et al. (2009), Svensson et al. (2007), and Hekkert et al. (2005), is a similar concept to life cycle energy efficiency (Malça and Freire 2006). Both concepts can be understood as overall efficiencies of a process chain, calculated as the product of the individual efficiencies. WtW efficiencies allow meaningful, fair comparisons between different technologies, for instance, internal combustion engine (ICE) versus fuel cell (FC) vehicle technologies. Figure 9, taken with permission from Ellinger et al. (2001), shows the efficiency chain for different automotive propulsion systems under hot start conditions. In Fig. 9, the WtW efficiency $\eta$ is calculated as the product of the conversion efficiency $\eta_c$, the distribution efficiency $\eta_t$, and the propulsion system efficiency $\eta_p$, as shown in Eq. 6:

$\eta = \eta_c \cdot \eta_t \cdot \eta_p$   (6)

The conversion efficiency $\eta_c$ for gasoline and diesel production in a refinery is quoted as 88 % in Heitland et al. (1990) and as 63 % for their production from methanol according to the Lurgi process (20 years ago), and the distribution efficiency $\eta_t$ as 97–98 % in The International Energy Association in Collaboration with CEFIC (2007). In Fig. 9, it can be seen that the CNG-SOFC (compressed natural gas-solid oxide fuel cell) combination achieved the best overall efficiency of around 35 %, with the best internal combustion engine performance being 29 % for diesel from crude oil (The International Energy Association in Collaboration with CEFIC 2007). The eco-balance of biodiesel, for instance, has to consider the consumption of fossil fuels and materials for its production, e.g., the use of lubrication oil. Another important term is that of the energy path.
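Equation 6 can be evaluated directly. The sketch below multiplies the stage efficiencies quoted above (88 % refinery conversion, 97 % distribution) with an assumed propulsion system efficiency of 25 % for a gasoline ICE; the latter is a hypothetical value used only to show the calculation.

```python
# Well-to-wheel efficiency as the product of stage efficiencies (Eq. 6).
eta_conversion = 0.88    # refinery conversion efficiency quoted in the text
eta_distribution = 0.97  # distribution efficiency quoted in the text
eta_propulsion = 0.25    # assumed tank-to-wheel efficiency of a gasoline ICE

eta_wtw = eta_conversion * eta_distribution * eta_propulsion
print(f"Well-to-wheel efficiency: {eta_wtw:.1%}")  # roughly 21 %
```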



Fig. 9 Well-to-wheel efficiencies under hot starting conditions, broken down by stage (primary fuel, conversion, distribution, propulsion system) for the pathways Petrol/Diesel/ICE, Petrol/Gasoline/ICE, NG/Diesel/ICE, NG/Gasoline/ICE, NG/CNG/ICE, NG/NH3/AFC, NG/CNG/SOFC, and NG/CNG/PEMFC (Reprinted with permission from the Society of Automotive Engineers (SAE) from Ellinger et al. (2001)); ICE internal combustion engine, NG natural gas, CNG compressed natural gas, AFC alkaline fuel cell, SOFC solid oxide fuel cell, PEMFC polymer electrolyte membrane fuel cell

The production process will strongly impact energy consumption. Methanol, for instance, can be produced via a path starting from sugarcane (1st-generation biofuel), from lignocellulose (2nd-generation biofuel), or from natural gas (traditional route), which will yield different eco-balances. An interesting website on LCA is run by the US Environmental Protection Agency (EPA) (http://www.epa.gov/nrmrl/std/lca/lca.html 2015). A concept related to LCA is embodied energy (Venkatarama Reddy and Jagadish 2003). It is often used for buildings (see later). Also in other industries, significant amounts of energy are "stored" in the final product. In the case of the petrochemical and chemical industries, which consume 30 % of global industrial energy, more than half of the energy is locked up in the final products (The International Energy Association in Collaboration with CEFIC 2007) and can be recaptured at the end of their lifetime. The total life cycle of a product can be assessed not only with regard to energy use and environmental aspects but also from an economic point of view, i.e., in terms of costs. In this case, one speaks of life cycle costs (LCC) or total cost of ownership (TCO). Recycling is an important aspect of life cycle assessment. The primary energy demand for "new" materials is often considerably higher than that needed to recycle them from waste. For instance, if aluminum cans are recycled, the energy consumption will only be 5 % of the energy needed to make them from virgin bauxite ore. Scrap metal, glass, paper, and plastics should be recycled to make best use of their "energy content," as primary production tends to consume more energy than secondary production. In the case of plastics, "thermal recycling" is an advantageous final use if other types of recycling are not feasible. The 3Rs (reduction, reuse, recycling) are approaches to limit the quantity of primary raw material demand, hence contributing to sustainability.

Total Cost of Ownership (TCO)
The total cost of ownership (TCO) concept acknowledges the fact that the use of any equipment has two types of costs associated with it:
• Initial investment costs
• Running costs over the entire useful lifetime (energy, maintenance, disposal, etc.)
For industrial pumps, for instance, which are typically in service for 15–20 years, the initial investment cost is often less than 5 % of the total incurred costs (Tutterow et al. 2002). For a majority of industrial assets and facilities, and likewise for many equipment items in private homes, the lifetime energy cost will dominate the life cycle costs. More information on TCO can be found in Braun and Leiber (2007), US Department of Energy (2005), and Sorrell et al. (2004), with the latter two providing ample coverage of the economic evaluation of energy efficiency.
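A minimal TCO comparison along these lines is sketched below. All figures (purchase prices, power demands, electricity price, operating hours, and service life) are hypothetical and serve only to show why a cheaper but less efficient pump can lose on lifetime costs.

```python
# Total cost of ownership of two hypothetical pumps over a 15-year service life.
electricity_price = 0.10          # currency units per kWh (assumed)
operating_hours_per_year = 6000
service_life_years = 15

pumps = {
    # name: (purchase price, electrical power demand in kW at the duty point)
    "standard pump":        (8_000, 30.0),
    "high-efficiency pump": (10_000, 27.0),
}

for name, (capex, power_kw) in pumps.items():
    energy_cost = power_kw * operating_hours_per_year * service_life_years * electricity_price
    tco = capex + energy_cost
    print(f"{name:22s} capex {capex:7,.0f}  energy {energy_cost:9,.0f}  TCO {tco:9,.0f}")
```

As in the pump example quoted above, the purchase price ends up being only a few percent of the lifetime cost under these assumptions.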

Energy Efficiency in Various Sectors
In the following sections, energy efficiency in various areas is discussed. As shown in Fig. 1 and Table 1, the major consumers of energy are end users, power plants, transportation, industry, buildings, and others, each of which shows potential for cost-effective energy efficiency improvements.

Agriculture and Food
Agricultural activities make a strong contribution to anthropogenic climate change. Greenhouse gas emissions from this sector account for 22 % of global total emissions, which is similar to the contribution of industry and greater than that of transportation. Livestock production (including transport of livestock and its feed) accounts for nearly 80 % of the sector's emissions (McMichael et al. 2007). The two strong greenhouse gases (GHG) methane and nitrous oxide, which are closely linked with livestock production, contribute much more to this sector's warming effect than does carbon dioxide (McMichael et al. 2007). Emission factors of CO2 and CH4 for livestock are estimated at 36–3,960 and 0.01–120 kg per head and year, respectively (Office of Energy Efficiency, Natural Resources Canada 2002). Agricultural operations put strain on the global climate not only through CH4 emissions from cattle but also through energy consumption, which is concentrated in the areas of irrigation, process heat applications, and refrigeration. Irrigation pumps, refrigerated warehouses, greenhouses, and postharvest processing offer various potentials for energy efficiency improvements. A nice example is provided by some Dutch greenhouses, which are heated by gas engines, the CO2 from which is fed into the greenhouses to fertilize the plants and boost their growth (Lugt et al. 1996). In Oude Lansink and Bezlepkin (2003), different heating methods for greenhouses are compared. In Ramírez et al. (2006a), the energy efficiency of the Dutch food industry is reviewed, and in Ramírez et al. (2006b) that of the European dairy industry. Additional case studies of recent improvements in energy efficiency in the agricultural industry are discussed in Swanton et al. (1996). The energy use for the production of various agrichemicals, such as herbicides, growth regulators, and fungicides, ranges from 120 to 550 MJ/kg of active ingredient, taking production, packaging, and transportation into account (Saunders et al. 2006). The application rate of these chemicals further determines the total energy consumption per kg of agricultural product. Food miles are a very simplistic concept relating the distance food travels to its impact on the environment (Saunders et al. 2006). While a lower number of "food miles" will generally render a product more energy efficient, because transportation distances are shorter, a food commodity that is produced with high energy efficiency, e.g., with low use of fertilizers, and that has a long mileage to the consumer can still have a lower environmental impact than foodstuff manufactured close to the end customer in an otherwise inefficient way. This simple example shows that energy efficiency aspects are closely
interwoven and often difficult to compare, not only in the agricultural industry. Globalization affects the food industry as much as it does high-tech goods. Ecuador is the world’s largest banana exporter. The carbon footprint of Ecuadorian export bananas was found to range from 0.45 to 1.04 kg CO2-equivalent/kg banana (Iriarte et al. 2014). In Wang (2008), energy efficiency in the food industry is treated in detail.

Transportation and Logistics
Our world has become global, so that people and goods are transported between countries and continents on a large scale. The IEA predicts significant improvements in energy efficiency in transportation; however, these will be more than offset by increased travel (IEA 2009) and further globalization. Fuel efficiency in transportation ranges from a few megajoules per kilometer and passenger for a bicycle to several hundred MJ for a helicopter. Approx. 1/3 of the energy consumption in transportation is used for freight movement (Sorrell et al. 2009), which accounts for 8 % of total anthropogenic CO2 emissions. Most of these emissions stem from trucks (heavy goods vehicles, HGV), which account for most freight activity in most countries, e.g., 68 % of all tonne-kilometers in the UK (Sorrell et al. 2009). Ample road networks make cargo distribution by HGV convenient and efficient in terms of time and costs. Externalities are the costs or benefits that affect parties who did not choose to incur that cost or benefit (Buchanan and Stubblebine 1962). An example of such a negative externality is air pollution or climate change caused by transportation: the costs are borne neither by car producers nor by motorists. For a discussion of freight and transportation externalities, see Ranaiefar and Amelia (2011). For details on transportation and climate change, see the subsections below and also chapter "▶ Energy Efficient Design of Future Transportation Systems" in this handbook.
Road Transportation and Internal Combustion Engines
Although rail and ship transportation are more efficient and environmentally benign than road transportation, trucking is still heavily used to move goods and people, not only in weakly developed areas, for reasons of flexibility, cost, and timeliness. Most vehicles on the road today are powered by internal combustion engines (ICEs). Engine and propulsion system selection for cars is based on various criteria such as driving performance, range, and safety. ICEs burn gasoline and diesel, the latter primarily in trucks. In some countries, cars and trucks with natural gas-, ethanol-, and hydrogen-propelled engines constitute a fleet fraction next to those with alternative systems such as electrical batteries or air buffer tanks. In Brazil, ethanol fuel has become popular. It is mostly produced from sugarcane, whereas the USA uses corn as feedstock. For biodiesel, the Americas use soybean, whereas Europe mainly deploys rapeseed (1st-generation biofuels). Hydrogen can be obtained by water electrolysis. Internal combustion engines have become more efficient over the last decades. The largest losses in gasoline engines are caused by throttling the engine (Ellinger et al. 2001). Taylor (2008) estimates that over the next decade, an efficiency improvement of another 6–15 % is feasible. Various optimizations such as direct fuel injection, variable valve timing, supercharging, downsizing, exhaust gas recirculation, onboard fuel reforming, and powertrain improvements, e.g., on the gearbox, are being tested and implemented (Ellinger et al. 2001). The reuse of losses also offers significant potential, for instance, recuperative braking or the extraction of heat from exhaust gases, as is state of the art in power plants (economizers). Stationary engines, such as large gas engines for power backup or landfill gas use, can be operated in steady mode at optimum efficiency.
Combustion engines in mobile machines have to perform well over a wide range of load, which yields poorer overall efficiency.



A novel, promising combustion technology for engines is HCCI (homogeneous charge compression ignition) (Zhao 2007). HCCI is a hybrid between an auto-ignited diesel engine and a spark-ignited Otto engine in that it deploys auto-ignition of a homogeneous fuel-air mixture. Alternative ignition systems (Lackner 2009) such as laser ignition are also expected to improve fuel economy. For a discussion on internal combustion engines for future cars, see Lackner et al. (2005).
Passenger Cars
It is estimated that by 2030, 60 % of all new cars sold will be hybrids, plug-in hybrids, and electric vehicles, as opposed to 1 % today (IEA 2009). Hybrid cars combine an electric motor and an internal combustion engine. Dual-fuel concepts (natural gas and diesel, for instance) are also feasible. The CO2 intensity of the passenger car fleet in 2030 is estimated to be 90 g of CO2/km as a worldwide average, compared to 205 g/km in 2007. In OECD countries, it should reach 80 g, in the EU 70 g, and in India and China 110 and 90 g, respectively, in 2030, the latter two down from 225 and 235 g, respectively, in 2007 (IEA 2009). On the other hand, a large increase in the global number of cars is anticipated, particularly in developing nations such as China and India. Hybrids use regenerative braking to recapture energy that would otherwise dissipate. The effect on fuel economy of such cars is particularly pronounced in stop-and-go city traffic. Fuel economy of private cars is governed by the following aspects:
• Technology advances of the car, e.g., weight reduction or a better engine
• Driving habits (use of air-conditioning, cruising speed, payload in the car, etc.)
• Maintenance (no clogged air filters, correct tire pressure, etc.)
• Weight (lightweight construction materials can save fuel over the entire lifetime)

Figure 10 shows the breakdown of passenger car energy consumption (Holmberg et al. 2012). In passenger cars, one-third of the fuel energy is used to overcome friction in the engine, transmission, tires, and brakes. The direct frictional losses, without braking friction, are 28 % of the fuel energy. In total, only 21.5 % of the fuel energy is used to move the car. Potential solutions to reduce friction in passenger cars include the use of advanced coatings and surface texturing technology on engine and transmission components, new low-viscosity and low-shear lubricants and additives, and tire designs with reduced rolling friction (Holmberg et al. 2012). There is plenty of information available for consumers who want to pick an energy-efficient car, e.g., one website run by the US EPA (http://www.fueleconomy.gov/ 2015).

Fig. 10 Energy losses in passenger cars (Reproduced with permission from Elsevier from Holmberg et al. (2012))


Table 5 Energy consumption in different transportation modes (Source: http://www.ics-shipping.org/publications/ 2015). dwt is the deadweight tonnage (also known as deadweight, DW, or dwt), a measure of how much weight a ship can safely carry; it is the sum of the weights of cargo, fuel, ballast water, crew, etc.

Mode | Comment                     | Energy consumption
Air  | B727-200 (1,200 km flight)  | 4.07 kWh/(ton*km)
Road | Medium-sized truck          | 0.49 kWh/(ton*km)
Sea  | Cargo ship, 2,000–8,000 dwt | 0.08 kWh/(ton*km)
Sea  | Cargo ship, >8,000 dwt      | 0.06 kWh/(ton*km)

In California, partial zero emission vehicles (PZEVs) were introduced to satisfy part of the state's zero emission vehicle (ZEV) program (Collantes and Sperling 2008). In Johansson and Åhman (2002), options for carbon-neutral passenger transport are reviewed. Thomas (2009) compares fuel cell and battery electric vehicles. The primary energy efficiencies of alternative powertrains in vehicles are discussed in Åhman (2001). In Ellinger et al. (2001), the energy efficiency of internal combustion engines and fuel cells for automotive use with different fuels is assessed. It is concluded there that fuel cells have an advantage during hot start conditions, but suffer from efficiency losses during cold starts (Ellinger et al. 2001). Although the energy efficiency of a fuel cell-powered car is not the best, the environmental performance of a vehicle converting hydrogen from solar generation in a low-noise, virtually emission-free fuel cell is outstanding. It is expected that the fraction of fuel cell cars will increase over the next decade, with an accompanying growth of the necessary infrastructure.
Ships
Ninety percent of the world's trade is carried by the international shipping industry, supported by some 50,000 merchant ships (http://www.ics-shipping.org/publications/ 2015). Over the last four decades, total seaborne trade is estimated to have quadrupled, from just over 8,000 billion tonne-miles in 1968 to over 32,000 billion tonne-miles in 2008 (http://www.ics-shipping.org/publications/ 2015). In 2011, the figures were 42.8 billion tonne-miles or 8.7 billion tonnes according to UNCTAD (United Nations Conference on Trade and Development) and the ITF (International Transport Forum 2013). Seaborne shipping is one of the most energy-efficient means of transportation, especially for large, bulky goods. Table 5 gives a comparison of the energy efficiency of different transportation modes, taken from a study by the Swedish Network for Transport and the Environment (http://www.ics-shipping.org/publications/ 2015). It has to be noted that Table 5 is slightly biased in favor of sea transportation, as the aircraft mentioned is an outdated one used on a short-haul flight. Ships can be driven by different technologies (Schneekluth and Bertram 1998), with diesel engines being most common. The resistance of the ship's hull, the design of the propeller, and the tonnage are important factors for its energy efficiency as well. The impact of shipping on the atmosphere and on the climate is discussed in Eyring et al. (2010). The auxiliary powering of ships by kitelike devices is discussed in Burgin and Wilson (1985) and Kim and Park (2010). Spinning vertical rotors installed on a ship to convert wind power into thrust based on the Magnus effect, so-called Flettner rotors, are another option to increase energy efficiency. Microbubbles as a means of reducing skin friction on ships are studied in Kodama et al. (2000). Different propulsion systems for LNG carriers are discussed in Chang et al. (2008). LNG (liquefied natural gas) is expected to gain increasing importance.
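Using the specific energy figures from Table 5, the sketch below estimates the energy needed to move a given shipment by the different modes. The 20 t, 5,000 km shipment is a hypothetical example chosen only for illustration.

```python
# Energy to move a hypothetical 20 t shipment over 5,000 km, using the
# specific consumption figures of Table 5 (kWh per ton-km).
specific_consumption = {
    "air (B727-200)":            4.07,
    "road (medium-sized truck)": 0.49,
    "sea (2,000-8,000 dwt)":     0.08,
    "sea (>8,000 dwt)":          0.06,
}
cargo_t, distance_km = 20.0, 5000.0

for mode, kwh_per_tkm in specific_consumption.items():
    energy_mwh = cargo_t * distance_km * kwh_per_tkm / 1000.0
    print(f"{mode:28s} {energy_mwh:8.1f} MWh")
```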



Rail Transportation
Intuitively, rail transportation of people and cargo is among the most environmentally friendly modes of movement. Technological progress has increased energy efficiency in rail transportation, too. According to Kemp (1997), aerodynamic drag per seat at 150 km/h was cut by half over 30 years. Train speed determines energy efficiency: the energy consumption of a high-speed train from London to Edinburgh increases from 30 to almost 60 kWh/seat as the speed goes up from 225 to 350 km/h (Kemp 1994). The American railway corporation Amtrak reported an energy use of 2,935 BTU per passenger-mile (1.9 MJ/passenger-km) in 2005 (Amtrak 2015). A critical factor in the energy efficiency of trains is the occupancy: if a train is only 25 % loaded, the fuel consumption per passenger and seat can be worse than with economical cars and modern aircraft, as shown in Kemp (2004). For a discussion of the potential role of high-speed trains in future sustainable transportation, see Kamga and Yazici (2014).
Air Transportation
Aviation has significantly shaped our current business dealings and lifestyles. Virtually any point on the globe has come within easy reach within 24 h. Air transportation is used for cargo and people. It contributed approx. 3.5 % to global greenhouse gas emissions in 1990, with a projection of 15 % or more in the future (Penner et al. 1999). The impact of aviation on climate change is not only driven by CO2 emissions but also by H2O emissions at high altitude (Williams et al. 2002). Due to the long residence time of water vapor at aircraft cruising altitude, it can disproportionately contribute to global warming by reflecting and retaining infrared radiation (compare the effect of natural clouds). Biofuels for aviation (Marsh 2008) have already been tested in a proof-of-concept study (BBC 2008), provoking mixed feelings amongst critics. Winglets (Marks 2009) and lightweight materials (Marsh 2007) are two commonly known concepts to increase the fuel efficiency of aircraft, hence increasing energy efficiency. See also Figs. 4 and 5. The impact of service network topology on air transportation efficiency is discussed in Kotegawa et al. (2014). In a recent study on the impact of airline mergers and hub reorganization on aviation fuel consumption, it was found that a typical airline merger in the USA has a fuel saving potential of 25–28 % (Ryerson and Kim 2014). Renewable fuels in aviation are discussed in Winchester et al. (2013).
Pipeline Transportation
Pipelines (Ellenberger 2010), i.e., conduits of pipe, can be used to transport liquids, gases, and slurries. The Romans built aqueducts for water transportation some 2,000 years ago. An early industrial pipeline was installed in Austria in 1595 to transport brine from Hallstatt to Ebensee for salt production (Bedford and Pitcher 2005). Today, pipelines are commonly used to transport petroleum, natural gas, and other commodities over large distances. A comparison of natural gas transportation by LNG tankers and pipelines is made in Elvers (2007). LNG compression and regasification consume 7–13 % of the original amount of natural gas, and marine transport adds roughly 0.15 % per day, i.e., about another 1 % to overall energy losses. Pipeline transportation of natural gas results in energy losses of approx. 1 % per 1,000 km. Therefore, an intercontinental 8,000 km pipeline would involve energy losses of roughly 10 %, which is approx. half the losses of transportation by LNG tankers over the same distance (Elvers 2007).
The transportation of liquids in pipelines versus on board trucks is compared in Pootakham and Kumar (2010) and Ghafoori et al. (2007). The conveying of coal as slurry in pipelines is assessed in Kania (1984). In industrial plants, pneumatic conveying (dense-phase or dilute-phase conveying of a solid in air) and hydraulic conveying (solids in liquid carrier media) are used to transport materials between
various processing sections. Variable speed drives (VSDs) on pneumatic conveying blowers enhance energy efficiency compared with blowing off excess air at low conveying capacities. Kumar et al. (2007) review the transportation of biomass in pipelines. It is concluded that long distances and high throughput rates make such systems economic, as is generally the case with pipeline transportation.

Industry
Industry accounts for a high fraction of global energy consumption; see Table 1. The energy intensity varies strongly, from 52.3 end-use BTUs per USD of value added in cement production to 0.4 end-use BTUs per USD in computer assembly (Granade et al. 2009). Ten end-use BTUs per USD can be set as the limit for energy-intensive industries, as done in Granade et al. (2009). There is a huge potential for energy savings in industry, yet the biggest opportunities for optimization are not easily known to the people involved (Yang 2010). Approximately 2/3 of the energy saving potential can be found in specific process steps of energy-intensive industries, whereas 1/3 resides in various areas of non-energy-intensive ones. Savings can be realized by more efficient processes or by more efficient equipment.
Crosscutting Technologies
Equipment which is used in different sectors of industry, such as lighting, motors, boilers, and pumps, is subsumed under crosscutting technologies. For these, best practices (see, e.g., US Department of Energy 2010) and general recommendations can be formulated that are valid for several branches and sectors of industry. Generally, there exist untapped saving potentials in:
• Waste heat recovery
• Steam systems
• Motor systems
• Pumps (Tutterow et al. 2002)
• Lighting
• Buildings
• Utilities
For quantifying energy efficiency potentials, there are various methods (Phylipsen et al. 1997). Here are some aspects of energy efficiency that are relevant for many industries:

Process design: The largest contribution to energy efficiency is made during the design of a process. If a product, for instance, has to be heated up and cooled down several times, chances are high that the process is not energy efficient. Also, an implemented production process is difficult to change.
Overcapacity: Design capacity should meet the needs of a process in terms of vessel size, engine power, etc. Overdesign always costs money – not only in the investment phase but most likely also later on, when energy consumption is higher than necessary. Overcapacities of process equipment should normally not exceed 10 % of the overall design capacity.
Debottlenecking: If a plant can be debottlenecked, i.e., the output can be increased by making some small modifications, one typically has a highly profitable project. Also from an energy efficiency perspective, debottlenecking frequently lowers the specific energy consumption of a product, thus making it more energy efficient.
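The effect of debottlenecking on specific energy consumption can be illustrated with a toy model in which part of the plant's energy demand is fixed (independent of output) and part scales with production. All numbers below are hypothetical.

```python
# Specific energy consumption before and after a hypothetical debottlenecking.
fixed_energy_gj_per_year = 50_000   # assumed base load (utilities, heat losses, ...)
variable_energy_gj_per_t = 2.0      # assumed energy demand that scales with output

def specific_energy(output_t_per_year):
    """Energy per tonne of product for a given yearly output."""
    total = fixed_energy_gj_per_year + variable_energy_gj_per_t * output_t_per_year
    return total / output_t_per_year

before, after = 100_000, 115_000    # tonnes per year, +15 % after debottlenecking
print(f"Before: {specific_energy(before):.2f} GJ/t")
print(f"After:  {specific_energy(after):.2f} GJ/t")
```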



Measuring, monitoring: In order to be able to track energy efficiency measures, it is necessary to measure actual consumption values of electricity and other utilities, such as compressed air or cooling water, accurately and regularly. Only by monitoring them actively will deviations be spotted.
Automatic controls: Automatic process control is generally faster and more accurate than manual control and also less prone to errors. Therefore, a production process can be carried out in the most energy-efficient way if it is automatically controlled (Szentennai and Lackner 2014). Automation will be more economic for large processing plants, where the investment costs can be diluted over the volume.
Compressed air: Leaks of air from pipes can easily lead to a 20–50 % efficiency loss of a compressed air system. Preventive maintenance and the timely repair of leaks will help to minimize running costs. A pressure reduction of the entire system can often be considered, as instrument air (plant air) typically only needs to have ~6 bar pressure, which is less than the design pressure of many compressor systems. If the operating pressure is reduced by just 1 bar, energy savings of over 5 % can result.
Maintenance: If industrial assets are not properly taken care of, their energy consumption tends to increase. Advanced maintenance techniques such as risk-based maintenance, preventive maintenance, thermography, and others will help to keep energy efficiency up. Cutting costs on maintenance can bring short-term gains at the expense of increased risk and deferred costs. A typical yearly maintenance budget for industrial plants would be 2 % of the investment value, depending of course on the process.
Cogeneration: Production sites that produce their own electricity should seriously consider combined heat and power (CHP). If there is no need for heat in the installation itself, there might be an opportunity to sell the heat, e.g., for district heating purposes. Cogeneration will use the heat which would otherwise be wasted, thereby increasing the energy efficiency. For details, see Raj et al. (2011) and Çakir et al. (2012).
Motors and drives: It is estimated that 2/3 of all electricity consumption in industry is used to drive various motors (US Department of Energy 2010), so there is a huge optimization potential. The "motor challenge" is a recent program to improve motor efficiency (Energy Efficient Motor Driven 2010). Typical energy efficiencies of motors are 80–90 %, with advanced models reaching 97 % (Office of Energy Efficiency, Natural Resources Canada 2002).
Variable speed drives: A motor's energy consumption can be matched to the load by using a variable speed drive (VSD). VSDs can be realized with a frequency converter coupled to a motor. Up to 50 % of energy can be saved. Today, only an estimated 10 % of all motors in industry are equipped with VSDs. A large number of motors are still controlled by throttling valves in pump systems or vanes in fan applications. By throttling, a part of the produced output immediately goes to waste. Speed control with intermediate transmission such as belt drives, gearboxes, and hydraulic couplings adds to the inefficiency of the system and requires the motor to run at full speed. Another drawback is that such systems typically require more maintenance. They can be noisy, too.
Pumps: It is estimated that pumps consume 25 % of the electricity in US manufacturing facilities (Galitsky 2008). Industrial pumps have a lifetime of 20 years and longer.
Pump efficiency is defined as the pump's fluid power divided by the input shaft power and is influenced by hydraulic effects, mechanical losses, and internal leakages. Pump manufacturers have devised many ways to improve pump efficiencies. For example, the pump surface finish can be made smoother by polishing to reduce hydraulic losses. A “good” efficiency for a pump will vary depending on the type of the pump. A more useful efficiency term is the wire-to-water efficiency, which is the product of the pump and motor efficiency. An even better measure of efficiency for analysis purposes is the system efficiency, which is defined as the combined efficiency of the pump, motor, and distribution system. See also Tutterow et al. (2002) and “Life Cycle Assessment (LCA)” above.

Blowers and fans: Fans move air as pumps move liquids. They can often be optimized for energy efficiency, e.g., by adding a VSD. For details, see, e.g., Gunner et al. (2014).
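To make these efficiency terms concrete, the following sketch multiplies assumed component efficiencies into the wire-to-water and system efficiencies defined above; the pump, motor, and distribution values are illustrative assumptions, not data from the cited studies.

```python
# Wire-to-water and system efficiency as defined in the text:
# wire-to-water = pump efficiency x motor efficiency,
# system        = pump x motor x distribution efficiency.
# The component values below are illustrative assumptions only.

def wire_to_water(pump_eff: float, motor_eff: float) -> float:
    return pump_eff * motor_eff

def system_efficiency(pump_eff: float, motor_eff: float, distribution_eff: float) -> float:
    return pump_eff * motor_eff * distribution_eff

pump, motor, distribution = 0.72, 0.93, 0.85   # assumed efficiencies

print(f"Wire-to-water efficiency: {wire_to_water(pump, motor):.1%}")
print(f"System efficiency:        {system_efficiency(pump, motor, distribution):.1%}")
```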


Energy management system: An energy management system (EMS) is the energy equivalent of an environmental management system. Generally, industrial sites or units that consume more than 1,000 toe/day should have a dedicated energy manager, who will “pay himself/herself” by economizing on energy bills. A guideline for energy management is provided by Office of Energy Efficiency, Natural Resources Canada (2002). The standard ISO 50001 can serve as guidance.

Several smaller units instead of one large one: Instead of one large pump which is controlled with a bypass, several smaller pumps might be more energy efficient, matching power consumption to the process needs. The same consideration might work for air compressors, etc.

Energy audit and energy survey: These tools were already mentioned earlier in this chapter in the context of the EU Directive 2012/27/EU of 25 October 2012 on energy efficiency. Energy audits and energy surveys can be administered by internal or external staff. Generally speaking, it is vital for the success of an energy efficiency program in a corporation to have the support of a senior, recognized executive and to make the effort lasting by introducing energy performance indicators, which can be linked to employees' targets and performance management.

Load shifting (using off-peak electricity) (Favre and Peuportier 2014): If energy-intensive production processes can be concentrated in off-peak hours, the energy bill will be lower. This will also have positive effects on the environment, as peak electricity often needs to be produced in a not-so-efficient way. For details, see, e.g., demand side management in smart grid operation in López et al. (2015).

Load shedding (Kanimozhi et al. 2014): By reducing peak electricity consumption, energy costs can also be reduced.

Insulation: Process insulation can be optimized for energy efficiency. A waterlogged insulation transfers heat 15–20 times faster than a dry one, and one filled with ice even 50 times faster (Office of Energy Efficiency, Natural Resources Canada 2002)!

Using waste heat: Heat losses are a major sink for energy. Process heat in general can be upgraded using absorption heat pumps (AHP) (Wei et al. 2014). Heat losses in flue gases are a particularly large term: if flue gases exit the chimney too hot, significant amounts of heat are wasted (up to 1 % of fuel savings for a 25 °C colder flue gas temperature (Galitsky 2008)); see also cogeneration. As for heat exchangers, cleaning and optimization can bring additional energy efficiency gains (Wang et al. 2009).

An overview on energy efficiency improvement potentials in industry is given in Rajan (2002) and Bannister (2010), the latter focusing on mechanical systems. Industrial energy efficiency in Asia, where a large part of global energy-intensive industry has settled, is treated in United Nations (2006).

Steam and Boilers
Steam engines are gone; however, still 37 % of the fossil fuel burned in the US industry is used to produce steam (Einstein et al. 2001). Steam is the working fluid in steam turbines for electricity production. It is used in various industries to transfer and to store heat, as it is a capacious reservoir for thermal energy because of the high heat of vaporization of water. The chemical industry uses significant amounts of steam as process heat, one reason being that steam is generated as a by-product in some processes in integrated chemical production sites. Steam in general can be produced efficiently in cogeneration plants.
In contrast to district heating networks to heat private homes, cogeneration plants in industry can be operated at full capacity all year round. Steam is produced in boilers. Energy efficiency measures for boilers include:

• Improved process control
• Reduced excess air
• Improved insulation
• Maintenance
• Recovery of heat from flue gas
• Recovery of steam from blowdown
• Optimization of fuel mix

For steam distribution systems, the following measures are effective:

• Improved insulation
• Improved steam traps
• Steam trap monitoring
• Leak repairs
• Condensate return
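The flue-gas rule of thumb quoted in the waste-heat discussion above (up to about 1 % fuel savings per 25 °C reduction in flue gas temperature) can be turned into a quick screening estimate for heat recovery from a boiler. The sketch below is only an illustration; the boiler fuel input and the achievable temperature reduction are hypothetical assumptions.

```python
# Fuel-saving estimate from the rule of thumb in the text:
# roughly 1 % fuel saving per 25 degC reduction in flue gas temperature.
# The boiler data below are hypothetical.

ANNUAL_FUEL_GJ = 80_000          # assumed annual boiler fuel input
flue_gas_drop_degC = 50          # assumed reduction, e.g. via an economizer
SAVING_PER_25_DEGC = 0.01        # rule of thumb from the text

saving_fraction = SAVING_PER_25_DEGC * (flue_gas_drop_degC / 25.0)
print(f"Estimated fuel saving: {saving_fraction:.1%} "
      f"= {ANNUAL_FUEL_GJ * saving_fraction:,.0f} GJ per year")
```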

In Einstein et al. (2001), information on steam systems in industry and their energy use and energy efficiency improvement potentials are outlined. Detailed information on boilers is given in Heselton (2004).

Energy-Intensive Industries
There are certain “heavy industries” that consume a large fraction of total energy output. In China, for instance, the top 1,000 energy-intensive enterprises consumed 30 % of China's total energy and 50 % of the total industrial energy in 2007 (NDRC 2007). Energy intensity is a specific quantity, expressed as kWh/kg of product or as kWh/monetary unit (value added, often in USD). Above an arbitrary threshold of ten end-use BTUs per USD, one can speak about energy-intensive industries (Granade et al. 2009). This classification is valid for the production of:

• Cement (calcination process, clinker production)
• Steel (coke consumption)
• Aluminum (primary metal production by electrolysis)
• Ores (mining operations)
• Pulp and paper (mechanical pulping)
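A minimal sketch of the energy-intensity metrics just defined (kWh per kg of product and kWh per monetary unit of value added); the plant figures are hypothetical and serve only to show the arithmetic.

```python
# Energy intensity in the two forms mentioned in the text:
# per kg of product and per monetary unit of value added.
# The plant figures below are hypothetical.

def intensity_per_kg(energy_kwh: float, output_kg: float) -> float:
    return energy_kwh / output_kg

def intensity_per_usd(energy_kwh: float, value_added_usd: float) -> float:
    return energy_kwh / value_added_usd

energy_kwh  = 5.5e8      # assumed annual energy use
output_kg   = 1.0e8      # assumed annual product output (100,000 t)
value_added = 9.0e7      # assumed annual value added in USD

print(f"Energy intensity: {intensity_per_kg(energy_kwh, output_kg):.2f} kWh/kg")
print(f"Energy intensity: {intensity_per_usd(energy_kwh, value_added):.2f} kWh/USD of value added")
```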

These industries have a strong effect on global energy consumption, not only because they are energy intensive as such but also because they produce high amounts of materials per year. The global steel production, for instance, is in excess of one billion tonnes per year (Lackner 2010). The IEA predicts big improvements in energy efficiency in industry, which are expected to be more than offset by higher output of steel and cement (IEA 2009), especially in the developing world, to which countries like Brazil, Russia, India, China (BRIC), Mexico, and South Korea belong. Figure 11 shows the trend in China's industrial energy consumption and intensity from 1996 to 2010 (Ke et al. 2012). The industrial energy consumption of China increased significantly from 1996 to 2010, especially after 2002. By 2010, China's industrial energy intensity had decreased 46 % below the 1996 level (Ke et al. 2012). Energy production in China is largely based on coal combustion, with efficiencies being approx. 10 % lower than in Europe or the USA (Nuo and Gaoshang 2008). The CO2 emissions from coal combustion are naturally higher than those from other fuels with a lower C/H ratio. Several technology options to reduce energy consumption and CO2 emissions in energy-intensive industries are reported in Yudken and



Fig. 11 China’s industrial energy consumption and intensity from 1996 to 2010 (Reproduced with permission from Ke et al. (2012))

Bassi (2009); see also below.

Iron and Steel
In the iron and steel industry, as the name implies, iron production and steel production are the main processes (Berns et al. 2008). Iron can be produced along different routes. The classic path is the production of pig iron from ore and coke in the blast furnace, which is then further processed into steel in the basic oxygen furnace (BOF) or the open hearth furnace (OHF), the first one being more energy efficient. Smelt reduction and direct reduction (DR) are two other, advanced routes to iron. The electric arc furnace (EAF) is used to produce secondary steel from scrap. In China, the energy consumption per tonne of steel has declined from 1.43 to 0.52 toe between 1980 and 2005 (Wei et al. 2007). Integrated steel plants have a specific primary energy consumption ranging from 19 to 40 GJ/t of steel (Gale and Freund 2014), with minimills that use scrap steel being more efficient. Technology options for reducing energy use and CO2 emissions in the iron and steel industry are tabulated in Table 6, reproduced from Yudken and Bassi (2009).

Aluminum
Worldwide primary aluminum production is projected to increase from 23 to 38 million tonnes by 2020 (Gale and Freund 2014). Primary aluminum production (Moors 2006), starting from bauxite via electrolysis (Hall-Héroult process), is a very energy-intensive process, contributing 1 % of total anthropogenic greenhouse gas emissions in 1995 with about 364 million tonnes/year CO2-equivalent (Gale and Freund 2014). Secondary aluminum production (Li et al. 2006) consumes approx. 5 % of the energy needed for primary production. Existing and potential future processes for bauxite processing are reviewed in Smith (2009). Technology options for reducing energy use and CO2 emissions in primary aluminum are summarized in Table 7, reproduced from Yudken and Bassi (2009).

Other Primary Metals
Generally, one can distinguish between pyrometallurgical and hydrometallurgical processes. The ore content of a deposit influences energy efficiency as the chosen process does.


Table 6 Potential technologies to make energy-intensive production processes more efficient (Source: Yudken and Bassi (2009) from IEA, DOE, AISI, Aluminum Association, Korean Energy Institute)
Technology option | Description | Time frame
Pulverized coal and plastic waste injection | Pulverized coal is already used by more than 50 % of all US BOFs | ST-MT
New reactor designs | Uses coal and ore fines (COREX™, FINEX™) | MT
Paired straight hearth furnace | Substitutes coal for coke in blast furnaces, lower costs, uses 2/3 energy | MT-LT
Molten oxide electrolysis | Produces iron and oxygen, no CO2 | LT
Hydrogen flash melting | Uses hydrogen in shaft furnaces, no CO2 | MT
Geological sequestration and steelmaking | | MT-LT
ST short term (2010–2015), MT medium term (2015–2030), LT long term (2030–2050)
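For readers more used to GJ than to tonnes of oil equivalent, the following sketch converts the Chinese steel figures quoted in the iron and steel passage above (1.43 and 0.52 toe per tonne of steel) with the standard factor 1 toe = 41.868 GJ and compares the result with the 19–40 GJ/t range given for integrated plants.

```python
# Converting the Chinese steel figures quoted above from tonnes of oil
# equivalent (toe) per tonne of steel to GJ/t, using the standard
# conversion 1 toe = 41.868 GJ.

TOE_TO_GJ = 41.868

for year, toe_per_t in [(1980, 1.43), (2005, 0.52)]:
    gj_per_t = toe_per_t * TOE_TO_GJ
    print(f"{year}: {toe_per_t:.2f} toe/t = {gj_per_t:.1f} GJ/t of steel")

# The 2005 value (about 21.8 GJ/t) falls inside the 19-40 GJ/t range
# quoted in the text for integrated steel plants.
```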

Table 7 Potential technologies to make energy-intensive production processes more efficient (Source: Yudken and Bassi (2009) from IEA, DOE, AISI, Aluminum Association, Korean Energy Institute)
Technology option | Description | Time frame
Wetted, drained cathode technology | Combines inert anode, drained cathodes | MT-LT
Alternative cell concepts | Alternatives to the Hall-Héroult process | LT
Carbothermic and kaolinite reduction process on commercial scale | | LT
ST short term (2010–2015), MT medium term (2015–2030), LT long term (2030–2050)

The energy demand for comminution is described in Tromans (2008). Energy efficiency of a lead smelter is discussed in Morris et al. (1983), and energy efficiency of copper and magnesium production in Alvarado et al. (2002) and Cherubini et al. (2008), respectively. Processes for the production of steel, aluminum, copper, lead, and zinc are reviewed from an energy perspective in Stepanov and Stepanov (1998). Sintering processes and their energy efficiencies are discussed in Musa et al. (2009) for one system, and scale-up in metallurgy in general in Lackner (2010).

Pulp and Paper
The pulp and paper (P&P) industry is a very energy-intensive one. Pulp is produced from wood by the kraft process, with electricity as additional input and output, plus steam as an output. An efficient kraft pulp mill can be a net exporter of heat and electricity (Jönsson and Algehed 2010). Industry practice shows that in the past, most energy efficiency measures were limited to low-investment, high-return projects, with typically 5 % energy savings with a 1-year payback time (Costa et al. 2007), leaving a lot of potential still untapped. In current paper mills, steam savings of up to 30 % are deemed feasible (Kilponen et al. 2001; Costa et al. 2009; Axelsson et al. 2008; Nordman and Berntsson 2009; Lutz 2008). Energy efficiency savings can be obtained from the use of different fuels, which are typically wood, bunker oil, and black liquor (Costa et al. 2007), the latter being a by-product of the transformation of wood chips into pulp. Typical energy efficiencies in the industry for bark combustion are 67 % (based on the higher heating value) and 80 % for bunker oil combustion, respectively (Costa et al. 2007). In Jönsson and Algehed (2010), the utilization options for excess steam and heat at kraft pulp mills are studied. Traditional ways are increased electricity production and district heating, whereas increased sales of biomass as bark and/or extracted lignin and carbon capture and storage (CCS) are new pathways.



Table 8 Potential technologies to make energy-intensive production processes more efficient (Source: Yudken and Bassi (2009) from IEA, DOE, AISI, Aluminum Association, Korean Energy Institute)
Technology option | Description | Time frame
Black liquor gasification | In demonstration, R&D; commercially available 2030; 15–23 % gain | MT-LT
Efficient drying technology | R&D now; commercial demo, 2015–2030; commercial, 2030 onward | MT-LT
ST short term (2010–2015), MT medium term (2015–2030), LT long term (2030–2050)

There is a trend toward additional products from pulp and paper plants that complement the traditional pulp and paper output, such as biofuels, pellets, lignin, carbon fibers, and other specialty chemicals (Jönsson and Algehed 2010). In Costa et al. (2007), the economics of trigeneration in a kraft pulp mill are discussed. In trigeneration, pulp production, waste heat upgrading, and power production are simultaneously carried out (compare polygeneration). Absorption heat pumps (AHP) can be used to cool waste heat streams and to extract energy from them. Technology options for reducing energy use and CO2 emissions in the paper and paperboard industry, reprinted from Yudken and Bassi (2009), are summarized here in Table 8. Recycling is another option to increase energy efficiency of paper products. For details on energy efficiency options in the pulp and paper industry, see Worrell et al. (2001).

Cement
The cement industry, already 15 years ago, exceeded 1.5 billion tonnes of annual output, making it a huge consumer of energy. For cement production, first clinker has to be made, which is then blended with approx. 5–70 % additives such as gypsum and fly ash to yield cement. This first step is the most energy-intensive one. Limestone (CaCO3) is burnt with silicon oxides, aluminum oxides, and iron oxides. There is a wet process and a dry process, the latter one being more energy efficient. As cement plants (Deolalkar 2009) consume significant amounts of energy, approx. 4 GJ/t of cement produced (Khurana et al. 2002), energy efficiency programs have been extensively applied to various plants (da Graça Carvalho and Nogueira 1997; Utlu et al. 2006; Mandal and Madheswaran 2010; Worrell et al. 2000a; Doheim et al. 1987). For each t of cement, approx. 0.5 t of CO2 are generated (Office of Energy Efficiency, Natural Resources Canada 2002). In Worrell et al. (2000a), potentials for energy efficiency improvements in the US cement industry are discussed, and in Liu et al. (1995) those for China. CO2 and energy intensity reductions in cement production can be achieved by:

• Modification of the product composition (less clinker)
• Use of alternative cements (e.g., mineral polymers)
• Improving the energy efficiency of the process and process equipment
• Introduction of a different process (e.g., change from wet to dry process)
• Replacement of high-carbon fossil fuels by low-carbon fossil fuels
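The specific figures quoted above for cement (roughly 4 GJ of energy and 0.5 t of CO2 per tonne of cement) scale directly with plant output, as the short sketch below illustrates; the assumed plant capacity is hypothetical.

```python
# Quick scaling of the cement figures quoted in the text (about 4 GJ and
# about 0.5 t CO2 per tonne of cement) to an assumed plant size.

ENERGY_GJ_PER_T = 4.0        # from the text (Khurana et al. 2002)
CO2_T_PER_T     = 0.5        # from the text

plant_output_t = 1_000_000   # assumed annual cement output (hypothetical plant)

print(f"Energy use: {plant_output_t * ENERGY_GJ_PER_T / 1e6:,.1f} PJ per year")
print(f"CO2:        {plant_output_t * CO2_T_PER_T / 1e6:,.1f} Mt per year")
```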

A trend in the cement industry is the use of waste fuels such as tires. Recommendations on energy efficiency and cost saving opportunities for the cement industry can be found in Worrell and Galitsky (2008).

Glass Production
Glass is a ubiquitous material that comes as sheet glass (produced in the float glass process), hollow glass (for glass containers), automotive glass, optical glass, and other glasses such as glass fiber and glass wool.


Its production is an energy-intensive process. According to Sheredeka et al. (2001), 74 % of production costs are typically raw materials, fuels, and electricity. Recycling of glass offers a good way of increasing energy efficiency. One recycled bottle can save approx. 0.1 kWh (Glass Manufacturing Industry Council (GMIC) 2015). In http://www.osti.gov/glass/bestpractices.html (2015), best practices for energy efficiency improvements in the glass industry are provided. A detailed treatise of energy efficiency potentials in the American glass industry can be found in Worrell et al. (2008).

Petroleum Refining
In a petroleum refinery (oil refinery) (Fahim et al. 2009), crude oil is processed into various petroleum products such as naphtha, gasoline, diesel, and liquefied petroleum gas (LPG). Refineries are complex chemical plants that are usually highly integrated. Crackers, for instance, can produce lightweight hydrocarbons as basic feedstock for the petrochemical industry (see also below). Energy efficiency in a petroleum refinery can be tackled from various angles. As in industry in general, there is usually optimization potential in cogeneration, steam systems, heat transfer systems, and motors; see also Coletti and Macchietto (2009a, b), Bevilacqua and Braglia (2002), Wenkai et al. (2003), Fath and Hashem (1988), Najjar and Habeebullah (1991), and McKay and Holland (1981) for details reported in the literature. Worrell et al. (1994a) estimated the energy saving potential for refineries to be around 15 %. The determination of the energy efficiency of a certain process is a somewhat tricky task, as it depends on the boundary limits to be drawn. Tehrani Nejad (2007) attempts to allocate CO2 emissions in petroleum refineries to the various petroleum products. One aspect of the petrochemical and chemical industry in general that has to be noted here with respect to energy efficiency is that the energy contained in the feedstock is partly converted to heat and power but also remains in the final products to some extent, providing potentials for recycling at the end of the various materials' lifetimes (feedstock recycling or thermal recycling). Recommendations on energy efficiency and cost saving opportunities in refineries can be found in Worrell and Galitsky (2005).

Petrochemicals
Petrochemicals are products derived from petroleum (Meyers 2004) other than fuels for combustion. The petrochemical industry consumes approx. 8 % of total oil production for the manufacture of various products (The International Energy Association in Collaboration with CEFIC 2007) ranging from plastics, rubbers, and solvents to various fine chemicals. Two important upstream processes are cracking (fluid catalytic cracking, steam cracking) for the production of olefins such as ethylene and propylene and reforming (catalytic reforming) for the production of aromatics. Worldwide, more than 10⁷ t of propylene, 6.5 × 10⁶ t of ethylene, and 7 × 10⁶ t of aromatics are produced per year. From these primary petrochemicals, to which also synthesis gas can be counted, a wide range of chemical products is made. Energy efficiencies of a steam cracker are reported in Tuomaala et al. (2010) and Ren et al. (2008). Naphtha crackers are estimated to consume 31.5 GJ/t of energy (Worrell et al. 2000b). The gross energy requirement (GER) for major petrochemical products such as ethylene, propylene, butadiene, and benzene is reviewed in Worrell et al. (1994a).
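As a rough sense of scale for the cracker figure quoted above (about 31.5 GJ per tonne), the sketch below multiplies it by an assumed plant capacity and, purely for illustration, applies a 15 % saving similar to the refinery potential estimated by Worrell et al. (1994a); both the capacity and the transfer of that percentage to a cracker are assumptions.

```python
# Scaling the cracker figure quoted in the text (about 31.5 GJ per tonne
# of product) to an assumed world-scale plant; the capacity is hypothetical.

SPECIFIC_ENERGY_GJ_PER_T = 31.5
plant_capacity_t = 1_000_000          # assumed annual production

annual_energy_pj = SPECIFIC_ENERGY_GJ_PER_T * plant_capacity_t / 1e6
print(f"Annual energy use: {annual_energy_pj:.1f} PJ")

# Purely illustrative: applying a 15 % saving, in line with the refinery
# potential estimated in the text, not a cracker-specific figure.
print(f"A 15 % improvement would save {annual_energy_pj * 0.15:.1f} PJ per year")
```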
Technology options for reducing energy use and CO2 emissions for petrochemicals are shown here in Table 9, from Yudken and Bassi (2009). Below, details on some petrochemical products with respect to energy efficiency are reviewed.



Table 9 Potential technologies to make energy-intensive production processes more efficient (Source: Yudken and Bassi (2009) from IEA, DOE, AISI, Aluminum Association, Korean Energy Institute)
Technology option | Description | Time frame
High-temperature furnaces | Able to withstand more than 1,100 °C | MT-LT
Gas turbine integration | Higher-temperature CHP for cracking furnace | MT-LT
Advanced distillation columns | | MT-LT
Combined refrigeration plants | | MT-LT
Biomass-based system options | Feedstock substitution | LT
ST short term (2010–2015), MT medium term (2015–2030), LT long term (2030–2050)

Polymers
The polymer industry has ramped up plastic production between 1950 and 2007 from 1.5 to 260 million tonnes (Johansson 2015) worldwide, which corresponds to an annual growth rate of more than 9 %, making plastics ubiquitous and versatile construction materials. Today, plastic production has reached 300 million tonnes per year (http://www.essentialchemicalindustry.org/processes/recycling-in-the-chemical-industry.html 2015). Polyolefins are the most common plastics, with polyethylene (PE) and polypropylene (PP) accounting for the largest fraction, followed by polyvinylchloride (PVC), polystyrene (PS) and expanded polystyrene (EPS), polyethylene terephthalate (PET), polyurethane (PUR), and others, e.g., engineering plastics such as polycarbonate (PC). Polymers can be produced with different technologies, ranging from radical reactions (high-temperature and high-pressure processes such as for low-density polyethylene (LDPE)) to catalytic processes (at more moderate conditions), which show varying energy efficiencies. The gross energy requirements for the production of LDPE, PP, PS, and PVC are 69.8, 61.6, 81.5, and 55.7 GJ/t, respectively (Worrell et al. 1994a). Plastic production uses 8 % of the world's oil production, 4 % as feedstock and 4 % during manufacture (University of York 2010). Cogeneration and heat recovery in polymerization processes are discussed in Budin et al. (2006). In Europe, the recycling rate of plastics has reached 51.3 % (21.3 % recycling and 30.0 % energy recovery, i.e., combustion) (Johansson 2015). Worrell et al. (1994a) investigated potential energy savings in the production of plastics. That study found that the technical potential for energy efficiency savings varies from 12 % for PE to 25 % for PVC. Further information on energy use in plastic production can be found in Patel and Mutha (2004). Alternative feedstocks, biopolymers, and feedstock recycling (Scheirs 2006) are emerging trends in the industry with impact on energy efficiency.

Chemical Industry
The chemical industry uses crude oil, natural gas, and coal, apart from electricity, both as raw materials and as fuels to produce more than 50,000 different products. More than half of the energy used by the chemical industry is processed as feedstock, which means that it is transformed into various products such as chemicals or polymers. Most energy is consumed by the production of a few small, intermediate compounds. In the chemical industry, energy costs account for 10–15 % of total manufacturing costs (Bieling 2007). For some processes such as electrolysis, energy costs can exceed 50 % of production costs. The DOE estimates potential energy savings within the chemical industry to be approximately 20 %. Strategies to improve energy efficiency in the chemical industry are process improvements, cogeneration, integration, and the introduction of energy management systems (EMS). Integration means that rather than producing a single chemical, a production location should strive to use its feedstock to make the desired final product, while utilizing by-products as well. If several production steps, such as crude oil distillation, cracking, and polymerization, can be done in one location,


costly and wasteful transportation and storage steps can be avoided (compare the German concept of an integrated chemical complex, the “Verbund”: at the largest chemical Verbund site, BASF's Ludwigshafen, synergies amount to 500 million € per year, 150 million € out of which are attributed to energy savings (The International Energy Association in Collaboration with CEFIC 2007)). Process design is also an important consideration for energy efficiency, as different unit operations (McCabe et al. 2004) have varying energy demands. In Worrell et al. (2000b), energy use and energy intensity of the US chemical industry are analyzed. A general review on sustainability and energy efficiency in the chemical industry is provided by de Swaan Arons (2010). Below, some details on various products of the chemical and process industries with respect to energy efficiency are compiled. Actual energy consumption values for the production of chemicals are significantly higher than the theoretical demand stipulated by thermodynamics. A “clean-sheet redesign,” not considering cost-effectiveness, would offer a potential for energy savings in chemical production of up to 95 % (Granade et al. 2009; Hinderink et al. 1999). Catalysts, as they lower the activation energy, can generally increase energy efficiency, particularly enzymatic catalysts for several particular reactions. Process intensification and polygeneration are two emerging technologies that could reduce energy demand in the chemical industry. By process intensification (Etchells 2005), more compact and efficient plants can be designed. Polygeneration using natural resources is detailed in Serra et al. (2009). An overview on energy efficiency in the chemical industry is provided in Worrell and Blok (1994) and Worrell et al. (1994a, b, 2000c). Green chemistry is discussed in Poliakoff et al. (2002) and Anastas and Warner (2000).

Ammonia
Ammonia is one of the inorganic chemicals with the highest yearly production volume. Its global consumption is in excess of 10⁷ t. NH3 is the precursor to most industrially produced nitrogen-containing compounds. More than 80 % of ammonia is processed into fertilizers. Ammonia production consumes more than 1 % of all man-made power (Max Appl 2006). CO2 emissions in ammonia production are estimated to be 1.58 t for each t of the product (Office of Energy Efficiency, Natural Resources Canada 2002). Energy consumption is quoted as 39.3 GJ/t for feedstock (natural gas) plus 140 kWh/t electricity, totaling 40.9 GJ/t (based on higher heating value, corresponding to 37.1 GJ/t based on lower heating value) (Worrell et al. 2000b). Without considering the natural gas, the primary energy consumption for ammonia production is 16.7 GJ/t (Worrell et al. 2000b). For energy efficiency studies and improvement potentials in ammonia production, see Panjeshahi et al. (2008) and Rafiqul et al. (2005). The use of ammonia as a fuel is described in Zamfirescu and Dincer (2009). The specific energy consumption for the production of urea is estimated at 2.8 GJ/t (1994) (Worrell et al. 2000b).

Fertilizers
Nitrogen-bearing fertilizer production is a very energy-intensive industry. Ammonia is the most important intermediate chemical compound here (see also above). Table 10 shows the energy use and emission intensity for the production of various fertilizer components, reprinted from Wells (2001). An early review on energy efficiency in fertilizer production is provided by Mudahar and Hignett (1985).
Energy efficiency in the fertilizer industry is reviewed in Ladha et al. (2005), Abdul Quader (2003), Kumar (2002), Mudahar and Hignett (1985), and Fadare et al. (2010).
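The ammonia figures quoted above (39.3 GJ/t of natural gas plus 140 kWh/t of electricity, totaling about 40.9 GJ/t) only add up if the electricity is counted as primary energy. The sketch below reproduces that accounting; the assumed conversion efficiency from fuel to electricity of about one third is an illustrative assumption, not a value from the cited source.

```python
# Reconciling the ammonia figures quoted in the text: 39.3 GJ/t of natural
# gas plus 140 kWh/t of electricity, quoted as totaling about 40.9 GJ/t.
# The totals only add up on a primary-energy basis; the 33 % grid
# conversion efficiency used here is an assumption.

GAS_GJ_PER_T = 39.3
ELECTRICITY_KWH_PER_T = 140
KWH_TO_GJ = 0.0036
ASSUMED_GRID_EFFICIENCY = 0.33

electricity_final_gj = ELECTRICITY_KWH_PER_T * KWH_TO_GJ
electricity_primary_gj = electricity_final_gj / ASSUMED_GRID_EFFICIENCY

print(f"Electricity (final):   {electricity_final_gj:.2f} GJ/t")
print(f"Electricity (primary): {electricity_primary_gj:.2f} GJ/t")
# Close to the 40.9 GJ/t quoted in the text (implied efficiency roughly 1/3).
print(f"Total (primary basis): {GAS_GJ_PER_T + electricity_primary_gj:.1f} GJ/t")
```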



Table 10 Energy requirements to manufacture fertilizer components plus associated CO2 emissions (Source: Wells 2001)
Component | Energy use [MJ/kg] | Emissions [kg CO2/MJ]
N | 65 | 0.05
P | 15 | 0.06
K | 10 | 0.06
S | 5 | 0.06
Lime | 0.6 | 0.72
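The two columns of Table 10 can be combined into CO2 emitted per kg of fertilizer component (energy use times emission factor), as in the short sketch below, which uses only the table values.

```python
# Combining the two columns of Table 10: CO2 emitted per kg of fertilizer
# component = energy use (MJ/kg) x emission factor (kg CO2/MJ).

table_10 = {            # component: (MJ/kg, kg CO2/MJ), values from Table 10
    "N":    (65,  0.05),
    "P":    (15,  0.06),
    "K":    (10,  0.06),
    "S":    (5,   0.06),
    "Lime": (0.6, 0.72),
}

for component, (mj_per_kg, kg_co2_per_mj) in table_10.items():
    print(f"{component}: {mj_per_kg * kg_co2_per_mj:.2f} kg CO2 per kg")
```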

Nitrogen fertilizer production has an additional impact on climate change, mainly via N2O emissions (Stuart et al. 2014).

Methanol
Methanol can be produced by steam reforming from methane (Rosen and Scott 1988). It can also be obtained from coal (Li et al. 2010) and various biomass products (Hamelinck and Faaij 2002) such as sugarcane. Methanol has seen increased interest for its use in:
• Direct methanol fuel cells (Jiang et al. 2004)
• Fuel for combustion engines (Agarwal 2007)
• Feedstock for the chemical industry (Olah et al. 2009)

In 1994, the specific energy consumption for the production of methanol was 38.4 GJ/t (based on higher heating value) (Worrell et al. 2000b). Best practice in 2013 was 9.0–10.0 GJ/t. Figure 12 shows today's energy losses in the chemical industry for the major chemicals, amongst them methanol. Catalysis bears a great potential for further energy reduction (DECHEMA 2013).

Industrial Gases
A wide variety of gases is industrially produced and sold in compressed or liquid state. Apart from air, oxygen and nitrogen are amongst the most commonly used industrial gases (Häring et al. 2007), others being argon (welding), carbon dioxide, and methane. Oxygen and nitrogen have traditionally been produced through cryogenic air separation, where air is cooled and pressurized until it becomes a liquid, with the various gases being extracted through fractionated distillation. The associated energy consumption is estimated to be 1.8–2.0 GJ/t of oxygen or nitrogen (Worrell et al. 2000b). Other energy-efficient technologies such as pressure swing adsorption (PSA) (Sharma 2009) and membrane separation (Koros and Fleming 1993) are increasingly used. For a comparison of cryogeny versus membranes for oxygen-enriched air (OEA) production, see Belaissaoui et al. (2014). Methane can be produced through anaerobic fermentation (biogas) and methanogenesis (through bacteria). Also, hydrogen can be produced by bacteria (Xia et al. 2014); see also below. An article on energy efficiency gains in gas production (thermal gasification) is given by Kumar et al. (2010).

Chlorine
Chlorine is produced through electrolysis of a salt solution (brine), which is an energy-intensive process requiring between 3,065 and 3,960 kWh/t (Worrell et al. 2000b). The coproducts caustic soda (sodium hydroxide, NaOH) and hydrogen gas (H2) are obtained, with the major markets for chlorine being PVC (polyvinylchloride) manufacturing, inorganic chemicals, propylene oxide, water treatment, and organic chemicals. The chlorine industry is reviewed in Johnston and Stringer (2001).
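To compare the chlorine electrolysis figure with the air-separation figure quoted above on a common basis, the sketch below converts kWh/t to GJ/t (1 kWh = 0.0036 GJ); note that the chlorine number is electricity (final energy), whereas comparisons are sometimes made on a primary-energy basis.

```python
# Putting the electrolysis figure for chlorine (3,065-3,960 kWh/t) on the
# same GJ/t basis as the cryogenic air-separation figure quoted in the text.

KWH_TO_GJ = 0.0036

for kwh_per_t in (3065, 3960):
    print(f"Chlorine: {kwh_per_t} kWh/t = {kwh_per_t * KWH_TO_GJ:.1f} GJ/t "
          "(electricity, final energy)")

print("Oxygen or nitrogen (cryogenic): about 1.8-2.0 GJ/t (from the text)")
```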



Fig. 12 Cumulated theoretical total energy loss for major chemical processes based on 2010 production volumes (Source: DECHEMA 2013). TPA terephthalic acid, PP polypropylene, EO ethylene oxide, VCM vinyl chloride monomer, PX paraxylene, BTX benzene, toluene, xylene, pygas pyrolysis gasoline, PO propylene oxide

Table 11 Potential technologies to make energy-intensive production processes more efficient (Source: Yudken and Bassi (2009) from IEA, DOE, AISI, Aluminum Association, Korean Energy Institute)
Technology option | Description | Time frame
Convert mercury process and diaphragm process plants to membrane technology | Combined electrolytic cell with a fuel cell, using hydrogen by-product | MT-LT
ST short term (2010–2015), MT medium term (2015–2030), LT long term (2030–2050)

Technology options for reducing energy use and CO2 emissions in chlor-alkali manufacturing are summarized from Yudken and Bassi (2009) in Table 11. Hydrogen Hydrogen is regarded as an interesting option, as transportation fuel, and as storage medium for electricity, being produced from renewable resources. The “hydrogen economy” (Ball and Wietschel 2009) is often seen as a straightforward solution to many issues around pollution and global warming. Despite all the potential that lies in the technical exploitation of hydrogen, it needs to be borne in mind that the hydrogen – as clean as it is as such – has to be produced. Hydrogen from nuclear power is treated in Hori (2008) and Yildiz and Kazimi (2006). It is the overall energy efficiency (system efficiency) that will determine whether hydrogen will be used on a large scale as energy carrier. For details, see Page and Krumdieck (2009). A comparison of thermochemical, electrolytic, photoelectrolytic, and photochemical solar-to-hydrogen production technologies is made in Wang et al. (2012). Pharmaceutical Industry The US pharmaceutical industry has energy expenses of approx. one billion USD per year (Galitsky 2008), which, being only a small fraction of total production costs, is still significant, given the fact that energy savings will translate into direct and predictable earnings. In the pharmaceutical industry, there are three overall stages:



Table 12 Pharmaceutical industry and energy use (Source: Galitsky 2008)
Area | Distribution of energy use (%)
R&D | 30
Offices | 10
Production of bulk pharmaceutical substances | 35
Formulation, packaging, and filling | 15
Warehouse | 5
Miscellaneous | 5
Total | 100

• R&D
• Production of bulk pharmaceutical substances
• Formulation of the final products

Table 12 shows the distribution of energy use (Galitsky 2008) in this sector. Twenty-five percent of the total energy is used for plug loads and processes, 10 % for lighting, and 65 % for HVAC (heating, ventilation, and air-conditioning). The biggest potential can hence be found in R&D and bulk manufacturing.

Public Sector and Community Infrastructure The public sector is another area where energy efficiency potential exists. Awareness of energy efficiency and conservation is a major topic. In a typical office, nearly 40 % of the electricity consumption occurs after closing hours (Danish Ministry of Transport und Energy 2005). In China, the energy consumption in the building sector is 25 % of total energy consumption. The energy use in urban buildings in megacities like Beijing and Shanghai are about 90 % of the whole energy consumption in buildings (Jiang 2011). It was found that amongst these urban buildings, the energy use in public buildings is higher than in other building sectors (Jiang 2011). China’s Ministry of Construction has issued six energy efficiency design standards to the building sector since 1995, where the latest one is the design standard for energy efficiency in public buildings, aiming at a 50 % reduction of energy consumption in new and refurbished public buildings. Beijing and Shanghai governments have also issued their local energy saving standards for public buildings with 65 % and 50 % of energy saving, respectively (Jiang 2011). Government institutions can apply energy-efficient procurement and create awareness for energy savings. Public buildings (see also next section) offer energy efficiency increase potential, as does, for instance, the lighting infrastructure of public roads. For enhancing energy efficiency in public buildings, local energy audit programs were found to be successful (Annunziata et al. 2014). Energy efficiency in public lighting is discussed in Radulovic et al. (2011). Desalination plants are important in several parts of the world. Their energy efficiencies for different technologies are assessed in Mesa et al. (1997), Tay et al. (1996), Al-Kharabsheh (2006), Gomri (2009), and Charcosset (2009). Another important infrastructure is data centers. Their energy efficiency is discussed, e.g., in Todorovic and Kim (2014).

Buildings
Buildings have a strong and long-lasting impact on global energy consumption, because they are constructed for typically 50–100 years. In 2005, 39 % of the total energy consumption in the USA


stemmed from commercial and residential buildings (US Green Building Council 2015). They accounted for as much as 70 % of total electricity consumption (US Green Building Council 2015). There is hence a huge potential for what is known as green buildings. The residential sector in the USA is expected to account for 29 % of the US energy consumption in 2020 (Granade et al. 2009), driven by population growth, larger homes, and more electric and electronic gadgets in private households. The specific energy use for heating of buildings, a major parameter for their energy efficiency, is given in kWh/(m²·year). Key determinants for energy efficiency of buildings are:
• Location and surroundings
• Insulation
• Heating technology

Sealing of ducts, basement insulation, and improved heating equipment are seen as major efficiency opportunities in private homes in the USA (Granade et al. 2009). Heat pumps are particularly energy efficient. There are three types of heat pumps: air to air, water source, and ground source. Ground source heat pumps typically use four times less electrical energy than direct electrical heaters. Deviations in energy efficiency from the design requirements to actual performance may come from:
• Errors in the design
• Errors in the construction
• Incorrect operation
• Lack of maintenance
• Changed use of the building
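As a rough illustration of the factor-of-four rule of thumb for ground-source heat pumps mentioned above, the following sketch compares the electricity needed to cover an assumed annual space-heating demand with direct electric heating and with a heat pump; the demand figure is a hypothetical example, not a value from the cited sources.

```python
# Comparing direct electric heating with a ground-source heat pump, using
# the factor-of-four rule quoted in the text. The annual heat demand is an
# assumed example value.

HEAT_DEMAND_KWH = 15_000        # assumed annual space-heating demand
HEAT_PUMP_FACTOR = 4            # text: ~4x less electricity than direct electric heating

direct_kwh = HEAT_DEMAND_KWH    # resistance heating: ~1 kWh heat per kWh electricity
heat_pump_kwh = HEAT_DEMAND_KWH / HEAT_PUMP_FACTOR

print(f"Direct electric heating: {direct_kwh:,.0f} kWh electricity per year")
print(f"Ground-source heat pump: {heat_pump_kwh:,.0f} kWh electricity per year")
print(f"Savings:                 {direct_kwh - heat_pump_kwh:,.0f} kWh per year")
```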

Various tools, such as an energy survey or an energy audit, can help uncover efficiency potentials. On average, heating and cooling account for almost half of a typical utility bill. Drafty rooms can be improved by checking windows and doors. The HVAC (heating, ventilation, air-conditioning) system often offers potential for improvement and so does the lighting. Compact fluorescent lights (CFL) are more efficient than electric bulbs. Passive buildings (Miller et al. 2009) and zero net energy (ZNE) buildings (Hernandez and Kenny 2010; Elkinton et al. 2009) are more energy efficient than traditional ones. For ZNE buildings, embodied energy (Venkatarama Reddy and Jagadish 2003) can be considered. This is the quantity of energy required to manufacture and transport the materials utilized for their construction. According to Venkatarama Reddy and Jagadish (2003), the total embodied energy of load-bearing masonry buildings can be reduced by 50 % when energy-efficient/alternative building materials are used. Landscaping around private homes can also bring measurable energy savings. Carefully positioned trees can save up to 25 % of a household’s energy consumption for heating and cooling. They can, apart from giving a nice appearance, provide shade and shelter from wind. Payback times for such planting measures can be as low as several years (DOE 1995). Microgeneration for individual houses is another interesting technology option for the energy savvy. A small combined heat and power (CHP) system to produce electricity and heat for a community or a single household is known as microgeneration (Entchev et al. 2004). The most promising technologies are Stirling engines and fuel cells in a size range of approx. 1–10 kWe. Total efficiencies can be typically 80–88 % (Entchev et al. 2004).



Fig. 13 Residential electricity saving potential in the year 2030 (Reprinted with permission from Brown (2008))

It is estimated that in US buildings, 1/3 of the total energy consumption can be saved at a cost of 2.7 US cents/kWh (Brown 2008); that reference also covers natural gas savings. Figure 13 shows the electricity saving potential for the residential area, and Fig. 14 the same scenario for the commercial sector. It can be seen in Fig. 13 that in the residential area, a huge potential exists for TV sets, lighting, and space cooling, with freezers already being rather optimized. Figure 14 takes a look at the commercial sector, where space cooling and lighting offer large potential, with the most cost-effective opportunities residing in space heating and ventilation. Energy efficiency in the residential area is covered in International Energy Agency (2008). A guide on energy efficiency for home owners can be found in Krigger and Dorsi (2008). Smart metering has been suggested for enhancing residential energy efficiency (Anda and Temmen 2014).
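A simple way to read the 2.7 US cents/kWh figure above is to compare it with the retail electricity price: every kWh that costs 2.7 cents to conserve but would have cost the retail rate to buy yields the difference as net benefit. The sketch below works this out for an assumed household; the retail price, annual consumption, and the application of the one-third saving share to a single household are illustrative assumptions.

```python
# Comparing the cost of conserved energy quoted in the text (about 2.7 US
# cents per kWh saved) with an assumed retail electricity price. The retail
# price and the household consumption are illustrative assumptions.

COST_OF_CONSERVED_KWH = 0.027     # USD per kWh saved (from the text)
RETAIL_PRICE = 0.12               # assumed USD per kWh
household_kwh = 10_000            # assumed annual consumption
saving_share = 1 / 3              # text: about one third can be saved

saved_kwh = household_kwh * saving_share
net_benefit = saved_kwh * (RETAIL_PRICE - COST_OF_CONSERVED_KWH)
print(f"Saved electricity: {saved_kwh:,.0f} kWh/a")
print(f"Net benefit:       {net_benefit:,.2f} USD/a")
```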

Appliances Appliances are a collection of electrically powered devices, which can be found in nearly every household. They account for approx. 20 % of a typical household’s energy consumption, with refrigerators, washing machines, and dryers at the top of the consumption list. A “cheap” device can become very costly over its entire lifetime of up to 10 or 20 years (see TCO concept above). In 1978, California took a leading national role in the USA by establishing the first building and appliance standards in the country. Nearly 85 % of all dishwashers in California are Energy Star™ compliant (see later), and 50 % of refrigerators and washing machines conform to these standards, too. What is even more impressive, however, is that this increase in market share occurred within no more than 7 years; see Fig. 15, reprinted with permission from Next 10’s California Green Innovation Index (2010). Typical renewal cycles of appliances in industrialized countries, here the USA, are shown in Fig. 16, reprinted from Okura et al. (2006). Modern appliances consume significantly less energy than older ones.

Lighting
Lighting has played a large part in the public discussion on energy efficiency. As traditional incandescent bulbs, which have an efficiency on the order of 1 % to produce light, are being phased out in many countries, mild panic-buying could be observed in 2009 (Jamieson 2009). Some consumers oppose the


Fig. 14 Commercial electricity saving potential in the year 2030 (Reprinted with permission from Brown (2008))

Fig. 15 Market share of Energy Star™ appliances in California, 1998–2005, for dishwashers, refrigerators, and clothes washers (Reprinted with permission from Next 10's California Green Innovation Index (2010))

Fig. 16 Appliance renewal cycles (Reprinted with permission from Okura et al. (2006))

compact fluorescent lights (CFL), which typically cost four times as much as traditional bulbs. The fact that their energy consumption is one-fifth and that payback times are typically short has not convinced all consumers (yet). There are reservations against the hue of the CFL’s light. CFL that work in dimmers tend



Table 13 Luminous flux emitted by common light sources (Reproduced with permission from Gan et al. (2013)). Lumen is the SI unit of luminous flux, a measure of the total quantity of visible light emitted by a source
Lamp | Lamp wattage | Lumens
Incandescent lamp | 75 W | 950
Compact fluorescent lamp | 15 W | 810
Fluorescent lamp | 36 W | 2,400
LED | 18 W | 1,600

Fig. 17 Annual average of expenditures of households on energy for heating and electricity (Reprinted with permission from Elsevier from Nässén et al. (2008))

to cost more than standard CFL. In Techato et al. (2009), a life cycle analysis of CFL is made. An alternative to CFL is light-emitting diodes (LED) (Principi and Fioretti 2014; Gan et al. 2013). For a comparison of typical light sources, see Table 13.
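Dividing the lumen outputs in Table 13 by the lamp wattages gives the luminous efficacy of each light source, which makes the efficiency gap between incandescent bulbs, CFLs, and LEDs explicit; the sketch below uses only the values from the table.

```python
# Luminous efficacy (lumens per watt) of the lamps listed in Table 13.

table_13 = {                       # lamp: (watts, lumens) from Table 13
    "Incandescent lamp":        (75, 950),
    "Compact fluorescent lamp": (15, 810),
    "Fluorescent lamp":         (36, 2400),
    "LED":                      (18, 1600),
}

for lamp, (watts, lumens) in table_13.items():
    print(f"{lamp}: {lumens / watts:.0f} lm/W")
```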

Consumers
Up to 2/3 of household energy use is for space heating, water heating, and refrigeration (Granade et al. 2009), with lighting playing a lesser role. Another significant share is held by the “plug load”. “Plug load” is a collective term for electrical devices and small appliances. There are virtually hundreds of such small devices in private homes, consuming electricity. The biggest shares are held by TV sets (22 %), DVD players (5 %), PCs (5 %), and microwave ovens (3 %) (Granade et al. 2009). Standby power consumption is a huge energy waster. In Japan, the annual per household standby electricity consumption could be reduced from 437 to 308 kWh from 2002 to 2005 (Granade et al. 2009). Figure 17 shows typical energy expenditures for Swedish households, reproduced from Nässén et al. (2008). It is assumed that with a tripling of energy prices, energy use of private households would decrease by 30 % (Nässén et al. 2008). Energy consciousness of consumers has increased over the last years, partly induced by various initiatives such as Energy Star™; see also below.

Tips and Tricks for Consumers
There are plenty of tips and tricks in various organizations' and authorities' brochures and Internet pages for consumers on how to lower their utility bills. Most of them are common sense, but it is worthwhile to take a look at them to capture some fast savings. Here are a few examples of often unused potential in private homes:


• The temperature of the refrigerator is too low.
• The refrigerator is positioned in a confined space.
• The washing machine is operated half empty with too warm water temperature.
• Open food is stored in refrigerators (liquids need to be covered, and food should be wrapped to avoid moisture release).
• Untight windows.
• Time is not considered (peak electricity is most costly).

Ample advice on how to save energy (energy conservation and energy efficiency) in the household can be found on the Internet, e.g., from governmental sites such as http://energy.gov/energysaver/articles/energy-saver-guide-tips-saving-money-and-energy-home (2015) or organizations like the OECD (http://www.oecd.org/greengrowth/40317373.pdf 2015).

Initiatives for Energy Efficiency
Energy efficiency improvements do not come “naturally”, at least not at the desired speed. In order to overcome the known barriers toward energy efficiency, which were outlined above in this chapter, government action can help. Numerous programs and initiatives to educate people about and to promote energy efficiency have been started by governments, NGOs (nongovernmental organizations), NPOs (nonprofit organizations), for-profit entities, and visionary individuals such as business owners and public celebrities. One such initiative is Energy Star™. The Energy Star® label is used to identify energy-efficient appliances. It was initiated by the DOE (US Department of Energy) and the EPA (US Environmental Protection Agency). Products with the Energy Star™ label usually exceed minimum efficiency standards by a substantial amount. More information on Energy Star® can be found at http://www.energystar.gov/ (2015) and http://www.eu-energystar.org/ (2015). The impact of agreements on energy efficiency is reviewed in Grossman and Krueger (1991).

Other Aspects There are countless areas for hidden or for indirect energy efficiency improvements, some of which are being touched upon here. Advanced packaging, for instance, can save substantial amounts of materials to achieve the same level of good protection. Lightweight packaging will make transportation over long distances more energy efficient. One example is the replacement of bulky glass bottles by composite containers of (recycled) cardboard and plastics. In information technology (IT), there is often an untapped potential for energy savings and efficiency improvements. Anyone who has witnessed the large air-conditioning systems for server rooms will immediately see the potential offered by what has become known as green computing. More details can be found in Minas and Ellison (2009) and Namboodiri (2009). The service sector can also contribute to more energy efficiency. Electronic banking, video telephony, and teleconferencing (Liang et al. 2007), telecommuting (Nelson et al. 2007; Rhee 2008), and fleet management (D’Agosto and Ribeiro 2004) are just a few examples where energy for traveling can be economized.



In general, shifting employment and economic activity from manufacturing to the service sector saves energy and cuts greenhouse gas emissions, because the service sector is much lower in energy intensity. Energy efficiency potentials in hospitals are discussed in Sloan et al. (2009). Energy efficiency under extreme conditions is reviewed in Tin et al. (2010).

Energy Conservation Being a broader term than energy efficiency, energy conservation is about using less energy, with a lower energy service being delivered. Sometimes, it is used synonymously with energy efficiency. Energy saving is without doubt the quickest, most effective, and most cost-efficient way for reducing greenhouse gas emissions, as well as improving air quality, especially in developing countries and in densely populated areas. An example of energy conservation on a private level is, for instance, driving less with one’s car. An organization can study its office lighting setup to remove costly over-illumination, for example. For more information on energy conservation, see Thumann and Dunning (2008), Patrick et al. (2007), Chirarattananon and Taweekun (2003), Jin et al. (2009), Markis and Paravantis (2007), Lin (2007), and Al-Mofleh et al. (2009).

Further Study and Reading
In this section, a few terms that are related to energy efficiency are compiled as a starting point for further exploration by the interested reader.

Dematerialization: By this expression, one can understand the decline of weight and “embedded energy” (cf. embodied energy) of materials in industrial end products over time or, more broadly speaking, the absolute or relative reduction in the quantity of materials required to serve economic functions (Wernick et al. 1996; Tapio et al. 2007). On the one hand, one can observe a decline in weight of certain goods such as PCs; on the other hand, people tend to use more materials as their comfort level increases (e.g., larger homes, larger cars). Trends of dematerialization are reviewed in Wernick et al. (1996). A similar term is ephemeralization, which was coined by R. Buckminster Fuller. It is the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing” (Buckminster Fuller 1973).

Industrial Ecology: Being defined as a “systems-based, multidisciplinary discourse that seeks to understand emergent behaviour of complex integrated human/natural systems” (Allenby 2006), industrial ecology strives for sustainability and eco-efficiency. More information on the topic can be found in Frosch and Gallopoulos (1989).

Eco-efficiency: According to the World Business Council for Sustainable Development (WBCSD), it is expressed as:
• Reduction in the material intensity of goods or services
• Reduction in the energy intensity of goods or services
• Reduced dispersion of toxic materials
• Improved recyclability
• Maximum use of renewable resources
• Greater durability of products



• Increased service intensity of goods and services More information can be found in World Business Council for Sustainable Development (WBCSD) (2000). Water efficiency: Water efficiency is closely linked to water conservation. It can be defined as the accomplishment of a function, task, process, or result with the minimal amount of water feasible. Effluent reuse is one important means of achieving water efficiency (White and Howe 1998). It is estimated that each m3 of water utilized in the industrial and service sectors generates at least 200 times more wealth than it does in the agricultural sector (Beaumont 2000). This suggests that water-intensive production will be shifted from arid regions to those with more water (compare the shift of CO2-intensive production to certain areas). Here, the concept of virtual water (Allan 2005; Chapagain 2006) steps into place. Virtual water, also called embedded water, embodied water, or hidden water, refers to the water needed to manufacture a good or service. Yearly individual water consumption is on the order of 1 m3 for drinking, 100 m3 for domestic use, and 1,000 m3 embedded in food. This shows that the concept of virtual water is closely linked to water efficiency and ultimately to energy efficiency. Other burning topics related to energy are the affordability of energy and access to energy, which are both not secured for a high number of people.

Conclusions
This chapter has taken a look at energy efficiency in industry, transportation, the private sector, and other areas, exploring a topic of high relevance for climate change mitigation. Energy efficiency is the cheapest and easiest “source” of energy, with a huge unused potential. It is estimated that up to 1/3 of the worldwide energy demand in 2050 can be saved by energy efficiency measures. In its “International Energy Outlook 2014,” the EIA (US Energy Information Administration) notes growing energy efficiency in the transportation sector, which, in OECD Europe, has already induced a decline in the consumption of liquid fuels (EIA (US Energy Information Administration) 2014). Energy efficiency has started to proliferate, and there is still a lot of potential. In this chapter, aspects of energy efficiency from various sectors were presented, spanning historic data, current levels, and future trends. An emphasis was placed on providing brief information and references on how energy efficiency improvements can be realized.

References
Abdul Quader AKM (2003) Natural gas and the fertilizer industry. Energy Sustain Dev 7(2):40–48
Agarwal AK (2007) Biofuels (alcohols and biodiesel) applications as fuels for internal combustion engines. Prog Energy Combust Sci 33(3):233–271
Åhman M (2001) Primary energy efficiency of alternative powertrains in vehicles. Energy 26(11):973–989
Tehrani Nejad MA (2007) Allocation of CO2 emissions in petroleum refineries to petroleum joint products: a linear programming model for practical application. Energy Econ 29(4):974–997
Al-Kharabsheh S (2006) An innovative reverse osmosis desalination system using hydrostatic pressure. Desalination 196(1–3):210–214



Allan JA (2005) Virtual water: a strategic resource global solutions to regional deficits. Ground Water 36(4):545–546 Allenby B (2006) The ontologies of industrial ecology. Prog Ind Ecol 3(1/2):28–40 Al-Mansour F, Merse S, Tomsic M (2003) Comparison of energy efficiency strategies in the industrial sector of Slovenia. Energy 28(5):421–440 Al-Mofleh A, Taib S, Mujeebu MA, Salah W (2009) Analysis of sectoral energy conservation in Malaysia. Energy 34(6):733–739 Alvarado S, Maldonado P, Barrios A, Jaques I (2002) Long term energy-related environmental issues of copper production. Energy 27(2):183–196 Amtrak (2015) http://www.amtrak.com/. Accessed 1 Jan 2015 Anastas P, Warner JC (2000) Green chemistry: theory and practice. Oxford University Press, Oxford. ISBN 978-0198506980 Anda M, Temmen J (2014) Smart metering for residential energy efficiency: the use of community based social marketing for behavioural change and smart grid introduction. Renew Energy 67:119–127 Ang BW (2006) Monitoring changes in economy-wide energy efficiency: from energy–GDP ratio to composite efficiency index. Energy Policy 34(5):574–582 Annunziata E, Rizzi F, Frey M (2014) Enhancing energy efficiency in public buildings: the role of local energy audit programmes. Energy Policy 69:364–373 Atilla Oner M, Nuri Basoglu A, Sýtký Kok M (2007) Megatrends as perceived in Turkey in comparison to Austria and Germany. Technol Forecast Soc Chang 74(4):538–557 Axelsson E, Olsson MR, Berntsson T (2008) Opportunities for process-integrated evaporation in a hardwood pulp mill and comparison with a softwood model mill study. Appl Therm Eng 28(16):2100–2107 Ball M, Wietschel M (2009) The hydrogen economy: opportunities and challenges. Cambridge University Press, Cambridge. ISBN 978-0521882163 Bannister K (2010) Industrial energy efficiency handbook: eliminating energy waste from mechanical systems. Mcgraw-Hill Book. ISBN: 978-0071490665, New York, USA BBC (2008) Airline in first biofuel flight, BBC News UK, Sunday, 24 Feb 2008. http://news.bbc.co.uk/2/ hi/7261214.stm. Accessed 1 Jan 2015 Beaumont P (2000) The quest for water efficiency – restructuring of water use in the Middle East. Water Air Soil Pollut 123(1–4):551–564 Bedford N, Pitcher G (2005) Austria, Lonely planet Austria. Lonely Planet Publications, Torino, page 56. ISBN 978-1740594844 Belaissaoui B, Le Moullec Y, Hagi H, Favre E (2014) Energy efficiency of oxygen enriched air production technologies: cryogeny vs membranes. Sep Purif Technol 125:142–150 Berns H, Theisen W, Scheibelein G (2008) Ferrous materials: steel and cast iron. Springer, Berlin. ISBN 978-3540718475 Bevilacqua M, Braglia M (2002) Environmental efficiency analysis for ENI oil refineries. J Clean Prod 10(1):85–92 Bieling H-H (2007) Chemical reaction – an energy-intensive industry finds the solution in CHP. Cogeneration & On-Site Power. http://www.cospp.com/articles/article_display.cfm?ARTICLE_ ID=288130&p=122. Accessed 1 Jan 2015 Bilek M, Hardy C, Lenzen M, Dey C (2008) Life-cycle energy balance and greenhouse gas emissions of nuclear energy: a review. Energy Convers Manag 49(8):2178–2199 Blok K, Luiten EEM, De Groot HLF (2004) The effectiveness of policy instruments for energy-efficiency improvement in firms: the dutch experience. Springer, Dordrecht. ISBN 978-1402019654

Page 51 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Bologna M, Flores JC (2008) A simple mathematical model of society collapse applied to Easter Island. EPL 81:48006 Bor YJ (2008) Consistent multi-level energy efficiency indicators and their policy implications. Energy Econ 30(5):2401–2419 Boyce MP (2006) The gas turbine engineering handbook, 3rd edn. Elsevier, Oxford. ISBN 978-0750678469 Braun E, Leiber W (2007) The right pump lowers total cost of ownership. World Pumps 2007(491):30–33 Braungart M, McDonough W, Bollinger A (2007) Cradle-to-cradle design: creating healthy emissions – a strategy for eco-effective product and system design. J Clean Prod 15(13–14):1337–1348 Brown R (2008) U.S. building-sector energy efficiency potential. Lawrence Berkeley National Laboratory, LBNL paper LBNL-1096E. Retrieved from http://www.escholarship.org/uc/item/8vs9k2q8. Accessed 1 Jan 2015 Buchanan JM, Stubblebine WC (1962) Externality. Econ New Ser 29(116):371–384 Buckminster Fuller R (1973) Nine chains to the moon. Jonathan Cape, London. ISBN 978-0224008006 Budin R, Mihelić-Bogdanić A, Sutlović I, Filipan V (2006) Advanced polymerization process with cogeneration and heat recovery. Appl Therm Eng 26(16):1998–2004 Bujak J (2009) Experimental study of the energy efficiency of an incinerator for medical waste. Appl Energy 86(11):2386–2393 Burgin N, Wilson PA (1985) The influence of cable forces on the efficiency of kite devices as a means of alternative propulsion. J Wind Eng Ind Aerodyn 20(1–3):349–367 Çakir U, Çomakli K, Y€ uksel F (2012) The role of cogeneration systems in sustainability of energy. Energy Convers Manag 63:196–202 Callen HB (1985) Thermodynamics and an introduction to thermostatistics, 2nd edn. Wiley, New York. ISBN 978 0471862567 Chambadal P (1957) Les centrales nucléaires, vol 4. Armand Colin, Paris, pp 1–58 Chang D, Rhee T, Nam K, Chang K, Lee D, Jeong S (2008) A study on availability and safety of new propulsion systems for LNG carriers. Reliab Eng Syst Saf 93(12):1877–1885 Chapagain AK (2006) Globalisation of water: opportunities and threats of virtual water trade. Taylor & Francis, London. ISBN 978-0415409162 Charcosset C (2009) A review of membrane processes and renewable energies for desalination. Desalination 245(1–3):214–231 Cherubini F, Raugei M, Ulgiati S (2008) LCA of magnesium production: technological overview and worldwide estimation of environmental burdens. Resour Conserv Recycl 52(8–9):1093–1100 Cherubini F, Bargigli S, Ulgiati S (2009) Life cycle assessment (LCA) of waste management strategies: landfilling, sorting plant and incineration. Energy 34(12):2116–2123 Chirarattananon S, Taweekun J (2003) A technical review of energy conservation programs for commercial and government buildings in Thailand. Energy Convers Manag 44(5):743–762 CHP Installation Database (2015) ICF International/EEA. http://www.eea-inc.com/chpdata/index.html. Accessed 1 Jan 2015 Christensen CM, Overdorf M, Thomke S (2001) Harvard business review on innovation. Mcgraw-Hill Professional. ISBN: 978-1578516148, New York, USA Clark A (2001) Making provision for energy-efficiency investment in changing markets: an international review. Energy Sustain Dev 5(2):26–38 Climate Action Team (2015) http://www.climatechange.ca.gov/climate_action_team/index.html. Accessed 1 Jan 2015 Coletti F, Macchietto S (2009a) A heat exchanger model to increase energy efficiency in refinery pre heat trains. Comput Aided Chem Eng 26:1245–1250 Page 52 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Coletti F, Macchietto S (2009b) Predicting refinery energy losses due to fouling in heat exchangers. Comput Aided Chem Eng 27:219–224 Collantes G, Sperling D (2008) The origin of California’s zero emission vehicle mandate. Transp Res A Policy Pract 42(10):1302–1313 Costa A, Paris J, Towers M, Browne T (2007) Economics of trigeneration in a kraft pulp mill for enhanced energy efficiency and reduced GHG emissions. Energy 32(4):474–481 Costa A, Bakhtiari B, Schuster S, Paris J (2009) Integration of absorption heat pumps in a Kraft pulp process for enhanced energy efficiency. Energy 34(3):254–260 Cullen JM, Allwood JM (2010) Theoretical efficiency limits for energy conversion devices. Energy 35(5):2059–2069 Curzon FL, Ahlborn B (1975) Efficiency of a carnot engine at maximum power output. Am J Phys 43:22–24 D’Agosto M, Ribeiro SK (2004) Eco-efficiency management program (EEMP) – a model for road fleet operation. Transp Res Part D: Transp Environ 9(6):497–511 da Graça Carvalho M, Nogueira M (1997) Improvement of energy efficiency in glass-melting furnaces, cement kilns and baking ovens. Appl Therm Eng 17(8–10):921–933 Danish Ministry of Transport and Energy (2005) Action plan for renewed energy-conservation. ISBN: 87-7844-564-7, Copenhagen, Denmark. http://188.64.159.37/graphics/Publikationer/Energipolitik_ UK/Action_plan_for_renewed_energy_conservation/index.htm Davies REG, Birtles PJ (1999) Comet: the world’s first jet airliner. Paladwr Press, McLean. ISBN 1-888962-14-3 de Swaan Arons J (2010) Efficiency and sustainability in the energy and chemical industries: scientific principles and case studies, 2nd edn. CRC Press, Boca Raton. ISBN 978-1439814710 DECHEMA (2013) Energy and GHG reductions in the chemical industry via catalytic processes: ANNEXES. http://www.dechema.de/dechema_media/Chemical_Roadmap_2013_Annexes-p-4582view_image-1-called_by-dechema2013-original_site-dechema_eV-original_page-124930.pdf. Accessed 1 Jan 2015 Demirbas A, Caglar A, Akdeniz F, Gullu D (2000) Conversion of olive husk to liquid fuel by pyrolysis and catalytic liquefaction. Energy Sources Part A Recov Util Environ Eff 22(7):631–639 Deolalkar SP (2009) Handbook for designing cement plants. CRC Press. ISBN: 978-8178001456, Boca Raton, USA Dewulf J, van Langenhove H, Muys B, Stijn B, Bakshi BR, Grubb GF, Paulus DM, Sciubba E (2008) Exergy: its potential and limitations in environmental, science and technology. Environ Sci Technol 42(7):2221–2232 Dijkgraaf E, Vollebergh HRJ (2004) Burn or bury? A social cost comparison of final waste disposal methods. Ecol Econ 50(3–4):233–247 DOE (1995) Landscaping for energy efficiency. DOE/GO-10095-046 FS 220, The Energy Efficiency and Renewable Energy Clearinghouse (EREC), Merrifield, USA. http://www1.eere.energy.gov/library/ pdfs/16632.pdf Doheim MA, Sayed SA, Hamed OA (1987) Analysis of waste heat and its recovery in a cement factory. Heat Recovery Syst CHP 7(5):441–444 Doukas H, Papadopoulou AG, Psarras J, Ragwitz M, Schlomann B (2008) Sustainable reference methodology for energy end-use efficiency data in the EU. Renew Sustain Energy Rev 12(8):2159–2176 Drucker PF (2003) The essential Drucker: the best of sixty years of Peter Drucker’s essential writings on management. HarperCollins, New York. ISBN 978-0060935740

Page 53 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Ehrhardt-Martinez K, Laitner JA (2008) The size of the U.S. energy efficiency market: generating a more complete picture. ACEEE, Washington, DC EIA (2015) US Energy Information Administration. http://www.eia.doe.gov/emeu/international/ energyconsumption.html. Accessed 1 Jan 2015 EIA (U.S. Energy Information Administration) (2014) International energy outlook 2014. http://www.eia. gov/forecasts/ieo/pdf/0484%282014%29.pdf. Accessed 1 Jan 2015 Einstein D, Worrell E, Khrushch M (2001) Steam systems in industry: energy use and energy efficiency improvement potentials. Lawrence Berkeley National Laboratory, LBNL paper LBNL-49081. Retrieved from http://www.escholarship.org/uc/item/3m1781f1. Accessed 1 Jan 2015 Electric Power Research Institute (EPRI) (2015) http://www.epri.com/. Accessed 1 Jan 2015 Elkinton MR, McGowan JG, Manwell JF (2009) Wind power systems for zero net energy housing in the United States. Renew Energy 34(5):1270–1278 Ellenberger P (2010) Piping and pipeline calculations manual: construction, design fabrication and examination. Butterworth Heinemann, Amsterdam. ISBN 978-1856176934 Ellinger R, Meitz K, Prenninger P, Salchenegger S, Brandst€atter W (2001) Comparison of CO2 emission levels for internal combustion engine and fuel cell automotive propulsion systems. SAE paper 200101-3751, Warrendale, USA. http://papers.sae.org/2001-01-3751/ Elvers B (2007) Handbook of fuels: energy sources for transportation. Wiley-VCH, Weinheim. ISBN 978-3527307401 Energy Efficient Motor Driven Systems (2010) The Motor Challenge Programme. http://www.motorchallenge.eu/. Accessed 1 Jan 2015 Entchev E, Gusdorf J, Swinton M, Bell M, Szadkowski F, Kalbfleisch W, Marchand R (2004) Microgeneration technology assessment for housing technology. Energy Build 36(9):925–931 Etchells JC (2005) Process intensification: safety pros and cons. Process Saf Environ Prot 83(2):85–89 Eyring V, Isaksen ISA, Berntsen T, Collins WJ, Corbett JJ, Endresen O, Grainger RG, Moldanova J, Schlager H, Stevenson DS (2010) Transport impacts on atmosphere and climate: shipping. Atmos Environ 44(37):4735–4771 Fadare DA, Bamiro OA, Oni AO (2010) Energy and cost analysis of organic fertilizer production in Nigeria. Energy 35(1):332–340 Fahim MA, Al-Sahhaf TA, Lababidi HMS (2009) Fundamentals of petroleum refining. Elsevier Science & Technology, Amsterdam. ISBN 978-0444527851 FAQ, US Energy Information Administration (2014) How much electricity is lost in transmission and distribution in the United States? http://www.eia.gov/tools/faqs/faq.cfm?id=105&t=3. Accessed 1 Jan 2015 Farzaneh-Gord M, Deymi-Dashtebayaz M (2009) A new approach for enhancing performance of a gas turbine (case study: Khangiran refinery). Appl Energy 86(12):2750–2759 Fath HES, Hashem HH (1988) Waste heat recovery of dura (Iraq) oil refinery and alternative cogeneration energy plant. Heat Recovery Syst CHP 8(3):265–270 Favre B, Peuportier B (2014) Application of dynamic programming to study load shifting in buildings. Energy Build 82:57–64 Frosch RA, Gallopoulos NE (1989) Strategies for manufacturing. Sci Am 261(3):144–152 Gahleitner G (2013) Hydrogen from renewable electricity: an international review of power-to-gas pilot plants for stationary applications. Int J Hydrog Energy 38(5):2039–2061 Gale J, Freund P (2014) Greenhouse gas abatement in energy intensive industries. IEA Greenhouse Gas R&D Programme. http://ccs101.ca/assets/Documents/ghgt5.pdf. 
Accessed 1 Jan 2015 Galitsky C (2008) Energy efficiency improvement and cost saving opportunities for the pharmaceutical industry, an ENERGY STAR guide for energy and plant managers. Lawrence Berkeley National Page 54 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Laboratory, LBNL paper, LBNL-57260. Retrieved from http://www.escholarship.org/uc/item/ 9zw158vm. Accessed 1 Jan 2015 Gan CK, Sapar AF, Mun YC, Chong KE (2013) Techno-economic analysis of LED lighting: a case study in UTeM’s faculty building. Procedia Eng 53:208–216 Gehani N (2003) Bell Labs: life in the crown jewel. Silicon Press, Summit. ISBN 978-0929306278 Geller H, Harrington P, Rosenfeld AH, Tanishima S, Unander F (2006) Polices for increasing energy efficiency: thirty years of experience in OECD countries. Energy Policy 34(5):556–573 Ghafoori E, Flynn PC, Feddes JJ (2007) Pipeline vs. truck transport of beef cattle manure. Biomass Bioenergy 31(2–3):168–175 Glass Manufacturing Industry Council (GMIC) (2015) http://www.gmic.org/. Accessed 1 Jan 2015 Gomri R (2009) Energy and exergy analyses of seawater desalination system integrated in a solar heat transformer. Desalination 249(1):188–196 Gow D (2009) Russia-Ukraine gas crisis intensifies as all European supplies are cut off. The Guardian, 7 Jan 2009. http://www.theguardian.com/business/2009/jan/07/gas-ukraine. Accessed 1 Jan 2015 Granade HC, Creyts J, Derkach A, Farese P, Nyquist S, Ostrowski K (2009) Unlocking energy efficiency in the U.S. economy, McKinsey Global Energy and Materials. McKinsey & Company, Washington DC. http://www.mckinsey.com/client_service/electric_power_and_natural_gas/latest_ thinking/unlocking_energy_efficiency_in_the_us_economy Graus W, Worrell E (2009) Trend in efficiency and capacity of fossil power generation in the EU. Energy Policy 37:2147–2160 Grossman G, Krueger A (1991) Environmental impacts of a North American free trade agreement, National Bureau of Economic Research, Working paper, 3914. NBER, Cambridge, MA Guineé JB (2002) Handbook on life cycle assessment: operational guide to the ISO standards, 2nd edn, Eco-efficiency in industry and science. Springer, Dordrecht. ISBN 978-1402005572 Gunner A, Hultmark G, Vorre A, Afshari A, Bergsøe NC (2014) Energy-saving potential of a novel ventilation system with decentralised fans in an office building. Energy Build 84:360–366 Hadjipaschalis I, Poullikkas A, Efthimiou V (2009) Overview of current and future energy storage technologies for electric power applications. Renew Sustain Energy Rev 13(6–7):1513–1522 Hall DO, Rao K (1999) Photosynthesis, 6th edn. Cambridge University Press, Cambridge. ISBN 978-0521644976 Hamelinck CN, Faaij APC (2002) Future prospects for production of methanol and hydrogen from biomass. J Power Sources 111(1):1–22 H€aring H-W, Belloni A, Ahner C (2007) Industrial gases processing. Wiley-VCH, Weinheim. ISBN 978-3527316854 Heitland H, Hiller H, Hoffmann HJ (1990) Factors influencing CO2 emission of future passenger car traffic. MTZ 51:2 Hekkert MP, Hendriks FHJF, Faaij APC, Neelis ML (2005) Natural gas as an alternative to crude oil in automotive fuel chains well-to-wheel analysis and transition strategy development. Energy Policy 33(5):579–594 Hernandez P, Kenny P (2010) From net energy to zero energy buildings: defining life cycle zero energy buildings (LC-ZEB). Energy Build 42(6):815–821 Herring H, Sorrell S (2009) Energy efficiency and sustainable consumption: the rebound effect, Energy, climate and the environment. Palgrave, New York. ISBN 978-0230525344 Heselton KE (2004) Boiler operator’s handbook. Marcel Dekker, New York. ISBN 978-0824742904 Hinderink P, van der Kooi HJ, De Swaan Arons J (1999) On the efficiency and sustainability of the process industry. Green Chem 176–180. 
http://www.rsc.org/delivery/_ArticleLinking/ DisplayArticleForFree.cfm?doi=a909915h&JournalCode=GC. Accessed 1 Jan 2015 Page 55 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Hirscher M, Hirose K (2010) Handbook of hydrogen storage: new materials for future energy storage. Wiley-VCH, Weinheim. ISBN 978-3527322732 Hoffmann KH, Burzler JM, Schubert S (1997) Endoreversible thermodynamics. J Non-Equilib Thermodyn 22(4):311–355 Hollinger P (2014) Europe risks ‘significant’ gas shortages this winter. Financial Times, 11 July 2014. http://www.ft.com/cms/s/0/a119b2e4-082e-11e4-acd8-00144feab7de.html#axzz3NCxSbl9h. Accessed 1 Jan 2015 Holmberg K, Andersson P, Erdemir A (2012) Global energy consumption due to friction in passenger cars. Tribol Int 47:221–234 Hori M (2008) Nuclear energy for transportation: paths through electricity, hydrogen and liquid fuels. Prog Nucl Energy 50(2–6):411–416 http://www.energystar.gov/ (2015). Accessed 1 Jan 2015 http://www.epa.gov/nrmrl/std/lca/lca.html (2015). Accessed 1 Jan 2015 http://www.essentialchemicalindustry.org/processes/recycling-in-the-chemical-industry.html (2015). Accessed 1 Jan 2015 http://www.eu-energystar.org/ (2015). Accessed 1 Jan 2015 http://www.fueleconomy.gov/ (2015). Accessed 1 Jan 2015 http://www.ics-shipping.org/publications/ (2015). Accessed 1 Jan 2015 http://www.osti.gov/glass/bestpractices.html (2015). Accessed 1 Jan 2015 Ibrahim H, Ilinca A, Perron J (2008) Energy storage systems – characteristics and comparisons. Renew Sustain Energy Rev 12(5):1221–1250 IEA (2009) World energy outlook 2009. International Energy Association (IEA), Paris. ISBN 9789264061309 IEA (2014a) World energy outlook 2014. ISBN 978-92-64-20804-9. http://www.iea.org/W/bookshop/ 477-World_Energy_Outlook_2014. Accessed 1 Jan 2015 IEA (2014b) World energy outlook 2014. Presentation to the Press. http://www.worldenergyoutlook.org/ media/weowebsite/2014/WEO2014_LondonNovember.pdf. Accessed 1 Jan 2015 Intergovernmental Panel on Climate Change (IPCC) (2015) http://www.ipcc.ch/. Accessed 1 Jan 2015 International Energy Agency (2008) Promoting energy efficiency investments: case studies in the residential sector. Organization for Economic Cooperation & Development, Paris. ISBN 978-9264042148 International Energy Agency (2014) CO2 emissions from fuel combustion, IEA statistics. http://www.iea. org/publications/freepublications/publication/CO2EmissionsFromFuelCombustionHighlights2014. pdf. Accessed 1 Jan 2015 International Transport Forum (2013) Statistics brief, Dec 2013, Global transport trends in perspective. http://www.internationaltransportforum.org/statistics/StatBrief/2013-12-Trends-Perspective.pdf. Accessed 1 Jan 2015 IPCC (2000) Aviation and the global atmosphere. IPCC special reports on climate change. http://www. grida.no/publications/other/ipcc_sr/?src=/climate/ipcc/aviation/avf9-3.htm. Accessed 1 Jan 2015 Iriarte A, Almeida MG, Villalobos P (2014) Carbon footprint of premium quality export bananas: case study in Ecuador, the world’s largest exporter. Sci Total Environ 472:1082–1088 ISO (2015) http://www.iso-14001.org.uk/index.htm. Accessed 1 Jan 2015 Jaffe AB, Stavins RN (1994) The energy-efficiency gap, what does it mean? Energy Policy 22(10):804–810 Jamieson A (2009) Customers buy up traditional light bulbs before switch to low energy alternatives. The Telegraph, 18 Apr 2009. http://www.telegraph.co.uk/technology/news/5179266/Customers-buy-uptraditional-light-bulbs-before-switch-to-low-energy-alternatives.html. Accessed 1 Jan 2015 Page 56 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Jechoutek KG, Lamech R (1995) New directions in electric power financing. Energy Policy 23(11):941–953 Jevons WS (2008) The coal question. Lulu Press, Gloucester. ISBN 978-1409952312 Jiang P (2011) Analysis of national and local energy-efficiency design standards in the public building sector in China. Energy Sustain Dev 15(4):443–450 Jiang R, Rong C, Chu D (2004) Determination of energy efficiency for a direct methanol fuel cell stack by a fuel circulation method. J Power Sources 126(1–2):119–124 Jin JC, Choi J-Y, Yu ESH (2009) Energy prices, energy conservation, and economic growth: evidence from the postwar United States. Int Rev Econ Financ 18(4):691–699 Johansson J-E (2015) Compelling facts about plastics, plastics Europe. http://www.plasticseurope.org/. Accessed 1 Jan 2015 Johansson B, Åhman M (2002) A comparison of technologies for carbon-neutral passenger transport. Transp Res Part D: Transp Environ 7(3):175–196 Johnston P, Stringer R (2001) Chlorine and the environment: an overview of the chlorine industry. Springer, Dordrecht. ISBN 978-0792367970 Jönsson J, Algehed J (2010) Pathways to a sustainable European kraft pulp industry: trade-offs between economy and CO2 emissions for different technologies and system solutions. Appl Therm Eng 30(16):2315–2325 Jordan P, Jordan JW, McClelland IL (1996) Usability evaluation in industry. Taylor & Francis, London. ISBN 978-0748404605 Joshi R, Pathak M (2014) Decentralized grid-connected power generation potential in India: from perspective of energy efficient buildings. Energy Procedia 57:716–724 Joskow PL, Marron DB (1993) What does a negawatt really cost? Further thoughts and evidence. Electr J 6(6):14–26 Kamga C, Yazici MA (2014) Achieving environmental sustainability beyond technological improvements: potential role of high-speed rail in the United States of America. Transp Res Part D: Transp Environ 31:148–164 Kania JJ (1984) Economics of coal transport by slurry pipeline versus unit train: a case study. Energy Econ 6(2):131–138 Kanimozhi R, Selvi K, Balaji KM (2014) Multi-objective approach for load shedding based on voltage stability index consideration. Alex Eng J 53(4):817–825 Ke J, Price L, Ohshita S, Fridley D, Khanna NZ, Zhou N, Levine M (2012) China’s industrial energy consumption trends and impacts of the Top-1000 Enterprises Energy-Saving Program and the Ten Key Energy-Saving Projects. Energy Policy 50:562–569 Kemp RJ (1994) The European high speed network. In: Feilden GBR, Wickens AH, Yates I (eds) Passenger transport after 2000 AD. Spon Press, London. ISBN 0419194703 Kemp RJ (1997) Rail transport in the next Millennium, Visions of Tomorrow. IMechE 150 year symposium, London. ISBN: 186058098X Kemp R (2004) Take the car and save the planet. Power Eng 18(5):12–17 Khurana S, Banerjee R, Gaitonde U (2002) Energy balance and cogeneration for a cement plant. Appl Therm Eng 22(5):485–494 Kilponen L, Ahtila P, Parpala J, Pihko M (2001) Improvement of pulp mill energy efficiency in an integrated pulp and paper mill. In: Proceedings ACEEE summer study on energy efficiency in industry, Washington DC, pp 363–374. http://aceee.org/files/proceedings/2001/data/papers/SS01_ Panel1_Paper32.pdf Kim J, Park C (2010) Wind power generation with a parawing on ships, a proposal. Energy 35(3):1425–1432 Page 57 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Kodama Y, Kakugawa A, Takahashi T, Kawashima H (2000) Experimental study on microbubbles and their applicability to ships for skin friction reduction. Int J Heat Fluid Flow 21(5):582–588 Koros WJ, Fleming GK (1993) Membrane-based gas separation. J Membr Sci 83(1):1–80 Kotegawa T, Fry D, DeLaurentis D, Puchaty E (2014) Impact of service network topology on air transportation efficiency. Transp Res Part C Emerg Technol 40:231–250 Krigger J, Dorsi C (2008) The homeowner’s handbook to energy efficiency: a guide to big and small improvements. Saturn Resource Management, Helena. ISBN 978-1880120187 Kruyt B, van Vuuren DP, de Vries HJM, Groenenberg H (2009) Indicators for energy security. Energy Policy 37(6):2166–2181 Kumar S (2002) Cleaner production technology and bankable energy efficiency drives in fertilizer industry in India to minimise greenhouse gas emissions – case study. In: Greenhouse gas control technologies – 6th international conference, Pergamon Press, Oxford. pp 1031–1036 Kumar A, Cameron JB, Flynn PC (2007) Pipeline transport of biomass. Appl Biochem Biotechnol 113(1–3):27–39 Kumar A, Demirel Y, Jones DD, Hanna MA (2010) Optimization and economic evaluation of industrial gas production and combined heat and power generation from gasification of corn stover and distillers grains. Bioresour Technol 101(10):3696–3701 Lackner M (2007) Innovation in business unit pipe: shaping a strategy for the future. Master thesis, LIMAK Johannes Kepler University Business School, Linz Lackner M (ed) (2009) Alternative ignition systems. ProcessEng Engineering GmbH, Vienna. ISBN 978-3902655059 Lackner M (ed) (2010) Scale-up in metallurgy. ProcessEng Engineering GmbH, Vienna. ISBN 978-3902655-10-3 Lackner M, Winter F, Geringer B (2005) Chemie im Motor. Chemie in unserer Zeit 4:228–229 Lackner M, Winter F, Agarwal AK (2010) Handbook of combustion. Wiley-VCH, Weinheim. ISBN 978-3527324491 Lackner M, Palotás AB, Winter F (2013) Combustion: from basics to applications. Wiley-VCH, Weinheim. ISBN 978-3527333516 Ladha JK, Pathak H, Krupnik TJ, Six J, van Kessel C (2005) Efficiency of fertilizer nitrogen in cereal production: retrospects and prospects. Adv Agron 87:85–156 Le Pen Y, Sévi B (2010) What trends in energy efficiencies? Evidence from a robust test. Energy Econ 32(3):702–708 Lechtenböhmer S, Dienst C, Fischedick M, Hanke T, Fernandez R, Robinson D, Kantamaneni R, Gillis B (2007) Tapping the leakages: methane losses, mitigation options and policy issues for Russian long distance gas transmission pipelines. Int J Greenhouse Gas Control 1(4):387–395 Lee M-K, Park H, Noh J, Painuly JP (2003) Promoting energy efficiency financing and ESCOs in developing countries: experiences from Korean ESCO business. J Clean Prod 11(6):651–657 Li T, Hassan M, Kuwana K, Saito K, King P (2006) Performance of secondary aluminum melting: thermodynamic analysis and plant-site experiments. Energy 31(12):1769–1779 Li Z, Gao D, Chang L, Liu P, Pistikopoulos EN (2010) Coal-derived methanol for hydrogen vehicles in China: energy, environment, and economic analysis for distributed reforming. Chem Eng Res Des 88(1):73–80 Liang Y, Lee Y-C, Teng A (2007) Real-time communication: internet protocol voice and video telephony and teleconferencing. In: Multimedia over IP and wireless networks. Academic Press, New York, pp 503–525 Lin J (2007) Energy conservation investments: a comparison between China and the US. Energy Policy 35(2):916–924 Page 58 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Liu F, Ross M, Wang S (1995) Energy efficiency of China’s cement industry. Energy 20(7):669–681 López MA, de la Torre S, Martín S, Aguado JA (2015) Demand-side management in smart grid operation considering electric vehicles load shifting and vehicle-to-grid support. Int J Electr Power Energy Syst 64:689–698 Loughran DS, Kulick J (2004) Demand-side management and energy efficiency in the United States. Energy J 25(1):19–44 Lugt PM, de Niet A, Bouwman WH, Bosma JCN, van den Bleek CM (1996) Catalytic removal of NOx from total energy installation flue-gases for carbon dioxide fertilization in greenhouses. Catal Today 29(1–4):127–131 Lund P (2006) Market penetration rates of new energy technologies. Energy Policy 34(17):3317–3326 Lutz E (2008) Identification and analysis of energy saving projects in a Kraft mill. Pulp Paper Can 109(5):13–17 Malça J, Freire F (2006) Renewability and life-cycle energy efficiency of bioethanol and bio-ethyl tertiary butyl ether (bioETBE): assessing the implications of allocation. Energy 31(15):3362–3380 Malkov T (2004) Novel and innovative pyrolysis and gasification technologies for energy efficient and environmentally sound MSW disposal. Waste Manag 24(1):53–79 Mandal SK, Madheswaran S (2010) Environmental efficiency of the Indian cement industry: an interstate analysis. Energy Policy 38(2):1108–1118 Markis T, Paravantis JA (2007) Energy conservation in small enterprises. Energy Build 39(4):404–415 Marks P (2009) ‘Morphing’ winglets to boost aircraft efficiency. New Sci 201(2692):22–23 Marsh G (2007) Airbus takes on Boeing with reinforced plastic A350 XWB. Reinf Plast 51(11):26–27, 29 Marsh G (2008) Biofuels: aviation alternative? Renew Energy Focus 9(4):48–51 Max Appl (2006) Ammonia. In: Ullmann’s encyclopedia of industrial chemistry. Wiley-VCH, Weinheim McCabe WL, Smith J, Harriott P (2004) Unit operations of chemical engineering, 7th edn. Mcgraw-Hill, New York. ISBN 978-0072848236 McKay G, Holland CR (1981) Energy savings from steam losses on an oil refinery. Eng Cost Prod Econ 5(3–4):193–203 McKinsey & Company, Inc. (2009) Energy: a key to competitive advantage, new sources of growth and productivity. Anja Hartmann, Wolfgang Huhn, Christian Malorny, Martin Stuchtey, Thomas Vahlenkamp, Detlef Kayser, Detlev Mohr, Claudia Funke Frankfurt/Germany http://www.mckinsey. com/~/media/mckinsey/dotcom/client_service/sustainability/pdfs/energy_competitive_advantage_in_ germany.ashx McLean-Conner P (2009) Energy efficiency: principles and practices. Pennwell, Tulsa. ISBN 978-1593701789 McMichael AJ, Powles JW, Butler CD, Uauy R (2007) Food, livestock production, energy, climate change, and health. Lancet 370:1253–1263 Mesa AA, Gómez CM, Azpitarte RU (1997) Design of the maximum energy efficiency desalination plant (PAME). Desalination 108(1–3):111–116 Meyers RA (2004) Handbook of petrochemicals production processes. Mcgraw-Hill Professional, New York. ISBN 978-0071410427 Miller FP, Vandome AF, McBrewster J (2009) Zero-energy building. Energy efficiency in British housing, energy conservation, passive house. Alphascript Publishing. ISBN: 978-6130023331, Beau Bassin, Mauritius Minas L, Ellison B (2009) Energy efficiency for information technology: how to reduce power consumption in servers and data centers. Intel Press, Santa Clara. ISBN 978-1934053201 Mitsos A, Chachuat B, Barton PI (2007) What is the design objective for portable power generation: efficiency or energy density? J Power Sources 164(2):678–687 Page 59 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Moore DA (2005) Sustaining performance improvements in energy intensive industries. In: Proceedings of the twenty-seventh industrial energy technology conference, New Orleans, ESL-IE-05-05-31, 10–13 May 2005 Moors EHM (2006) Technology strategies for sustainable metals production systems: a case study of primary aluminium production in The Netherlands and Norway. J Clean Prod 14(12–13):1121–1138 Morris DR, Steward FR, Evans P (1983) Energy efficiency of a lead smelter. Energy 8(5):337–349 Mudahar MS, Hignett TP (1985) Energy efficiency in nitrogen fertilizer production. Energy Agric 4:159–177 Mundaca L (2009) Energy Efficiency Trading: concepts, practice and evaluation of tradable certificates for energy efficiency improvements. VDM Verlag, Saarbr€ ucken. ISBN 978-3639139730 Musa C, Licheri R, Locci AM, Orrù R, Cao G, Rodriguez MA, Jaworska L (2009) Energy efficiency during conventional and novel sintering processes: the case of Ti–Al2O3–TiC composites. J Clean Prod 17(9):877–882 Nachreiner F, Nickel P, Meyer I (2006) Human factors in process control systems: the design of human–machine interfaces. Saf Sci 44(1):5–26 Naisbitt J (1985) Megatrends: ten new directions transforming our lives. Grand Central Publishing, New York. ISBN: 978-0446512510, Najjar YSH, Habeebullah MB (1991) Energy conservation in the refinery by utilizing reformed fuel gas and furnace flue gases. Heat Recovery Syst CHP 11(6):517–521 Namboodiri V (2009) Algorithms & protocols towards energy-efficiency in wireless networks. VDM Verlag, Saarbr€ ucken. ISBN 978-3639157024 N€assén J, Holmberg J (2005) Energy efficiency – a forgotten goal in the Swedish building sector? Energy Policy 33(8):1037–1051 N€assén J, Sprei F, Holmberg J (2008) Stagnating energy efficiency in the Swedish building sector – economic and organisational explanations. Energy Policy 36(10):3814–3822 NDRC (2007) Bulletin of energy consumption in the top 1000 Chinese enterprises. Beijing, Sept 2007 (Chinese) Nelson P, Safirova E, Walls M (2007) Telecommuting and environmental policy: lessons from the ecommute program. Transp Res Part D: Transp Environ 12(3):195–207 Next 10’s California Green Innovation Index (2010) http://www.nextten.org/environment/ greenInnovation.html. Accessed 1 Jan 2015 Nishitani H, Kawamura T, Suzuki G (2000) University – industry cooperative study on plant operations. Comput Chem Eng 24(2–7):557–567 Nordman R, Berntsson T (2009) Use of advanced composite curves for assessing cost-effective HEN retrofit II. Case studies. Appl Therm Eng 29(2–3):282–289 Novikov II (1958) The efficiency of atomic power stations. J Nucl Energy II 7:125–128 (translated from Atomnaya Energiya 3:409 (1957)) Nuo G, Gaoshang W (2008) Analysis on China’s energy efficiency. Energy China 7:32–36 Office of Energy Efficiency, Natural Resources Canada (2002) Energy efficiency planning and management guide. Canadian Industry Program for Energy Conservation, Ottawa. ISBN 0-662-31457-3 Okura S, Rubin R, Brost M (2006) What types of appliances and lighting are being used in California residences? http://mail.mtprog.com/CD_Layout/Day_2_22.06.06/1615-1815/ID147_Okura_final. pdf, http://escholarship.org/uc/item/7qz3b977. Accessed 1 Jan 2015 Olah GA, Goeppert A, Surya Prakash GK (2009) Beyond oil and gas: the methanol economy, 2nd edn. Wiley-VCH, Weinheim. ISBN 978-3527324224 Oude Lansink A, Bezlepkin I (2003) The effect of heating technologies on CO2 and energy efficiency of Dutch greenhouse firms. J Environ Manage 68(1):73–82 Page 60 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Page S, Krumdieck S (2009) System-level energy efficiency is the greatest barrier to development of the hydrogen economy. Energy Policy 37(9):3325–3335 Panjeshahi MH, Ghasemian Langeroudi E, Tahouni N (2008) Retrofit of ammonia plant for improving energy efficiency. Energy 33(1):46–64 Patel M, Mutha N (2004) Plastics production and energy. Encycl Energy 3:81–91 Patrick DR, Fardo S, Richardson RE (2007) Energy conservation guidebook, 2nd edn. CRC Press, Boca Raton. ISBN 978-0849391781 Patterson MG (1996) What is energy efficiency? Concepts, indicators and methodological issues. Energy Policy 24(5):377–390 Peeters PM, Middel J, Hoolhorst A (2005) Fuel efficiency of commercial aircraft, an overview of historical and future trends, NLR-CR-2005-669. Nationaal Lucht- en Ruimtevaartlaboratorium, National Aerospace Laboratory NLR. http://www.transportenvironment.org/Publications/prep_hand_ out/lid/398. Accessed 1 Jan 2015 Penner JE, Lister DH, Griggs DJ, Dokken DJ, McFarland M (1999) Aviation and the global atmosphere; a special report to IPCC working groups I and III. Cambridge University Press, Cambridge Perrot P (1998) A to Z of thermodynamics. Oxford University Press, Oxford. ISBN 978-0198565529 Phylipsen GJM (Dian), Blok K, Bode J-W (2002) Industrial energy efficiency in the climate change debate: comparing the US and major developing countries. Energy Sustain Dev 6(4):30–44 Phylipsen GJM, Blok K, Worrell E (1997) International comparisons of energy efficiency-methodologies for the manufacturing industry. Energy Policy 25(7–9):715–725 Pilavachi PA (2000) Power generation with gas turbine systems and combined heat and power. Appl Therm Eng 20(15–16):1421–1429 Poliakoff M, Fitzpatrick JM, Farren TR, Anastas PT (2002) Green chemistry: science and politics of change. Science 297:807–810 Pootakham T, Kumar A (2010) A comparison of pipeline versus truck transport of bio-oil. Bioresour Technol 101(1):414–421 Principi P, Fioretti R (2014) A comparative life cycle assessment of luminaires for general lighting for the office – compact fluorescent (CFL) vs Light Emitting Diode (LED) – a case study. J Clean Prod 83(15):96–107 Prins MJ, Ptasinski KJ, Janssen FJJG (2004) Exergetic optimisation of a production process of Fischer–Tropsch fuels from biomass. Fuel Process Technol 86:375–389 Quadrelli R, Peterson S (2007) The energy–climate challenge: recent trends in CO2 emissions from fuel combustion. Energy Policy 35(11):5938–5952 Radulovic D, Skok S, Kirincic V (2011) Energy efficiency public lighting management in the cities. Energy 36(4):1908–1915 Rafiqul I, Weber C, Lehmann B, Voss A (2005) Energy efficiency improvements in ammonia production – perspectives and uncertainties. Energy 30(13):2487–2504 Raj NT, Iniyan S, Goic R (2011) A review of renewable energy based cogeneration technologies. Renew Sustain Energy Rev 15(8):3640–3648 Rajan GG (2002) Optimizing energy efficiencies in industry. McGraw-Hill Professional, London. ISBN 978-0071396929 Ramírez CA, Blok K, Neelis M, Patel M (2006a) Adding apples and oranges: the monitoring of energy efficiency in the Dutch food industry. Energy Policy 34(14):1720–1735 Ramírez CA, Patel M, Blok K (2006b) From fluid milk to milk powder: energy use and energy efficiency in the European dairy industry. Energy 31(12):1984–2004 Ranaiefar F, Amelia R (2011) Freight-Transportation Externalities, Logistics Operations and Management, pp 333–358 Page 61 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Ren T, Patel MK, Blok K (2008) Steam cracking and methane to olefins: energy use, CO2 emissions and production costs. Energy 33(5):817–833 Rhee H-J (2008) Home-based telecommuting and commuting behavior. J Urban Econ 63(1):198–216 Rietbergen MG, Farla JCM, Blok K (2002) Do agreements enhance energy efficiency improvement?: analysing the actual outcome of long-term agreements on industrial energy efficiency improvement in The Netherlands. J Clean Prod 10(20):153–163 Rosen MA, Scott DS (1988) Energy and exergy analyses of a production process for methanol from natural gas. Int J Hydrog Energy 13(10):617–623 Rosenfeld A (2008) Energy efficiency: the first and most profitable way to delay climate change. EPA Region IX, California Energy Commission, Sacramento Rugman AM, Li J (2005) Real options and international investment. Edward Elgar, Northampton. ISBN 10: 1840649011 Russell C (2009) Managing energy from the top down: connecting industrial energy efficiency to business performance. CRC Press. ISBN: 978-1439829967, Boca Raton, USA Rydh CJ, Sandén BA (2005) Energy analysis of batteries in photovoltaic systems. Part II: energy return factors and overall battery efficiencies. Energy Convers Manag 46(11–12):1980–2000 Ryerson MS, Kim H (2014) The impact of airline mergers and hub reorganization on aviation fuel consumption. J Clean Prod 85:395–407 Saunders H (1992) The Khazzoom-Brookes postulate and neoclassical economic growth. Energy J 13(14):131–148 Saunders C, Barber A, Taylor G (2006) Food miles – comparative energy/emissions; performance of New Zealand’s agriculture industry, vol 285, Research report. Agribusiness & Economics Research Unit, Lincoln University, Christchurch. ISBN 0-909042-71-3 Scheirs J (2006) Recycling of waste plastics. In: Pyrolysis and related feedstock recycling technologies: converting waste plastics into diesel and other fuels. Wiley, ISBN: 978-0470021521, Weinheim, Germany Schipper L, Meyers S, Howarth RB, Steiner R (2005) Energy efficiency and human activity: past trends, future prospects. Cambridge University Press, Cambridge. ISBN 978-0521479851 Schleich J (2009) Barriers to energy efficiency: a comparison across the German commercial and services sector. Ecol Econ 68(7):2150–2159 Schneekluth H, Bertram V (1998) Ship propulsion. In: Ship design for efficiency and economy, 2nd edn. Butterworth Heinemann, Oxford, pp 180–205 Serra LM, Lozano M-A, Ramos J, Ensinas AV, Nebra SA (2009) Polygeneration and efficient use of natural resources. Energy 34(5):575–586 Sharma SD (2009) Fuels – hydrogen production|gas cleaning: pressure swing adsorption. In: Encyclopedia of electrochemical power sources. Elsevier Science & Technology, Amsterdam/Netherlands, pp 335–349 Shell Eco Marathon (2015) http://www.shell.com/home/content/ecomarathon/about/current_records/. Accessed 1 Jan 2015 Sheredeka VV, Krivoruchko PA, Polokhlivets EK, Kiyan VI, Atkarskaya AB (2001) Energy-saving technologies in glass production. Glas Ceram 58(1–2):70–71 Sloan P, Legrand W, Chen JS (2009) Energy efficiency. In: Sustainability in the hospitality industry. Butterworth Heinemann, Oxford, pp 13–26 Smith P (2009) The processing of high silica bauxites – review of existing and potential processes. Hydrometallurgy 98(1–2):162–176 Sorrell S (2009) Jevons’ Paradox revisited: the evidence for backfire from improved energy efficiency. Energy Policy 37(4):1456–1469 Page 62 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Sorrell S, O’Malley E, Schleich J (2004) The economics of energy efficiency: barriers to cost-effective investment. Edward Elgar, Cheltenham. ISBN 978-1840648898 Sorrell S, Lehtonen M, Stapleton L, Pujol J, Champion T (2009) Decomposing road freight energy use in the United Kingdom. Energy Policy 37(8):3115–3129 Stepanov V, Stepanov S (1998) Energy use efficiency of metallurgical processes. Energy Convers Manag 39(16–18):1803–1809 Stern N (2007) The economics of climate change: the stern review. Cambridge University Press, Cambridge. ISBN 978-0521700801 Stuart D, Schewe RL, McDermott M (2014) Reducing nitrogen fertilizer application as a climate change mitigation strategy: Understanding farmer decision-making and potential barriers to change in the US. Land Use Policy 36:210–218 Sustainable Energy Ireland (SEI) (2015) http://www.sei.ie. Accessed 1 Jan 2015 Svensson AM, Møller-Holst S, Glöckner R, Maurstad O (2007) Well-to-wheel study of passenger vehicles in the Norwegian energy system. Energy 32(4):437–445 Swanton CJ, Murphy SD, Hume DJ, Clements DR (1996) Recent improvements in the energy efficiency of agriculture: case studies from Ontario, Canada. Agric Syst 52(4):399–418 Santin J (2005) Swiss fuel cell car breaks fuel efficiency record. Fuel Cells Bull 2005(8):8–9 Szentennai P, Lackner M (2014) Advanced control methods for combustion. Chem Eng 2–6:08 Tapio P, Banister D, Luukkanen J, Vehmas J, Willamo R (2007) Energy and transport in comparison: immaterialisation, dematerialisation and decarbonisation in the EU15 between 1970 and 2000. Energy Policy 35(1):433–451 Tay JH, Low SC, Jeyaseelanb S (1996) Vacuum desalination for water purification using waste heat. Desalination 106(1–3):131–135 Taylor AMKP (2008) Science review of internal combustion engines. Energy Policy 36(12):4657–4667 Taylor RP, Govindarajalu C, Levin J (2008) Financing energy efficiency: lessons from Brazil, China, India, and beyond. World Bank, Washington, DC. ISBN 978-0821373040 Techato K-a, Watts DJ, Chaiprapat S (2009) Life cycle analysis of retrofitting with high energy efficiency air-conditioner and fluorescent lamp in existing buildings. Energy Policy 37(1):318–325 The International Energy Association in Collaboration with CEFIC (2007) Feedstock substitutes, energy efficient technology and CO2 reduction for petrochemical products, A workshop in the framework of the G8 dialogue on climate change, clean energy and sustainable development, Paris, France Thomas CE (2009) Fuel cell and battery electric vehicles compared. Int J Hydrog Energy 34(15):6005–6020 Thumann A, Dunning S (2008) Plant engineers and managers guide to energy conservation, 9th edn. CRC Press, Boca Raton. ISBN 978-1420052466 Tin T, Sovacool BK, Blake D, Magill P, El Naggar S, Lidstrom S, Ishizawa K, Berte J (2010) Energy efficiency and renewable energy under extreme conditions: case studies from Antarctica. Renew Energy 35(8):1715–1723 Todorovic MS, Kim JT (2014) Data centre’s energy efficiency optimization and greening – case study methodology and R&D needs. Energy Build 85:564–578 Tromans D (2008) Mineral comminution: energy efficiency considerations. Miner Eng 21(8):613–620 Tuomaala M, Hurme M, Leino A-M (2010) Evaluating the efficiency of integrated systems in the process industry–case: steam cracker. Appl Therm Eng 30(1):45–52 Tutterow V, Casada D, McKane A (2002) Pumping systems efficiency improvements flow straight to the bottom line. Lawrence Berkeley National Laboratory, LBNL paper LBNL-51043. 
Retrieved from http://www.escholarship.org/uc/item/8s4315r9. Accessed 1 Jan 2015 UK Carbon Trust (2015) http://www.carbontrust.co.uk. Accessed 1 Jan 2015 Page 63 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

United Nations (2006) Energy efficiency guide for industry in Asia. United Nations, Nairobi. ISBN 978-9280726473 University of York (2010) Recycling in the chemical industry. http://www.wasteonline.org.uk/resources/ InformationSheets/Plastics.htm. Accessed 1 Jan 2015 US Department of Energy (2005) A manual for the economic evaluation of energy efficiency and renewable energy technologies. International Law & Taxation, Washington, DC. ISBN 978-1410221056 US Department of Energy (2010) Energy efficiency & renewable energy, best practices, motors, pumps and fans. http://www1.eere.energy.gov/industry/bestpractices/motors.html. Accessed 1 Jan 2015 US Green Building Council (2015) http://www.usgbc.org. Accessed 1 Jan 2015 Utlu Z, Hepbasli A (2007) A review on analyzing and evaluating the energy utilization efficiency of countries. Renew Sustain Energy Rev 11(1):1–29 Utlu Z, Sogut Z, Hepbasli A, Oktay Z (2006) Energy and exergy analyses of a raw mill in a cement production. Appl Therm Eng 26(17–18):2479–2489 van Vliet OPR, Faaij APC, Turkenburg WC (2009) Fischer–Tropsch diesel production in a well-to-wheel perspective: a carbon, energy flow and cost analysis. Energy Convers Manag 50(4):855–876 Venkatarama Reddy BV, Jagadish KS (2003) Embodied energy of common and alternative building materials and technologies. Energy Build 35(2):129–137 Vine E (2002) Promoting emerging energy-efficiency technologies and practices by utilities in a restructured energy industry: a report from California. Energy 27(4):317–328 Vine E, Rhee CH, Lee KD (2006) Measurement and evaluation of energy efficiency programs: California and South Korea. Energy 31(6–7):1100–1113 Wall G, Sciubba E, Naso V (1994) Exergy use in the Italian society. Energy 19(12):1267–1274 Wang L (2008) Energy efficiency and management in food processing facilities. CRC Press, Boca Raton. ISBN 978-1420063387 Wang Y, Feng X, Cai Y, Zhu M, Chu KH (2009) Improving a process’s efficiency by exploiting heat pockets in its heat exchange network. Energy 34(11):1925–1932 Wang Z, Roberts RR, Naterer GF, Gabriel KS (2012) Comparison of thermochemical, electrolytic, photoelectrolytic and photochemical solar-to-hydrogen production technologies. Int J Hydrog Energy 37(21):16287–16301 Wei Y-M, Liao H, Fan Y (2007) An empirical analysis of energy efficiency in China’s iron and steel sector. Energy 32(12):2262–2270 Wei M, Patadia S, Kammen DM (2010) Putting renewables and energy efficiency to work: how many jobs can the clean energy industry generate in the US? Energy Policy 38(2):919–931 Wu W, Wang B, Shi W, Li X (2014) An overview of ammonia-based absorption chillers and heat pumps. Renew Sustain Energy Rev 31:681–707 Wells C (2001) Total energy indicators of agricultural sustainability: dairy farming case study. Ministry of Agriculture and Forestry, Wellington Wenkai L, Hui C-W, Hua B, Tong Z (2003) Material and energy integration in a petroleum refinery complex. Comput Aided Chem Eng 15(Part 2):934–939 Wernick IK, Herman R, Govind S, Ausubel JH (1996) Materialization and dematerialization: measures and trends. Daedalus 125(3):171–198 White SB, Howe C (1998) Water efficiency and reuse: a least cost planning approach. In: Proceedings of the 6th NSW recycled water seminar, Sydney Williams V, Noland RB, Toumi R (2002) Reducing the climate change impacts of aviation by restricting cruise altitudes. Transp Res Part D: Transp Environ 7(6):451–464

Page 64 of 65

Handbook of Climate Change Mitigation and Adptation DOI 10.1007/978-1-4614-6431-0_24-2 # Springer Science+Business Media New York 2014

Winchester N, McConnachie D, Wollersheim C, Waitz IA (2013) Economic and emissions impacts of renewable fuel goals for aviation in the US. Transp Res A Policy Pract 58:116–128 World Business Council for Sustainable Development (WBCSD) (2000) Eco-efficiency: creating more value with less impact. World Business Council for Sustainable Development, Geneva. ISBN 2-94-024017-5 Worrell E, Blok K (1994) Energy savings in the nitrogen fertilizer industry in the Netherlands. Energy 19(2):195–209 Worrell E, Galitsky C (2005) Energy efficiency improvement and cost saving opportunities for petroleum refineries. Lawrence Berkeley National Laboratory, LBNL paper LBNL-56183. Retrieved from http:// www.escholarship.org/uc/item/96m8d8gm. Accessed 1 Jan 2015 Worrell E, Galitsky C (2008) Energy efficiency improvement and cost saving opportunities for cement making, an ENERGY STAR ® guide for energy and plant managers. Ernest Orlando Lawrence Berkeley National Laboratory, LBNL-54036-Revision Worrell E, De Beer JG, Faaij APC, Blok K (1994a) Potential energy savings in the production route for plastics. Energy Convers Manag 35(12):1073–1085 Worrell E, Cuelenaere FA, Blok K, Turkenburg WC (1994b) Energy consumption of industrial processes in the European union. Energy 11(19):1113–1129 Worrell E, Martin N, Price L (2000a) Potentials for energy efficiency improvement in the US cement industry. Energy 25(12):1189–1214 Worrell E, Phylipsen D, Einstein D, Martin N (2000b) Energy use and energy intensity of the U.S. chemical industry. Lawrence Berkeley National Laboratory, LBNL paper LBNL-44314. Retrieved from http://www.escholarship.org/uc/item/2925w8g6. Accessed 1 Jan 2015 Worrell E, Phylipsen D, Einstein D, Martin N (2000c) Energy use and energy intensity of the U.S. chemical industry, LBNL-44314. Lawrence Berkeley National Laboratory, Berkeley Worrell E, Martin N, Anglani N, Einstein D, Khrushch M, Price L (2001) Opportunities to improve energy efficiency in the U.S. pulp and paper industry. Lawrence Berkeley National Laboratory. LBNL paper LBNL-48354. Retrieved from http://www.escholarship.org/uc/item/7sv597fv. Accessed 1 Jan 2015 Worrell E, Galitsky C, Masanet E, Graus W (2008) Energy efficiency improvement and cost saving opportunities for the glass industry: an energy star guide for energy and plant managers. Lawrence Berkeley National Laboratory, Publication no LBNL-57335-Revision Xia A, Cheng J, Ding L, Lin R, Song W, Zhou J, Cen K (2014) Enhancement of energy production efficiency from mixed biomass of Chlorella pyrenoidosa and cassava starch through combined hydrogen fermentation and methanogenesis. Appl Energy 120:23–30 Yang M (2010) Energy efficiency improving opportunities in a large Chinese shoe-making enterprise. Energy Policy 38:452–462 Yildiz B, Kazimi MS (2006) Efficiency of hydrogen production systems using alternative nuclear energy technologies. Int J Hydrog Energy 31(1):77–92 Yudken JS, Bassi AM (2009) Climate policy and energy-intensive manufacturing impacts and options. Millenium Institute, 2111 Wilson Boulevard, Suite 700, Arlington 22201. http://www.globalurban.org/ Climate_Policy_and_Energy-Intensive_Manufacturing.pdf. Accessed 1 Jan 2015 Zamfirescu C, Dincer I (2009) Ammonia as a green fuel and hydrogen source for vehicular applications. Fuel Process Technol 90(5):729–737 Zhao H (2007) HCCI and CAI engines for the automotive industry. Woodhead Publishing, Cambridge. ISBN 978-1845691288

Page 65 of 65

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_25-2 # Springer Science+Business Media New York 2015

Biomass as Feedstock
Debalina Sengupta*
Texas A&M University, College Station, TX, USA
Louisiana State University, Baton Rouge, LA, USA
*Email: [email protected]

Abstract
The world has a wide variety of biofeedstocks. Biomass is a term used to describe any material of recent biological origin, including plant materials such as trees, grasses, agricultural crops, or animal manure. In this chapter, the formation of biomass by photosynthesis and the different mechanisms of photosynthesis that give rise to biomass classification are discussed. These classifications and the composition of biomass are then explained. The various methods used to make biomass amenable for energy, fuel, and chemical production are discussed next; they include pretreatment of biomass, biochemical conversion routes such as fermentation, anaerobic digestion, and transesterification, and thermochemical routes such as gasification and pyrolysis. An overview of current and future biomass feedstock materials, for example, algae, perennial grasses, and genetically modified plants, is then given, together with current feedstock availability in the United States.

Introduction
The world depends heavily on coal, petroleum, and natural gas for energy and fuel and as feedstock for chemicals. These sources are commonly termed fossil or nonrenewable resources. Geological processes formed fossil resources over millions of years through the loss of volatile constituents from plant or animal matter. Human civilization began meeting its material needs chiefly from the abiotic environment only recently. Plant-based resources were the predominant source of energy, organic chemicals, and fibers in the western world as recently as 200 years ago, and the biotic environment continues to play this role in many developing countries. The discovery and use of coal has been traced back to the fourth century B.C. Petroleum, by comparison, was a newer discovery of the nineteenth century, and its main use was to obtain kerosene for burning in oil lamps. Natural gas, a mixture containing primarily methane, is found associated with the other fossil resources, for example, in coal beds. The historical, current, and projected use of fossil resources for energy consumption is given in Fig. 1. Petroleum, coal, and natural gas constitute about 86 % of resource consumption in the United States; of the remainder, 8 % comes from nuclear and 6 % from renewable energy. Approximately 3 % of total crude petroleum is currently used for the production of chemicals, the rest being used for energy and fuels. The fossil resources are extracted from the earth's crust, processed, and burnt or converted to chemicals. The proven reserves in North America were 276,285 million tons of coal (equivalent to 5,382 EJ [1 exajoule = 10^18 J]) in 1990, 81 billion barrels of oil (equivalent to 476 EJ) in 1993, and 329 × 10^3 billion ft³ of natural gas (equivalent to 347 EJ) in 1993 (Klass 1998). The United States has considerable reserves of crude oil, but it also depends on oil imports from other countries to meet its energy requirements. The crude oil price has fluctuated over the past 40 years, with the most recent price spike, above $130 per barrel, occurring in 2008. The EIA published a projection of the price of crude oil over the next 25 years, in which high and low projections were given in addition to the reference projection, as shown in Fig. 2 (EIA 2010).
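As a rough consistency check on the reserve figures just quoted, the implied energy density of each resource can be back-calculated. The sketch below is illustrative only; treating the quoted reserve tonnages, barrels, and cubic feet as directly comparable to the stated energy equivalents is an assumption, and the resulting densities are approximate.

```python
# Back-calculate the implied energy density of each North American reserve
# figure quoted above (Klass 1998).  Values are approximate by nature.

coal_reserve_t   = 276_285e6      # metric tons (1990)
coal_reserve_ej  = 5_382          # EJ
oil_reserve_bbl  = 81e9           # barrels (1993)
oil_reserve_ej   = 476            # EJ
gas_reserve_ft3  = 329e3 * 1e9    # 329 x 10^3 billion ft^3 (1993)
gas_reserve_ej   = 347            # EJ

EJ = 1e18  # joules per exajoule

coal_gj_per_t  = coal_reserve_ej * EJ / coal_reserve_t  / 1e9   # ~19.5 GJ/t
oil_gj_per_bbl = oil_reserve_ej  * EJ / oil_reserve_bbl / 1e9   # ~5.9 GJ/barrel
gas_mj_per_ft3 = gas_reserve_ej  * EJ / gas_reserve_ft3 / 1e6   # ~1.05 MJ/ft^3

print(f"coal: {coal_gj_per_t:.1f} GJ/t, oil: {oil_gj_per_bbl:.1f} GJ/bbl, "
      f"gas: {gas_mj_per_ft3:.2f} MJ/ft^3")
```

The implied values, roughly 19.5 GJ/t for coal, 5.9 GJ per barrel for oil, and about 1 MJ/ft³ for natural gas, are broadly in line with commonly cited heating values for these resources, so the three reserve estimates are internally consistent.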


Fig. 1 Energy consumption in the United States, 1980–2035 (EIA 2010) [figure: U.S. energy consumption by fuel in quadrillion Btu, history and projections for liquids, natural gas, coal, nuclear, non-hydro renewables, and hydropower]

Fig. 2 Oil prices (in 2008 dollars per barrel), historical data, and projected data (Adapted from EIA (2010)) [figure: historical and projected oil prices, with high, low, and AEO2010 reference cases]

The projection shows a steady increase in the price of crude oil to above $140 per barrel in 2035; under the high-price trend, crude could cost over $200 per barrel. Fossil resources are burnt or otherwise utilized for energy, fuels, and chemicals. Combustion of fossil resources involves the oxidation of carbon and hydrogen atoms to produce carbon dioxide and water vapor, releasing heat. Impurities in the resource, such as sulfur, produce sulfur oxides, and incomplete combustion of the resource produces methane. The Intergovernmental Panel on Climate Change identified that changes in atmospheric concentrations of greenhouse gases (GHG), aerosols, land cover, and solar radiation alter the energy balance of the climate system (IPCC 2007).
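To put the oxidation described above in concrete terms, the short sketch below estimates the mass of carbon dioxide produced per kilogram of fuel burned for two idealized fuels. Treating natural gas as pure methane and coal as pure carbon is a simplifying assumption made here for illustration; real fuels contain impurities and other constituents and will differ.

```python
# CO2 produced per kg of fuel for two idealized cases:
#   methane:  CH4 + 2 O2 -> CO2 + 2 H2O
#   carbon:   C   +   O2 -> CO2
M_C, M_H, M_O = 12.011, 1.008, 15.999   # g/mol

M_CH4 = M_C + 4 * M_H                   # ~16.04 g/mol
M_CO2 = M_C + 2 * M_O                   # ~44.01 g/mol

co2_per_kg_methane = M_CO2 / M_CH4      # ~2.74 kg CO2 per kg CH4
co2_per_kg_carbon  = M_CO2 / M_C        # ~3.66 kg CO2 per kg C

print(f"methane: {co2_per_kg_methane:.2f} kg CO2 per kg fuel")
print(f"pure carbon (idealized coal): {co2_per_kg_carbon:.2f} kg CO2 per kg fuel")
```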


Fig. 3 Carbon dioxide emissions in 2008 (current) and 2035 (projected) due to fossil feedstock usage (Adapted from EIA (2010)) [figure: 2008 (total 5,814 million metric tons): electric power 2,359 (41 %), transportation 1,925 (33 %), buildings and industrial 1,530 (26 %); 2035 (total 6,320 million metric tons): electric power 2,634 (42 %), transportation 2,115 (33 %), buildings and industrial 1,571 (25 %)]

These changes are also termed climate change. The greenhouse gases include carbon dioxide, methane, nitrous oxide, and fluorinated gases. Atmospheric concentrations of carbon dioxide (379 ppm) and methane (1,774 ppb) in 2005 were the highest recorded on the earth to date (historical values are reconstructed from ice cores spanning many thousands of years). The IPCC report states that global increases in CO2 concentrations are attributed primarily to fossil resource use. In the United States, approximately 5,814 million metric tons of carbon dioxide were released into the atmosphere in 2008, and this amount is projected to increase to 6,320 million metric tons by 2035 (EIA 2010), as shown in Fig. 3. The increasing trends in resource consumption and resource cost, and the consequent increase in carbon dioxide emissions from anthropogenic sources, indicate that a reduction in fossil feedstock usage is necessary to address climate change. This has prompted world leaders, organizations, and companies to look for alternative ways to obtain energy, fuels, and chemicals. Carbon fixed naturally in fossil and nonrenewable resources over millions of years is thus released to the atmosphere by anthropogenic sources. A relatively faster way to convert the atmospheric carbon dioxide into useful resources is photosynthetic fixation into biomass. The life cycle of the fossil resources shows that coal, petroleum, and natural gas are all derivatives of decomposed biomass that was trapped in geological formations. Thus, biomass, being a precursor to the conventional nonrenewable resources, can be used as fuel, to generate energy, and to produce chemicals with some modifications to existing processes. Biomass can be defined broadly as all matter on the earth's surface of recent biological origin. It includes plant materials such as trees, grasses, agricultural crops, and animal manure. Aquatic plants, such as algae, also undergo photosynthesis and are good sources of carbohydrates and lipids. Just as petroleum and coal require processing before use as feedstocks for the production of fuels, chemicals, and energy, biomass also requires processing so that its resource potential can be utilized fully. As explained earlier, biomass is a precursor to fossil feedstocks, and a comparison between the energy content of biomass and that of fossil feedstocks is therefore useful. The heating value of a fuel is the measure of heat released during its complete combustion at a given reference temperature and pressure. The higher or gross heating value is the amount of heat released per unit weight of fuel at the reference temperature and pressure, taking into account the latent heat of vaporization of water. The lower or net heating value is the heat released by the fuel excluding the latent heat of vaporization of water. The higher heating values of some bioenergy feedstocks, liquid biofuels, and conventional fossil fuels are given in Table 1.


Table 1 Heating value of biomass components (Klass 1998; McGowan 2009)
Heating value (gross), in GJ/MT unless otherwise mentioned

Bioenergy feedstocks:
Corn stover: 17.6
Sweet sorghum: 15.4
Sugarcane bagasse: 18.1
Sugarcane leaves: 17.4
Hardwood: 20.5
Softwood: 19.6
Hybrid poplar: 19.0
Bamboo: 18.5–19.4
Switchgrass: 18.3
Miscanthus: 17.1–19.4
Arundo donax: 17.1
Giant brown kelp: 10.0 MJ/dry kg
Cattle feedlot manure: 13.4 MJ/dry kg
Water hyacinth: 16.0 MJ/dry kg
Pure cellulose: 17.5 MJ/dry kg
Primary biosolids: 19.9 MJ/dry kg

Liquid biofuels:
Bioethanol: 28
Biodiesel: 40

Fossil fuels:
Coal (low rank; lignite/sub-bituminous): 15–19
Coal (high rank; bituminous/anthracite): 27–30
Oil (typical distillate): 42–45

It can be seen from the table that the energy content of the raw biomass species is less than that of bioethanol, and biodiesel compares almost equally to the traditional fossil fuels. This chapter gives an outline of the use of biomass as a feedstock. The following sections will discuss various methods for biomass formation, biomass composition, conversion technologies, and feedstock availability.
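To put the comparison in Table 1 on a common footing, the gross heating values can be used to estimate how many tons of a given feedstock carry the same energy as one ton of a reference fuel. A minimal sketch in Python follows; the values are taken from Table 1 (the midpoint of the 42–45 GJ/MT range is used for distillate oil), and the particular pairings compared are illustrative rather than from the original text.

    # Gross (higher) heating values from Table 1, in GJ per metric ton.
    hhv = {
        "switchgrass": 18.3,
        "hybrid poplar": 19.0,
        "bioethanol": 28.0,
        "biodiesel": 40.0,
        "oil (distillate)": 43.5,  # midpoint of the 42-45 GJ/MT range in Table 1
    }

    reference = "oil (distillate)"
    for fuel, value in hhv.items():
        # Tons of each fuel needed to match the energy content of 1 t of distillate oil.
        tons_needed = hhv[reference] / value
        print(f"{fuel:>16s}: {value:5.1f} GJ/MT -> {tons_needed:.2f} t per t of oil")

On these figures, roughly 2.3–2.4 t of raw herbaceous biomass holds the energy of 1 t of distillate oil, which is the point made qualitatively in the text.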

Biomass Formation
Biomass is the photosynthetic sink by which atmospheric carbon dioxide and solar energy are fixed into plants (Klass 1998). The energy stored in these plants can then be converted into fuels and chemicals. The primary equation of photosynthesis is given by Eq. 1:

6CO2 + 6H2O + light → C6H12O6 + 6O2  (1)

The photosynthesis process utilizes inorganic materials (carbon dioxide and water) to form organic compounds (hexose) and releases oxygen. The Gibbs free energy change for the process is +470 kJ per mole of CO2 assimilated, and the corresponding enthalpy change is +470 kJ. The positive sign denotes that energy is absorbed in the process. The initial products of the biochemical reactions of photosynthetic assimilation are sugars. Secondary products are derived from key intermediates of these biochemical reactions and include polysaccharides, lipids, and proteins.


Fig. 4 Z-scheme of biomass photosynthesis. P680 and P700 are the chlorophylls of photosystems II and I, respectively (MSP manganese-stabilizing protein, Ph pheophytin, Q quinone, Cyt cytochrome, PC plastocyanin, FeS nonheme iron-sulfur protein, Fd ferredoxin) (Adapted from Drapcho et al. (2008))

A wide range of other organic compounds may also be produced in certain biomass species, such as simple low molecular weight organic chemicals (e.g., acids, alcohols, aldehydes, and ethers), complex alkaloids, nucleic acids, pyrroles, steroids, terpenes, waxes, and high molecular weight polymers such as polyisoprenes. A detailed description of how these components are formed from the intermediates is beyond the scope of this chapter. The basic reactions for photosynthesis will be discussed in this section, and the key products will be explained.

Photosynthesis is a two-phase process comprising the "light reactions" (in the presence of light) and the "dark reactions" (in the absence of light). The light reactions capture light energy and convert it to chemical energy and reducing power. In the dark reactions, the chemical energy and reducing power from the light reactions are used to fix atmospheric carbon dioxide. The light reactions of photosynthesis are explained using the "Z-scheme" diagram shown in Fig. 4 (Drapcho et al. 2008). Solar energy in the wavelength range of 400–700 nm is captured by chlorophylls within the cells of plants and of microorganisms like green algae or cyanobacteria. The flow of electrons is shown in Fig. 4. Two photosystems, photosystem I and photosystem II, are used in the light reactions. Not all the terms in Fig. 4 are explained in this text, but the most important intermediates are listed in the figure caption. In photosystem II (PSII), light energy at 680 nm wavelength is used to split water molecules as shown in Eq. 2:

2H2O + light energy → O2 + 4H+ + 4e−  (2)

The electrons are accepted by the chlorophyll in PSII and reduce it from a reduction potential of approximately +1 V to approximately −0.8 V. The electrons are then transferred to photosystem I (PSI) through a series of membrane-bound electron carrier molecules. ATP (adenosine triphosphate) is produced as the electrons are transferred, owing to a proton-motive force that develops as protons are pumped across the thylakoid membrane. Acceptance of the electrons reduces the potential of PSI to approximately −1.4 V. The reduction potential of PSI is then sufficient to reduce ferredoxin, which in turn reduces NADP+ to NADPH.


Fig. 5 Calvin-Benson cycle for photosynthesis (Adapted from Drapcho et al. (2008))

This NADPH is used to reduce inorganic carbon for new cell synthesis. Thus, the light reactions are common to all plant types: eight photons per molecule of carbon dioxide excite chlorophyll to generate ATP (adenosine triphosphate) and NADPH (reduced nicotinamide adenine dinucleotide phosphate) along with oxygen (Klass 1998). The "Z-scheme" transfers electrons from a low chemical potential in water to a higher chemical potential in NADPH, which is necessary to reduce CO2. The ATP and NADPH produced in the light reactions react in the dark to reduce CO2 and form the organic components of biomass via the dark reactions, regenerating ADP (adenosine diphosphate) and NADP+ (nicotinamide adenine dinucleotide phosphate) for the light reactions. The biochemical pathways and organic intermediates involved in the reduction of CO2 to sugars determine the molecular events of biomass growth and differentiate the various kinds of biomass. In photosynthesis, CO2 enters the leaves or stems of biomass through the stomata, the small intercellular openings in the epidermis. These openings provide the main route for photosynthetic gas exchange and for water vapor loss in transpiration. The dark reactions can proceed by at least three different pathways, the Calvin-Benson cycle, the C4 cycle, and the CAM cycle, as discussed in the following sections.

The Calvin-Benson Cycle
The Calvin-Benson cycle is shown in Fig. 5, and the overall reaction for the cycle is given in Eq. 3. Plant biomass species that use the Calvin-Benson cycle to form products are called C3 plants (Klass 1998).

6CO2 + 12NADPH + 18ATP → C6H12O6 + 12NADP+ + 18ADP  (3)
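Equation 3, together with the figures quoted earlier in this chapter (about +470 kJ absorbed per mole of CO2 assimilated and eight photons per molecule of CO2), fixes the per-carbon bookkeeping of photosynthesis. The short Python sketch below simply restates that arithmetic; the molar mass of glucose is standard data, and everything else comes from the text.

    # Per-carbon stoichiometry of Eq. 3 and the energy figures quoted in the text.
    co2_per_glucose = 6
    atp_per_glucose = 18
    nadph_per_glucose = 12
    energy_per_co2_kj = 470        # kJ absorbed per mole of CO2 assimilated (from text)
    photons_per_co2 = 8            # photons per molecule of CO2 fixed (from text)
    glucose_molar_mass = 180.16    # g/mol, standard value

    print("ATP per CO2 fixed   :", atp_per_glucose / co2_per_glucose)     # 3.0
    print("NADPH per CO2 fixed :", nadph_per_glucose / co2_per_glucose)   # 2.0
    print("Photons per glucose :", photons_per_co2 * co2_per_glucose)     # 48

    # Energy stored per mole and per gram of glucose; the per-gram value is of the
    # same order as the heating value of pure cellulose in Table 1 (17.5 MJ/dry kg).
    energy_per_glucose_kj = energy_per_co2_kj * co2_per_glucose           # 2,820 kJ/mol
    print("Energy per g glucose: %.1f kJ/g" % (energy_per_glucose_kj / glucose_molar_mass))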


Fig. 6 Biochemical pathway from carbon dioxide to glucose for C4 biomass (Adapted from Klass (1998))

This cycle produces the 3-carbon intermediate 3-phosphoglyceric acid (3-phosphoglycerate) and is common to fruits, legumes, grains, and vegetables. C3 plants usually exhibit low rates of photosynthesis at light saturation, low light saturation points, sensitivity to oxygen concentration, rapid photorespiration, and high CO2 compensation points. The light saturation point is the light intensity beyond which light is no longer a limiting factor for photosynthesis. The CO2 compensation point is the CO2 concentration in the surrounding environment below which more CO2 is respired by the plant than is photosynthetically fixed. Typical C3 biomass species are alfalfa, barley, chlorella, cotton, Eucalyptus, Euphorbia lathyris, oats, peas, potato, rice, soybean, spinach, sugar beet, sunflower, tall fescue, tobacco, and wheat. These plants grow favorably in cooler climates.

The C4 Cycle
The C4 cycle is shown in Fig. 6. In this cycle, CO2 is initially converted to 4-carbon dicarboxylic acids (malic or aspartic acid) (Klass 1998). Phosphoenolpyruvic acid reacts with carbon dioxide to form oxaloacetic acid, and malic or aspartic acid is formed from the oxaloacetic acid. The C4 acid is transported to the bundle sheath cells, where decarboxylation occurs to regenerate pyruvic acid, which is returned to the mesophyll cells to initiate another cycle. The CO2 liberated in the bundle sheath cells enters the C3 cycle described above, and it is in this C3 cycle that the CO2 fixation occurs. The subtle difference between the C3 and C4 cycles is believed to be responsible for the wide variations in biomass properties. Compared to C3 biomass, C4 biomass is produced in higher yields, with higher rates of photosynthesis, higher light saturation points, lower levels of respiration, lower carbon dioxide compensation points, and greater efficiency of water usage. Typical C4 biomass includes crops such as sugarcane, corn, and sorghum and tropical grasses like Bermuda grass.

The CAM Cycle
The CAM cycle is the crassulacean acid metabolism cycle, which refers to the capacity of chloroplast-containing biomass tissues to fix CO2 in dark reactions, leading to the synthesis of free malic acid (Klass 1998). The mechanism involves β-carboxylation of phosphoenolpyruvic acid by the phosphoenolpyruvate carboxylase enzyme and the subsequent reduction of oxaloacetic acid by malate dehydrogenase.


Biomass species in the CAM category are typically adapted to arid environments and have low photosynthesis rates and higher water usage efficiencies. Plants in this category include cacti and succulents such as pineapple. CAM has evolved so that the initial CO2 fixation can take place in the dark with much less water loss than in the C3 or C4 pathways. CAM biomass also conserves carbon by recycling endogenously formed CO2. CAM biomass species have not been exploited commercially for use as biomass feedstocks. Thus, different photosynthetic pathways produce different intermediates, leading to different kinds of biomass. The following section discusses the different components of biomass.

Biomass Classification and Composition
The previous section gave the mechanisms for the formation of biomass by photosynthesis. The classification and composition of biomass are discussed in this section. Biomass can be classified into two major subdivisions, crop biomass and wood (forest) biomass. There are other sources of biomass, such as waste from municipal areas and animal wastes, but these can be traced back to the two major sources. Crop biomass primarily includes corn, sugarcane, sorghum, soybeans, wheat, barley, rice, etc. These contain carbohydrates, glucose, starch, or oils as their primary constituents. Wood biomass is composed of cellulose, hemicellulose, and lignin. Examples of woody biomass include grasses, stalks, stover, etc. Starch and cellulose are both polymeric forms of glucose, a 6-carbon sugar. Hemicellulose is a polymer of xylose. Lignin is composed of phenolic polymers. Oils are composed of triglycerides. Other biomass components, which are generally present in minor amounts, include proteins, sterols, alkaloids, resins, terpenes, terpenoids, and waxes. Apart from crop and woody biomass, there is a class of microorganisms capable of producing biomass. These are single-celled organisms like algae or cyanobacteria, which are capable of photosynthesis and produce oils, carbohydrates, proteins, etc. They are discussed in detail in a later section. The components of biomass are discussed in detail below.

Saccharides and Polysaccharides
Saccharides and polysaccharides are carbohydrates with the basic chemical structure CH2O. These carbohydrates occur in nature as 5-carbon or 6-carbon ring structures. The ring structures may contain only one or two connected rings, in which case they are known as monosaccharides, disaccharides, or simply as sugars, or they may be very long polymer chains of the sugar building blocks. The simplest 6-carbon saccharide (hexose) is glucose. Long-chained polymers of glucose or other hexoses are categorized either as starch or as cellulose; the characterization is discussed in the following sections. The simplest 5-carbon sugar (pentose) is xylose. Pentoses form long-chain polymers categorized as hemicellulose. Some of the common 6-carbon and 5-carbon monosaccharides are listed in Table 2.

Starch is a polymer with glucose as the monomeric unit (Paster et al. 2003). It is a mixture of α-amylose and amylopectin, as shown in Fig. 7. α-Amylose is a straight chain of glucose molecules joined by α-1,4-glycosidic linkages, as shown in Fig. 7a. Amylopectin and α-amylose are similar except that in amylopectin short chains of glucose molecules branch off from the main chain (backbone), as shown in Fig. 7b. Starches found in nature contain 10–30 % α-amylose and 70–90 % amylopectin. The α-1,4-glycosidic linkages are bent and prevent the formation of sheets and subsequent layering of polymer chains. As a result, starch is soluble in water and relatively easy to break down into utilizable sugar units.
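Because starch and cellulose are condensation polymers of glucose, their molecular weight follows directly from the chain length: each glycosidic linkage is formed with the loss of one water molecule. The sketch below uses standard molar masses and illustrative chain lengths (the 1,000–10,000 degree-of-polymerization range quoted later for cellulose); it is only meant to make the bookkeeping explicit.

    # Molecular weight of a linear glucose polymer (an amylose or cellulose chain).
    GLUCOSE_MW = 180.16   # g/mol, standard value
    WATER_MW = 18.02      # g/mol, released at each glycosidic linkage

    def polymer_mw(degree_of_polymerization):
        """n glucose units joined by (n - 1) glycosidic bonds."""
        n = degree_of_polymerization
        return n * GLUCOSE_MW - (n - 1) * WATER_MW

    for dp in (1000, 10000):  # illustrative chain lengths
        print(f"DP = {dp:>5d}: about {polymer_mw(dp) / 1000:,.0f} kg/mol")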


Table 2 Common 6-carbon and 5-carbon monosaccharides (chemical structures not reproduced)
6-Carbon sugars: D-Fructose, D-Glucose, D-Gulose, D-Mannose, D-Galactose
5-Carbon sugars: D-Xylose, D-Ribulose, D-Ribose, D-Arabinose

Lignocellulosic Biomass
The non-grain portion of biomass (e.g., cobs, stalks), often referred to as agricultural stover or residues, and energy crops such as switchgrass are known as lignocellulosic (also called cellulosic) biomass resources. These are composed of cellulose, hemicellulose, and lignin (Paster et al. 2003). Generally, lignocellulosic material contains 30–50 % cellulose, 20–30 % hemicellulose, and 20–30 % lignin. Figure 8a illustrates how cellulose, hemicellulose, and lignin are physically mixed in lignocellulosic biomass, and Fig. 8b illustrates how pretreatment is necessary to break the polymeric chains before cellulose and hemicellulose can be used for chemical conversions. Some exceptions to this composition are cotton (98 % cellulose) and flax (80 % cellulose). Lignocellulosic biomass is considered to be an abundant resource for the future bio-industry. Recovering the components in a cost-effective way requires the pretreatment processes discussed in a later section.

Cellulose
Cellulosic biomass comprises 35–50 % of most plant material. Cellulose is a polymer of glucose with a degree of polymerization of 1,000–10,000 (Paster et al. 2003). Cellulose is a linear unbranched polymer of glucose joined together by β-1,4-glycosidic linkages, as shown in Fig. 9.


Fig. 7 Structure of starch; (a) α-amylose; (b) amylopectin

Fig. 8 (a) Physical arrangement of lignocellulosic biomass; (b) lignocellulosic biomass after pretreatment

Fig. 9 Structure of cellulose

Cellulose can be either crystalline or amorphous. Hydrogen bonding between chains gives cellulose its chemical stability and insolubility and allows it to serve as a structural component in plant cell walls. The high degree of crystallinity of cellulose makes lignocellulosic materials much more resistant than starch to acid and enzymatic hydrolysis. As the core structural component of biomass, cellulose is also protected from environmental exposure by a sheath of lignin and hemicellulose. Extracting the sugars of lignocellulosics therefore involves a pretreatment stage to reduce the recalcitrance (resistance) of the biomass to cellulose hydrolysis.


Fig. 10 Structure of xylose, building block of hemicellulose

Hemicellulose
Hemicellulose is a polymer containing primarily 5-carbon sugars such as xylose and arabinose, with some glucose and mannose dispersed throughout (Paster et al. 2003). The structure of xylose is shown in Fig. 10. Hemicellulose forms a short-chain polymer that interacts with cellulose and lignin to form a matrix in the plant wall, thereby strengthening it. Hemicellulose is more easily hydrolyzed than cellulose. Much of the hemicellulose in lignocellulosic materials is solubilized and hydrolyzed to pentose and hexose sugars during the pretreatment stage. Some of the hemicellulose is too intertwined with the lignin to be recoverable.

Lignin
Lignin helps to bind the cellulose/hemicellulose matrix while adding flexibility to the mixture. The molecular structure of lignin polymers is very random and disorganized and consists primarily of carbon ring structures (benzene rings with methoxyl, hydroxyl, and propyl groups) interconnected by polysaccharides (sugar polymers), as shown in Fig. 11. The ring structures of lignin have great potential as valuable chemical intermediates, mainly aromatic compounds. However, separation and recovery of the lignin are difficult. It is possible to break the lignin-cellulose-hemicellulose matrix and recover the lignin through treatment of the lignocellulosic material with strong sulfuric acid: lignin is insoluble in sulfuric acid, while cellulose and hemicellulose are solubilized and hydrolyzed by the acid. However, the high acid concentration promotes the formation of degradation products that hinder the downstream utilization of the sugars. Pyrolysis can be used to convert the lignin polymers to valuable products, but separation techniques to recover the individual chemicals are lacking. Instead, the pyrolyzed lignin is fractionated into a bio-oil for fuels and a high-phenolic-content oil that is used as a partial replacement for phenol in phenol-formaldehyde resins.

Lipids, Fats, and Oils
Oils can be obtained from oilseeds like soybean, canola, etc. Vegetable oils are composed primarily of triglycerides, also referred to as triacylglycerols. Triglycerides contain a glycerol molecule as the backbone, with three fatty acids attached to glycerol's hydroxyl groups. The structure of a triglyceride is shown in Fig. 12 with linoleic acid as the fatty acid chain. In this example the three fatty acids are all linoleic acid, but a triglyceride can contain a mixture of two or more different fatty acids. Fatty acids differ in chain length and degree of unsaturation. The fatty acid profile and the double bonds present determine the properties of the oil, and these can be manipulated to obtain certain performance characteristics. In general, the greater the number of double bonds, the lower the melting point of the oil.

Proteins
Proteins are polymers composed of natural amino acids bonded together through peptide linkages (Klass 1998). They are formed via condensation of the acids through the amino and carboxyl groups by removal of water to form polyamides.


Fig. 11 Structure of lignin (Glazer and Nikaido 1995)

Fig. 12 Formation of triglycerides (linoleic acid as representative fatty acid chain)

Proteins are present in various kinds of biomass as well as in animals. The concentration of proteins may approach zero in different biomass systems, but proteins become important when considering the enzyme catalysis that promotes the various biochemical reactions. The apparent precursors of the proteins are amino acids, in which an amino group (or, in a few cases, an imino group) is bonded to the carbon atom adjacent to the carboxyl group. Many amino acids have been isolated from natural sources, but only about 20 of them are used for protein biosynthesis. These amino acids are divided into five families: glutamate, aspartate, aromatic, serine, and pyruvate. The various amino acids in these groups are shown in Table 3.


Table 3 Amino acid groups present in proteins
Glutamate family: glutamine, arginine, proline
Aspartate family: asparagine, methionine, threonine, isoleucine, lysine
Aromatic family: tryptophan, phenylalanine, tyrosine
Serine family: glycine, cysteine
Pyruvate family: alanine, valine, leucine

Table 4 Component composition of biomass feedstocks (Klass 1998; McGowan 2009)
Values given as celluloses / hemicelluloses / lignins (dry wt%)
Corn stover: 35 / 28 / 16–21
Sweet sorghum: 27 / 25 / 11
Sugarcane bagasse: 32–48 / 19–24 / 23–32
Hardwood: 45 / 30 / 20
Softwood: 42 / 21 / 26
Hybrid poplar: 42–56 / 18–25 / 21–23
Bamboo: 41–49 / 24–28 / 24–26
Switchgrass: 44–51 / 42–50 / 13–20
Miscanthus: 44 / 24 / 17
Arundo donax: 31 / 30 / 21
RDF (refuse-derived fuel): 65.6 / 11.2 / 3.1
Water hyacinth: 16.2 / 55.5 / 6.1
Bermuda grass: 31.7 / 40.2 / 25.6
Pine: 40.4 / 24.9 / 34.5

Table 4 gives the composition of some biomass species in terms of the above components. The biomass types covered are marine, freshwater, herbaceous, woody, and waste biomass, and a representative composition is given in the table. Other components not included in the composition are ash and crude protein.

Biomass Conversion Technologies
The conversion of biomass involves treating the biomass so that the solar energy stored as chemical energy in the biomass molecules can be utilized. Common biomass conversion routes begin with pretreatment in the case of cellulosic and grain biomass and with extraction of oil in the case of oilseeds. The cellulosic or starch-containing biomass then undergoes fermentation (anaerobic or aerobic), gasification, or pyrolysis, while the oil from oilseeds is transesterified to obtain the desired product. There are other process technologies, including hydroformylation, metathesis, and epoxidation, related to the direct conversion of oils to fuels and chemicals, the details of which are not included in this chapter.
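The transesterification step mentioned above converts one triglyceride molecule and three alcohol molecules into three fatty acid esters and one glycerol molecule. As a rough mass balance, the sketch below assumes trilinolein (the triglyceride drawn in Fig. 12) reacting with methanol; the atomic weights are standard values, and the example is illustrative rather than a description of any specific process.

    # Overall transesterification: triglyceride + 3 CH3OH -> 3 methyl esters + glycerol.
    ATOMIC = {"C": 12.011, "H": 1.008, "O": 15.999}

    def molar_mass(formula):
        """Molar mass from a dictionary of element counts."""
        return sum(ATOMIC[element] * count for element, count in formula.items())

    trilinolein = molar_mass({"C": 57, "H": 98, "O": 6})       # representative triglyceride
    methanol = molar_mass({"C": 1, "H": 4, "O": 1})
    methyl_linoleate = molar_mass({"C": 19, "H": 34, "O": 2})  # the resulting biodiesel ester
    glycerol = molar_mass({"C": 3, "H": 8, "O": 3})

    reactants = trilinolein + 3 * methanol
    products = 3 * methyl_linoleate + glycerol
    print(f"Mass balance: {reactants:.1f} g/mol in, {products:.1f} g/mol out")
    print(f"~{3 * methyl_linoleate / trilinolein:.2f} kg of methyl esters per kg of oil")

The mass balance closes, and the ester yield is essentially 1 kg per kg of oil, with glycerol as the co-product.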

Biomass Pretreatment
Biomass is composed of components such as starch, sugars, cellulose, hemicellulose, lignin, fats, oils, etc., as described in the previous section. Often two or more of these components are physically mixed with each other, and a pretreatment is necessary before the chemical energy in the biomass molecules can be utilized in a useful way. For example, lignocellulosic biomass is composed of cellulose, hemicellulose, and lignin. The cellulose and hemicellulose are polysaccharides of hexose and pentose. Any process that uses biomass therefore requires that the feedstock be pretreated so that the cellulose and hemicellulose in the biomass are broken down to their monomeric forms.


Pretreatment processes produce a solid pretreated biomass residue that is more amenable to enzymatic hydrolysis by cellulases and related enzymes than native biomass. Biocatalysts like yeasts and bacteria can act only on the monomers and ferment them to alcohols, lactic acid, etc. The pretreatment process also removes the lignin in biomass, which is not acted upon by enzymes or fermented further.

Pretreatment usually begins with a physical reduction in the size of the plant material by milling, crushing, and chopping (Teter et al. 2006). Equipment used in industry for size reduction includes the rotary breaker, roll crusher, hammer mill, impactor, tumbling mill, and roller mill. The size of the biomass particles needs to be reduced to a nominal size of 1–6 mm (Womac et al. 2007). For example, in the processing of sugarcane, the cane is first cut into segments and then fed into consecutive rollers to extract cane juice rich in sucrose and to physically crush the cane, producing a fibrous bagasse having the consistency of sawdust. In the case of corn stover processing, the stover is chopped with knives or ball milled to increase the exposed surface area and improve wettability. Corn is hammer milled to flour before it is transferred to cook tanks. The physical reduction in size exposes a wider surface area for further chemical conversions. However, physical size reduction is an energy-intensive process, and an optimum degree of size reduction is required to balance energy consumption and conversion efficiency. For example, recent research in corn fermentation shows that using more finely ground corn enables the liquefaction to be conducted at lower temperatures; this process is known as cold starch hydrolysis.

After the physical disruption process, the biomass may be chemically treated to remove lignin, as shown in Fig. 8b. Lignin forms a coating on the cellulose microfibrils in untreated biomass, making the cellulose unavailable for enzymatic or acid hydrolysis. Lignin also adsorbs some of the expensive cellulose-active enzymes. The following chemical pretreatment processes are employed for biomass conversion.

Hot wash pretreatment: This pretreatment concept was developed at the National Renewable Energy Laboratory and uses hot water or hot dilute acids at temperatures above 135 °C to wash out the solubilized lignin and hemicellulosic sugars (Tucker et al. 2011). The hot wash pretreatment process involves the passage of hot water through heated stationary biomass and is responsible for solubilization of the hemicellulose fraction (Teter et al. 2006). The hemicellulose is converted to pentose oligomers by this process, and these need to be further converted to the respective monosaccharides before fermentation. The performance of this pretreatment process depends on temperature and flow rate and requires about 8–16 min. About 46 % of the lignin is removed at high flow rates and temperatures. The hydrothermal process does not require acid-resistant material for the reactors, but water use and recovery costs are disadvantages of the process.

Acid hydrolysis: Hydrolysis is a chemical reaction or process in which a chemical compound reacts with water. The process is used to break complex polymer structures into their component monomers and can be used for the hydrolysis of polysaccharides like cellulose and hemicellulose (Katzen and Schell 2006).
When hydrolysis is catalyzed by acids such as sulfuric, hydrochloric, nitric, or hydrofluoric acid, the process is called acid hydrolysis. The reactions for hydrolysis can be expressed as given in Eqs. 4 and 5:

Cellulose (glucan) → Glucose → 5-Hydroxymethylfurfural → Tars  (4)

Hemicellulose (xylan) → Xylose → Furfural → Tars  (5)


The desired products of hydrolysis are glucose and xylose. Under severe conditions of high temperature and high acid concentration, the products tend to degrade to hydroxymethylfurfural, furfural, and tars. Dilute sulfuric acid is inexpensive in comparison with the other acids, and it has been widely studied, so the chemistry of acid conversion processes is well known (Katzen and Schell 2006). Biomass is mixed with a dilute sulfuric acid solution and treated with steam at temperatures ranging from 140 °C to 260 °C. Xylan is rapidly hydrolyzed to xylose at low temperatures of 140–180 °C. At higher temperatures, cellulose is depolymerized to glucose, but the xylan is converted to furfural and tars. The pretreatment conditions used in the lignocellulosic biomass (corn stover) feedstock-based ethanol process of Aden et al. (2002) were an acid concentration of 1.1 %, a residence time of 2 min, a temperature of 190 °C, and a pressure of 12.1 atm. Concentrated acids at low temperatures (100–120 °C) are also used to hydrolyze cellulose and hemicellulose to sugars (Katzen and Schell 2006). Higher yields of sugars are obtained in this case, with lower conversion to tars. The viability of this process depends on low-cost recovery of the expensive acid catalysts.

Enzymatic hydrolysis: Acid hydrolysis, explained in the previous section, has the major disadvantage that part of the sugars is converted to degradation products like tars. This degradation can be prevented by using enzymes favoring 100 % selective conversion of cellulose to glucose. When hydrolysis is catalyzed by such enzymes, the process is known as enzymatic hydrolysis (Katzen and Schell 2006). The temperature and pressure for enzymatic hydrolysis depend on the particular enzyme and its tolerance to a particular temperature; a detailed discussion of the temperature requirements of individual enzymes is beyond the scope of this chapter. Enzymatic hydrolysis is carried out by bacteria, fungi, and protozoa, as well as by insects (Teter et al. 2006). Advances in gene sequencing of microorganisms have made it possible to identify the enzymes in them that are responsible for biomass degradation. Bacteria like Clostridium thermocellum, Cytophaga hutchinsonii, and Rubrobacter xylanophilus and fungi like Trichoderma reesei and Phanerochaete chrysosporium have revealed enzymes responsible for carbohydrate degradation. Based on their target material, these enzymes are grouped into the following classifications (Teter et al. 2006): glucanases or cellulases are the enzymes that participate in the hydrolysis of cellulose to glucose, while hemicellulases are responsible for the degradation of hemicellulose. Some cellulases have significant xylanase or xyloglucanase side activity, which makes it possible to use them for degrading both cellulose and hemicellulose.

Ammonia fiber explosion: This process uses ammonia mixed with biomass in a 1:1 ratio under high pressure (21 atm) at temperatures of 60–110 °C for 5–15 min, followed by an explosive pressure release. This process, also referred to as the AFEX process, improves saccharification rates of various herbaceous crops and grasses. The pretreatment does not significantly solubilize hemicellulose compared to acid pretreatment. The conversions achieved depend on the composition of the feedstock; for example, over 90 % hydrolysis of cellulose and hemicellulose was obtained after AFEX pretreatment of Bermuda grass (Sun and Cheng 2002). The volatility of ammonia makes it easy to recycle the gas (Teter et al. 2006).
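As an illustration of what pretreatment followed by hydrolysis can release, the sketch below combines the Bermuda grass composition from Table 4 with the roughly 90 % cellulose and hemicellulose hydrolysis reported above for AFEX pretreatment. The hydration factors (162 to 180 for glucan, 132 to 150 for xylan) are standard stoichiometry; treating all of the hemicellulose as xylan is a simplifying assumption made only for this example.

    # Approximate fermentable sugar released per dry metric ton of Bermuda grass.
    cellulose_frac = 0.317        # dry wt fraction, Table 4
    hemicellulose_frac = 0.402    # dry wt fraction, Table 4 (treated here as all xylan)
    hydrolysis_yield = 0.90       # ~90 % conversion reported after AFEX pretreatment

    GLUCAN_TO_GLUCOSE = 180.16 / 162.14   # water added per anhydroglucose unit
    XYLAN_TO_XYLOSE = 150.13 / 132.12     # water added per anhydroxylose unit

    dry_mass_kg = 1000.0
    glucose_kg = dry_mass_kg * cellulose_frac * GLUCAN_TO_GLUCOSE * hydrolysis_yield
    xylose_kg = dry_mass_kg * hemicellulose_frac * XYLAN_TO_XYLOSE * hydrolysis_yield

    print(f"Glucose: ~{glucose_kg:.0f} kg, xylose: ~{xylose_kg:.0f} kg, "
          f"total: ~{glucose_kg + xylose_kg:.0f} kg per dry ton")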

Fermentation
The pretreatment of biomass is followed by the fermentation process, in which the pretreated biomass containing 5-carbon and 6-carbon sugars is acted upon by biocatalysts to produce the desired products. Fermentation refers to enzyme-catalyzed, energy-yielding chemical reactions that occur during the breakdown of complex organic substrates in the presence of microorganisms (Klass 1998). The microorganisms used for fermentation can be yeasts or bacteria. The microorganisms feed on the sucrose or glucose released after pretreatment and convert them to alcohol and carbon dioxide.



Fig. 13 Anaerobic digestion process

The simplest reaction for the conversion of glucose by fermentation is given in Eq. 6:

C6H12O6 → 2C2H5OH + 2CO2  (6)
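Equation 6 fixes the theoretical mass yield of fermentation: the familiar figure of about 0.51 g of ethanol per gram of glucose follows directly from the molar masses, as the short sketch below shows (the molar masses are standard values, not taken from the text).

    # Theoretical yields from glucose fermentation, Eq. 6: C6H12O6 -> 2 C2H5OH + 2 CO2
    GLUCOSE_MW = 180.16   # g/mol
    ETHANOL_MW = 46.07    # g/mol
    CO2_MW = 44.01        # g/mol

    ethanol_yield = 2 * ETHANOL_MW / GLUCOSE_MW   # ~0.511 g ethanol per g glucose
    co2_yield = 2 * CO2_MW / GLUCOSE_MW           # ~0.489 g CO2 per g glucose
    print(f"Ethanol: {ethanol_yield:.3f} g/g glucose, CO2: {co2_yield:.3f} g/g glucose")

Practical fermentations reach somewhat less than this stoichiometric maximum, since part of the substrate goes to cell growth and by-products.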

An enzyme catalyst is highly specific, catalyzing only one or a small number of reactions, and only a small amount of enzyme is required. Enzymes are usually proteins of high molecular weight (15,000 < MW < several million Daltons) produced by living cells. The catalytic ability is due to the particular protein structure, and a specific chemical reaction is catalyzed at a small portion of the surface of an enzyme, called an active site (Klass 1998). Enzymes have been used since early human history without it being known how they worked; they have been used commercially since the 1890s, when fungal cell extracts were used to convert starch to sugar in brewing vats. Microbial enzymes include cellulase, hemicellulase, catalase, streptokinase, amylase, protease, lipase, pectinase, glucose isomerase, lactase, etc. The type of enzyme selected determines the end product of fermentation. The growth of the microbes requires a carbon source (glucose, xylose, glycerol, starch, lactose, hydrocarbons, etc.) and a nitrogen source (protein, ammonia, corn steep liquor, diammonium phosphate, etc.).

Many organic chemicals, such as ethanol, succinic acid, itaconic acid, and lactic acid, can be manufactured using live organisms that have the enzymes required for converting the biomass. Ethanol is produced by the bacterium Zymomonas mobilis or the yeast Saccharomyces cerevisiae. Succinic acid is produced in high concentrations by Actinobacillus succinogenes obtained from the rumen ecosystem (Lucia et al. 2007). Other microorganisms capable of producing succinic acid include propionate-producing bacteria of the Propionibacterium genus, gastrointestinal bacteria such as Escherichia coli, and rumen bacteria such as Ruminococcus flavefaciens. Lactic acid is produced by a class of bacteria known as lactic acid bacteria (LAB), including the genera Lactobacillus, Lactococcus, Leuconostoc, Enterococcus, etc. (Axelsson 2004). Commercial processes for corn wet milling and dry milling operations and the fermentation of lignocellulosic biomass through acid hydrolysis and enzymatic hydrolysis are discussed in detail in chapter "▶ Chemicals from Biomass."

Anaerobic Digestion
Anaerobic digestion of biomass is the treatment of biomass with a mixed culture of bacteria to produce methane (biogas) as a primary product. The four stages of anaerobic digestion are hydrolysis, acidogenesis, acetogenesis, and methanogenesis, as shown in Fig. 13. In the first stage, hydrolysis, complex organic molecules are broken down into simple sugars, amino acids, and fatty acids with the addition of hydroxyl groups. In the second stage, acidogenesis, volatile fatty acids (e.g., acetic, propionic, butyric, valeric) are formed along with ammonia, carbon dioxide, and hydrogen sulfide. In the third stage, acetogenesis, simple molecules from acidogenesis are further digested to produce carbon dioxide, hydrogen, and organic acids, mainly acetic acid. Then, in the fourth stage, methanogenesis, the organic acids are converted to methane, carbon dioxide, and water.


Anaerobic digestion can be conducted either wet or dry, where dry digestion has a solids content of 30 % or greater and wet digestion has a solids content of 15 % or less. Either batch or continuous digester operations can be used. In continuous operations there is a constant production of biogas, while in batch operations, which can be considered simpler, the production of biogas varies. The standard process for anaerobic digestion of cellulose waste to biogas (65 % methane, 35 % carbon dioxide) uses a mixed culture of mesophilic or thermophilic bacteria (Kebanli 1981). Mixed cultures of mesophilic bacteria function best at 37–41 °C, and thermophilic cultures function best at 50–52 °C, for the production of biogas. Biogas also contains a small amount of hydrogen and a trace of hydrogen sulfide, and it is usually used to produce electricity. There are two by-products of anaerobic digestion: acidogenic digestate and methanogenic digestate. Acidogenic digestate is a stable organic material composed largely of lignin and chitin and resembling domestic compost; it can be used as compost or to make low-grade building products such as fiberboard. Methanogenic digestate is a nutrient-rich liquid that can be used as a fertilizer, but it may include low levels of toxic heavy metals or synthetic organic materials such as pesticides or PCBs, depending on the source of the biofeedstock undergoing anaerobic digestion. Kebanli et al. (1981) give a detailed process design along with pilot unit data for converting animal waste to fuel gas used for power generation. A first-order rate constant of 0.011 ± 0.003 per day was measured for the conversion of volatile solids to biogas from dairy farm waste. In a biofeedstock, the total solids are the sum of the suspended and dissolved solids, and the total solids are composed of volatile and fixed solids. In general, the residence time for an anaerobic digester varies with the amount of feed material, the type of material, and the temperature. A residence time of 15–30 days is typical for mesophilic digestion, and the residence time for thermophilic digestion is about one-half of that for mesophilic digestion. The digestion of the organic material involves a mixed culture of naturally occurring bacteria, each of which performs a different function. Maintaining anaerobic conditions and a constant temperature is essential for the viability of the bacterial culture.

Holtzapple et al. (1999) describe a modification of the anaerobic digestion process, the MixAlco process, in which a wide array of biodegradable materials is converted to mixed alcohols. Thanakoses et al. (2003) describe the conversion of corn stover and pig manure to carboxylic acids, i.e., through the third stage of the process. In the MixAlco process, the fourth stage of anaerobic digestion, the conversion of the organic acids to methane, carbon dioxide, and water, is inhibited using iodoform (CHI3) and bromoform (CHBr3). Biofeedstocks for this process can include urban wastes, such as municipal solid waste and sewage sludge, and agricultural residues, such as corn stover and bagasse. Products include carboxylic acids (e.g., acetic, propionic, and butyric acid), ketones (e.g., acetone, methyl ethyl ketone, and diethyl ketone), and biofuels (e.g., ethanol, propanol, and butanol).
The process uses a mixed culture of naturally occurring microorganisms found in natural habitats, such as the rumen of cattle, to anaerobically digest biomass into a mixture of carboxylic acids produced during the acidogenic and acetogenic stages of anaerobic digestion. The fermentation conditions make the MixAlco process viable, since the fermentation involves a mixed culture of bacteria obtained from the animal rumen, which is available at lower cost than the genetically modified organisms and sterile conditions required by other fermentation processes. The MixAlco process is outlined in Fig. 14, where biomass is pretreated with lime to remove lignin; calcium carbonate is also added to the pretreatment process. The resultant mixture containing hemicellulose and cellulose is fermented using a mixed culture of bacteria obtained from cattle rumen. This fermentation produces a mixture of carboxylate salts, which is then dewatered, thermally converted to mixed ketones, and hydrogenated to mixed alcohols (Fig. 14). Carboxylic acids are naturally formed in the following places: the animal rumen, anaerobic sewage digesters, swamps, termite guts, etc. The same microorganisms are used for the anaerobic digestion process, and the acid products at different culture temperatures are given in Table 5.



Fig. 14 Flow diagram for the MixAlco process using anaerobic digestion (Granda 2007)

Table 5 Carboxylic acid products at different culture temperatures (Granda 2007)
C2 – Acetic: 41 wt% (40 °C), 80 wt% (55 °C)
C3 – Propionic: 15 wt% (40 °C), 4 wt% (55 °C)
C4 – Butyric: 21 wt% (40 °C), 15 wt% (55 °C)
C5 – Valeric: 8 wt% (40 °C)
C6 – Caproic: 12 wt% (40 °C)
C7 – Heptanoic: 3 wt% (40 °C)
Total: 100 wt% (40 °C)

97 % methane can be used as transport fuel. Marine algae have gained importance as potential sources for biofuel production, both as substrates for fermentation to hydrogen, ethanol, and butanol and as oil-rich sources for biodiesel production. Due to their lower energy and water requirements, higher carbon dioxide capture, and negligible lignin content, they are considered superior to terrestrial biomass (Tran et al. 2010; Jung et al. 2011). However, several factors, including availability, moisture content, and cellulose/lignin ratio, impact the biochemical production of biofuels.

Process Overview
Major processes involved in the biochemical production of biofuels are biomass handling, biomass pretreatment, hydrolysis, and fermentation. However, depending on the source of biomass, the route of conversion to biofuel, and the type of biofuel, the sequence of processes can vary. Figure 1 shows a schematic representation of some common unit operations and processes for the biofuels mentioned in section "Biofuels."

Handling
Biomass, whether grown or obtained from various sources, needs to be transported to production sites for biochemical conversion to fuels. Postharvest, it is prepared as bales, pellets, and briquettes, for which the biomass has to be reduced in size. Size reduction is an important mechanical preprocessing step to increase the bulk density and flowability of particles for transportation. Biomass is generally ground to 3–8 mm particles to compact it into pellets or briquettes of higher density. Important parameters in evaluating the efficiency of size reduction are particle size, particle size distribution, shape, surface area, density, and the energy efficiency of the mill used (Miao et al. 2011). Because a continuous supply of biomass feedstocks is not always available, storage of biomass becomes important to ensure an uninterrupted supply for continuous production of biofuels. Although outdoor storage of wood chunks is a commonly practiced method, studies show that terpenes are emitted from wood due to exposure to direct heat from sunlight (Rupar and Sanati 2005). Large silos and specially constructed facilities are used for biomass storage to protect feedstock from the effects of weather, rodents, and microbial growth. Microbial growth during storage causes loss of substrate and also has the potential to result in self-ignition due to exothermic reactions.


Therefore, dry conditions must be maintained so that there is little microbial activity in the biomass during storage. Field drying postharvest is a common drying method in sunny regions; however, thermal or mechanical drying techniques using drum driers are available for drying biomass after harvest and before storage in colder regions (Venturi et al. 1999).
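Drying costs can be put in perspective with a simple energy estimate: the sketch below computes only the heat needed to evaporate the water removed during drying. The initial and final moisture contents are illustrative assumptions, the latent heat of vaporization of water (about 2.26 MJ/kg) is a standard value, and sensible heat and dryer losses are ignored.

    # Minimum evaporation energy for drying biomass (illustrative assumptions).
    wet_mass_kg = 1000.0            # 1 t of freshly harvested biomass (assumed)
    moisture_in = 0.50              # assumed initial moisture content, wet basis
    moisture_out = 0.15             # assumed target moisture content, wet basis
    LATENT_HEAT_MJ_PER_KG = 2.26    # latent heat of vaporization of water

    dry_matter_kg = wet_mass_kg * (1.0 - moisture_in)
    final_mass_kg = dry_matter_kg / (1.0 - moisture_out)
    water_removed_kg = wet_mass_kg - final_mass_kg

    energy_mj = water_removed_kg * LATENT_HEAT_MJ_PER_KG
    print(f"Water removed: {water_removed_kg:.0f} kg")
    print(f"Heat required: {energy_mj:.0f} MJ (~{energy_mj / 1000:.2f} GJ) per wet ton")

Under these assumptions the evaporation load is on the order of 1 GJ per wet ton, roughly a tenth of the heating value of the dry matter in this example, which is why field or low-grade-heat drying is attractive where feasible.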

Pretreatment
Pretreatment plays an important role in the biochemical conversion yields of biofuels. Complex structures in biomass are broken down into oligomeric subunits through pretreatment; these oligomers are further broken down into monomeric units during hydrolysis and fermentation. Pretreatment enhances product yields by disrupting and solubilizing the hemicellulose and lignin structures in biomass. Key properties affecting the conversion of lignocellulose are the crystallinity of cellulose, degree of polymerization, moisture content, available surface area, and lignin content (Chang and Holtzapple 2000). The aim of pretreatment is to disrupt the lignocellulosic structure by (1) removing hemicellulose, increasing the mean pore size, and facilitating the entrance of enzymes and hydrolysis and (2) removing or redistributing lignin to reduce its "shielding" effect (Alvira et al. 2010). Pretreatment processes will ideally achieve the following (Yang and Wyman 2008):

• High yields for multiple crops, sites, ages, and harvesting times
• Highly digestible pretreated solid
• Minimum amount of toxic compounds
• Biomass size reduction not required
• Operation in reasonable size and moderate cost reactors
• Nonproduction of solid-waste residues
• Effective at low moisture content
• Obtains high sugar concentration (from hydrolysis)
• Fermentation compatibility (minimal production of inhibitors)
• Lignin recovery
• Minimum heat and power requirements

Main Classes of Pretreatment
The main classes of pretreatment covered in this chapter are mechanical, chemical, physiochemical, and biological. Mechanical pretreatment is discussed at this point, as it applies to most process trains for biomass conversion. Chemical, physiochemical, and biological pretreatments are described in section "Pretreatment," as they pertain most closely to bioethanol production. At that point, the characteristics making acid and alkali pretreatments suitable for methane production are also discussed.

Mechanical
Milling uses grinding to reduce particle size and crystallinity: the specific surface area is increased, and the degree of polymerization is decreased. Numerous milling systems can be employed: ball, hammer, roller, colloid, and vibro energy milling (Alvira et al. 2010; Taherzadeh and Karimi 2008). Coupled with other pretreatments, milling can increase the hydrolysis yield for lignocellulose by 5–25 % and reduce digestion time by 23–59 % (Delgenes et al. 2003; Hartmann and Ahring 2000). There are limits to its effectiveness: size reduction below #40 mesh does not improve hydrolysis yield or rate (Chang and Holtzapple 2000), and power requirements are large, which limits economic feasibility (Hendriks and Zeeman 2009).


• Chemical (section "Pretreatment")
  • Acid pretreatment – concentrated and dilute
  • Alkali pretreatment – NaOH, Ca(OH)2, or ammonia
• Physiochemical (section "Pretreatment")
  • Thermal processes include liquid hot water (LHW) and steam pretreatment
  • Steam explosion
  • Ammonia explosion (and CO2 explosion)
  • Other physiochemical methods include organosolv and wet oxidation
• Biological pretreatment – brown and white soft-rot fungi (section "Pretreatment")

Alvira et al. conclude that chemical and thermochemical methods are the most effective and promising technologies for industrial applications (Alvira et al. 2010). They suggest that combinations of different pretreatments should be considered for optimal fractionation of components and high yields. They also stress the need for additional fundamental research on plant cells to better understand the reactions induced by pretreatment. Taherzadeh and Karimi (2008) concluded that concentrated acids, wet oxidation, solvents, and metal complexes are effective but too expensive (Fan et al. 1987; Mosier et al. 2005a). They concluded that steam pretreatment, lime pretreatment, LHW systems, and ammonia-based pretreatments have a high potential. Eggeman and Elander (2005) presented an economic evaluation showing only small differences in cost for five different pretreatment technologies (dilute acid, hot water, ammonia fiber explosion (AFEX), ammonia recycle percolation (ARP), and lime). This analysis appears in the special issue "Coordinated development of leading biomass pretreatment technologies" (Wyman et al. 2005). Optimizing enzyme blends and hydrolysate conditioning may better differentiate process economics.

Hydrolysis and Fermentation
During hydrolysis, the polymeric and oligomeric cellulosic structures are broken down into simpler molecules such as glucose, cellobiose, xylose, galactose, arabinose, and mannose. This is done by the action of either chemical or enzymatic agents. Enzymatic hydrolysis is a complex process that takes place at the solid/liquid interphase. Several processes, such as chemical and physical changes in the solid biomass, primary hydrolysis of soluble intermediates from the surface, and secondary hydrolysis to ultimately simpler molecules such as glucose, take place simultaneously (Balat 2007). More discussion of the enzymes used in hydrolysis is provided in section "Hydrolysis."

The conversion of simpler carbohydrates to alcohol through the action of microbes is called fermentation. Fermentation is both substrate and microbe specific; more details about fermentation are given in section "Biofuels" for each biofuel: hydrogen, methane, ethanol, butanol, and biodiesel. A combination of hydrolysis and fermentation is another option, in which complex carbohydrates are simultaneously broken down to simpler ones and converted to alcohol. This process is commonly called simultaneous saccharification and fermentation (SSF). Product yields from SSF are higher than from separate hydrolysis and fermentation (SHF), as the end-product inhibition during hydrolysis of higher carbohydrates to glucose and cellobiose is relieved by the simultaneous fermentation of glucose to ethanol (Balat 2007).

Hydrolysis and fermentation are carried out in both batch and continuous modes. Batch reactors require a higher reactor volume than continuous reactors to achieve similar product yields. Two basic types of continuous reactors used in biochemical reactions are the continuously stirred tank reactor (CSTR) and the plug flow reactor (PFR). Most commonly, a CSTR is used for hydrolysis and fermentation in the biochemical production of biofuels. One study compared a packed bed reactor (PBR) with an upflow anaerobic sludge bed (UASB) for the production of hydrogen from the organic fraction of municipal solid waste, where the PBR was packed with municipal solid waste.


Retention times of 50 and 24 h gave maximum hydrogen yields of 23 % v/v and 30 % v/v (based on volume of waste) for the PBR and UASB, respectively (Alzate-Gaviria et al. 2007). Other studies have investigated combined or sequential two-stage processes involving the coproduction of hydrogen and methane, since hydrogen is an intermediate byproduct of methane production (Park et al. 2010; Zhu et al. 2008; Koutrouli et al. 2009). Reactor volume is known to be limited by dissolved oxygen and heat transfer. Fermentation for hydrogen, methane, ethanol, and butanol production is anaerobic, however, and the reactor volume is then not limited by dissolved oxygen and heat transfer when run in continuous mode. Therefore, CSTR fermentation systems with recycling of cell mass are sufficient to overcome solvent toxicity and limited cell growth (García et al. 2011).
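The reactor comparison above can be normalized by retention time to give a crude volumetric figure of merit. The sketch below simply divides the reported yields by the reported retention times for the two configurations cited (Alzate-Gaviria et al. 2007); it is a back-of-the-envelope comparison, not an analysis from that study.

    # Reported hydrogen yields and retention times (Alzate-Gaviria et al. 2007).
    reactors = {
        "PBR":  {"yield_vv_percent": 23.0, "retention_h": 50.0},
        "UASB": {"yield_vv_percent": 30.0, "retention_h": 24.0},
    }

    for name, data in reactors.items():
        # Yield per hour of retention time, as a rough volumetric comparison.
        rate = data["yield_vv_percent"] / data["retention_h"]
        print(f"{name:4s}: {rate:.2f} % v/v per hour of retention")

On this basis the UASB configuration delivers its (higher) yield in roughly half the time, i.e., about 2.7 times the yield per hour of retention.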

Biofuels
Hydrogen
Biohydrogen is considered a potential biofuel for the future; it is produced from biomass through different routes and their combinations. Gasification of biomass is one of these routes; refer to the chapters on thermal conversion of biomass, integrated gasification combined cycle (IGCC), and conversion of syngas to fuels in this handbook for more details about the gasification process. Hydrogen is a natural byproduct of many microbial processes under anaerobic conditions. Certain microbes release hydrogen from water in the presence of sunlight and/or carbon dioxide. Microbes that derive carbon from carbohydrates and need sunlight as a source of energy to release hydrogen are called phototrophic or photosynthetic organisms (e.g., Rhodobacter), and those that derive their carbon from carbon dioxide and their energy from sunlight are called photoautotrophic organisms (e.g., green microalgae and cyanobacteria) (Wukovits et al. 2009). The different fermentative processes, based on different sources of energy and their combinations, are anaerobic fermentation, dark fermentation, photo fermentation, direct photolysis, indirect biophotolysis, and the fermentative water-gas shift reaction. The majority of these processes combine microbiological routes led by several microbes.

Anaerobic fermentation is a four-stage process carried out by a consortium of microbes. In the first stage, the complex organic components are converted to simpler components (e.g., sugars) by hydrolysis. In the second stage, the products of hydrolysis are further broken down to short-chain fatty acids by acidogenic bacteria. During the third stage, acetogenesis, the products of the second stage are converted to acetic acid, hydrogen, and carbon dioxide. In the final stage, methanogenesis, the products from the third stage are used by the methanogenic bacteria to produce methane. Thus, hydrogen in this process is an intermediate product, and its production can be increased by increasing the substrate content in the raw material used.

Figure 2 represents three different two-stage routes that are under active investigation. In the first stage, optimized versions of the above-mentioned conventional methods are used to convert biomass to organic acids and hydrogen. In the second stage, additional energy such as light or electricity, together with the methane and hydrogen from the first stage, is used to achieve stoichiometric conversions. Although this combination of two stages produces a mixture of methane and hydrogen, the process can be developed to yield a hydrogen stream.

Dark fermentation, shown in Fig. 2, is carried out by anaerobes that convert the biomass substrate to hydrogen in the absence of light. This process is similar to the first three stages of anaerobic fermentation when the initial raw substrate is a simple carbohydrate. For a complex substrate, hydrolysis, such as a chemical/physical pretreatment of the biomass, is required to break down the complex polymeric biomass substrate to simpler monomeric and oligomeric carbohydrates, which can later be converted to organic acids, carbon dioxide, and hydrogen by anaerobes during dark fermentation. Reaction (1) represents a general formula for hydrogen metabolism from glucose.



Fig. 2 Different two-stage routes for conversion to hydrogen and methane (Hallenbeck and Ghosh 2009)

It is evident that, in the presence of the hydrogenase enzyme, 4 moles of hydrogen are released for every mole of glucose. Thermophilic bacteria, which grow at high temperatures (above 60 °C), ferment biomass and produce hydrogen at higher rates than mesophilic bacteria, which grow at moderate temperatures (below 50 °C), owing to the more nearly aseptic conditions maintained at high temperatures. Additionally, hydrogen production depends on the other byproduct organic acids present in the effluent. Acetic acid and other organic acids have an inhibitory effect on the growth of microbes, consequently influencing the hydrogen yield. Besides its inhibitory effect, acetic acid influences the pH of the system, thus affecting the activity of the hydrogenase enzyme responsible for the production of hydrogen.

C6H12O6 + 2H2O → 2CH3COOH + 2CO2 + 4H2 (1)
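As a rough illustration of what Reaction (1) implies, and not a figure from the source, the sketch below computes the theoretical hydrogen yield per kilogram of glucose from the 4:1 molar stoichiometry and standard molar masses; real dark fermentations reach only a fraction of this limit.

```python
# Theoretical H2 yield from glucose via dark fermentation, Reaction (1):
# C6H12O6 + 2H2O -> 2CH3COOH + 2CO2 + 4H2
# Illustrative calculation only; assumes ideal stoichiometric conversion.

M_GLUCOSE = 180.16         # g/mol
M_H2 = 2.016               # g/mol
H2_PER_GLUCOSE = 4         # mol H2 per mol glucose (hydrogenase route)
MOLAR_VOLUME_STP = 22.414  # L/mol at 0 degC, 1 atm

def h2_yield_per_kg_glucose(kg_glucose: float = 1.0):
    mol_glucose = kg_glucose * 1000.0 / M_GLUCOSE
    mol_h2 = H2_PER_GLUCOSE * mol_glucose
    mass_h2_g = mol_h2 * M_H2
    volume_h2_L = mol_h2 * MOLAR_VOLUME_STP
    return mass_h2_g, volume_h2_L

if __name__ == "__main__":
    mass_g, vol_L = h2_yield_per_kg_glucose(1.0)
    # Roughly 45 g (about 500 L at STP) of H2 per kg glucose at the theoretical limit.
    print(f"Theoretical yield: {mass_g:.0f} g H2 (~{vol_L:.0f} L at STP) per kg glucose")
```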

Photo fermentation involves a series of biochemical reactions similar to those of anaerobic digestion. However, unlike dark fermentation, it requires light as the energy source during hydrogen production. Simple, short-chain fatty acids are converted to carbon dioxide and hydrogen, catalyzed by the nitrogenase enzyme in the absence of nitrogen, by purple nonsulfur bacteria or green microalgae. Reaction (2) describes the conversion process.


Theoretically, 4 moles of hydrogen are produced for every mole of acetic acid, but in practice part of the acetic acid is used for the production of cells. Moreover, a large surface area is required to capture the necessary light energy, which makes the process challenging in terms of bioreactor design. Transparent tubular reactors and flat panel reactors consisting of transparent rectangular boxes are under investigation (Wukovits et al. 2009).

CH3COOH + 2H2O + light energy → 4H2 + 2CO2 (2)

Combination of the above-mentioned fermentations enhances the yield of hydrogen production. One such combination is dark fermentation and anaerobic digestion in which the monomeric components of the polymeric biomass are converted to biohydrogen. Dark fermentation and photo fermentation is another combination process that theoretically yields 12 moles of hydrogen for every mole of hexose sugar. This approach, called “Hyvolution,” would allow complete digestion of biomass, enhancing smallscale, cost-effective production of hydrogen, which otherwise is limited by thermodynamic considerations (Wukovits et al. 2009). Another approach mentioned in the second stage (lower right of Fig. 2) employs microbial electrohydrogenesis cells (MECs). In this method, electricity is applied to a microbial fuel cell that provides the necessary energy to convert the byproducts (typically organic acids) of the first stage into hydrogen (Hallenbeck and Ghosh 2009). Several raw materials such as kitchen waste, animal waste, agricultural residues, etc., are used as substrates for biohydrogen production. Fermentation of kitchen waste devoid of plastic and bones was used to produce hydrogen with a maximum efficiency of 4.77 LH2/(L reactor day) in a continuous stirred tank reactor (Shi et al. 2009). Use of second-generation feedstocks that are of cellulose origin such as corn stalks, wheat straw, switch grass, and miscanthus further enhance economical production of hydrogen. Pretreated lipid extracted microalgal biomass residue (LMBR) showed threefold hydrogen yields compared to the untreated LMBR (Yang et al. 2010). However, noncellulosic components such as xylose require conversion by a fermentative organism. High-thermophilic mixed culture was developed for xylose fermenting to biohydrogen at 1.36  0.03 mol H2/mol xylose consumed (Kongjan et al. 2009). Organisms belonging to genus Clostridium such as Clostridium butyricum, C. acetobutylicum, C. saccharoperbutylacetonicum, and C. pasteurianum are often used in the anaerobic production of hydrogen. Anaerobic thermophilic bacterial fermentation to hydrogen is the most suitable option due to increasing chemical and enzymatic reaction rates at high temperatures. Additionally, thermophilic processes yield lesser undesirable products as compared to mesophilic processes (Koskinen et al. 2008). An optimized fermentation of hydrolysate obtained from treating sugarcane bagasse with 0.5 % H2SO4 under 121  C and 1.5 kg/cm2 in autoclave for 60 min was obtained at initial pH 5.5 and initial total sugar concentration of 20 g/L at 37  C (Pattra et al. 2008). Thus, initial pH and total sugar concentration are important factors for an optimal hydrogen yield. However, an increase in hydrolysate (sugar) concentrations from 25 % (v/v) to 30 % (v/v) led to no hydrogen production. Further, an increase in lag time was observed from 11 to 38 h for an increase in hydrolysate concentrations from 20 % (v/v) to 25 % (v/v) for a mixed thermophilic dark fermentation process (Kongjan et al. 2010). Supplemental glucose and xylose with a ratio of 2:3 along with suitable pH control and inoculum concentration are realized to be the key factors for enhanced hydrogen production (Prakasham et al. 2009). Finally, biophotolysis is a low productivity method for hydrogen gas production. It involves dissociation of water by solar energy using green micro algae. The process takes place in two ways, direct biophotolysis and indirect biophotolysis. 
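The 12 mol H2 per mol hexose figure quoted above for the combined dark plus photo fermentation ("Hyvolution") route follows directly from adding the stoichiometries of Reactions (1) and (2); the short sketch below is an illustrative consistency check, not source material.

```python
# Consistency check on the two-stage (dark + photo fermentation) yield quoted
# in the text: Reaction (1) gives 4 H2 and 2 acetate per glucose; Reaction (2)
# gives 4 H2 per acetate. Illustrative arithmetic only.

H2_DARK_PER_GLUCOSE = 4    # Reaction (1)
ACETATE_PER_GLUCOSE = 2    # Reaction (1)
H2_PHOTO_PER_ACETATE = 4   # Reaction (2)

total_h2 = H2_DARK_PER_GLUCOSE + ACETATE_PER_GLUCOSE * H2_PHOTO_PER_ACETATE
assert total_h2 == 12      # matches the 12 mol H2 per mol hexose cited for "Hyvolution"
print(f"Theoretical combined yield: {total_h2} mol H2 per mol hexose")
```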
In direct biophotolysis, the microbes use sunlight to split water into oxygen and hydrogen; the absorption of two photons releases electrons that can either reduce carbon dioxide or form hydrogen in the presence of the hydrogenase enzyme. However, the released oxygen has an inhibitory effect on the hydrogenase enzyme, which can be overcome by indirect biophotolysis.

Fig. 3 Block diagram of biogas production from manure (Source: http://pubs.ext.vt.edu/442/442-881/442-881.html)

Indirect biophotolysis is carried out by cyanobacteria, in which water and carbon dioxide form carbohydrates and oxygen via photosynthesis. The second stage involves either dark fermentation or a combination of dark and photo fermentation to produce hydrogen. The fermentative water-gas shift reaction is another biological route, in which carbon monoxide in the presence of water is converted to carbon dioxide and hydrogen (Wukovits et al. 2009).

Methane
Methane is the main component of natural gas, which is used as an energy carrier and raw material all over the world (Seiffert et al. 2009). Biogas produced from anaerobic digestion of biomass contains methane, which can be used for energy purposes. The biochemical conversion of manure and other biomass to methane involves three stages. In the first stage, hydrolysis, enzymes produced by strict anaerobes such as Clostridia, Bacteroides, and Streptococci break up complex molecules such as lipids, polysaccharides, proteins, fats, and nucleic acids into simpler molecules such as monosaccharides, amino acids, and fatty acids. In the second stage, acidogenesis, a group of bacteria ferment the byproducts of hydrolysis to acetic acid, propionic acid, and butyric acid. In the third stage, methanogenesis, methanogens convert the acetic acid, hydrogen, and carbon dioxide into methane and carbon dioxide. Figure 3 shows a block diagram of biogas production from manure.

Biogas production is greatly affected by temperature. Anaerobic fermentation is effective mostly in the mesophilic (15–40 °C) and thermophilic (50–60 °C) temperature ranges. Therefore, reactors are coated with biomass residues such as charcoal and are even constructed facing the sun and sheltered from cold winds, to make maximum use of the heat available from nature (Anand and Singh 1993). Reactors have also been designed with a polythene sheet covering the top so that solar energy heats the reactor contents even during winter (Bansal 1988). Because the acetic acid and hydrogen produced during the process decrease the pH of the system, pH maintenance is another important parameter affecting methane production, the desired pH being 6.8–7.2. Several techniques are used to enhance the production of biogas, such as the addition of organic and inorganic additives, addition of microbial strains, recycling of digested slurry, and maintenance of the C:N ratio. Additives such as powdered green leaves allow adsorption of the substrate, which increases its localized concentration and enhances microbial growth. Addition of Ca and Mg salts acts as a microbial energy supplement and avoids foaming. Recycling of slurry avoids the loss of active culture that otherwise occurs through the effluent stream. As the microbes tend to utilize carbon 25–30 times faster than nitrogen for the production of methane, maintaining the C:N ratio is another critical factor in efficient production of biogas (Yadvika et al. 2004).
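Because the microbes consume carbon roughly 25–30 times faster than nitrogen, digester feeds are often blended to a C:N ratio in that range. The sketch below shows one way such a blend calculation could be set up; the carbon and nitrogen contents used are hypothetical placeholders, not values from this chapter.

```python
# Illustrative sketch: estimating the C:N ratio of a co-digestion feed blend so
# that it falls in the 25-30 range mentioned in the text. The carbon and
# nitrogen contents below are placeholder values for illustration only.

def blend_c_to_n(feeds):
    """feeds: list of (mass_kg, carbon_fraction, nitrogen_fraction) tuples."""
    total_c = sum(m * c for m, c, n in feeds)
    total_n = sum(m * n for m, c, n in feeds)
    return total_c / total_n

# Hypothetical example: nitrogen-rich manure blended with carbon-rich straw.
manure = (100.0, 0.35, 0.025)   # 100 kg dry matter, 35 % C, 2.5 % N  -> C:N ~ 14
straw = (60.0, 0.45, 0.006)     # 60 kg dry matter, 45 % C, 0.6 % N   -> C:N ~ 75

ratio = blend_c_to_n([manure, straw])
print(f"Blend C:N ratio = {ratio:.1f}")  # ~22 here; adding more straw raises it toward 25-30
```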


Biomethane can be distributed through the natural gas grid; where pipelines already exist, as in the UK, Italy, and Germany, this is called the "green gas" concept (Åhman 2010). However, to employ biogas as a transportation fuel, it must be upgraded to a methane concentration of 97 ± 1 % by removing the carbon dioxide (Power and Murphy 2009). About 30–60 % of the wet biomass can be converted to methane by anaerobic digestion, while the remaining residue can be used as biofertilizer (Åhman 2010). Coproduction of methane and hydrogen using a two-stage anaerobic digestion process is another way to optimize the simultaneous production of methane and hydrogen (Zhu et al. 2008). An energy input approximating 22 % of the fuel value is consumed in the production of biomethane, compared to approximately 57 % in the production of bioethanol (Power and Murphy 2009). The majority of the difference arises from the thermal energy consumed in distilling the ethanol and drying the residue obtained from fermentation; methane's gaseous nature thus gives it an advantage over liquid biofuels. However, biomethane losses during digestion and upgrading constitute about 7.41 % of the total biogas produced. Minimizing these losses and improving infrastructure efficiency for biomethane is needed to enhance the utility of methane relative to ethanol (Power and Murphy 2009).
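The parasitic-energy figures quoted above (about 22 % of the fuel value for biomethane versus about 57 % for bioethanol) translate into roughly a 1.8-fold difference in net energy delivered per unit of fuel energy, as the simple arithmetic below illustrates.

```python
# Simple arithmetic on the parasitic-energy figures quoted in the text
# (Power and Murphy 2009): ~22 % of the fuel value is consumed in producing
# biomethane vs ~57 % for bioethanol. Net energy fraction = 1 - parasitic share.

parasitic = {"biomethane": 0.22, "bioethanol": 0.57}

for fuel, share in parasitic.items():
    net = 1.0 - share
    print(f"{fuel}: net energy delivered = {net:.0%} of fuel energy content")

# Ratio of net outputs per unit of fuel energy produced:
print(f"biomethane/bioethanol net-energy ratio = {(1 - 0.22) / (1 - 0.57):.2f}")  # ~1.8
```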

Ethanol
Ethanol is the most extensively studied biofuel to date and has gained great attention as a sustainable biofuel. Bioethanol production and utilization is estimated to reduce greenhouse gas emissions, improve the agricultural economy, enhance rural employment, and increase national security (Mabee and Saddler 2009). Bioethanol has a higher octane number, broader flammability limits, higher flame speeds, and a higher heat of vaporization than gasoline, which allow a higher compression ratio, shorter burn time, and leaner-burn engine. A major problem with ethanol is its water solubility and the azeotrope it forms with water, which limits separation by distillation and consequently raises the cost of the separation process. Other major disadvantages include a lower energy density than gasoline, low vapor pressure (making cold starts difficult), and toxicity to ecosystems (Balat 2007). However, ethanol contains about 35 % oxygen by mass and reduces particulate and NOx emissions, and it increases combustion efficiency as it provides a reasonable antiknock value. It can be blended with gasoline in various amounts, ranging from 5 % to 85–100 %, for use in existing internal combustion engines; 85 % blends (E85, meaning 85 % ethanol in gasoline) are used in flexible fuel vehicles (FFVs). Table 2 shows various blends of ethanol in gasoline used in different countries worldwide. In cars running on pure ethanol, sulfur emissions disappear entirely, and gasoline-driven cars with ethanol replacing lead have negligible carbon monoxide emissions (Goldemberg et al. 2008).

Table 2 Common gasoline ethanol blends available in various countries (Balat 2007)

Country     Common vehicles   Flexible fuel vehicles (FFVs)
USA         E10               E85
Canada      E10               E85
Sweden      E5                E85
India       E5                –
Australia   E10               –
Thailand    E10               –
China       E10               –
Colombia    E10               –
Peru        E10               –
Paraguay    E7                –
Brazil      E20, E25          Any blend available

Fig. 4 Cellulosic ethanol “sugar platform”

Substrates used for the production of bioethanol vary with the availability of feedstock and geographical location. The USA and Brazil are the two major bioethanol producers in the world. Sugarcane and cane molasses are the substrates for ethanol production in Brazil, as cornstarch is in the USA (Almeida et al. 2007). Other substrates used are cassava, sugar beet, wheat, etc. However, the use of food products like corn and cassava for ethanol production inflates the prices of these staple crops and affects their supply. Additionally, storage of high-concentration sugar substrates is liable to microbial contamination and requires sophisticated storage methods, such as refrigeration, which in turn require energy use over long periods (Dodic et al. 2009). Work by Dodic et al. suggests the use of intermediate products such as thick juice from sugar beet processing as substrates for ethanol production, in order to reduce storage volume and microbial contamination. Use of lignocellulosic materials such as switch grass, miscanthus, sorghum, and corn stover is highly encouraged because of high substrate availability, the economic feasibility of production and storage, and other reasons mentioned in the section "Sources" of this chapter. Waste mushroom logs have been studied for their potential as substrates for ethanol production, where a 12 g/L ethanol concentration was obtained as against 8 g/L for normal logs (Lee et al. 2008b). Mahua flowers were investigated for their potential as substrates for ethanol fermentation, with an ethanol productivity of 3.13 g/kg flower/h at 77.1 % efficiency (Mohanty et al. 2009). Lignocellulosic biomass consists mainly of cellulose, hemicellulose, and lignin, of which cellulose is the most desired component for ethanol production; ethanol is produced from the sugars that are present in the cellulose in polymeric form. Biomass is initially preprocessed, for example size-reduced and washed, for ease of handling and removal of soil. As shown in Fig. 4, the first major stage requires release of sugars from the cellulose-hemicellulose-lignin matrix; the second major stage involves the hydrolysis of higher sugars and fermentation of the monomeric sugars to ethanol; and the third stage involves the separation of ethanol from the fermentation broth.

Pretreatment
Pretreatments for bioethanol production may be performed using chemicals such as sulfuric acid, sodium hydroxide, ammonium hydroxide, supercritical ammonia, and supercritical carbon dioxide at both high and low temperature and pressure conditions to separate undesirable components such as lignin from the biomass. Pretreatment disrupts the biomass structure and increases the surface area to enhance enzyme access during the hydrolysis stage. Several pretreatment methods such as hot water treatment, steam explosion, dilute sulfuric acid treatment, and ammonia fiber expansion can be employed to remove lignin and/or depolymerize the lignocellulose structure in biomass. Thermal processes include liquid hot water (LHW) and steam pretreatment. At temperatures above 150–180 °C, hemicellulose and then lignin begin to dissolve (Bobleter 1994a; Garrote et al. 1999).


Hot water pretreatment primarily dissolves hemicellulose, to increase access for enzyme hydrolysis and to limit the formation of inhibitors (Mosier et al. 2005a). Liquid hot water has removed up to 80 % of the hemicellulose and improved enzymatic hydrolysis by increasing the accessible surface area of the cellulose (Mosier et al. 2005a; Laser et al. 2002). The pH should be kept between 4 and 7 to maintain the hemicellulosic sugars in oligomeric form, reducing the formation of degradation products and thus of inhibitors (Mosier et al. 2005a). Hemicellulose can be hydrolyzed to form acids, which further hydrolyze the hemicellulose (Gregg and Saddler 1996). The main advantages of LHW are the recovery of pentoses and minimization of inhibitors compared to steam explosion, and the minimal need for chemicals and neutralization compared to dilute acid pretreatment (Taherzadeh and Karimi 2008). Hot water pretreatment of lignocellulosic biomass has three types of reactor configuration: cocurrent, countercurrent, and flow-through. In cocurrent pretreatment, biomass and water are heated to a desired temperature and held in the reactor for a controlled residence time before cooling. In the countercurrent system, biomass slurry and water flow in opposite directions through the reactor. In the flow-through configuration, hot water flows through a stationary bed of biomass (Mosier et al. 2005b). Pretreatment technologies have therefore been developed for both batch and continuous flow reactor configurations. Steam explosion has been widely tested in lab- and pilot-scale systems. Biomass is pressurized with steam at 160–260 °C for several seconds to minutes, and the pressure is then rapidly released. Mechanical forces separate the fibers, and the high temperature promotes conversion of acetyl groups to acetic acid (Alvira et al. 2010; Taherzadeh and Karimi 2008). The main action of the acetic acid is probably to catalyze the hydrolysis of soluble hemicellulose oligomers (Bobleter 1994b). Lignin is redistributed and some of it is removed (Pan et al. 2005). Removing hemicellulose increases the accessibility of enzymes to the cellulose (Alvira et al. 2010). The advantages of steam explosion include use of a larger chip size, reduced need for acid catalyst, high sugar recovery, and feasibility for industrial-scale use (Alvira et al. 2010). The primary disadvantages are partial hemicellulose degradation and generation of inhibitory compounds (Oliva et al. 2003). Steam explosion can be combined with the addition of sulfur dioxide or sulfuric acid to enhance recovery of cellulose and hemicellulose; this improves the solubilization of hemicellulose, lowers the optimal treatment temperature, and partially hydrolyzes cellulose (Brownell et al. 1986; Tengborg et al. 1998). Acid addition is particularly effective with softwoods, which have a low content of acetyl groups (Sun and Cheng 2002). Acid pretreatment removes hemicellulose to make cellulose more accessible; it can also hydrolyze fermentable sugars. Acid pretreatment can be practiced using high concentrations of acid (generally sulfuric) at low temperatures or low concentrations at high temperatures (Taherzadeh and Karimi 2008). Use of concentrated acid requires corrosion-resistant process equipment, recovery of the acid is energy intensive, and degradation products inhibitory to fermentation are produced (Alvira et al. 2010; Taherzadeh and Karimi 2008; Chisti 1996). Use of dilute acid is more promising, for example, 0.1–1 % sulfuric acid at 140–190 °C.
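One common way to place liquid-hot-water and steam pretreatments that differ in temperature and residence time on a single scale, not discussed in this chapter but widely used in the pretreatment literature, is the Overend-Chornet severity factor log R0. The sketch below applies it to two hypothetical treatment conditions for illustration only.

```python
# Overend-Chornet severity factor for hydrothermal/steam pretreatment:
# log R0 = log10[ t * exp((T - 100) / 14.75) ], t in minutes, T in degC.
# Not a method from this chapter; shown as an assumed comparison tool.

import math

def log_severity(t_min: float, temp_c: float) -> float:
    """log10 of the severity factor R0 for a single isothermal treatment step."""
    return math.log10(t_min * math.exp((temp_c - 100.0) / 14.75))

# Hypothetical comparison: 15 min of liquid hot water at 180 degC versus
# a 5 min steam treatment at 210 degC.
for label, t, T in [("LHW 180 degC, 15 min", 15, 180), ("steam 210 degC, 5 min", 5, 210)]:
    print(f"{label}: log R0 = {log_severity(t, T):.2f}")
```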
This achieves almost total hemicellulose removal and high cellulose conversion (Taherzadeh and Karimi 2008). Production of inhibitory compounds is lessened (Hendriks and Zeeman 2009). Addition of nitric acid greatly improves solubilization of the lignin in newspaper (Xiao and Clarkson 1997). The use of acid pretreatment for methane production is more forgiving, because methanogens can tolerate the inhibitory compounds (Xiao and Clarkson 1997; Benjamin et al. 1984). Alkali pretreatment uses NaOH, Ca(OH)2, or ammonia. Lime is very effective (Hendriks and Zeeman 2009); it removes acetyl groups and has lower cost and fewer safety concerns. Solvation and saponification reactions (Hendriks and Zeeman 2009) lead to swelling. The swelling increases the internal surface area of the cellulose, decreases polymerization and crystallinity, disrupts the lignin structure, and removes some lignin and hemicellulose (Taherzadeh and Karimi 2008), increasing accessibility to enzymes and enhancing saccharification (Kassim and El-Shahed 1986).


Processing can be done at low (ambient) temperatures (Xu et al. 2007) for long time periods (24 h) or at elevated temperatures (120–130 °C) for minutes to hours (Silverstein et al. 2007). Production of inhibitory compounds is significantly lower (Taherzadeh and Karimi 2008). However, solubilization and redistribution of lignin and modifications in the crystalline state of cellulose can counteract the benefits of the method (Gregg and Saddler 1996). Addition of hydrogen peroxide to alkaline pretreatment enhances lignin removal and improves enzymatic hydrolysis (Carvalheiro et al. 2008). Alkaline pretreatment, as with acid, is more forgiving for the production of methane than of ethanol (Pavlostathis and Gossett 1985). Ammonia fiber explosion or "expansion" (AFEX) is analogous to the steam explosion method. Anhydrous ammonia is added to the biomass at approximately 1 kg NH3 per 1 kg dry biomass and held at temperatures of approximately 100–120 °C for several minutes; the pressure is then rapidly released, swelling and disrupting the lignocellulose structure (Alvira et al. 2010; Taherzadeh and Karimi 2008). Only a solid residue is produced, and little hemicellulose or lignin is removed (Wyman et al. 2005). Enzyme hydrolysis yields and ethanol production are increased (Alizadeh et al. 2005). AFEX does not produce inhibitors, although some lignin may remain on the biomass surface (Alvira et al. 2010). It is more effective on lower-lignin crop residues and herbaceous crops than on woody material (Wyman et al. 2005). CO2 explosion uses CO2 at high pressure to penetrate the pores of lignocellulose. Explosive depressurization disrupts the cellulose and hemicellulose structure and improves enzymatic hydrolysis. Supercritical conditions at 35 °C and 73 bar remove lignin and increase digestibility more effectively (Alvira et al. 2010). Overall, pretreatment under appropriate conditions is a highly desirable step for improving the digestibility of lignocellulosic biomass. Other physicochemical methods include organosolv and wet oxidation. Organosolv uses organic solvents to dissolve lignin. Solvent recovery is essential, and inexpensive, low molecular weight alcohols are favored. The recovery of low molecular weight lignin as a coproduct is potentially a significant advantage (Pan et al. 2005). Wet oxidation uses water and oxygen under elevated pressure and temperature (Taherzadeh and Karimi 2008). Hydrogen peroxide at ambient temperature can also be used to enhance enzymatic hydrolysis (Azzam 1989). Batch treatment of corn stover using FeCl3 in tubular reactors resulted in a hydrolysis yield of 98 %, compared to 22.8 % for untreated corn stover (Liu et al. 2009). Biological pretreatment primarily uses brown and white soft-rot fungi that degrade lignin and hemicellulose (Taherzadeh and Karimi 2008). White-rot fungi in particular have been evaluated, and several have been shown to have high delignification efficiency (Kumar et al. 2009). An increase in total sugar yields during hydrolysis has been reported for switch grass preprocessed with Phanerochaete chrysosporium for 7 days (Mahalaxmi et al. 2010). Advantages include low energy and chemical requirements and ambient conditions. However, hydrolysis rates after biological pretreatment are low, and more research is needed (Alvira et al. 2010).

Hydrolysis
Hydrolysis of the pretreated biomass can be performed both chemically and biochemically. Chemical hydrolysis uses a continuous two-step dilute sulfuric acid process.
The first step involves a low-temperature treatment and the second a high-temperature treatment, as hemicellulose depolymerizes at a lower temperature than the cellulose polymer. In the first step the hemicellulosic fraction is removed, and in the second step hexose release occurs. A batch process using concentrated sulfuric acid is also used for biomass hydrolysis; however, the use of concentrated acid requires high capital investment because corrosion-resistant process equipment is needed. Additionally, it requires acid recycling and recovery for the process to be economically viable (Balat 2007). Biochemical hydrolysis has become the most sought-after process in recent years and is commonly called saccharification. It is initiated by enzymes that cleave the cellulose-lignin matrix into various monomeric, dimeric, and oligomeric sugars.


Fig. 5 Molecular structure of cellulose and site of action of endoglucanase, cellobiohydrolase, and β-glucosidase (Kumar et al. 2008)

Fig. 6 Polymeric chemical structure of hemicellulose and targets of hydrolytic enzymes involved in hemicellulosic polymer degradation (Kumar et al. 2008)

The most common enzymes that act synergistically for cellulose hydrolysis, called cellulases, are endoglucanases or endo-1,4-β-glucanases (EG), exoglucanases or cellobiohydrolases (CBH), and β-glucosidases (BGL). While endoglucanases cleave the intramolecular bonds of the cellulose polymer, CBH and BGL catalyze the release of cellobiose and glucose from oligomer ends and of glucose from cellobiose, respectively, as shown in Fig. 5. A synergistic enzyme system consisting of at least endo-β-glucanases, exo-β-glucanases, and β-glucosidases results in high hydrolytic efficiency (Sun and Cheng 2002; Maeda et al. 2011). The enzymes related to hemicellulose hydrolysis, the hemicellulases, are mainly endo-1,4-β-xylanase, β-xylosidase, α-glucuronidase, α-L-arabinofuranosidase, and acetylxylan esterase, as shown in Fig. 6. The hydrolysate therefore contains both hexoses and pentoses and their oligomeric forms, depending on the treatment (Kumar et al. 2008). Various bacteria such as Clostridium, Cellulomonas, Bacillus, Thermomonospora, Ruminococcus, Bacteroides, Erwinia, Acetovibrio, Microbispora, and Streptomyces produce these enzymes to hydrolyze lignocellulose. Fungi such as Trichoderma, Ceriporiopsis, Aspergillus, and Sporotrichum also possess the cellulolytic abilities to hydrolyze lignocellulosic biomass. Enzyme extracts from these cultures are therefore used for hydrolyzing biomass, and recent developments in enzyme technology have reduced their production cost significantly.


Fig. 7 Ethanol fermentation pathway of Saccharomyces

The factors that influence enzymatic hydrolysis are mainly temperature, pH, and substrate concentration. At low substrate concentrations, an increase in substrate concentration increases the yield and the reaction rate of hydrolysis. However, at high substrate concentrations, yield and reaction rate decrease owing to substrate inhibition of the enzymes (Sun and Cheng 2002; Chisti 1996). The optimal temperature and pH for enzyme activity vary with the microbial source from which the enzyme is derived; however, the most commonly used industrial cellulases are derived from wild and modified strains of Trichoderma reesei and have an optimum temperature between 45 °C and 50 °C. Hydrolysis yields are also increased by the addition of surfactants such as Tween-20. It is reported that the addition of Tween-20 resulted in an 8 % increase in ethanol yield, a 50 % reduction in cellulase dosage, and increases in enzyme activity and hydrolysis rate (Sánchez and Cardona 2008). Consolidated microbial treatment of biomass is another method of saccharification; some loss of sugars through consumption by the microbes is inevitable in that route, which makes the use of enzyme extracts advantageous for hydrolysis. Enzyme hydrolysis is limited by product inhibition, which requires continuous removal of the hydrolysis products in addition to the use of BGL for subsequent conversion of the generated cellobiose to glucose. Therefore, simultaneous saccharification and fermentation (SSF), in which the release of glucose by enzyme hydrolysis and its subsequent fermentation to ethanol by yeast take place in the same system, is a potential solution to product inhibition (Balat 2007).

Fermentation
Fermentation of biomass to ethanol is commonly carried out using yeasts such as Saccharomyces and Pichia, bacteria such as Zymomonas and Escherichia, and fungi such as Aspergillus. The products of hydrolysis and sugars are converted to ethanol, producing carbon dioxide as a byproduct and energy for cell growth. The most commonly used microbe, Saccharomyces cerevisiae, ferments sugars to ethanol under almost anaerobic conditions, although it requires a certain amount of oxygen for essential polyunsaturated fats and lipids. Figure 7 depicts the ethanol fermentation pathway of Saccharomyces from glucose. It briefly describes the conversion of glucose to ethanol through intermediate biochemical reactions involving NAD+ and NADH (nicotinamide adenine dinucleotide, oxidized and reduced forms, respectively).
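The substrate-inhibition behaviour noted above, where the hydrolysis rate first rises with substrate concentration and then falls at high loadings, can be illustrated with Haldane-type kinetics. The parameter values in the sketch below are hypothetical and chosen only to reproduce the qualitative trend, not to represent any specific enzyme system in this chapter.

```python
# Illustrative sketch of substrate inhibition using Haldane-type kinetics:
# v = Vmax*S / (Km + S + S^2/Ki). All parameter values are hypothetical.

def haldane_rate(S, Vmax=10.0, Km=5.0, Ki=50.0):
    """Hydrolysis rate (arbitrary units) at substrate concentration S (g/L)."""
    return Vmax * S / (Km + S + S**2 / Ki)

for S in [1, 5, 15, 50, 150, 300]:
    print(f"S = {S:>3} g/L -> rate = {haldane_rate(S):.2f}")
# The rate peaks near S = sqrt(Km*Ki) (~16 g/L with these values) and then
# declines, which is one reason substrate is often charged batchwise rather
# than all at once.
```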


Since lignocellulosic hydrolysates contain several components such as pentoses, hexoses, and acids (e.g., acetic acid), degradation products derived from the pretreatment stage can inhibit the fermentation process. Chemical, physical, and biological detoxification methods have been developed to overcome the inhibitory effect of these compounds. Trichoderma reesei has been reported to degrade the inhibitors present in willow hydrolysate after steam pretreatment. Overnight extraction of spruce hydrolysate with diethyl ether at pH 2 showed detoxification effects, with ethanol yields comparable to the reference fermentation. Detoxification by alkali treatment at pH 9 using Ca(OH)2, followed by readjustment of the pH to 5.5, allowed better fermentability owing to precipitation of toxic compounds (Palmqvist and Hahn-Hägerdal 2000). Usually, the operating temperature is in the mesophilic range (15–40 °C) for most of the species mentioned above; increases in temperature beyond the optimum result in a decrease in ethanol yield and eventually in cell death. Another important factor for good cell growth is pH. A range of 6.5–7.5 (Aminifarshidmehr 1996) is generally suitable for ethanol fermentation for most strains, although yeast and fungal strains can tolerate pH values as low as 3.5–5.0; a pH below 4.0 reduces the potential for bacterial contamination, thus alleviating the requirement for severe aseptic techniques (Balat 2007). Fermentation of biomass is affected by several other factors such as ethanol tolerance, substrate concentration, and byproduct inhibition. Ethanol tolerance is one of the factors determining the maximum ethanol concentration that can be reached during fermentation, as most of the microbes responsible for fermentation cannot tolerate high concentrations of ethanol, eventually leading to cell death. Zymomonas has higher ethanol tolerance and achieves 5 % higher ethanol yields compared with yeast strains (Mohagheghi et al. 2002). Increases in substrate concentration decrease the ethanol yield; however, batchwise charging of the substrate reduces this kind of inhibition, so fed-batch reactors are more suitable for industrial applications. Byproduct inhibition is overcome by chemical, mechanical, or biological detoxification as mentioned above (Balat 2007).

Butanol
Butanol is a colorless liquid which causes a narcotic effect at high concentrations. It is used as a solvent in biopharmaceutical, chemical, and cosmetic applications because of its high solubility in organic solvents and low water miscibility. Its physical properties closely resemble those of gasoline, making it a potential partial or complete additive to transportation fuel (Lee et al. 2008c). Butanol can also be used as a replacement fuel in gasoline-driven engines with minimal or no changes, and it can be blended with gasoline at much higher proportions than ethanol, as butanol has an energy content similar to that of gasoline. It can be added to gasoline at the refinery and distributed through existing gasoline pipelines, unlike ethanol, as butanol is less corrosive and does not absorb water (Dürre 2008). Butanol, a four-carbon primary alcohol, can be synthesized both chemically and biochemically; chemical synthesis of butanol is conducted mainly by three methods, namely oxo synthesis, Reppe synthesis, and crotonaldehyde hydrogenation.
However, the discussion in this chapter is limited to the biochemical conversion of biomass to butanol. In the biochemical route, butanol is a fermentation product of anaerobic bacteria such as Clostridium acetobutylicum and Clostridium butyricum. Industrial production of butanol dates back to 1914, during World War I, as a byproduct of the production of acetone (which was used in war ammunition) by fermentation using C. acetobutylicum. Although there was no immediate application for butanol at that time, in the 1920s in the USA it was used to replace amyl acetate, a product from amyl alcohol, as a solvent for lacquers in the automobile industry. By the 1950s, 66 % of the butanol used in the world was produced biochemically. However, due to increased biomass cost and low crude oil prices, crude oil replaced butanol as a transportation fuel (Dürre 2007).


Fig. 8 Butanol fermentation pathway of Clostridium acetobutylicum (Dürre 2008)

Substrates used for butanol production can be of both starch and cellulose origin, such as molasses, corn fiber, and wheat straw. However, the conflict of using food substrates for fuel production limits the use of starch-based substrates. Figure 4, which depicts the process flow for ethanol, can also be applied to butanol; however, fermentation of the biomass is carried out by butanol-producing bacteria. The biochemical routes involved in butanol formation are given in Fig. 8 (Lee et al. 2008c). Butanol formation takes place through the glucose-pyruvate-butyraldehyde route. Butanol fermentation is a biphasic transformation consisting of an acidogenic phase, which occurs during the exponential growth phase, and a solventogenic phase. During the acidogenic phase, acid-forming pathways are activated, and acetate, butyrate, hydrogen, and carbon dioxide are produced as the major products. Acetone, butanol, and ethanol/propanol are the products of the solventogenic phase, which occurs after the exponential growth phase (Lee et al. 2008c). Both phases can be seen in Fig. 8 from the final products formed in each. The solventogenic phase is a response to the increased acid production of the acidogenic phase; if it were not initiated, the extracellular pH would decrease, finally leading to cell death due to the increasing proton gradient between the inner and outer cellular environments (Dürre 2008). Therefore, pH control has a crucial effect on butanol production, and the pH must be in the acidic range for the solventogenic phase. Solvent toxicity is another major concern that causes cell death, due to cell wall weakening in the presence of acetone, ethanol, and butanol (the most toxic of the three), leading to low product concentrations and productivity (Lee et al. 2008c). Solvent toxicity can be overcome by continuous removal of the solvents through various unit operations. Traditionally, the butanol formed is separated by distillation, which


Fig. 9 Formation of biodiesel (Fatty acid methyl ester)

is a cost-intensive operation due to its high boiling point. Alternative methods for butanol separation are adsorption, gas stripping, liquid-liquid extraction, perstraption, pervaporation, and reverse osmosis (D€urre 2007). Each of these processes has certain limitations, among which, gas stripping is simple and successful in spite of low selectivity, as it can be used in a continuous operation for removing butanol. Liquid-liquid extraction requires use of a solvent that is noninhibitory to the microbes. In pervaporation, butanol is selectively diffused through a membrane and evaporated without removing the medium components necessary for the microbial growth (Qureshi et al. 1999). However, it is limited by fouling of membranes by the particles present in the fermentation broth. Biodiesel Biodiesel is a biofuel derived from transesterification of fats and oils with properties similar to the petroleum diesel. It can be blended with diesel or used directly in the existing diesel engines without significant modifications. The main advantage of biodiesel is that, as a biomass-derived fuel, it produces 78 % less (net) carbon dioxide emissions, compared with that for petroleum-derived diesel fuel. Because its structure is nonaromatic, it combusts more efficiently, producing 46.7 % less carbon monoxide emissions, 66.7 % less particulate emissions, and 45.2 % less unburned hydrocarbons compared to conventional diesel. Therefore, it can be used in highly sensitive environments such as marine and mining environments (Helwani et al. 2009). Additionally, its high boiling point (about 150  C) and presence of fatty acids impart lesser volatility and higher lubricating effect respectively, on engines, eventually reducing wear and tear and enhancing longer service life (Al-Zuhair 2007). Biodiesel is conventionally produced from transesterification of oil (triglycerides) with alcohol (methanol) in the presence of an acid, base, or enzyme catalyst with glycerin as byproduct as shown in Fig. 9. The sources of oil include oil seed plants such as palm, rapeseed, soybean, castor, and jatropha, used oils, lard, animal fat residue, etc. Palm oil having the highest yield of around 4,000 kg of oil per hectare is considered to be the best source of oil for biodiesel production (Al-Zuhair 2007). However, the majority of the cost involved in biodiesel production arises from the cost of the feedstock oil. Further, with the increasing edible oil consumption, it is more economical and environmentally sustainable to employ used oils and nonedible oils for biodiesel production. The major differences between the fresh and used oils are the moisture and free fatty acid (FFA) content, with used oils having high moisture and FFA


Fig. 10 Block diagram for base-catalyzed production of biodiesel (Helwani et al. 2009)


Fig. 11 Block diagram for acid-catalyzed production of biodiesel (Helwani et al. 2009)

content, which affect acid- and alkaline-catalyzed transesterification, respectively. Alternatively, animal fats from waste residues are a useful source of oils; however, the heat required at their high melting points denatures the enzymes used during enzyme-catalyzed transesterification. Other sources of oil are oleaginous yeasts and filamentous fungi, which secrete oil on their outer surface (Miao and Wu 2006). As mentioned earlier, the biodiesel production process can be alkali, acid, or enzyme catalyzed, depending on the amounts of FFAs and moisture present in the oil feedstock. The stoichiometry of Fig. 9 implies an oil-to-methanol molar ratio of 1:3; however, for the equilibrium to proceed toward the formation of biodiesel, the use of excess alcohol is suggested. During an alkali-catalyzed reaction, the oils in the presence of excess methanol are converted to fatty acid methyl esters and glycerin (Fig. 10). Alternatively, during an acid-catalyzed reaction, the triglycerides are esterified and then transesterified (Fig. 11) (Schuchardt et al. 1998). Low FFA-containing feedstock is more suitable for alkali-catalyzed transesterification and high


FFA-containing ones for acid-catalyzed reaction. FFAs present in oils during base-catalyzed reaction react with the oils to form soap and emulsions that hinder the purification processes of biodiesel apart from base consumption (Basu and Norris 1996). Alkaline methoxides are high biodiesel yielding base catalysts with short reaction times, even at very low (0.5 mol%) concentrations. However, they are more expensive than metal hydroxides (KOH and NaOH) (Helwani et al. 2009). On the other hand, acid-catalyzed reactions are 400 times slower than the alkali-catalyzed transesterification (Al-Zuhair 2007) and less sensitive to FFA content. The presence of water greatly inhibits the conversion due to catalyst deactivation. The major reaction parameters affecting the biodiesel conversion are temperature, oil/methanol ratio, FFA, and moisture contents. An increase in temperature will increase the conversion the most appropriate range being 60–70 C, the alcohol boiling range at atmospheric pressure. Enzyme-catalyzed transesterification is achieved using lipases obtained from organisms such as Candida rugosa, Pseudomonas fluorescens, Rhizopus oryzae, Burkholderia cepacia, Aspergillus niger, Thermomyces lanuginosa, and Rhizomucor miehei (Al-Zuhair 2007). Enzymes are more compatible in terms of usage of a wide range of feedstocks, fewer processing steps, and fewer separation steps. Enzymes do not form soaps with the FFAs present in the feedstock, which allows the use of spent oils and animal fats for biodiesel production. They can convert both FFAs and triglycerides (TAG) simultaneously without another pretreatment step for converting FFAs to TAG (Fjerbaek et al. 2009). An increase in temperature increases the enzymatic conversion of biodiesel due to increased rate constants and lesser mass transfer limitations (Al-Zuhair et al. 2003). Additionally, optimal water content increases the biodiesel conversion as lipase acts as an interface between the aqueous and organic phases which allow its activation by rendering suitable conformation for transesterification (Panalotov and Verger 2000). However, they are currently facing challenges related to lower reaction rate, high cost, and loss of activity. Methanol is the most widely used alcohol for biodiesel production due to its availability from syngas. However, it is required to use an alcohol produced from a renewable source, such as ethanol, to make biodiesel production a completely green process. Additionally, methanol is toxic and renders lipases inactive at high concentrations. Therefore, methyl acetate can be used as a methyl acceptor in place of methanol, as it still has no negative effects on Novozyme 435, the only commercial lipase known, used for biodiesel production from soybean oil (Du et al. 2004). Immobilization of lipases is considered an economical process to overcome the limitations of using a batch process and employing a continuous process to enable glycerol separation for higher conversion rates (Watanabe et al. 2002).
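Following the 1:3 oil-to-methanol stoichiometry of Fig. 9 discussed above, a rough mass balance for a transesterification batch can be sketched as below. The triglyceride molar mass is approximated by triolein, and the 6:1 alcohol-to-oil molar ratio is an assumed excess for illustration, not a figure taken from this chapter.

```python
# Rough transesterification mass balance following the 1:3 oil:methanol
# stoichiometry of Fig. 9. The triglyceride molar mass is a typical value
# (approximated by triolein); the 6:1 alcohol excess is an assumption.

M_TRIGLYCERIDE = 885.0   # g/mol, approximate (triolein)
M_METHANOL = 32.04       # g/mol
M_GLYCEROL = 92.09       # g/mol

def methanol_and_glycerol(oil_kg: float, molar_ratio: float = 6.0):
    """Methanol charged at the given alcohol:oil molar ratio, and glycerol
    byproduct assuming complete conversion (1 mol glycerol per mol triglyceride)."""
    mol_oil = oil_kg * 1000.0 / M_TRIGLYCERIDE
    methanol_kg = mol_oil * molar_ratio * M_METHANOL / 1000.0
    glycerol_kg = mol_oil * M_GLYCEROL / 1000.0
    return methanol_kg, glycerol_kg

meoh, gly = methanol_and_glycerol(1000.0)   # 1 tonne of oil
print(f"Methanol charged: {meoh:.0f} kg (3:1 is stoichiometric; the rest is recycled excess)")
print(f"Glycerol byproduct: {gly:.0f} kg")
```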

Genetic Engineering Approaches With the above background of conversion of biomass to fuels, it is evident that several factors such as biomass composition, pH, temperature, by-products, etc., have a potential impact on the biofuel production. Process factors such as pH and temperature can be maintained using appropriate reactor and process conditions. Intrinsic factors such as biomass composition, product tolerance such as ethanol and butanol tolerance, specific binding of enzymes, and byproduct inhibition will remain potential challenges without recombinant DNA technology. Recombinant DNA technology is comprised of five general procedures (Nelson and Cox 2008): 1. A desired segment of the microbe DNA of interest is cut using sequence-specific endonucleases which are nucleotide cleaving enzymes, otherwise called restriction endonucleases. These endonucleases act as molecular scissors to obtain the required nucleotide sequence.


2. A small molecule of DNA capable of self-replication is selected. These molecules, called cloning vectors, are generally plasmids or viral DNA which can be coupled with the nucleotide sequence obtained in the previous step.
3. The two segments are incubated in the presence of DNA ligase to obtain a recombinant DNA.
4. The recombinant DNA is introduced into the host cell for replication. The most common host cell used is E. coli, for its well-understood DNA metabolism and its well-characterized bacteriophages (viruses that live on bacteria) and plasmids.
5. After cell replication, the host cells carrying the recombinant DNA are identified and used for expression.

The most commonly used host cells for metabolic engineering are Escherichia coli, Zymomonas mobilis, and Saccharomyces cerevisiae, as their genetic maps are the best studied (Banerjee et al. 2010). They are facultative anaerobes with fast growth rates and good viability (Lee et al. 2008a). Incorporation and expression of the pyruvate decarboxylase and alcohol dehydrogenase II genes from Z. mobilis in E. coli has resulted in high yields of ethanol from the utilization of both pentoses and hexoses, as against only hexoses (Banerjee et al. 2010). Although recombinant strains are helpful in exploring solutions to pathway-related problems, their industrial use is limited by a lack of robustness. Recombinant E. coli can produce isopropanol, n-butanol, and fatty acid ethyl esters through various engineered pathways (Atsumi and Liao 2008). Modification of the enzymes used in the hydrolysis of biomass to sugars is also generating immense interest. However, enzymes belonging to the same class can have different amino acid sequences conferring a low level of homogeneity; for example, CBH1 (T. reesei) has

15,000 h to generate electricity of 2 MWe. According to the blowing pressure of the gasification agent, gasifiers are classified as atmospheric gasifiers (0.11–0.15 MPa) or pressurized gasifiers (1.8–2.25 MPa). A pressurized gasifier, with its high-temperature, high-pressure outlet gas, is suitable for large-scale power generation systems, because a gas compression step in the downstream system for the gas turbine or for liquefaction can be avoided. In the short term, however, pressurized circulating fluidized-bed, bubbling-bed, and pressurized bubbling-bed gasifiers lack market appeal, mainly because of the complexity of the system and the high construction cost of the large pressure shell. In addition, the pressurized gasification process often uses pure O2 as the gasifying agent to improve the gas quality; hence, special safety measures are required to guarantee safe operation.
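A back-of-the-envelope calculation, not taken from the source, illustrates why delivering syngas already at pressure is attractive: the ideal (isothermal, reversible) compression work from the atmospheric-gasifier pressure range to the pressurized range is easily estimated, and real compressors require considerably more than this minimum.

```python
# Back-of-the-envelope estimate (not from the source) of the compression work
# avoided when the gasifier itself delivers syngas at pressure, using the
# ideal-gas isothermal work W = R*T*ln(p2/p1) per mole of gas. Pressures are
# taken from the ranges quoted above; the gas temperature is an assumption.

import math

R = 8.314                # J/(mol K)
T = 313.0                # K, assumed syngas temperature after cooling/cleaning
P_ATMOSPHERIC = 0.12e6   # Pa, mid-range of atmospheric gasifiers (0.11-0.15 MPa)
P_PRESSURIZED = 2.0e6    # Pa, mid-range of pressurized gasifiers (1.8-2.25 MPa)

def isothermal_work_kj_per_mol(p1: float, p2: float) -> float:
    """Minimum (reversible, isothermal) work to compress one mole from p1 to p2."""
    return R * T * math.log(p2 / p1) / 1000.0

w = isothermal_work_kj_per_mol(P_ATMOSPHERIC, P_PRESSURIZED)
print(f"Ideal compression work avoided: ~{w:.1f} kJ per mol of syngas")
# Real (staged, non-ideal) compressors need substantially more than this
# reversible minimum, which is part of the appeal of pressurized gasification
# for gas-turbine or synthesis applications.
```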


Entrained-Flow-Bed Gasifier
Working characteristics: fine biomass powder, carried by a high-speed air flow, is injected into the gasifier together with the gasifying agent. In the reactor, the solid particles are dragged along with the gas stream, and their dispersion and flow in the airflow resemble the flow of the gas itself. This generally means short residence times (typically 1 s) and high temperatures (typically 1,300–1,500 °C). Hence, entrained-flow gasification offers high reaction rates, large capacity, high carbon conversion, and an improved syngas essentially free of tar and phenol, with little environmental pollution. At present, entrained-flow gasification technology is mainly used in the coal gasification industry. The most mature entrained-flow gasification technology is the Koppers-Totzek (KT) process, which operates at atmospheric pressure. Pressurized entrained-flow gasification technologies have also been developed successfully: the Shell and Prenflo technologies can feed dry coal powder, and the Texaco and Destec technologies can feed coal-water slurry or oil. Although there are many commercial coal-based entrained-flow gasifiers, experience with biomass-based gasifiers is still limited. Experimental results show that biomass ash in entrained-flow gasifiers is difficult to melt at the operating temperature (1,300–1,500 °C), because the ash contains high contents of CaO, while the alkali metals, which can reduce the ash melting point, are generally found in the gas phase. Nevertheless, a slagging gasifier is preferred over a non-slagging gasifier because (1) a little slagging can never be avoided and (2) a slagging gasifier is more fuel flexible, although a fluxing material (silica or clay) must be added to achieve the required melting properties at the operating temperature. Currently, research on biomass entrained-flow gasification is at the stage of experimental study and numerical simulation. The CARBO-V system of CHOREN in Freiberg, Saxony (Germany), is the most advanced biomass gasification system for bio-oil-based fuel production at the industrial level. The Energy Research Centre of the Netherlands (ECN) studied the feasibility of biomass entrained-flow gasification, ash melting properties, the feeding device, pressurization methods, and the selection of gasification routes (Drift et al. 2004). The Biomass Technology Group of the Netherlands (BTG) investigated entrained-flow gasification of bio-oil (Venderbosch and Prins 1998). Zhejiang University of China designed a reactor; investigated biomass gasification characteristics, residual carbon properties, the volatilization of alkali metals, and the pretreatment of raw materials; and established a dynamic model of the gasification process (Zhao 2007). In recent years, research groups worldwide have generally employed pressurized entrained-flow gasifiers for research on the gasification of powdered biomass. As a potential gasification technology, pressurized gasification has become a hot research topic, and how to realize pressurized entrained-flow gasification of biomass effectively is the focus of future research.

Syngas Quality Control and Cleaning Technology
Any gasification process for synthesis gas will produce pollutants: particulates, condensable tars, alkali metal compounds, H2S, HCl, NH3, HCN, COS, etc. (Van et al. 1995). Deep purification is therefore needed, according to the requirements of the downstream gas appliances and the tolerance limits of the catalysts used.
In general, Fischer-Tropsch synthesis demands higher standards of gas cleaning than a biomass integrated gasification combined cycle (BIGCC). At present, the common feedstock for Fischer-Tropsch synthesis is relatively clean natural gas; hence, actual cleaning specifications for some specific biomass contaminants are not known, and some specifications for biomass gasification are estimated from practical experience.

Ash Particles
Ash particles in the product gas are mainly removed by mechanical separation. The particle reduction achieved by different methods can be seen in Table 3. Most of these methods are operated at low temperatures; some operate at high temperatures, such as ceramic filters, whose operating temperature is about 600 °C.


Table 3 The particle reduction of different methods

Method                           Particle reduction (%)   Particle size
Wash tower                       95–98                    >1 μm
Jet scrubber                     95–99                    >1 μm
Granular-bed filter              99                       >0.1 μm
Bag filter                       99
Cyclone separator                90                       >10 μm
Inertial dust separator          70                       20–30 μm
Wet electrostatic precipitator   >99

Table 4 The tar yield of different gasification processes

Gasification method                                          Tar content in syngas (g/Nm3)
Air-blown circulating fluidized-bed (CFB) biomass gasifier   10
Updraft fixed-bed gasifier                                   100
Downdraft fixed-bed gasifier                                 1
Other gasifiers                                              0.5–100

Ceramic filters can be divided according to their structure into bag-type, webbing, tubular, cross-flow, and cellular-type ceramic filters, among others. Low-temperature cleaning technology has been industrialized and is more mature than high-temperature cleaning, but its pollution problems are more serious, such as the secondary pollution caused by wastewater from washing and from wet electrostatic precipitation. Compared with low-temperature cleaning, high-temperature cleaning can improve system energy efficiency, can reduce operating costs by making use of the high-temperature syngas, and can also be combined with high-temperature fuel cells for heat and power generation (Ma et al. 2005).

Tar
Biomass tar is a mixture of light hydrocarbons and phenolics; naphthalene is the most difficult compound to reform. Tar causes many severe problems. It condenses into a liquid below its dew point, leading to clogging, blockage, or corrosion in downstream pipelines, filters, and equipment, and it is difficult to burn completely, so gas facilities such as internal combustion engines and gas turbines can be damaged. Table 4 shows the tar yield of different gasification processes. Tar removal, conversion, or destruction has been one of the greatest technical challenges for the successful development of commercial gasification technologies (Dayton 2002). For this reason, most applications require a product gas with a low tar content, of the order of 0.05 g/m3 or less. The methods used to remove tar are mechanical cleaning, low-temperature cleaning, high-temperature cleaning, thermal cracking, and catalytic cracking. Operational and economic analysis shows that mechanical cleaning and catalytic cracking are suitable for small-scale and large-scale plants, respectively.

Mechanical Cleaning
The common mechanical methods are considerably efficient in removing tar, accompanied by effective particle capture (Hasler and Nussbaumer 1999). Table 5 shows the effect of different methods on removing tars. However, the cost of a mechanical cleaning system is usually high, and it only removes the tar from the product gas, so the energy in the tar is lost. A new tar removal system called OLGA (the Dutch acronym for oil-based gas washer) has now been developed.


Table 5 The effect of different methods on removing tars

Method                           Tar reduction (%)
Water scrubber                   10–25
Venturi scrubber                 50–90 (Hasler 1997)
Fabric filter                    0–50
Rotating particle separator      30–70
Wet electrostatic precipitator   40–70 (Paasen and Rabou 2004)

In this system, the heavy tars, 99 % of the phenol, and 97 % of the heterocyclic tars can be removed (Boerrigter 2005). Laboratory test results (Hasler 1997) show that activated carbon has good removal efficiency for high-boiling hydrocarbons and phenols; moreover, the captured "tar" and the activated carbon itself can be recycled as feedstock. However, the accumulation of tars on the carbon is difficult to clean off completely, causing blockage of the activated carbon filters.

Thermal Cracking
In the thermal cracking method, the raw gas derived from gasification or pyrolysis is heated to high temperatures, and the tar molecules are cracked into lighter gases. Biomass-derived tar is very refractory and hard to crack by high temperature alone. Three approaches are beneficial for the splitting decomposition reaction of tar. The first is to increase the residence time, for example by using a fluidized-bed reactor, but the improvement is not large. The second is to increase the area of the heating surfaces, but the effect depends on the degree of mixing of the various components. The last is to add oxygen or air to strengthen the partial oxidation of the tar, which increases the CO content at the expense of lower conversion efficiency and higher operating cost.

Catalytic Cracking
At present, catalytic cracking is the most effective way to remove tar. It is divided into low-temperature catalytic reforming (350–600 °C) and high-temperature catalytic reforming (500–800 °C). Catalysts include nickel-based catalysts, dolomite, alkali metals, and nano-catalysts. Nickel-based catalysts supported on SiO2 and Al2O3 can be used at low or high temperatures for catalytic cracking. Although nickel-based catalysts have a good effect on cracking tars, they are very expensive and easily lose their activity because of carbon deposition, H2S poisoning, and catalyst attrition. Compared to nickel-based catalysts, abundant naturally occurring catalysts such as dolomite, CaMg(CO3)2, are cheaper, and dolomite is the most common and effective catalyst for tar removal (Rapagna et al. 2000). However, the conversion of tars over dolomite cannot reach 90–95 % or more (Zhang 2003), and it is difficult for dolomite to crack the heavy tar components (Karlsson and Ekström 1994). In addition, owing to its low melting point, dolomite melts easily and is thereby deactivated. Adding an inexpensive alkali metal catalyst to the biomass raw material, by dry mixing or wet impregnation, can significantly reduce the tar content. Many studies (Brown et al. 2000; Kumar et al. 1997; Elliott and Baker 1986) suggest that potassium has a better catalytic effect on tar cracking than other alkali metals (such as Na, Li, and Ca). But alkali metals in the furnace can lead to agglomeration, sintering, degradation of fluidization performance, blockage of the pipes, and deactivation of other metal catalysts. Currently, some novel metals have been widely studied as catalysts for tar cracking. Rh/CeO2/SiO2 has been found to have the best catalytic performance: little carbon deposition at low temperatures and high, stable activity even in the presence of high concentrations of H2S (280 ppm) (Tomishige et al. 2005). In addition, a nano-Ni catalyst (NiO/γ-Al2O3) can also improve the quality of the synthesis gas and reduce the tar formed in the gasification process (Li et al. 2008).
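Combining the raw-gas tar loads of Table 4 with the roughly 0.05 g/Nm3 level mentioned above for most applications gives the overall tar removal efficiency a cleaning train must achieve. The short sketch below works this out for three representative gasifier types; it is an illustrative calculation, not source data.

```python
# Illustrative calculation: tar removal efficiency required to bring the
# raw-gas tar loads of Table 4 down to the ~0.05 g/Nm3 level that most
# downstream applications are said to require.

TARGET = 0.05  # g/Nm3

raw_tar = {               # representative tar contents from Table 4, g/Nm3
    "updraft fixed bed": 100.0,
    "air-blown CFB": 10.0,
    "downdraft fixed bed": 1.0,
}

for gasifier, c_in in raw_tar.items():
    removal = 1.0 - TARGET / c_in
    print(f"{gasifier:>20}: {removal:.2%} tar removal needed")
# An updraft unit needs ~99.95 % removal while a downdraft unit needs ~95 %,
# which is why a single scrubber stage (Table 5) is rarely sufficient on its own.
```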


Plasma Methods
Some studies have demonstrated that corona discharge can decompose organic components and can therefore be used to reduce the tar content. Tests (Heesch and Paasen 2000) were carried out on a wood gasifier designed to produce a 100 kW electrical output. The dust removal efficiency was about 72–95 %, and the conversion efficiencies of heavy and light tar components were 68 % and 50 %, respectively. In addition to capturing dust and tar, plasma technology can operate at high temperatures.

Alkali Metal
In the biomass gasification process, the problems associated with alkali metals are mainly caused by silicon, the main nonmetallic component of the ash, and by the alkali metal potassium. Si reacts with K at temperatures below 900 °C: the Si-O-Si bond is broken to form silicate, or reaction with sulfur forms sulfate. The melting points of these silicates and sulfates are below 700 °C, so they readily deposit on the walls of reactors or pipes and cause sintering, corrosion, defluidization, or blockage. These problems can be mitigated by leaching and fractionation, the two main pretreatments (Arvelakis et al. 2002, 2005; Arvelakis and Koukios 2002). However, mechanical fractionation can remove at most about 50 % of the ash content of the biomass, and the remaining ash still causes such problems (Arvelakis et al. 2005). Eighty percent of the alkali metals in the syngas can be separated together with the coke in the cyclone.

Syngas Utilization

Gas Centralized Supply
In developing countries, in addition to heat and power supply, biomass gasification technology has mainly been applied to domestic cooking through centralized gas supply. In a typical central gas supply project, straw is fed into the gasifier and converted into combustible gas through pyrolysis and gasification reactions. The dust and tars in the combustible gas are removed by the downstream cleaner, and the clean gas is stored in a gas storage tank from which it is delivered to every user of the system. The main types of gasifiers used in such projects are pyrolysis gasifiers, updraft fixed-bed gasifiers, pressurized updraft fixed-bed gasifiers, downdraft fixed-bed gasifiers, and fluidized-bed gasifiers, with the downdraft fixed-bed gasifier the most frequently used. However, the operation rate of village-level straw gasification systems for centralized cooking gas supply is still very low, for several reasons. Technically, the syngas quality is low because of its low heating value and high contents of CO and N2, and the contents of tars and dust in the combustible gas are high. The centralized gas supply system as a whole is underutilized; the syngas should be used in additional ways, such as power generation, preheating, and drying grain and other agricultural products, to improve the utilization rate of the system. From a policy and economic point of view, capital cost constraints force the system to be as simple as possible, so it cannot be perfectly designed, leaving operational difficulties and environmental problems (Bridgwater et al. 1999a). As a result, most domestic cooking fuel projects currently need government financial support.
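The remark above that the product gas has a low heating value can be made concrete with a simple mixing calculation. The sketch below (not from the source) estimates the lower heating value of a producer gas from its composition, using approximate standard heating values for H2, CO, and CH4; the example composition is a hypothetical one typical of air-blown gasification, chosen only for illustration.

```python
# Estimate the lower heating value (LHV) of a producer gas from its
# volumetric composition. Component LHVs are approximate literature
# values in MJ per normal cubic metre; the gas composition below is a
# hypothetical example for an air-blown gasifier, not data from the text.

LHV_MJ_PER_NM3 = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}  # approximate

def gas_lhv(volume_fractions):
    """LHV of the mixture in MJ/Nm3; inert species contribute nothing."""
    return sum(volume_fractions.get(sp, 0.0) * lhv
               for sp, lhv in LHV_MJ_PER_NM3.items())

example = {"H2": 0.15, "CO": 0.20, "CH4": 0.02, "CO2": 0.12, "N2": 0.51}
print(f"Estimated LHV: {gas_lhv(example):.1f} MJ/Nm3")  # roughly 5 MJ/Nm3
```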
Combined Heat and Power Generation
Because it saves energy and reduces environmental impact, combined heat and power (CHP) generation has attracted worldwide attention as an alternative to traditional energy supply. The major conversion technologies for biomass-based CHP systems are combustion, gasification, pyrolysis, biochemical/biological processes, and chemical/mechanical processes. Combustion technology is widely used in large- and medium-scale systems. Although gasification technology is still developing, it has great potential for CHP.


The main types of gasifiers for CHP systems are updraft and downdraft fixed-bed gasifiers, fluidized-bed gasifiers, circulating fluidized-bed gasifiers, and entrained-flow gasifiers. Internal combustion engines and turbines can use the cleaned product gas to produce heat and power. A gasification-based CHP system potentially has a higher electricity efficiency than a direct combustion-based CHP system. Moreover, syngas from biomass gasification can increase the bio-based fuel percentage used in existing pulverized-fuel combustors without any concern about plugging of the coal-feeding system during co-firing of biomass with coal. However, gasification-based CHP systems have not yet been commercialized. The unstable gasification process leads to large variations in syngas quality and to higher tar contents, which seriously damage engines. In addition, to reduce system cost, gasification-based systems often lack automatic measurement and control, which results in variable system performance. According to capacity, CHP can be divided into large-scale, medium-sized, small-scale, and microscale systems. Because of its intrinsic properties, biomass is best suited to decentralized, small-scale, and microscale CHP systems. On the one hand, small-scale and microscale biomass CHP systems reduce the transportation cost of biomass and provide heat and power where they are needed; on the other hand, it is more difficult to find an end user for the heat produced in larger CHP systems. Generally speaking, "small-scale CHP" means combined heat and power systems with an electrical output of less than 100 kW, and "microscale CHP" is often used for small-scale CHP systems with an electric capacity smaller than 15 kWe. Biomass-based CHP systems are generally smaller than coal-based systems, and the power efficiency of biomass-based CHP is also lower, only about 85–90 %, as 30–34 % and 22 % of the electricity is used for biomass drying and solid-waste treatment, respectively. A typical large-scale CHP system is the biomass integrated gasification combined cycle (BIGCC). The overall efficiency of a BIGCC system is about 86 %, and the electrical efficiency is about 33 % (Miccio 1999). The "VEGA" gasification system developed by Sydkraft AB of Sweden uses BIGCC technology for district heat and power supply, and the Buggenum IGCC plant in the Netherlands uses mixtures of biomass and coal to generate power (250 MW). Currently, small- and medium-scale CHP systems have not been commercialized because of high investment, low return, and remaining technical barriers.

Synthesis Techniques
Syngas can be converted to liquid fuels or chemicals through synthesis technology. The major synthesis technologies are methanol synthesis, Fischer-Tropsch synthesis, methane synthesis, hydroformylation of olefins, and hydrogen for organic synthesis; their features are summarized in Table 6. Fischer-Tropsch synthesis is one of the biomass indirect liquefaction technologies. Under appropriate conditions (20–40 bar, 180–250 °C), syngas is converted into liquid fuels (hydrocarbons with different chain lengths). Fischer-Tropsch synthetic oil can be divided into three categories according to the raw material (see Table 7). The synthesis process includes gasification, gas purification, transformation and reforming, synthesis, and upgrading. The optimal molar ratio of H2 to CO for Fischer-Tropsch synthesis is 2–2.5, preferably about 2.1.
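The target H2/CO ratio of about 2.1 quoted above usually has to be reached by shifting part of the CO with steam (the water-gas shift reaction, CO + H2O → CO2 + H2). As a simple illustration (my own sketch, not a calculation from the source), the code below computes what fraction of the CO in a hypothetical biomass syngas must be shifted to move from an initial ratio to the target.

```python
# Fraction of the CO in a syngas that must be converted by the water-gas
# shift reaction (CO + H2O -> CO2 + H2) to reach a target H2/CO ratio.
# Derivation: shifting a fraction x of the CO gives
#   (r0 + x) / (1 - x) = r_target,  so  x = (r_target - r0) / (1 + r_target).
# The initial ratio below is a hypothetical example, not a value from the text.

def shift_fraction(r0, r_target):
    """Fraction of CO to shift to raise H2/CO from r0 to r_target."""
    if r_target <= r0:
        return 0.0
    return (r_target - r0) / (1.0 + r_target)

r0, r_target = 1.0, 2.1          # assumed raw-gas ratio and F-T target
x = shift_fraction(r0, r_target)
print(f"Shift {x:.1%} of the CO to reach H2/CO = {r_target}")  # ~35.5 %
```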
Currently, inexpensive Fe-based catalysts are commonly used for industrial Fischer-Tropsch synthesis. However, they promote the water-gas shift reaction, producing unwanted CO2 at the expense of considerable CO consumption. Moreover, when Fe-based catalysts are used in slurry-bed reactors at low temperatures, the small catalyst particles are difficult to separate from the product wax. Cobalt-based catalysts overcome precisely these deficiencies, so most currently developed catalysts are cobalt based, with high activity, a high chain-growth factor, and long life. The main reactor types are fixed beds and circulating fluidized beds. Fischer-Tropsch synthesis is practiced on a technical scale today only by SASOL (coal based) in South Africa and by Shell (natural gas based) in Malaysia; Fischer-Tropsch synthesis from biomass-derived synthesis gas has so far received less attention.
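The "chain-growth factor" mentioned above is usually described with the Anderson-Schulz-Flory (ASF) distribution, a standard Fischer-Tropsch relation that the chapter does not spell out; the sketch below uses it to show how the chain-growth probability alpha shifts the product slate toward heavier hydrocarbons. The alpha values and the C12-C20 "diesel" cut are illustrative assumptions.

```python
# Anderson-Schulz-Flory (ASF) distribution for Fischer-Tropsch products:
# the weight fraction of chains with n carbon atoms is
#   W_n = n * (1 - alpha)**2 * alpha**(n - 1),
# where alpha is the chain-growth probability. Standard textbook relation,
# used here only to illustrate the effect of the chain-growth factor.

def asf_weight_fraction(n, alpha):
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

def cut_weight_fraction(n_min, n_max, alpha):
    """Total weight fraction of the C(n_min)-C(n_max) cut."""
    return sum(asf_weight_fraction(n, alpha) for n in range(n_min, n_max + 1))

for alpha in (0.75, 0.85, 0.90):                     # assumed values
    diesel_cut = cut_weight_fraction(12, 20, alpha)  # rough C12-C20 "diesel" cut
    print(f"alpha = {alpha:.2f}: C12-C20 weight fraction = {diesel_cut:.2f}")
```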


Table 6 The features of different synthesis technologies (Wang et al. 2008; Reinhard 2002)

Methanol synthesis. Product: methanol. Principle: CO + 2H2 → CH3OH + 90 kJ/mol; CO2 + 3H2 → CH3OH + H2O + 49.6 kJ/mol. Catalyst: high-pressure process, copper-containing catalysts; low-pressure process, CuO/ZnO/M (M = Al, CrO, mixed oxide of zinc and aluminum). Industrialized: yes.

F-T synthesis. Product: F-T oil. Principle: CO + 2H2 → [-CH2-] + H2O + 165 kJ/mol. Catalyst: cobalt or iron. Industrialized: yes.

Methane synthesis. Product: methane. Principle: CO + 3H2 → CH4 + H2O + 206.4 kJ/mol. Catalyst: Mg-promoted Ni catalysts with diatomaceous earth as carrier. Industrialized: yes.

Hydroformylation. Product: aldehyde. Principle: RCH=CH2 + CO + H2 → RCH2CH2CHO + RCH(CH3)CHO. Catalyst: cobalt carbonyl hydride, cobalt- or rhodium-phosphine complexes. Industrialized: yes.

Hydrogen in organic synthesis. Product: chemicals. Principle: A + nH2 → BH2n. Catalyst: Raney nickel, copper, molybdenum, especially noble metals (Pt, Pd). Industrialized: no.

Table 7 The features of different Fischer-Tropsch synthetic oils

Coal-based oil (coal to liquid). Advantages: oil quality is better than the products of direct liquefaction. Disadvantages: high arene content, low cetane number of the diesel product, no zero-CO2-emission benefit. Commercialization: Sasol in South Africa.

Natural gas-based oil (gas to liquid). Advantages: high cetane number; contains no aromatic compounds or sulfur. Disadvantages: no zero-CO2-emission benefit. Commercialization: Shell in Malaysia.

Biomass-based oil (biomass to liquid). Advantages: a carbon-neutral fuel; does not contain the impurities always present in mineral oil; high cetane number; can be used as an additive or as a clean fuel for diesel engines. Commercialization: none yet.


Obstacles to Commercialization
The obstacles to application can be divided into technical and nontechnical barriers that hinder the development of biomass gasification technology.

Technical barriers include the following:
1. Biomass resources: Because biomass has a low density and is widely dispersed, its collection and transport involve substantial logistical problems and high costs. Moreover, biomass supply is seasonal, so it is difficult to build large-scale gasification plants for power generation.
2. Feeding: In the feeding process, bridging, blockage, and instability often occur because the materials have low density and are mixed residues with varying characteristics.
3. Gasification technology: Equipment reliability needs to be improved; the immaturity of the technology makes it difficult to open markets and commercialize.
4. Purification: The difficulties of gas cleaning lie in controlling fouling and corrosion of the heat exchanger and pipes, tar removal/cracking, and continuous operation.
5. Prime mover: There is little experience with biomass syngas in the operation of prime movers, for example regarding allowable contamination, allowable emissions, and the specifications that engines, fuel cells, Stirling engines, and turbines place on the product gas.

Nontechnical barriers include the following:
1. Emission standards: The standards for allowable emissions differ from country to country.
2. Public perception: Because of the large investment, small returns, and the lack of a demonstrated social benefit of gasification technology, the public remains rather negative and lacks confidence in it.
3. Infrastructure: Many aspects affect the economics of biomass gasification, including investment channels and collection and transportation costs. Taking power generation as an example, some countries have no regulations governing the incorporation of electricity derived from biomass into the existing grid.
4. Capital cost: The investment costs of gasification projects are high, particularly the costs of collecting and transporting the raw materials. To reduce costs, the system sometimes has to be kept as simple as possible; as a result, sections such as tar treatment and gas cleaning cannot be perfectly designed, leaving operational difficulties and environmental problems.
5. Environmental protection: In recent years, countries around the world have strongly advocated environmental protection, energy saving, and emission reduction, but not all biomass gasification technologies can meet environmental requirements. Although some techniques, such as centralized gas supply systems for domestic cooking in rural areas, achieve obvious social benefits, in practice gasification stations find it difficult to make a profit because of the high cost of antipollution measures.

Pyrolysis
Pyrolysis is the initial chemical stage of combustion and has been used for charcoal production from wood since ancient Egyptian times. Biomass pyrolysis is a process in which biomass is heated in the absence of oxygen and decomposes into char, gaseous products, and the liquid product "bio-oil." The major development of biomass pyrolysis took place in the 1980s, when many researchers observed that the liquid product yield increases in fast pyrolysis, with its rapid heating rate and short cooling time (Graham et al. 1984). Crude bio-oils have a high water content of about 15–30 %, high acidity (pH of 2.8–3.8), a high density of about 1,200 kg/m3, and a low heating value of 14–18.5 MJ/kg.
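The density and heating value quoted above can be combined into a volumetric energy density, which is often the more relevant number for transport and storage. The short sketch below does this; the comparison values for a conventional diesel fuel are typical literature figures that I am assuming for illustration, not data from this chapter.

```python
# Volumetric energy density of crude bio-oil from the properties quoted in
# the text (density ~1,200 kg/m3, LHV 14-18.5 MJ/kg), compared with typical
# assumed values for fossil diesel (~840 kg/m3, ~42.8 MJ/kg).

def volumetric_energy_GJ_per_m3(density_kg_m3, lhv_MJ_kg):
    return density_kg_m3 * lhv_MJ_kg / 1000.0  # MJ/m3 -> GJ/m3

bio_oil_low  = volumetric_energy_GJ_per_m3(1200, 14.0)
bio_oil_high = volumetric_energy_GJ_per_m3(1200, 18.5)
diesel       = volumetric_energy_GJ_per_m3(840, 42.8)   # assumed reference

print(f"Bio-oil : {bio_oil_low:.1f}-{bio_oil_high:.1f} GJ/m3")  # ~16.8-22.2
print(f"Diesel  : {diesel:.1f} GJ/m3 (assumed reference)")       # ~36
```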


Table 8 Classification of biomass pyrolysis technologies (Huber et al. 2006; Mohan et al. 2006)
Pyrolysis type: carbonization, slow pyrolysis, conventional pyrolysis, fast pyrolysis, flash pyrolysis, vacuum pyrolysis
Temperature (°C): reported values include 400, 400–600, 600, and 400–650

Tsao and Zasloff (1979) describe a detailed patented process for fluidized-bed dehydration with an over 99 % yield of ethylene. Dow Chemical and Crystalsev, a Brazilian sugar and ethanol producer, announced plans for a 300,000 t/year ethylene plant in Brazil to manufacture 350,000 t/year of low-density polyethylene from sugarcane-derived ethanol. Braskem, a Brazilian petrochemical company, announced plans to produce 650,000 t of ethylene from sugarcane-based ethanol, which will be converted to 200,000 t/year of high-density polyethylene (C&E News 2007).
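Ethylene is obtained from ethanol by catalytic dehydration (C2H5OH → C2H4 + H2O), so the maximum mass yield is fixed by stoichiometry. The sketch below is my own back-of-the-envelope check, not a calculation from the cited sources: it estimates the ethanol demand of an ethylene plant of the size mentioned above, assuming the roughly 99 % yield reported for the dehydration step.

```python
# Stoichiometric check for ethanol-to-ethylene dehydration:
#   C2H5OH -> C2H4 + H2O
# Maximum mass yield = M(C2H4) / M(C2H5OH). The plant capacity and the
# ~99 % yield are taken from the text; everything else is simple arithmetic.

M_ETHANOL = 46.07   # g/mol
M_ETHYLENE = 28.05  # g/mol

max_mass_yield = M_ETHYLENE / M_ETHANOL          # ~0.61 t ethylene / t ethanol
plant_ethylene_t_per_year = 300_000
assumed_yield_of_theoretical = 0.99

ethanol_needed = plant_ethylene_t_per_year / (max_mass_yield *
                                              assumed_yield_of_theoretical)
print(f"Max mass yield: {max_mass_yield:.3f} t ethylene per t ethanol")
print(f"Ethanol demand: ~{ethanol_needed:,.0f} t/year")   # ~500,000 t/year
```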

Three Carbon Compounds

Glycerol
Glycerol, also known as glycerine or glycerin, is a triol occurring in natural fats and oils. About 90 % of glycerol is produced from natural sources by transesterification; the remaining 10 % is manufactured synthetically from propylene (Wells 1999). Glycerol is a major by-product of the transesterification process used to convert vegetable oils and other natural oils to fatty acid methyl and ethyl esters: approximately 10 % by weight of glycerol is produced from the transesterification of soybean oil with an alcohol. Transesterification is used to manufacture fatty acid methyl and ethyl esters, which can be blended into refinery diesel. As the production of fatty acid methyl and ethyl esters increases, the quantity of glycerol produced as a by-product also increases, and with it the need to explore cost-effective routes to convert glycerin into value-added products. Glycerol currently has a global production of 500,000–750,000 t/year (Werpy et al. 2004), and the USA is one of the world's largest suppliers and consumers of refined glycerol. Referring to Fig. 13, glycerin can potentially be used in a number of pathways to chemicals that are currently produced from petroleum-based feedstocks; the products from glycerol are similar to those currently obtained from the propylene chain. Uniqema, Procter & Gamble, and Stepan are some of the companies that currently produce derivatives of glycerol such as glycerol triacetate, glycerol stearate, and glycerol oleate. Glycerol prices are expected to drop if biodiesel production increases, making it available as a cheap feedstock for conversion to chemicals. Small increases in fatty acid consumption for fuels and products can increase world glycerol production significantly; for example, if the USA displaced 2 % of on-road diesel with biodiesel by 2012, almost 800 million pounds of new glycerol supply would be produced. Dasari et al. (2005) reported a low-pressure, low-temperature (200 psi and 200 °C) catalytic process for the hydrogenolysis of glycerol to propylene glycol that is being commercialized and received the 2006 EPA Green Chemistry Award. Copper chromite was identified as the most effective catalyst for the hydrogenolysis of glycerol to propylene glycol among nickel, palladium, platinum, copper, and copper chromite catalysts. The low pressure and temperature are advantages of the process compared with the traditional process, which uses severe conditions of temperature and pressure. The proposed mechanism forms an acetol intermediate in the production of propylene glycol.


Fig. 13 Production and derivatives of glycerol (Adapted from Energetics (2000) and Werpy et al. (2004)). The figure maps glycerol, obtained by transesterification of vegetable oils, to glyceric acid (oxidation), propylene glycol and 1,3-propanediol (bond breaking), branched polyesters and polyols (direct polymerization), and possible conversions to propylene oxide, propylene, acrylonitrile, acrylic acid, and isopropyl alcohol, with end uses ranging from antifreeze, humectants, Sorona fiber, and unsaturated polyurethane resins for insulation to auto parts, packaging, carpeting, toys, textiles, paints and coatings, personal care products, foods and beverages, drugs, and pharmaceuticals.

In this two-step process, the first step, formation of acetol, can be performed at atmospheric pressure, while the second requires a hydrogen partial pressure; propylene glycol yields above 73 % were achieved at moderate reaction conditions. Karinen and Krause (2006) studied the etherification of glycerol with isobutene in the liquid phase over an acidic ion exchange resin catalyst. Five product ethers and a side reaction yielding C8-C16 hydrocarbons from isobutene were reported. The optimal selectivity toward the ethers was found near a temperature of 80 °C and an isobutene/glycerol ratio of 3. The reactants for this process were isobutene (99 % purity) and glycerol (99 % purity), pressurized with nitrogen (99.5 % purity). The five ether isomers formed in the reaction comprised two monosubstituted monoethers (3-tert-butoxy-1,2-propanediol and 2-tert-butoxy-1,3-propanediol), two disubstituted diethers (2,3-di-tert-butoxy-1-propanol and 1,3-di-tert-butoxy-2-propanol), and one trisubstituted triether (1,2,3-tri-tert-butoxypropane). Tert-butyl alcohol was added in some of the reactions to prevent oligomerization of isobutene and improve selectivity toward the ethers. Acrylic acid is a bulk chemical that can be produced from glycerol. Shima and Takahashi (2006) reported the production of acrylic acid by gas-phase dehydration of glycerol followed by gas-phase oxidation of the gaseous dehydration product. Dehydration of glycerol could lead to commercially viable production of acrolein, an important intermediate for acrylic acid esters, superabsorbent polymers, and detergents (Koutinas et al. 2008). Glycerol can also be converted to chlorinated compounds such as dichloropropanol and epichlorohydrin; Dow and Solvay are developing processes to convert glycerol to the epoxy resin raw material epichlorohydrin (Tullo 2007a). Several other methods for the conversion of glycerol exist, although their commercial viability is still at the development stage; these include catalytic conversion of glycerol to hydrogen and alkanes and microbial conversion of glycerol to succinic acid, polyhydroxyalkanoates, butanol, and propionic acid (Koutinas et al. 2008).
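The claim above that a 2 % biodiesel displacement would generate hundreds of millions of pounds of new glycerol can be checked with a rough mass balance. The sketch below is my own order-of-magnitude estimate, not a reproduction of the source's calculation: the US on-road diesel volume and the biodiesel density are assumed round numbers, and the glycerol co-product is taken as roughly 10 % of the biodiesel mass, as stated earlier in this section.

```python
# Order-of-magnitude estimate of glycerol co-product from biodiesel.
# Assumptions (not from the source): US on-road diesel ~40 billion gal/yr,
# biodiesel density ~7.3 lb/gal. The ~10 wt% glycerol figure is from the text.

us_onroad_diesel_gal = 40e9          # assumed
displacement = 0.02                  # 2 % displacement, from the text
biodiesel_density_lb_per_gal = 7.3   # assumed
glycerol_mass_fraction = 0.10        # ~10 wt%, from the text

biodiesel_gal = us_onroad_diesel_gal * displacement
glycerol_lb = biodiesel_gal * biodiesel_density_lb_per_gal * glycerol_mass_fraction

print(f"Biodiesel: ~{biodiesel_gal/1e6:,.0f} million gal/yr")
print(f"New glycerol: ~{glycerol_lb/1e6:,.0f} million lb/yr")  # several hundred million lb
```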


Lactic Acid
Lactic acid is a commonly occurring organic acid that is valuable because of its wide use in the food and food-related industries and its potential for the production of biodegradable and biocompatible polylactate polymers. Lactic acid can be produced from biomass using various fungal species of the Rhizopus genus, which have advantages over bacteria, including their amylolytic characteristics, low nutrient requirements, and the valuable fungal biomass by-product of the fermentation (Zhang et al. 2007). Lactic acid can also be produced using bacteria. Lactic acid-producing bacteria (LAB) have high growth rates and product yields; however, LAB have complex nutrient requirements because of their limited ability to synthesize B vitamins and amino acids, so the media must be supplemented with sufficient nutrients such as yeast extract. This supplementation is expensive and increases the overall cost of producing lactic acid with bacteria. An important derivative of lactic acid is polylactic acid; BASF uses 45 % corn-based polylactic acid in its product ecovio®.

Propylene Glycol
Propylene glycol is produced industrially by the reaction of propylene oxide and water (Wells 1999). The capacities of propylene glycol plants range from 15,000 to 250,000 t/year. It is mainly used (around 40 %) for the manufacture of polyester resins, which are used in surface coatings and glass fiber-reinforced resins. A growing market for propylene glycol is the manufacture of nonionic detergents (around 7 %) used in petroleum, sugar, and paper refining and in the preparation of toiletries, antibiotics, and other products; about 5 % of the propylene glycol manufactured is used in antifreeze. Propylene glycol can be produced from glycerol, a by-product of the transesterification process, by the low-pressure, low-temperature (200 psi and 200 °C) catalytic hydrogenolysis of glycerol to propylene glycol (Dasari et al. 2005) that is being commercialized and received the 2006 EPA Green Chemistry Award. Ashland Inc. and Cargill have a joint venture underway to produce propylene glycol in a 65,000 t/year plant in Europe (Ondrey 2007b, c). Davy Process Technology Ltd. (DPT) has developed the glycerin-to-propylene glycol process for this plant, which was expected to start up in 2009. The process is outlined in Fig. 14. It is a two-step process in which glycerin in the gas phase is first dehydrated into water and acetol over a heterogeneous catalyst bed, and propylene glycol is then formed in situ in the reactor by hydrogenation of the acetol. The per-pass glycerin conversion is 99 %, and by-products include ethylene glycol, ethanol, and propanols. Huntsman Corporation plans to commercialize a process for propylene glycol from glycerin at its process development facility in Conroe, Texas (Tullo 2007a). Dow and Solvay are planning to manufacture the epoxy resin raw material epichlorohydrin from a glycerin-based route.
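Since propylene glycol is made from glycerol by hydrogenolysis (glycerol + H2 → propylene glycol + H2O), the best possible mass yield follows directly from the molar masses. The sketch below computes that ceiling and a product estimate for an assumed selectivity; the selectivity and feed rate are illustrative assumptions, not figures from the DPT or Dasari processes.

```python
# Stoichiometric ceiling for glycerol hydrogenolysis to propylene glycol:
#   C3H8O3 + H2 -> C3H8O2 + H2O
# Maximum mass yield = M(propylene glycol) / M(glycerol).

M_GLYCEROL = 92.09          # g/mol
M_PROPYLENE_GLYCOL = 76.09  # g/mol

max_mass_yield = M_PROPYLENE_GLYCOL / M_GLYCEROL       # ~0.83 t PG / t glycerol

glycerol_feed_t = 1000.0    # hypothetical feed, t/year
assumed_selectivity = 0.90  # assumed molar selectivity to PG (illustrative)

pg_product_t = glycerol_feed_t * max_mass_yield * assumed_selectivity
print(f"Theoretical maximum: {max_mass_yield:.2f} t PG per t glycerol")
print(f"Estimated product:  ~{pg_product_t:.0f} t PG from {glycerol_feed_t:.0f} t glycerol")
```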

Fig. 14 DPT process for manufacture of propylene glycol from glycerol by hydrogenolysis (Ondrey 2007c). The scheme shows glycerol and hydrogen fed to a hydrogenolysis step, followed by separation with glycerol and hydrogen recycle, and product refining into propylene glycol and by-products.


1,3-Propanediol
1,3-Propanediol is a diol that can be used as the diol component in the plastic polytrimethylene terephthalate (PTT), a new polymer comparable to nylon (Wilke et al. 2006). Two routes to 1,3-propanediol exist: one from glycerol by bacterial treatment and another from glucose using a mixed culture of genetically engineered microorganisms. A detailed description of the various pathways for microbial conversion of glycerol to 1,3-propanediol is given by Liu et al. (2010). Mu et al. (2006) describe a process for the conversion of crude glycerol to propanediol and conclude that microbial production of 1,3-propanediol by Klebsiella pneumoniae is feasible by fermentation using crude glycerol as the sole carbon source. Crude glycerol from the transesterification process could be used directly in fed-batch cultures of K. pneumoniae with results similar to those obtained with pure glycerol, and the final 1,3-propanediol concentration on glycerol from lipase-catalyzed methanolysis of soybean oil was comparable to that on glycerol from the alkali-catalyzed process. The high 1,3-propanediol concentration and volumetric productivity from crude glycerol suggest a low fermentation cost, an important factor for the bioconversion of such industrial by-products into valuable compounds. A microbial conversion process for propanediol from glycerol using K. pneumoniae ATCC 25955 was given by Cameron and Koutsky (1994). With crude glycerol raw material at $0.20/lb, a product selling price of $1.10/lb for pure propanediol, and a capital investment of $15 MM, a return on investment of 29 % was obtained. Production trends in biodiesel suggest that the price of the raw material (glycerol) will fall considerably, so a higher return on investment can be expected for future propanediol manufacturing processes. DuPont Tate and Lyle Bio Products, LLC, opened a $100 million facility in Loudon, Tennessee, to make 1,3-propanediol from corn (CEP 2007). The company uses a proprietary fermentation process to convert the corn to Bio-PDO, its commercial name for 1,3-propanediol. This process uses 40 % less energy and reduces greenhouse gas emissions by 20 % compared with petroleum-based propanediol. Shell produces propanediol from ethylene oxide, and Degussa produces it from acrolein; the diol is used by Shell under the name Corterra to make carpets and by DuPont under the name Sorona to make special textile fibers.

Acetone
Acetone is the simplest and most important ketone. It is a colorless, flammable liquid, miscible with water and with many other organic solvents such as ether, methanol, and ethanol. Acetone is a chemical intermediate for the manufacture of methacrylates, methyl isobutyl ketone, bisphenol A, and methyl butynol, among others. It is also used as a solvent for resins, paints, varnishes, lacquers, nitrocellulose, and cellulose acetate. Acetone can be produced from biomass by fermentation of starch or sugars via the acetone–butanol–ethanol fermentation process (Moreira 1983), which is discussed in detail in the butanol section below.
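The 29 % return on investment quoted above for the glycerol-to-propanediol route combines feedstock cost, product price, and capital in a single figure. The sketch below shows the form of such a calculation with a hypothetical plant capacity, glycerol consumption, and non-feedstock operating cost; these assumptions are mine, so the resulting percentage only illustrates the arithmetic and is not a reconstruction of Cameron and Koutsky's estimate.

```python
# Simple return-on-investment (ROI) arithmetic for a glycerol-to-propanediol
# plant. Feed cost, product price, and capital are from the text; the plant
# capacity, glycerol consumption per pound of product, and other operating
# costs are hypothetical placeholders.

capital_usd = 15e6                 # from the text
glycerol_cost_usd_lb = 0.20        # from the text
pdo_price_usd_lb = 1.10            # from the text

pdo_capacity_lb_yr = 20e6          # assumed plant capacity
glycerol_per_lb_pdo = 1.5          # assumed lb glycerol per lb PDO
other_opex_usd_lb = 0.45           # assumed utilities, labor, etc.

margin_per_lb = (pdo_price_usd_lb
                 - glycerol_per_lb_pdo * glycerol_cost_usd_lb
                 - other_opex_usd_lb)
annual_profit = margin_per_lb * pdo_capacity_lb_yr
roi = annual_profit / capital_usd
print(f"Margin: ${margin_per_lb:.2f}/lb, ROI: {roi:.0%}")
```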

Four Carbon Compounds

Butanol
Butanol, or butyl alcohol, can be produced by the fermentation of carbohydrates with bacteria, yielding a mixture of acetone and butyl alcohol (Wells 1999). Synthetically, butyl alcohol is produced by the hydroformylation of propylene, known as the oxo process, followed by hydrogenation of the aldehydes formed, yielding a mixture of n- and iso-butyl alcohol; the use of rhodium catalysts maximizes the yield of n-butyl alcohol. The principal use of n-butyl alcohol is as a solvent. Butyl alcohol/butyl acetate mixtures are good solvents for nitrocellulose lacquers and coatings. Butyl glycol ethers, formed by the reaction of butyl alcohol and ethylene oxide, are used in vinyl and acrylic paints and lacquers and to solubilize organic surfactants in surface cleaners.


Butyl acrylate and methacrylate are important commercial derivatives that can be used in emulsion polymers for latex paints, in textile manufacturing, and in impact modifiers for rigid polyvinyl chloride. Butyl esters of acids such as phthalic, adipic, and stearic acid can be used as plasticizers and surface-coating additives. The fermentation route to butanol is also known as the Weizmann process or acetone–butanol–ethanol (ABE) fermentation. Butyric acid-producing bacteria belong to the Clostridium genus; two of the most common are C. butylicum and C. acetobutylicum. C. butylicum produces acetic acid, butyric acid, 1-butanol, 2-propanol, H2, and CO2 from glucose, while C. acetobutylicum produces acetic acid, butyric acid, 1-butanol, acetone, H2, CO2, and small amounts of ethanol from glucose (Klass 1998). The acetone–butanol fermentation by C. acetobutylicum was the only commercial process producing industrial chemicals with an anaerobic bacterial monoculture. Acetone was produced from corn fermentation during World War I for the manufacture of cordite, but the fermentation of corn to butanol and acetone was discontinued in the 1960s because of unfavorable economics compared with chemical synthesis of these products from petroleum feedstocks. The fermentation involves conversion of glucose to pyruvate via the Embden–Meyerhof–Parnas (EMP) pathway; the pyruvate is then converted to acetyl-CoA with release of carbon dioxide and hydrogen (Moreira 1983). Acetyl-CoA is a key intermediate in the process, serving as a precursor to acetic acid and ethanol. The formation of butyric acid and the neutral solvents (acetone and butanol) occurs in two steps. Initially, two acetyl-CoA molecules combine to form acetoacetyl-CoA, initiating a cycle that leads to the production of butyric acid; the increased acidity reduces the pH of the system. At this point in the fermentation a new enzyme system is activated, leading to the production of acetone and butanol: acetoacetyl-CoA is diverted by a transferase system to acetoacetate, which is then decarboxylated to acetone, while butanol is produced by reducing butyric acid in three reactions. Detailed descriptions of batch, continuous, and extractive fermentation systems are given by Moreira (1983). DuPont and BP are working with British Sugar to produce 30,000 t/year of biobutanol using corn, sugarcane, or beet as feedstock (D'Aquino 2007). The UK biotechnology firm Green Biologics has demonstrated the conversion of cellulosic biomass to butanol, known as Butafuel. Butanol can also be used as a fuel additive instead of ethanol: compared with ethanol, it is less volatile, not sensitive to water, less hazardous to handle, and less flammable, has a higher octane number, and can be mixed with gasoline in any proportion. The production cost of butanol from biobased feedstock is reported to be $3.75/gal (D'Aquino 2007).

Succinic Acid
Succinic acid was chosen by DOE as one of the top 30 chemicals that can be produced from biomass. It is an intermediate for the production of a wide variety of chemicals, as shown in Fig. 15. Succinic acid is produced biochemically from glucose using an engineered form of the organism Anaerobiospirillum succiniciproducens or an engineered Escherichia coli strain developed by DOE laboratories (Werpy et al. 2004). Zelder (2006) discusses BASF's efforts to develop bacteria that convert biomass to succinate and succinic acid.
The bacteria convert glucose and carbon dioxide with an almost 100 % yield into the C4 compound succinate. BASF is also developing chemistry to convert the fermentation product into succinic acid derivatives, butanediol, and tetrahydrofuran. Succinic acid can also be used as a monomeric component for polyesters. Snyder (2007) reports the successful operation of a 150,000 fermentation process that uses a licensed strain of E. coli at the Argonne National Laboratory. Opportunities for succinic acid derivatives include maleic anhydride, fumaric acid, dibasic esters, and others in addition to those shown in Fig. 15.
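The statement that glucose and CO2 are converted to succinate at almost 100 % yield can be set against the stoichiometric ceiling of a CO2-fixing succinate fermentation. The balanced equation and the calculation below are standard estimates added here for context, not figures from Zelder (2006) or BASF.

```python
# Stoichiometric ceiling for succinic acid from glucose with CO2 fixation
# and no external reductant (a standard redox-balanced estimate):
#   7 C6H12O6 + 6 CO2 -> 12 C4H6O4 + 6 H2O
# i.e. 12/7 mol succinic acid per mol glucose.

M_GLUCOSE = 180.16   # g/mol
M_SUCCINIC = 118.09  # g/mol

mol_yield = 12.0 / 7.0
mass_yield = mol_yield * M_SUCCINIC / M_GLUCOSE
print(f"Theoretical maximum: {mol_yield:.2f} mol/mol "
      f"= {mass_yield:.2f} g succinic acid per g glucose")   # ~1.12 g/g
```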


Fig. 15 Succinic acid production and derivatives (Werpy et al. 2004). Glucose from biomass is fermented to succinic acid, which leads by reduction to γ-butyrolactone (GBL), butanediol (BDO), and tetrahydrofuran (THF) (solvents and fibers such as lycra), by reductive amination to pyrrolidinone and N-methylpyrrolidinone (NMP) (green solvents and water-soluble polymers for water treatment), and by direct polymerization to straight-chain and branched polymers (fibers).

The overall cost of fermentation is one of the major barriers to this process, and low-cost techniques are being developed to make the production of succinic acid economical (Werpy et al. 2004). BioAmber, a joint venture of Diversified Natural Products (DNP) and Agro Industries Recherche et Development, will construct a plant producing 5,000 t/year of succinic acid from biomass in Pomacle, France (Ondrey 2007d); the plant is scheduled for start-up in mid-2008. Succinic acid from BioAmber's industrial demonstration plant is made by sucrose or glucose fermentation using technology patented by the US Department of Energy in collaboration with Michigan State University. BioAmber will use the patented technology developed by Guettler et al. (1996) for the production of succinic acid using biomass and carbon dioxide.

Aspartic Acid
Aspartic acid is an α-amino acid manufactured either chemically, by the amination of fumaric acid with ammonia, or biochemically, by fermentative or enzymatic conversion of oxaloacetate from the Krebs cycle (Werpy et al. 2004). It is one of the chemicals identified in DOE's list of top 12 value-added chemicals from biomass. Aspartic acid can be used in sweeteners and as salts for chelating agents. Its derivatives include amine butanediol, amine tetrahydrofuran, aspartic anhydride, and polyaspartic acid, with new potential uses as biodegradable plastics.

Five Carbon Compounds

Levulinic Acid
Levulinic acid was first synthesized from fructose with hydrochloric acid by the Dutch scientist G.J. Mulder in 1840 (Kamm et al. 2006). It is also known as 4-oxopentanoic acid or γ-ketovaleric acid. In 1940, the first commercial-scale production of levulinic acid in an autoclave was started in the USA by A.E. Staley, Decatur, Illinois. Levulinic acid has been used in food, fragrance, and specialty chemicals, and its derivatives have a wide range of applications such as polycarbonate resins, graft copolymers, and biodegradable herbicides. Levulinic acid (LA) is formed by treating 6 carbon sugar carbohydrates from starch or lignocellulosics with acid.


Fig. 16 Production and derivatives of levulinic acid (Adapted from Werpy et al. (2004)). Xylose from hemicellulose is converted by acid-catalyzed dehydration to levulinic acid, which leads by reduction to methyltetrahydrofuran, 1,4-pentanediol, and γ-butyrolactone (GBL) (fuel oxygenates, solvents), by oxidation to acetyl acrylates and acetic-acrylic succinic acids (copolymerization with other monomers for property enhancement), and by condensation to diphenolic acid (a replacement for bisphenol used in polycarbonate synthesis).

Five carbon sugars derived from hemicellulose, such as xylose and arabinose, can also be converted to levulinic acid by adding a reduction step after the acid treatment. The following steps are used for the production of levulinic acid from hemicellulose (Klass 1998). Xylose from hemicellulose is dehydrated by acid treatment to give a 64 wt% yield of the furan-substituted aldehyde furfural. Furfural undergoes catalytic decarbonylation to form furan, catalytic hydrogenation of the aldehyde group in furfural gives furfuryl alcohol, and further catalytic hydrogenation gives tetrahydrofurfuryl alcohol. Levulinic acid is then formed from tetrahydrofurfuryl alcohol on treatment with dilute acid. Werpy et al. (2004) report an overall yield of 70 % for the production of levulinic acid. A number of large-volume chemical markets can be addressed by the derivatives of levulinic acid (Werpy et al. 2004). Figure 16 shows the production of levulinic acid from hemicellulose and its derivatives. In addition to the chemicals in the figure, the following LA derivatives also have considerable markets. Methyltetrahydrofuran and various levulinate esters can be used as gasoline and biodiesel additives, respectively. δ-Aminolevulinic acid is a herbicide and targets a market of 200–300 million pounds per year at a projected cost of $2.00–3.00 per pound. An intermediate in the production of δ-aminolevulinic acid is β-acetylacrylic acid, which could be used in the production of new acrylate polymers, addressing a market of 2.3 billion pounds per year with values of about $1.30 per pound. Diphenolic acid is of particular interest because it can serve as a replacement for bisphenol A in the production of polycarbonates; the polycarbonate resin market is almost 4 billion lb/year, with product values of about $2.40/lb. New technology also suggests that levulinic acid could be used for the production of acrylic acid via oxidative processes, and levulinic acid is a potential starting material for the production of succinic acid. The production of levulinic acid-derived lactones offers the opportunity to enter a large solvent market, as these materials could be converted into analogs of N-methylpyrrolidinone. Complete reduction of levulinic acid leads to 1,4-pentanediol, which could be used for the production of new polyesters. A levulinic acid production facility has been built in Caserta, Italy, by Le Calorie, a subsidiary of the Italian construction firm Immobilgi (Ritter 2006). The plant is expected to produce 3,000 t/year of levulinic acid from local tobacco bagasse and paper mill sludge through a process developed by Biofine Renewables. Hayes et al. (2006) give the details of the Biofine process for the production of levulinic acid, which received the Presidential Green Chemistry Award in 1999.
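For the hexose route, the acid-catalyzed conversion follows the overall stoichiometry C6H12O6 → levulinic acid + formic acid + H2O, which caps the mass yield well below 100 %. The sketch below computes that cap and combines it with the roughly 70 % overall yield reported above; treating that reported yield as a molar yield is my reading, so the final mass figure is an estimate.

```python
# Stoichiometry of hexose conversion to levulinic acid:
#   C6H12O6 -> C5H8O3 (levulinic acid) + HCOOH (formic acid) + H2O
# so the maximum mass yield is M(levulinic) / M(glucose).

M_GLUCOSE = 180.16    # g/mol
M_LEVULINIC = 116.12  # g/mol

max_mass_yield = M_LEVULINIC / M_GLUCOSE          # ~0.64 g/g
reported_molar_yield = 0.70                        # from the text (assumed molar)

expected_mass_yield = max_mass_yield * reported_molar_yield
print(f"Stoichiometric cap: {max_mass_yield:.2f} g LA per g hexose")
print(f"At ~70 % of theory: {expected_mass_yield:.2f} g LA per g hexose")
```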


The Biofine process involves a two-step reaction in a two-reactor design. The feedstock comprises 0.5–1.0 cm biomass particles of cellulose and hemicellulose, conveyed to a mixing tank by a high-pressure air injection system, where the feed is mixed with 2.5–3 % recycled sulfuric acid before being transferred to the reactors. The first reactor is a plug-flow reactor in which first-order acid hydrolysis of the carbohydrate polysaccharides to soluble intermediates such as hydroxymethylfurfural (HMF) occurs; the residence time is 12 s at a temperature of 210–220 °C and a pressure of 25 bar, and the reactor diameter is kept small to allow this short residence time. The second reactor is a back-mix reactor operated at 190–200 °C and 14 bar with a residence time of 20 min; LA formation is favored by the completely mixed conditions in this reactor. Furfural and other volatile products are removed, and the tarry mixture containing LA passes to a gravity separator. The insoluble mixture from this unit goes to a dehydration unit where the water and volatiles are boiled off. The crude LA obtained is about 75 % pure and can be upgraded to 98 % purity. The residue formed is a bone-dry powdery substance or char with a calorific value comparable to bituminous coal, which can be used for syngas production. Lignin is another by-product that can be converted to char and burned or gasified. The Biofine process uses polymerization inhibitors and converts around 50 % of both the 5 and 6 carbon sugars to levulinic acid.

Xylitol/Arabinitol
Xylitol and arabinitol are the hydrogenation products of the corresponding sugars xylose and arabinose (Werpy et al. 2004). Currently there is limited commercial production of xylitol and no commercial production of arabinitol. The technology required to convert the 5 carbon sugars xylose and arabinose to xylitol and arabinitol can be modeled on the conversion of glucose to sorbitol: hydrogenation of the 5 carbon sugars to the sugar alcohols proceeds over any of several active hydrogenation catalysts, such as nickel, ruthenium, and rhodium. The production of xylitol for use as a building block for derivatives essentially requires no further technical development. Derivatives of xylitol and arabinitol are described in Fig. 17.

Itaconic Acid
Itaconic acid is a C5 dicarboxylic acid, also known as methylenesuccinic acid, and has the potential to be a key building block for deriving both commodity and specialty chemicals.

Fig. 17 Production and derivatives of xylitol and arabinitol (Adapted from Werpy et al. (2004)). Lignocellulosic sugars are hydrogenated to xylitol/arabinitol, which lead by oxidation to xylaric, xylonic, arabonic, and arabinoic acids (new uses), by bond cleavage to polyols such as propylene and ethylene glycol and to lactic acid (antifreeze, unsaturated polyester resins, PLA), and by direct polymerization to xylitol- and arabinitol-based polyesters and nylons (new polymer opportunities), as well as to non-nutritive sweeteners and anhydrosugars.


Fig. 18 Production and derivatives of itaconic acid (Adapted from Werpy et al. (2004)). Biomass sugars are converted by anaerobic fungal fermentation to itaconic acid, which leads by reduction to methyl butanediol, butyrolactone, and the tetrahydrofuran family (new useful properties for the BDO, GBL, and THF polymer families), to pyrrolidinones, and by direct polymerization to polyitaconic acid (a copolymer in styrene–butadiene polymers that provides dye receptivity for fibers, and nitrile latex).

The basic chemistry of itaconic acid is similar to that of petrochemical-derived maleic acid/anhydride, and the conversion of itaconic acid to its derivatives is shown in Fig. 18. Itaconic acid is currently produced by fungal fermentation and is used primarily as a specialty monomer; the major applications include use as a copolymer with acrylic acid and in styrene–butadiene systems. The major technical hurdle for the development of itaconic acid as a building block for commodity chemicals is the development of very low-cost fermentation routes. The primary elements of improved fermentation are increasing the fermentation rate, improving the final titer, and potentially increasing the yield from sugar. There could also be cost advantages associated with an organism able to utilize both C5 and C6 sugars.

Six Carbon Compounds

Sorbitol
Sorbitol is produced by the hydrogenation of glucose (Werpy et al. 2004). Sorbitol production is practiced commercially by several companies, with a current production volume on the order of 200 million pounds annually. The commercial processes are based on batch technology with Raney nickel as the catalyst; batch production ensures complete conversion of the glucose. There is scope for technology development to convert glucose to sorbitol in a continuous rather than a batch process: Engelhard (now a BASF-owned concern) has demonstrated continuous production of sorbitol from glucose using a ruthenium-on-carbon catalyst (Werpy et al. 2004), with yields near 99 % at very high weight hourly space velocity. Derivatives of sorbitol include isosorbide, propylene glycol, ethylene glycol, glycerol, lactic acid, anhydrosugars, and branched polysaccharides (Werpy et al. 2004); the derivatives and their uses are described in Fig. 19.

2,5-Furandicarboxylic Acid
2,5-Furandicarboxylic acid (FDCA) is a member of the furan family and is formed by oxidative dehydration of glucose (Werpy et al. 2004). The production process uses oxygen or electrochemistry. The conversion can also be carried out by oxidation of 5-hydroxymethylfurfural, an intermediate in the conversion of 6 carbon sugars into levulinic acid. Figure 20 describes some of the potential uses of FDCA.


Fig. 19 Production and derivatives of sorbitol (Adapted from Werpy et al. (2004)). Glucose from biomass is hydrogenated to sorbitol, which leads by dehydration to isosorbide and anhydrosugars (PET-equivalent polymers such as polyethylene isosorbide terephthalates), by bond cleavage to propylene glycol and lactic acid (antifreeze, PLA), and by direct polymerization to branched polysaccharides (water-soluble polymers and new polymer applications).

Fig. 20 Production and derivatives of 2,5-FDCA (Werpy et al. 2004). Biomass C6 sugars are converted by oxidative dehydration to 2,5-furandicarboxylic acid, which leads by reduction to diols and amination products and to levulinic and succinic acids (new useful properties for the BDO, GBL, and THF polymer families), and by direct polymerization to furanoic polyesters and polyamines (polyethylene terephthalate analogs with potentially new properties for bottles, films, and containers, and polyamides for use in new nylons).

FDCA resembles and can act as a replacement for terephthalic acid, a widely used component of polyesters such as polyethylene terephthalate (PET) and polybutylene terephthalate (PBT) (Werpy et al. 2004). PET has a market size approaching 4 billion pounds per year, and PBT almost 1 billion pounds per year. The market value of PET polymers varies with the application but is in the range of $1.00–3.00/lb for films and thermoplastic engineering polymers. PET and PBT are manufactured industrially from terephthalic acid, which in turn is manufactured from toluene (Wells 1999); toluene is obtained industrially from the catalytic reforming of petroleum or from coal. Thus, FDCA derived from biomass could replace the present market for petroleum-based PET and PBT. FDCA derivatives can be used for the production of new polyesters, and their combination with FDCA would lead to a new family of completely biomass-derived products. New nylons can be obtained from FDCA, either through reaction of FDCA with diamines or through conversion of FDCA to 2,5-bis(aminomethyl)tetrahydrofuran. The nylons have a market of almost 9 billion pounds per year, with product values between $0.85 and $2.20 per pound, depending on the application.


Fig. 21 Production of polymers from biomass in 2007 (13,000 million metric tons) and breakdown of "other polymers" (Tullo 2008): natural rubber 68 %, cellulosics 29 %, and other polymers 3 %; the "other polymers" share comprises polylactic acid 38 %, urethanes 26 %, glycerin-based materials 12 %, nylon resins 12 %, and PHA and others 12 %.

Biopolymers and Biomaterials
The previous section discussed the major industrial chemicals that can be produced from biomass; this section focuses on the various biomaterials that can be produced from biomass. Thirteen thousand million metric tons of polymers were made from biomass in 2007, as shown in Fig. 21, of which 68 % was natural rubber. New polymers from biomass, which account for 3 % of the present market for biobased polymers, consist of urethanes, glycerin-based materials, nylon resins, polyhydroxyalkanoates (PHA), and polylactic acid (PLA) (Tullo 2008). A new product from a new chemical plant is expected to achieve only slow penetration (less than 10 %) of the existing market for the chemical it replaces; however, once the benefits of a new product are established, as when petrochemical-based polyethylene terephthalate replaced glass in soda bottles, growth can be rapid over a short period. Most renewable processes for making polymers have an inflection point at about $70 per barrel of oil, above which the petroleum-based process costs more than the renewable process; for example, above $80 per barrel of oil, polylactic acid (PLA) is cheaper than polyethylene terephthalate (PET) (Tullo 2008). Table 3 lists companies that have planned new chemical production based on biomass feedstocks, along with capacities and projected start-up dates. Government subsidies and incentives tend to be of limited duration and short-term value. Projected bulk chemicals from biobased feedstocks are ethanol, butanol, and glycerin. Some of these biomaterials have been discussed together with their precursor chemicals in the previous section. The important biomaterials that can be produced from biomass include wood and natural fibers, isolated and modified biopolymers, agromaterials, and biodegradable plastics (Vaca-Garcia 2008); these are outlined in Fig. 22. The production process for poly(3-hydroxybutyrate) is given by Rossell et al. (2006), and a detailed review of polyhydroxyalkanoates (PHA) as a commercially viable replacement for petroleum-based plastics is given by Snell and Peoples (2009). Lignin has a complex chemical structure, and various aromatic compounds can be produced from it. Current technology is underdeveloped for industrial-scale production of lignin-based chemicals, but there is considerable potential to supplement the benzene–toluene–xylene (BTX) chain of chemicals currently produced from fossil feedstocks. Osipovs (2008) discusses the extraction of aromatic compounds such as benzene from biomass tar.


Table 3 Companies producing biobased materials from biomass (Tullo 2008)

Telles (Clinton, Iowa; start-up Q2 2009). Product: polyhydroxyalkanoate (PHA), sold as Mirel; capacity 50,000 t/year. Notes: joint venture between Metabolix and Archer Daniels Midland; fermentation with a K-12 strain of Escherichia coli genetically modified to produce PHA directly (about 3.5 % lower energy consumption compared to conventional plastics); the PHA is biodegradable.

Cereplast (Seymour, Indiana; completed 2008). Product: polylactic acid (PLA)-based compound; capacity 25,000 t/year. Notes: Cereplast is working with PLA from NatureWorks to make it more heat resistant, comparable to polypropylene or polystyrene.

PSM North America (China; in production). Product: plastarch material (PSM); capacity 100,000 t/year. Notes: 80 % industrial starch and 8 % cellulose mixed with sodium stearate, oleic acid, and other ingredients; it can be processed like a petrochemical plastic, can withstand moisture, and is heat tolerant.

Synbra (The Netherlands; start-up 2009). Product: polylactic acid (PLA); capacity 5,000 t/year. Notes: PLA technology developed by the Dutch lactic acid maker Purac and the Swiss process engineering firm Sulzer.

Green Bioscience (Tianjin, China). Product: polyhydroxyalkanoate (PHA); capacity 10,000 t/year. Notes: DSM has invested in this firm.

Fig. 22 Biomaterials from biomass (Vaca-Garcia 2008): wood and natural fibers (wood and plant fibers such as cotton, jute, linen, coconut fiber, sisal, ramie, and hemp); isolated and modified biopolymers (cellulose, cellulose esters, cellulose ethers, starch, chitin and chitosan, zein, and lignin derivatives); agromaterials, blends, and composites (agromaterials from plant residues, blends of synthetic polymers and starch, wood plastic composites (WPC), and wood-based boards); and biodegradable plastics (polyglycolic acid (PGA), polylactic acid (PLA), polycaprolactone (PCL), polyhydroxyalkanoates (PHA), and cellulose graft polymers).

Natural Oil-Based Polymers and Chemicals
Natural oils are processed for chemical production mainly by hydrolysis and/or transesterification. Oil hydrolysis is carried out in pressurized water at 220 °C, producing fatty acids and glycerol. The main products that can be obtained from natural oils are shown in Fig. 23. Transesterification is the acid- or base-catalyzed reaction of the oil with an alcohol to produce fatty acid alkyl esters and glycerol. Fatty acids can be used for the production of surfactants, resins, stabilizers, plasticizers, dicarboxylic acids, and other products. Epoxidation, hydroformylation, and metathesis are other methods of converting oils to useful chemicals and materials. Sources of natural oil include soybean oil, lard, canola oil, algae oil, and waste grease.
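The roughly 10 % glycerol co-product quoted earlier for transesterification follows from the stoichiometry of the reaction, triglyceride + 3 methanol → 3 methyl esters + glycerol. The sketch below checks this using triolein as a model triglyceride; treating the oil as pure triolein is my simplifying assumption, since real vegetable oils are mixtures of triglycerides.

```python
# Mass balance of transesterification with triolein as a model triglyceride:
#   C57H104O6 + 3 CH3OH -> 3 C19H36O2 (methyl oleate) + C3H8O3 (glycerol)
# Real oils are triglyceride mixtures; triolein is used here only to show
# why the glycerol co-product is roughly 10 wt%.

M_TRIOLEIN = 885.4       # g/mol
M_METHANOL = 32.04       # g/mol
M_METHYL_OLEATE = 296.5  # g/mol
M_GLYCEROL = 92.09       # g/mol

feed = M_TRIOLEIN + 3 * M_METHANOL
products = 3 * M_METHYL_OLEATE + M_GLYCEROL
assert abs(feed - products) < 0.5         # mass balance closes

glycerol_per_oil = M_GLYCEROL / M_TRIOLEIN
glycerol_per_biodiesel = M_GLYCEROL / (3 * M_METHYL_OLEATE)
print(f"Glycerol per unit oil:       {glycerol_per_oil:.1%}")        # ~10.4 %
print(f"Glycerol per unit biodiesel: {glycerol_per_biodiesel:.1%}")  # ~10.4 %
```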


Fig. 23 Natural oil-based chemicals. Fatty acids, methyl esters (e.g., methyl soyate from soybean oil), glycerol, and polyols derived from natural oils serve as base oils in lubricants, surfactants, polymers, resins and plasticizers, solvents, and adhesives. Transesterification and epoxidation are established processes for natural oil feedstocks, whereas hydroformylation and metathesis exist for petroleum feedstocks and need further research for natural oils.

Soybean oil can be used to manufacture molecules with multiple hydroxyl groups, known as polyols (Tullo 2007b). Polyols can be reacted with isocyanates to make polyurethanes, and soybean oil can also be incorporated into unsaturated polyester resins to make composite parts. Soybean oil-based polyols have the potential to replace petrochemical polyols derived from propylene oxide in polyurethane formulations (Tullo 2007b); the annual market for conventional polyols is 3 billion pounds in the USA and 9 billion pounds globally. Dow Chemical, the world's largest manufacturer of petrochemical polyols, has also started manufacturing soy-based polyols (Tullo 2007b), using the following process: transesterification of the triglycerides gives methyl esters, which are hydroformylated to add aldehyde groups to the unsaturated bonds, and a subsequent hydrogenation step converts the aldehyde groups to alcohols; the resulting molecule is used as a monomer with polyether polyols to build a new polyol. Urethane Soy Systems manufactures soy-based polyols at Volga, South Dakota, with a capacity of 75 million pounds per year, and supplies them to Lear Corp., a manufacturer of car seats for Ford Motor Company. The company uses two processes for the manufacture of polyols: an autoxidation process that replaces unsaturated bonds in the triglycerides with hydroxyl groups, and a transesterification process in which rearranged triglyceride chains are reacted with alcohols. BioBased Technologies® supplies soy polyols to Universal Textile Technologies for the manufacture of carpet backing and artificial turf, and Johnson Controls uses their polyols to make automotive seat foams in which 5 % of the conventional polyol is replaced. The company has worked with BASF and Bayer MaterialScience on the conventional polyurethanes and now manufactures the polyols by oxidizing the unsaturated bonds of triglycerides; it offers three families of products with 96 %, 70 %, and 60 % biobased content. Soybean oil can be epoxidized by a standard epoxidation reaction (Wool and Sun 2005), and the epoxidized soybean oil can then be reacted with acrylic acid to form acrylated epoxidized soybean oil (AESO). The acrylated epoxidized triglycerides can be used as alternative plasticizers in polyvinyl chloride, replacing phthalates.
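Polyol producers and formulators characterize materials like the soy-based polyols above by their hydroxyl number, a standard polyurethane-formulation quantity that this chapter does not discuss: 56,100 times the hydroxyl functionality divided by the molar mass, in mg KOH per gram. The sketch below applies that textbook formula to a hypothetical soy polyol; the functionality and molar mass are illustrative assumptions, not data for any commercial product.

```python
# Hydroxyl number of a polyol, a standard polyurethane formulation quantity:
#   OH number (mg KOH/g) = 56,100 * f / M
# where f is the number of OH groups per molecule and M the molar mass (g/mol).
# The soy-polyol values below are hypothetical, for illustration only.

def hydroxyl_number(functionality, molar_mass_g_mol):
    return 56100.0 * functionality / molar_mass_g_mol

soy_polyol_f = 3.0        # assumed OH groups per molecule
soy_polyol_M = 1000.0     # assumed molar mass, g/mol

ohn = hydroxyl_number(soy_polyol_f, soy_polyol_M)
print(f"Hydroxyl number: {ohn:.0f} mg KOH/g")   # ~168 for these assumptions
```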


Fig. 24 Processing routes for vegetable oils and grease (Holmgren et al. 2007): biodiesel (vegetable oil and grease plus methanol, giving biodiesel (FAME) and glycerol); green diesel (vegetable oil and grease co-processed with diesel in a diesel hydrotreater, giving diesel, green diesel, propane, CO2, and H2); green gasoline (vegetable oil and grease co-processed with VGO in a catalytic cracker, giving gasoline); and green olefins (vegetable oil and grease co-processed with VGO in a catalytic cracker, giving light olefins).

Aydogan et al. (2006) describe the potential of using dense (sub-/supercritical) CO2 as the reaction medium for adding functional groups to soybean oil triglycerides in the synthesis of rigid polymers. The reaction of soybean oil triglycerides with KMnO4 in the presence of water and dense CO2 is presented; the dense CO2 is used to bring the soybean oil and the aqueous KMnO4 solution into contact. Experiments were conducted to study the effects of temperature, pressure, NaHCO3 addition, and KMnO4 amount on the conversion (depletion by bond opening) of soybean-triglyceride double bonds (STDB). The highest STDB conversions, about 40 %, were obtained near the critical conditions of CO2. Addition of NaHCO3 enhances the conversion, with 1 mol of NaHCO3 per mole of KMnO4 giving the greatest benefit, and increasing the KMnO4 amount up to 10 % increases the conversion of STDB. Holmgren et al. (2007) discuss the use of vegetable oils as refinery feedstocks, outlining the four processes shown in Fig. 24. The first is the production of fatty acid methyl esters by transesterification. The second is the UOP/Eni Renewable Diesel Process, which processes vegetable oils combined with crude diesel through a hydroprocessing unit. The third and fourth involve catalytic cracking of pretreated vegetable oil mixed with virgin gas oil (VGO) to produce gasoline, olefins, light cycle oil, and clarified slurry oil. Petrobras has a comparable H-Bio process in which vegetable oils can also be used directly with petroleum diesel fractions.

Conclusion

Just as the various fractions of petroleum and natural gas are used to manufacture different chemicals, biomass can be considered to have analogous fractions. All types of biomass contain cellulose, hemicellulose, lignin, fats and lipids, and proteins as main constituents in varying ratios, and separate methods exist to convert each of these fractions into chemicals. Biomass consisting mainly of cellulose, hemicellulose, and lignin, referred to as lignocellulosics, can undergo various pretreatment procedures to separate the components. Steam hydrolysis breaks some of the bonds in cellulose, hemicellulose, and lignin. Acid hydrolysis solubilizes the hemicellulose by depolymerizing it to 5-carbon (pentose) sugars such as xylose and arabinose. This stream can then be separated for extracting chemicals from the 5-carbon sugars.


The cellulose and lignin stream is then subjected to enzymatic hydrolysis, in which the cellulose is depolymerized to 6-carbon glucose and other 6-carbon sugars. This separates the cellulose stream from the lignin, so that three separate streams can be obtained from biomass. The cellulose and hemicellulose monomers, glucose and pentoses, can undergo fermentation to yield chemicals such as ethanol, succinic acid, butanol, xylitol, arabinitol, itaconic acid, and sorbitol. The lignin stream is rich in phenolic compounds, which can be extracted, or the stream can be dried to form char and used for gasification to produce syngas. Biomass containing oils, lipids, and fats can be transesterified to produce fatty acid methyl and ethyl esters and glycerol. Vegetable oils can also be blended directly into petroleum diesel fractions, and catalytic cracking of these blends produces biomass-derived fuels. Algae have shown great potential as a source of biomass, and some algae strains can secrete oil, reducing the separation costs of the process. Algae grow fast (compared to food crops), fix carbon from the atmosphere and from power plant flue gas, and do not require freshwater sources. However, algae production technology on an industrial scale for the production of chemicals and fuels is still in the research and development stage, and growth of algae for biomass remains a promising field of research. The glycerol from transesterification can be converted to propylene glycol, 1,3-propanediol, and other compounds that can replace current natural gas-based chemicals. Vegetable oils, particularly soybean oil, have been considered for various polyols with the potential to replace propylene oxide-based chemicals.

Future Directions

The technologies outlined above can be further developed to produce a wide array of chemicals, and further research is needed to commercialize them. Nearly 5.6 billion metric tons of carbon dioxide were emitted to the atmosphere in 2008 from the utilization of fossil resources (EIA 2010b). The world production of polymers from biomass was 13 billion metric tons. There is an opportunity to convert more biomass to chemicals and materials, and further research is required in that direction, together with the development of derivatives and greater market penetration of new biomass-based chemicals. The lignin stream from cellulosic biomass is an important source of aromatic chemicals such as benzene, toluene, and xylene and can contribute to the BTX chain of chemicals. This chapter outlined the various chemicals currently produced from petroleum-based feedstocks that could instead be produced from biomass. New polymers and composites from biomass are continually being developed and can replace current fossil feedstock-based chemicals.

References

ACES (2010) H.R.2454 – American clean energy and security act of 2009. http://www.opencongress.org/bill/111-h2454/show. Accessed 8 May 2010
Aden A, Ruth M, Ibsen K, Jechura J, Neeves K, Sheehan J, Wallace B (2002) Lignocellulosic biomass to ethanol process design and economics utilizing co-current dilute acid prehydrolysis and enzymatic hydrolysis for corn stover, NREL/TP-510-32438. National Renewable Energy Laboratory, Golden
Aiello-Mazzarri C, Agbogbo FK, Holtzapple MT (2006) Conversion of municipal solid waste to carboxylic acids using a mixed culture of mesophilic microorganisms. Bioresour Technol 97(1):47–56
Austin GT (1984) Shreve’s chemical process industries, 5th edn. McGraw-Hill, New York. ISBN 0070571473


Aydogan S, Kusefoglu S, Akman U, Hortacsu O (2006) Double-bond depletion of soybean oil triglycerides with KMnO4/H2 in dense carbon dioxide. Korean J Chem Eng 23(5):704–713 Banholzer WF, Watson KJ, Jones ME (2008) How might biofuels impact the chemical industry? Chem Eng Prog 104(3):S7–S14 Brown RC (2003) Biorenewable resources: engineering new products from agriculture. Iowa State Press, Iowa. ISBN 0813822637 C&E News (2007) Dow to make polyethylene from sugar in Brazil. Chem Eng News 85(30):17 Cameron DC, Koutsky JA (1994) Conversion of glycerol from soy diesel production to 1,3- propanediol. Final report prepared for National Biodiesel Development Board. Department of Chemical Engineering, UW-Madison, Madison CEP (2007) $100-million plant is first to produce propanediol from corn sugar. Chem Eng Prog 103(1):10 D’Aquino R (2007) Cellulosic ethanol – tomorrow’s sustainable energy source. Chem Eng Prog 103(3):8–10 Dasari MA, Kiatsimkul PP, Sutterlin WR, Suppes GJ (2005) Low-pressure hydrogenolysis of glycerol to propylene glycol. Appl Catal Gen 281(1–2):225–231 DOE (2007) DOE selects six cellulosic ethanol plants for up to $385 million in federal funding. http:// www.energy.gov/print/4827.htm. Accessed 2 Oct 2007 DOE (2010a) Biomass multi-year program plan March 2010. Energy efficiency and renewable energy (US DOE). http://www1.eere.energy.gov/biomass/pdfs/mypp.pdf. Accessed 8 May 2010 DOE (2010b) Biomass energy databook. United States Department of Energy. http://cta.ornl.gov/bedb/ biofuels.shtml. Accessed 8 May 2010 Dutta A, Philips SD (2009) Thermochemical ethanol via direct gasification and mixed alcohol synthesis of lignocellulosic biomass, NREL/TP-510-45913. National Renewable Energy Laboratory, Golden EIA (2010a) Weekly United States spot price FOB weighted by estimated import volume (dollars per barrel), Energy Information Administration. http://tonto.eia.doe.gov/dnav/pet/hist/LeafHandler.ashx? n=PET&s=WTOTUSA&f=W. Accessed 8 May 2010 EIA (2010b) Annual energy outlook 2010, Energy Information Administration. Report no. DOE/EIA-0383(2010) EIA (2010c) Total carbon dioxide emissions from the consumption of energy (million metric tons), Energy Information Administration. http://tonto.eia.doe.gov/cfapps/ipdbproject/IEDIndex3.cfm?tid= 90&pid=44&aid=8. Accessed 8 May 2010 Energetics (2000) Energy and environmental profile of the U.S. chemical industry, Energy efficiency and renewable energy (US DOE). http://www1.eere.energy.gov/industry/chemicals/pdfs/profile_chap1. pdf. Accessed 8 May 2010 EPA (2010) Mandatory reporting of greenhouse gases rule, United States Environmental Protection Agency. http://www.epa.gov/climatechange/emissions/ghgrulemaking.html. Accessed 8 May 2010 EPM (2010) Plants list. Ethanol producers magazine. http://www.ethanolproducer.com/plant-list.jsp. Accessed 8 May 2010 Guettler MV, Jain MK, Soni BK (1996) Process for making succinic acid, microorganisms for use in the process and methods of obtaining the microorganisms. US Patent no. 5,504,004 Hayes DJ, Fitzpatrick S, Hayes MHB, Ross JRH (2006) The biofine process – production of levulinic acid, furfural and formic acid from lignocellulosic feedstock. In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products. Wiley-VCH, Weinheim. ISBN 3-527-31027-4 Holmgren J, Gosling C, Couch K, Kalnes T, Marker T, McCall M, Marinangeli R (2007) Refining biofeedstock innovations. Petrol Tech Q 12(4):119–124


Holtzapple MT, Davison RR, Ross MK, Aldrett-Lee S, Nagwani M, Lee CM, Lee C, Adelson S, Kaar W, Gaskin D, Shirage H, Chang NS, Chang VS, Loescher ME (1999) Biomass conversion to mixed alcohol fuels using the MixAlco process. Appl Biochem Biotech 79(1–3):609–631 Humbird D, Aden A (2009) Biochemical production of ethanol from corn stover, 2008: state of technology model, NREL/TP-510-46214. National Renewable Energy Laboratory, Golden ICIS (2009) Ethylene. ICIS Chem Bus 276(15):40 Ito T, Nakashimada Y, Senba K, Matsui T, Nishio N (2005) Hydrogen and ethanol production from glycerol-containing wastes discharged after biodiesel manufacturing process. J Biosci Bioeng 100(3):260–265 Johnson DL (2006) The corn wet milling and corn dry milling industry - a base for biorefinery technology developments. In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products. Wiley-VCH, Weinheim. ISBN 3-527-31027-4 Kamm B, Kamm M, Gruber PR, Kromus S (2006) Biorefinery systems – an overview. In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products, vol 1. Wiley-VCH, Weinheim. ISBN 3-527-3102 Karinen RS, Krause AOI (2006) New biocomponents from glycerol. Appl Catal A 306:128–133 Klass DL (1998) Biomass for renewable energy, fuels and chemicals. Academic, California. ISBN 0124109500 Koutinas AA, Du C, Wang RH, Webb C (2008) Production of chemicals from biomass. In: Clark JH, Deswarte FEI (eds) Introduction to chemicals from biomass. Wiley, Chichester. ISBN 978-0-47005805-3 Liu D, Liu H, Sun Y, Lin R, Hao J (2010) Method for producing 1,3-propanediol using crude glycerol, a by-product from biodiesel production. Publication No. 2010/0028965 A1. http://www. freepatentsonline.com/20100028965.pdf. Accessed 8 May 2010 Moreira AR (1983) Acetone-butanol fermentation. In: Wise DL (ed) Organic chemicals from biomass. The Benjamin Cummind Publishing, Menlo Park. ISBN 0-8053-9040-5 Mu Y, Teng H, Zhang D, Wang W, Xiu Z (2006) Microbial production of 1,3-propanediol by Klebsiella pneumoniae using crude glycerol from biodiesel preparations. Biotechnol Lett 28(21):1755–1759 NETL (2011) Gasifipedia, supporting technologies, Methanation. http://www.netl.doe.gov/technologies/ coalpower/gasification/gasifipedia/5-support/5-12_methanation.html. Accessed 8 Mar 2011 Ondrey G (2007a) Coproduction of cellulose acetate promises to improve economics of ethanol production. Chem Eng 114(6):12 Ondrey G (2007b) Propylene glycol. Chem Eng 114(6):10 Ondrey G (2007c) A vapor-phase glycerin-to-PG process slated for its commercial debut. Chem Engr 114(8):12 Ondrey G (2007d) A sustainable route to succinic acid. Chem Eng 114(4):18 Osipovs S (2008) Sampling of benzene in tar matrices from biomass gasification using two different solidphase sorbents. Anal Bioanal Chem 391(4):1409–1417 Paster M, Pellegrino JL, Carole TM (2003) Industrial bioproducts: today and tomorrow. Department of Energy Report prepared by Energetics, Inc, Columbia. http://www.energetics.com/resourcecenter/ products/studies/Documents/bioproducts-pportunities.pdf Perlack RD, Wright LL, Turhollow AF, Graham RL (2005) Biomass as feedstock for a bioenergy and bioproducts industry: the technical feasibility of a billion-ton annual supply. USDA document prepared by Oak Ridge National Laboratory, ORNL/TM-2005/66, Oak Ridge Philip CB, Datta R (1997) Production of ethylene from hydrous ethanol on H-ZSM-5 under mild conditions. Ind Eng Chem Res 36(11):4466–4475


Phillips S, Aden A, Jechura J, Dayton D, Eggeman T (2007) Thermochemical ethanol via indirect gasification and mixed alcohol synthesis of lignocellulosic biomass, NREL/TP-510-41168. National Renewable Energy Laboratory, Golden Ritter S (2006) Biorefineries get ready to deliver the goods. Chem Eng News 84(34):47 Rossell CEV, Mantelatto PE, Agnelli JAM, Nascimento J (2006) Sugar-based biorefinery – technology for integrated production of Poly(3-hydroxybutyrate), sugar, and ethanol. In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products, vol 1. Wiley-VCH, Weinheim. ISBN 3-527-31027-4 Shima M, Takahashi T (2006) Method for producing acrylic acid. US Patent no 7,612,230 Short PL (2007) Small French firm’s bold dream. Chem Eng News 85(35):26–27 Smith RA (2005) Analysis of a petrochemical and chemical industrial zone for the improvement of sustainability, M. S. thesis. Lamar University, Beaumont Snell KD, Peoples OP (2009) PHA bioplastic: a value-added coproduct for biomass biorefineries. Biofuels Bioprod Biorefin 3(4):456–467 Snyder SW (2007) Overview of biobased feedstocks. Twelfth new industrial chemistry and engineering conference on biobased feedstocks, Council for Chemical Research, Argonne National Laboratory, Chicago, 11–13 June 2007 Spath PL, Dayton DC (2003) Preliminary screening – technical and economic feasibility of synthesis gas to fuels and chemicals with the emphasis on the potential for biomass-derived syngas, NREL/TP-51034929, National Renewable Energy Laboratory, Golden. http://www.nrel.gov/docs/fy04osti/34929. pdf. Accessed 8 May 2010 Takahara I, Saito M, Inaba M, Murata K (2005) Dehydration of ethanol into ethylene over solid acid catalysts. Catal Lett 105(3–4):249–252 Thanakoses P, Alla Mostafa NA, Holtzapple MT (2003a) Conversion of sugarcane bagasse to carboxylic acids using a mixed culture of mesophilic microorganisms. Appl Biochem Biotechnol 107(1–3):523–546 Thanakoses P, Black AS, Holtzapple MT (2003b) Fermentation of corn stover to carboxylic acids. Biotechnol Bioeng 83(2):191–200 Tolan JS (2006) Iogen’s demonstration process for producing ethanol from cellulosic biomass. In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products, vol 1. WileyVCH, Weinheim. ISBN 3-527-31027-4 Tsao U, Zasloff HB (1979) Production of ethylene from ethanol. US Patent no 4,134,926 Tullo AH (2007a) Soy rebounds. Chem Eng News 85(34):36–39 Tullo AH (2007b) Firms advance chemicals from renewable resources. Chem Eng News 85(19):14 Tullo AH (2008) Growing plastics. Chem Eng News 86(39):21–25 Vaca-Garcia C (2008) Biomaterials. In: Clark JH, Deswarte FEI (eds) Introduction to chemicals from biomass. Wiley, Chichester. ISBN 978-0-470-05805-3 Varisli D, Dogu T, Dogu G (2007) Ethylene and diethyl-ether production by dehydration reaction of ethanol over different heteropolyacid catalysts. Chem Eng Sci 62(18–20):5349–5352 Wells GM (1999) Handbook of petrochemicals and processes, 2nd edn. Ashgate, Brookfield Werpy T, Peterson G, Aden A, Bozell J, Holladay J, White J, Manheim A (2004) Top value added chemicals from biomass: vol 1 Results of screening for potential candidates from sugars and synthesis gas. Energy Efficiency and Renewable Energy (US DOE). http://www1.eere.energy.gov/biomass/pdfs/ 35523.pdf. Accessed 8 May 2010 Wilke T, Pruze U, Vorlop KD (2006) Biocatalytic and catalytic routes for the production of bulk and fine chemicals from renewable resources. 
In: Kamm B, Gruber PR, Kamm M (eds) Biorefineries – industrial processes and products, vol 1. Wiley-VCH, Weinheim. ISBN 3-527-31027-4


Wool RP, Sun XS (2005) Bio-based polymers and composites. Elsevier Academic, Amsterdam. ISBN 0-12-763952-7 Zelder O (2006) Fermentation – a versatile technology utilizing renewable resources. In: Raw material change: coal, oil, gas, biomass – where does the future lie? Ludwigshafen, 21–22 Nov 2006. http://www.basf.com/group/corporate/en/function/conversions:/publish/content/innovations/ events-presentations/raw-material-change/images/BASF_Expose_Dr_Zelder.pdf. Accessed 8 May 2010 Zhang ZY, Jin B, Kelly JM (2007) Production of lactic acid from renewable materials by Rhizopus fungi. Biochem Eng J 35(3):251–263


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_29-2 © Springer Science+Business Media New York 2015

Hydrogen Production

Qinhui Wang*
Institute for Thermal Power Engineering, Zhejiang University, Hangzhou, Zhejiang, China

Abstract

Hydrogen (H2) is currently used mainly in the chemical industry. In the near future, it is also expected to become a significant fuel, owing to the advantages of reduced greenhouse gas emissions, enhanced energy security, and increased energy efficiency. To meet future demand, producing sufficient H2 in an environmentally and economically benign manner is the major challenge. This chapter provides an overview of H2 production pathways from fossil hydrocarbons, renewable resources (mainly biomass), and water, and high-purity H2 production by novel CO2 sorption-enhanced gasification is highlighted. The current research activities, recent breakthroughs, and challenges of the various H2 production technologies are presented. Fossil hydrocarbons account for 96 % of total H2 production in the world. Steam methane reforming, oil reforming, and coal gasification are the most common methods, and all three technologies are commercially available. However, H2 produced from fossil fuels is nonrenewable and results in significant CO2 emissions, which will limit its utilization. H2 produced from biomass is renewable and CO2 neutral. Biomass thermochemical processes such as pyrolysis and gasification have been widely investigated and will probably become economically competitive with steam methane reforming. However, research on biological processes such as photolysis, dark fermentation, and photo-fermentation is still at laboratory scale, and practical applications remain to be demonstrated. H2 from water splitting is also attractive because water is widely available and very convenient to use. However, water splitting technologies, including electrolysis, thermolysis, and photoelectrolysis, are more expensive than large-scale fuel-processing technologies, and large improvements in system efficiency are necessary. CO2 sorption-enhanced gasification is the core unit of zero emission systems. It has been demonstrated, both thermodynamically and experimentally, to produce H2 with purity over 90 % from both fossil hydrocarbons and biomass. The major challenge is that the reactivity of CO2 sorbents decays over repeated calcination–carbonation cycles.

Keywords

Hydrogen production; Energy security; Energy efficiency; Fossil fuel; Renewable resources; CO2 sorption-enhanced gasification; Zero emission system; Steam methane reforming; Oil reforming; Coal gasification; Biomass; Pyrolysis; Biomass gasification; Supercritical water gasification; Photosynthesis; Dark fermentation; Photo-fermentation; Biological water–gas shift reaction; Water electrolysis; Alkaline electrolyzer; PEM electrolyzer; Solid oxide electrolysis cells; Water thermochemical splitting; Iodine–sulfur process; UT-3 process; Water photoelectrolysis; Decarbonizing energy; Carbonation; Calcination; Combustion; Zero Emission Carbon process; HyPr-RING process; Advanced gasification–combustion technology; Combined gasification and combustion; Zero Emission Gas Power Project; Absorption-enhanced reforming process; Cyclic calcination–carbonation; CaO reactivity; Tar; Sintering; Mild calcination; Hydration; Nano-sized sorbent; Advanced High-Temperature Reactor; Nuclear; CO2 capture and storage

*Email: [email protected]

Introduction

Fossil fuels (i.e., petroleum, natural gas, and coal), which meet most of the world’s energy demand today, are being depleted rapidly. It is also now widely acknowledged that the combustion of fossil fuels contributes to the buildup of CO2 in the atmosphere, which in turn contributes to the greenhouse effect and the well-known global warming. Many engineers and scientists agree that a solution to this problem would be to replace the existing fossil fuel system with a hydrogen energy system. The idea of a hydrogen economy with a decarbonized energy supply has merit. Additional drivers for the switch to a H2 energy economy include opportunities for increased energy security through greater diversity of supply resources, and greater efficiency and versatility with the mastery of hydrogen fuel cell technology. Hydrogen is the simplest element and the most plentiful gas in the universe. Hydrogen gas is also the lightest gas and thus rises in the atmosphere, so hydrogen gas (H2) is not found by itself on Earth; it is found only in compounds with other elements. Hydrogen combined with carbon forms compounds such as methane (CH4), coal, and petroleum. Hydrogen combined with oxygen forms water, and hydrogen is also found in growing things – biomass. The amount of energy released during hydrogen combustion is higher than that of any other fuel on a mass basis, with a lower heating value (LHV) 2.4, 2.8, and 4 times higher than that of methane, gasoline, and coal, respectively. The product of hydrogen combustion is only water, so hydrogen utilization emits no pollutants at the point of use. About 38 Mt (5,000 PJ) of hydrogen is produced worldwide annually, a market valued at about $60 billion (Levin and Chahine 2010). An idyllic vision of a “hydrogen economy” is one in which H2 and electricity are the sole energy carriers, and both are produced without harmful emissions from renewable resources. H2 would be used in transport, industrial, commercial, and residential applications where fossil fuels are currently used. As hydrogen is not an energy source but an energy carrier, it must be produced from other natural sources – not only fossil fuels but also biomass and water. Sufficient H2 production to meet future demand is the major challenge in moving toward a H2 energy economy.
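To make the mass-based comparison concrete, the following minimal sketch recomputes the LHV ratios using commonly cited approximate heating values – roughly 120 MJ/kg for hydrogen, 50 MJ/kg for methane, 44 MJ/kg for gasoline, and 30 MJ/kg for a typical coal. These numbers are illustrative assumptions, not values taken from this chapter.

```python
# Approximate lower heating values (MJ/kg); illustrative assumptions only.
LHV = {
    "hydrogen": 120.0,
    "methane": 50.0,
    "gasoline": 44.0,
    "coal": 30.0,   # typical bituminous coal; varies widely with rank
}

def lhv_ratio(fuel: str) -> float:
    """Return how many times more energy H2 releases per kg than `fuel`."""
    return LHV["hydrogen"] / LHV[fuel]

for fuel in ("methane", "gasoline", "coal"):
    print(f"H2 vs {fuel}: {lhv_ratio(fuel):.1f}x")
# Prints roughly 2.4x, 2.7x, and 4.0x, consistent with the ratios quoted in the text.
```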

Hydrogen Production from Fossil Fuel

At present, H2 is mainly used in the chemical industry, e.g., to upgrade crude oil and to synthesize methanol and ammonia in petroleum and chemical plants. Fossil fuels are the major source of hydrogen, accounting for 96 % of total hydrogen production in the world. The most common hydrogen production methods are (1) steam methane reforming (SMR) (48 %), (2) oil reforming (30 %), and (3) coal gasification (18 %) (Ewan and Allen 2005). Although ammonia and methanol are also used for H2 production, their share is minor. During the transition to a sustainable hydrogen economy, hydrogen from fossil fuels will continue to receive considerable attention, given the need for substantial cost reduction and technology improvement throughout the entire hydrogen system (production, delivery, storage, conversion, and application).
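Combining the roughly 38 Mt/yr world production mentioned above with the route shares quoted here gives a rough picture of the absolute volumes. The sketch below does this arithmetic; the 4 % remainder (largely non-fossil routes such as electrolysis) is inferred for illustration rather than stated in the text.

```python
# Approximate split of world hydrogen production by source, combining the
# ~38 Mt/yr total quoted earlier with the route shares given in the text.
TOTAL_MT_PER_YEAR = 38.0
shares = {
    "steam methane reforming": 0.48,
    "oil reforming": 0.30,
    "coal gasification": 0.18,
    "other (incl. electrolysis)": 0.04,  # remainder; an assumption for illustration
}

for route, share in shares.items():
    print(f"{route:28s}: {share * TOTAL_MT_PER_YEAR:5.1f} Mt/yr")
# The fossil routes together account for about 96 % (~36.5 Mt/yr) of production.
```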


Fig. 1 A diagram of a typical SMR process

Steam Methane Reforming (SMR)

The dominant industrial process used to produce hydrogen is the SMR process. About 59 % of hydrogen production comes from SMR of natural gas, but hydrogen production by SMR is responsible for the emission of about 30 million tonnes of CO2 per year (Levin and Chahine 2010). The first industrial application of SMR was implemented in 1930 (Barelli et al. 2008), and it is a mature technology that has been in use for several decades as an effective means of hydrogen production. The SMR process is characterized by multiple steps and harsh reaction conditions. Typically, four steps are necessary, namely, (1) desulfurization, (2) steam reforming, (3) water–gas shift (WGS), and (4) H2 purification. Figure 1 shows the diagram of a typical SMR process.

Desulfurization
Sulfur contained in raw natural gas leads to catalyst deactivation and equipment damage during H2 production; thus, desulfurization of the natural gas, to keep the sulfur content at a very low level, is a primary and necessary step. Typically, desulfurization proceeds in two steps. The first step is wet desulfurization, which is usually performed by the natural gas provider. In this step, the natural gas reacts with monoethanolamine (MEA) to remove most of the sulfur content. The reaction can be expressed by two equations:

2CH3CH2OHNH2 + H2S ⇄ (CH3CH2OHNH3)2S
(CH3CH2OHNH3)2S + H2S ⇄ 2(CH3CH2OHNH3)HS

The MEA solvent can be recovered by heating it to a higher temperature (>105 °C). After wet desulfurization, the sulfur concentration in the raw natural gas is lowered to approximately 200 ppm. The second step, dry desulfurization, is conducted just prior to the SMR reactor. The aim of dry desulfurization is to remove organic sulfur and reach a very low sulfur concentration (2–3 kWe units. The most common catalyst for WGS is Cu based, although some interesting work is currently being done with molybdenum carbide, platinum-based catalysts, and Fe-Pd alloy catalysts. To further reduce the carbon monoxide, a preferential oxidation (PrOx) reactor or a carbon monoxide selective methanation reactor can be used. The PrOx and methanation reactors each have their advantages and challenges. The preferential oxidation reactor increases the system complexity because carefully measured concentrations of air must be added to the system; however, these reactors are compact, and if excess air is introduced, some hydrogen is burned. Methanation reactors are simpler in that no air is required; however, for every molecule of carbon monoxide reacted, three hydrogen molecules are consumed. In addition, the carbon dioxide also reacts with hydrogen, so the reactor conditions need to be carefully controlled in order to minimize unnecessary consumption of hydrogen. Currently, preferential oxidation is the primary technique being developed. The catalysts for both of these systems are typically noble metals such as platinum, ruthenium, or rhodium supported on Al2O3.

H2 Purification
The effluent gas from the WGS reactors still contains considerable amounts of CO2, CO, and CH4. In order to obtain H2 with a purity higher than 99 %, pressure swing adsorption (PSA) processes are installed downstream of the WGS reactors. Production of pure hydrogen using PSA has become the state-of-the-art technology in the chemical and petrochemical industries, and several hundred PSA-H2 process units have been installed around the world.
In the PSA units, impurity gases with high boiling points are adsorbed on the adsorbent (zeolites or activated carbon) bed at high pressure, whereas H2 passes through the adsorbent bed because it has the lowest boiling point. The adsorbents are then regenerated by lowering the unit pressure to release the adsorbed impurities. In this way, pure H2 is separated from the impurities and fed into the plant’s H2 grid. The released impurities (tail gases) are recycled to the steam reformer burners to provide the necessary heat for the endothermic reforming reactions. Current research goals consist of developing new H2-PSA processes for (a) increasing the primary and secondary product recoveries while maintaining their high purities and (b) reducing the adsorbent inventory and the associated hardware costs. Considerable effort has also been made to develop new adsorbents, or to modify existing ones, in order to achieve these research goals. It has become common practice to use more than one type of adsorbent in these PSA processes (as layers in the same adsorbent vessel or as single adsorbents in different vessels) in order to obtain optimum adsorption capacity and selectivity for the feed gas impurities while reducing the coadsorption of H2, as well as to allow their efficient desorption under the operating conditions of the PSA processes.
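As a rough illustration of the overall mass balance behind SMR followed by WGS (CH4 + 2H2O → CO2 + 4H2), the sketch below estimates the ideal hydrogen yield per kilogram of methane. It is a stoichiometric upper bound under assumed complete conversion and ignores the fuel burned to supply the reforming heat.

```python
# Ideal H2 yield from steam methane reforming followed by water-gas shift:
#   CH4 + H2O -> CO + 3 H2   (reforming)
#   CO  + H2O -> CO2 + H2    (shift)
# Overall: CH4 + 2 H2O -> CO2 + 4 H2
M_CH4, M_H2, M_CO2 = 16.043, 2.016, 44.01  # molar masses, g/mol

kg_CH4 = 1.0
mol_CH4 = kg_CH4 * 1000 / M_CH4
mol_H2 = 4 * mol_CH4          # 4 mol H2 per mol CH4 at full conversion
mol_CO2 = 1 * mol_CH4

print(f"H2 yield : {mol_H2 * M_H2 / 1000:.2f} kg H2 per kg CH4")    # ~0.50 kg
print(f"CO2 made : {mol_CO2 * M_CO2 / 1000:.2f} kg CO2 per kg CH4") # ~2.74 kg
# Real plants yield less H2 and emit more CO2 because part of the natural gas
# is burned to drive the endothermic reforming reaction.
```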


Oil Reforming

Oil reforming is another significant commercial H2 production technology. Compared with heavy oil, such as bitumen or residual oil, which is more prone to coke formation and the resulting catalyst deactivation, light oil with a relatively low molecular weight is much more favorable for producing H2. Generally, four reforming techniques, namely, steam reforming, partial oxidation (POX), autothermal reforming (ATR), and pyrolysis, are used to produce hydrogen from oil (Holladay et al. 2009). These techniques can also use methane as the raw material, and all proceed through the same four steps mentioned for steam methane reforming (section “Steam Methane Reforming (SMR)”): desulfurization, reforming, water–gas shift (WGS), and purification (not necessary for pyrolysis). This section mainly focuses on the distinctions among the technologies.

Steam Reforming
Steam reforming is typically the preferred process for hydrogen production in industry, using either natural gas or oil. The mechanism can be expressed by the following equation:

CmHn + m H2O → m CO + (m + n/2) H2

Oil steam reforming is an endothermic reaction and requires an external heat source. It has the advantages of not requiring oxygen, having a lower operating temperature than POX and ATR, and producing syngas with a high H2/CO ratio (about 3:1), which is beneficial for H2 production. However, it has the highest emissions of the three processes. The catalysts used for oil steam reforming are similar to those in the SMR process. Developing improved and economically available catalysts with high resistance to coke formation is the main research goal.

Partial Oxidation (POX)
Partial oxidation (POX) of hydrocarbons and catalytic partial oxidation (CPOX) of oil have been proposed for hydrogen production for automobile fuel cells and some commercial applications. The process converts oil to hydrogen by partially oxidizing (combusting) the material with oxygen, as shown in the equation:

CmHn + (m/2) O2 → m CO + (n/2) H2

Partial oxidation has the advantages of minimal methane slip, higher sulfur tolerance, and a H2/CO ratio (1:1 to 2:1) favorable for feeds to hydrocarbon synthesis reactors such as Fischer–Tropsch. However, in order to reduce coke formation, the non-catalytic partial oxidation process must operate at high temperatures (1300–1500 °C). Although catalysts can be added to the partial oxidation system to lower the operating temperature, it has proven hard to control the temperature because of coke and hot spot formation caused by the exothermic nature of the reactions. Krummenacher et al. (2003) have had success using catalytic partial oxidation for decane, hexadecane, and diesel fuel, but the high operating temperatures (>800 °C and often >1000 °C) (Krummenacher et al. 2003) and safety concerns may make its use in practical, compact, portable devices difficult due to thermal management (Holladay et al. 2004). In addition, this process requires an expensive and complex oxygen separation unit in order to feed pure oxygen to the reactor.


Autothermal Reforming (ATR)
Autothermal reforming adds steam to catalytic partial oxidation (CPOX). The reaction mechanism can be expressed as:

CmHn + (m/2) H2O + (m/4) O2 → m CO + (m/2 + n/2) H2

Autothermal reforming is typically conducted at a lower pressure than POX reforming and has a low methane slip. It consists of a thermal zone, where POX or CPOX generates the heat needed to drive the downstream steam reforming reactions in a catalytic zone. The heat from the POX eliminates the need for an external heat source, simplifying the system and decreasing the start-up time. A significant advantage of this process over steam reforming is that it can be stopped and started very rapidly while producing a larger amount of hydrogen than POX alone. There is some expectation that this process will gain favor in the gas-to-liquids industry due to its favorable gas composition for Fischer–Tropsch synthesis, its relative compactness, lower capital cost, and potential for economies of scale (Wilhelm et al. 2001). However, for ATR to operate properly, both the oxygen-to-fuel ratio and the steam-to-carbon ratio must be properly controlled at all times in order to control the reaction temperature and product gas composition while preventing coke formation. Similar to POX, this process also needs an expensive oxygen separation unit.

Pyrolysis
Pyrolysis is another H2 production technology, in which the raw oil is decomposed (without water or oxygen present) into hydrogen and carbon. The reaction can be written in the following form:

CmHn → m C + (n/2) H2

Since no water or air is present, no carbon oxides (e.g., CO or CO2) are formed, eliminating the need for secondary reactors (WGS, PrOx, PSA, etc.). Thus, this process offers significant emission reductions. Among the advantages of this process are fuel flexibility, relative simplicity and compactness, a clean carbon by-product, and reduced CO2 and CO emissions. One of the challenges with this approach is the potential for fouling by the carbon formed, but proponents claim this can be minimized by appropriate design (Guo et al. 2005). Pyrolysis may play a significant role in the future. In Norway, the Kvaerner Oil and Gas Company has developed an attractive technique to simultaneously produce carbon and H2 by plasma pyrolysis of oil. This technique is reported to require 1.1 kWh per m3 of H2, and commercial operation is now feasible.
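The general formulas above make it easy to compare the ideal syngas compositions of the oxygen/steam routes. The sketch below evaluates the stoichiometric H2/CO ratio of steam reforming, POX, and ATR for an assumed light-oil surrogate (taken here as n-octane, C8H18, purely for illustration).

```python
# Stoichiometric H2/CO ratios for a generic hydrocarbon CmHn:
#   steam reforming  : CmHn + m H2O                 -> m CO + (m + n/2) H2
#   partial oxidation: CmHn + (m/2) O2              -> m CO + (n/2) H2
#   autothermal      : CmHn + (m/2) H2O + (m/4) O2  -> m CO + (m/2 + n/2) H2
def h2_co_ratios(m: int, n: int) -> dict:
    return {
        "steam reforming": (m + n / 2) / m,
        "POX": (n / 2) / m,
        "ATR": (m / 2 + n / 2) / m,
    }

# n-octane (C8H18) as a light-oil surrogate -- an illustrative assumption.
for route, ratio in h2_co_ratios(8, 18).items():
    print(f"{route:16s} H2/CO = {ratio:.2f}")
# Prints roughly 2.1 (SR), 1.1 (POX), and 1.6 (ATR), consistent with the text's
# statement that steam reforming gives the highest and POX the lowest H2/CO ratio.
```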

Coal Gasification

Coal is an abundant energy source in many parts of the world. H2 production by coal gasification is considered a promising option until economical H2 production pathways from renewable energy sources are developed. Coal gasification can be defined as the reaction of solid fuels with air, oxygen, steam, carbon dioxide, or a mixture of these gases at temperatures exceeding 700 °C to yield a gaseous product suitable for use either as a source of energy or as a raw material for the synthesis of chemicals, liquid fuels, or other gaseous fuels. Figure 2 shows the diagram of a typical gasification process. Coal gasification is currently used to produce H2 as an intermediate for the synthesis of chemicals. However, large-scale H2 production projects aimed mainly at power generation are also under development. A well-known example is the FutureGen project sponsored by the Department of Energy (DOE) in the USA, a 10-year, US$ 1 billion demonstration project started in February 2003 (Collot 2006).


Fig. 2 A diagram of a typical gasification process

This section describes not only the three main conventional coal gasification technologies – moving bed gasification, fluidized bed gasification, and entrained flow gasification – but also an alternative method, underground coal gasification (UCG).

Moving Bed Gasification
Moving bed gasification is only suitable for solid fuels with a particle size in the range of 5–80 mm. Typically, a mixture of steam and oxygen is introduced at the bottom of the reactor and runs counterflow to the coal. Coal residence times in moving bed gasifiers are on the order of 15–60 min for high-pressure steam/oxygen gasifiers and can be several hours for atmospheric steam/air gasifiers. The pressure in the bed is typically on the order of 3 MPa for commercial gasifiers, with tests carried out at up to 10 MPa. Maximum temperatures in the combustion zone are typically in the range of 1500–1800 °C for slagging gasifiers and 1300 °C for dry ash gasifiers. Although moving bed gasifiers are presently less used than entrained flow gasifiers for the construction of new power plants, moving bed gasification has the advantage of being a mature technology. The main requirement of moving bed gasifiers is good bed permeability to avoid pressure drops and channel burning, which can lead to unstable gas outlet temperatures and compositions as well as a risk of downstream explosion. A typical advanced moving bed technique is the British Gas/Lurgi (BGL) technology (Bailey 2001). This technology is reported to be adopted in the Kentucky Pioneer Energy project, an Integrated Coal Gasification Combined Cycle (IGCC) project cosponsored by Global Energy Inc. and the US DOE. Table 1 shows the process characteristics of BGL technology.

Fluidized Bed Gasification
Fluidized bed gasification can only operate with solid crushed coals in the range of 0.5–5 mm. Coals are introduced into an upward flow of gas (either air or oxygen/steam) that fluidizes the bed of fuel while the reaction is taking place. The bed is formed of sand/coke/char/sorbent or ash. The residence time of the feed in the gasifier is typically on the order of 10–100 s but can also be much longer, with the feed experiencing a high heating rate upon entry into the gasifier. High levels of back-mixing ensure a uniform temperature distribution in the gasifier. Fluidized bed gasifiers usually operate at temperatures well below the ash fusion temperatures of the fuels (900–1050 °C) to avoid ash melting, thereby avoiding clinker formation and loss of fluidity of the bed. The low operating temperatures may lead to incomplete carbon conversion of the coal, but this can be overcome by char recirculation into the gasifier. Advanced fluidized bed gasifiers are also operated at elevated pressures. Among the main advantages of this type of gasifier are that it can operate at variable loads and is more tolerant of coals with high sulfur content.


Table 1 Process characteristics of BGL technology (Collot 2006; Fc and Yf 2006)

Feeding mode and operating conditions: Lumped coal together with a flux is discharged at the top of the gasifier as a sequence of batches. A distributor plate slowly rotates to ensure even distribution of the coal
Gasifier: Double-walled cylindrical reactor surrounded by a steam jacket. O2 and steam are added toward the bottom of the bed through tuyeres, resulting in a high internal temperature within the gasifier (2000 °C)
Ash removal system: Slagging gasifier. Molten ash is tapped off and quenched with water
Cooling and cleaning modes: Tars, high-boiling-point hydrocarbons, and particulates are removed in a quench vessel and reinjected into the bed near the tuyeres. The gas (450–500 °C) is cooled and cleaned by a water quench and scrubbed to remove H2S
Remarks: A slagging gasifier modified from the Lurgi dry ash gasifier; not suitable for highly reactive coals. O2 consumption is higher than in the Lurgi dry ash gasifier. It is still difficult to develop very large commercial units meeting the demand for large-scale industrial gasifiers

Table 2 Process characteristics of HTW and KRW technologies (Collot 2006; Fc and Yf 2006)

HTW
Feeding mode and operating conditions: Coal is dropped from a bin via a gravity pipe into the gasifier. Operating pressure is 1–3 MPa
Gasifier: Bed is formed of particles of ash, semicoke, and coal and is maintained at 800 °C
Ash removal system: Dry ash removal through a discharge screw
Cooling and cleaning modes: A cyclone is used to remove particulates; water or fire tube cooling system
Remarks: Planned to replace the old Lurgi dry ash reactors at the Vresova IGCC plant in the Czech Republic. It is promising due to the elevated operating temperature and pressure compared to the conventional Winkler gasifier

KRW
Feeding mode and operating conditions: Lock hoppers; operating pressure is up to 2 MPa
Gasifier: Partial combustion of coal around the feed nozzle forms a 1150–1260 °C high-temperature zone
Ash removal system: Ash agglomerates into large particles, which are then separated from the remaining coal char
Cooling and cleaning modes: Raw gas is cooled from 900 °C to 600 °C and enters a hot gas cleaning system. A portion of the gas is recycled to the gasifier
Remarks: Used in the Pinon Pine IGCC plant. The carbon content in the ash can be greatly reduced

However, fluidized bed gasification requires coals with an ash fusion temperature higher than the operating temperature of the gasifier in order to avoid ash agglomeration (which causes uneven fluidization in dry ash fluidized bed gasifiers). Two types of fluidized bed gasification technologies have been operated at commercial scale: the High-Temperature Winkler (HTW) and Kellogg Rust Westinghouse (KRW) gasification technologies, both of which can be used in IGCC plants. Table 2 gives the process characteristics of the HTW and KRW technologies.

Entrained Flow Gasification
In entrained flow gasifiers, coal particles react concurrently at high speed with steam and oxygen or air in a suspension mode called entrained flow. Short gas residence times (seconds) give them a high load capacity but also require the coal to be pulverized. Coal can be fed into the gasifier either dry (commonly using nitrogen as a transport gas) or wet (carried in a water slurry). These gasifiers usually operate at high temperatures of 1200–1600 °C and pressures in the range of 2–8 MPa. Although entrained flow gasifiers are the most


Table 3 Process characteristics of Texaco and Shell technologies (Collot 2006; Fc and Yf 2006)

Texaco
Feeding mode and operating conditions: Slurry fed through burners at the top of the gasifier. Operates at temperatures in the range of 1250–1450 °C and pressures of 3–8 MPa
Gasifier: Pressure vessel with refractory lining
Ash removal system: The molten slag flows out toward the bottom of the gasifier with the raw gas and is water quenched and removed through a lock hopper
Cooling and cleaning modes: Raw gas can be cooled and cleaned of slag by water quenching or a radiant cooler
Remarks: There are six Texaco-owned gasification facilities worldwide that produce power, chemicals, and H2 from coal. The technology has wide applicability to various coal types

Shell
Feeding mode and operating conditions: Coal powders are transported by N2 gas; operation at 2–4 MPa and at 1500 °C and above
Gasifier: A carbon steel vessel enclosed by a non-refractory membrane wall
Ash removal system: Molten slag is removed through a slag tap and water quenched
Cooling and cleaning modes: Syngas is quenched with cooled recycled product gas and further cooled in a syngas cooler. Raw gas is cleaned in ceramic filters. Fifty percent of the gas is recycled to act as a quenching medium
Remarks: There were five gasification plants using the Shell gasification technology as of 2006. Only the Nuon Power Buggenum IGCC plant in the Netherlands is fed with coal. More plants are planned to be built in China and the USA

widely used gasifiers, they have more critical operational requirements than moving bed and fluidized bed gasifiers, such as significant cooling of the raw syngas before it is cleaned; control of the coal/oxidant ratio within narrow limits throughout the entire operation in order to maintain a stable flame close to the injector tip; and strict requirements on coal properties, including a minimum ash content for gasifiers with slag self-coating walls, a maximum ash content fixed for each type of entrained flow gasifier, limits on ash composition (SiO2, CaO, iron oxides) to avoid refractory cracking, and an optimum ash fusion temperature and critical viscosity temperature recommended for smooth slag tapping. Entrained flow gasification is the most widely used technology. Table 3 shows the process characteristics of the Texaco and Shell technologies, representing wet feed and dry feed entrained flow gasification, respectively.

Underground Coal Gasification
Underground coal gasification does not require the construction of surface plants. In this process, injection and production wells are drilled from the surface and linked together in a coal seam. Once the wells are linked, air or oxygen is injected, and the coal is ignited in a controlled manner. Water present in the coal seam or in the surrounding rocks flows into the cavity formed by the combustion and is utilized in the gasification process. The produced gases (primarily H2, CO, CH4, and CO2) can be used to generate electric power or to synthesize chemicals after being cleaned. The former Soviet Union (FSU) performed intensive research on UCG from the 1930s to the 1960s, and over 15 Mt of coal was gasified underground in the FSU, generating 50 Gm3 of gas. Following the discovery of extensive natural gas reserves in Siberia in the 1970s, the FSU scaled back its use of UCG. As a result of increasing energy needs in recent years, interest in UCG has been rejuvenated all over the world (Shafirovich and Varma 2009). China is generally believed to have the largest UCG program currently underway: a pilot industrial UCG plant at the Gonggou coal mine, Wulanchabu, Northern Inner Mongolia Autonomous Region, is under construction. This $112 million project is a joint venture between the China University of Mining and Technology and the Hebei Xin’ao Group. The UCG process has several advantages over surface coal gasification, such as


lower capital investment costs (due to the absence of a manufactured gasifier), no handling of coal and solid wastes at the surface (ash remains in the underground cavity), no human labor or capital for underground coal mining, minimal surface disruption, no coal transportation costs, and direct use of the water and feedstock available in situ. In addition, cavities formed as a result of UCG could potentially be used for CO2 sequestration. However, designing a UCG process is quite complex, as many criteria must be strictly considered, such as the coal seam conditions (thickness, depth, dip, coal amount, and rank), groundwater protection, and land-use restrictions.
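To put the gasification chemistry in rough quantitative terms, the sketch below combines the steam gasification of carbon with the water–gas shift reaction to estimate a stoichiometric upper bound on H2 yield per kilogram of carbon in the coal. It treats coal as pure carbon and assumes complete conversion, which real gasifiers do not achieve.

```python
# Idealized H2 yield from coal gasification, treating coal as pure carbon:
#   C  + H2O -> CO  + H2   (steam gasification)
#   CO + H2O -> CO2 + H2   (water-gas shift)
# Overall: C + 2 H2O -> CO2 + 2 H2
M_C, M_H2 = 12.011, 2.016  # molar masses, g/mol

def ideal_h2_per_kg_carbon() -> float:
    mol_c = 1000.0 / M_C              # mol of carbon in 1 kg
    return 2 * mol_c * M_H2 / 1000.0  # kg H2, 2 mol H2 per mol C

print(f"Stoichiometric maximum: {ideal_h2_per_kg_carbon():.2f} kg H2 per kg carbon")
# ~0.34 kg H2/kg C. Actual yields are lower because part of the carbon is burned
# to supply heat, conversion is incomplete, and coal is not pure carbon.
```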

Hydrogen Production from Biomass Biomass comprises all the living matter present on earth. It is derived from growing plants including algae, trees, and crops or from animal manure. The biomass resources are the organic matters in which the solar energy is stored in chemical bonds. It generally consists of carbon, hydrogen, oxygen, and nitrogen. Sulfur is also present in minor proportions. Some biomass also consists of significant amounts of inorganic species. Biomass is the fourth largest source of energy in the world, accounting for about 12 % of the world’s primary energy consumption in and about 22 % of the primary energy consumption in the developing countries in 2006 (Loo and Koppejan 2008). Since biomass is renewable and consumes atmospheric CO2 during growth, it can have a small net CO2 impact compared to fossil fuels. Biomass can be converted into useful forms of energy products using a number of different processes. Generally, there are two routes for biomass conversion into hydrogen-rich gas, namely, (i) thermochemical conversion and (ii) biochemical/biological conversion. The yield of hydrogen is low from biomass since the hydrogen content in biomass is low to begin with (approximately 6 % vs. 25 % for methane) and the energy content is low due to the 40 % oxygen content of biomass. Thus, hydrogen from biomass has major challenges. There are no completed technology demonstrations (Kalinci et al. 2009). However, biomass still has the potential to accelerate the realization of hydrogen as a major fuel of the future.
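The consequence of the low hydrogen and high oxygen content can be illustrated with a stoichiometric upper bound. The sketch below approximates dry, ash-free biomass by the generic formula CH1.4O0.6 (a typical value for woody biomass, used here purely as an assumption) and computes the ideal H2 yield from steam gasification plus shift.

```python
# Stoichiometric upper bound on H2 from steam gasification of biomass,
# approximating dry, ash-free biomass as CH1.4O0.6 (an illustrative assumption).
#   CHxOy + (2 - y) H2O -> CO2 + (2 - y + x/2) H2
x, y = 1.4, 0.6
M_C, M_H, M_O, M_H2 = 12.011, 1.008, 15.999, 2.016

m_biomass = M_C + x * M_H + y * M_O   # g per mol of carbon
mol_h2 = 2 - y + x / 2                # mol H2 per mol carbon
kg_h2_per_kg = mol_h2 * M_H2 / m_biomass

print(f"Biomass molar mass (per C): {m_biomass:.1f} g/mol")
print(f"Ideal yield: {kg_h2_per_kg:.2f} kg H2 per kg dry biomass")  # ~0.18 kg
# Even this ideal figure is modest, reflecting the low hydrogen and high oxygen
# content of biomass noted in the text; real processes recover much less.
```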

Thermochemical Conversions

Thermochemical conversion involves a series of chemical reactions that release hydrogen. There are three main methods for biomass-based hydrogen production via thermochemical conversion: (i) pyrolysis, (ii) conventional gasification, and (iii) supercritical water gasification (SCWG).

Pyrolysis
Pyrolysis is the heating of biomass at a temperature of 650–800 K and 0.1–0.5 MPa in the absence of air to convert the biomass into liquid oils, solid charcoal, and gaseous compounds. Pyrolysis can be further classified into slow pyrolysis and fast pyrolysis. As its products are mainly charcoal, slow pyrolysis is normally not considered for hydrogen production. Fast pyrolysis is a high-temperature process in which the biomass feedstock is heated rapidly in the absence of air to form a vapor, which is subsequently condensed to a dark brown mobile bio-liquid. The products of fast pyrolysis are found in all three phases – gas, liquid, and solid: (i) Gaseous products include H2, CH4, CO, CO2, and other gases, depending on the organic nature of the biomass. (ii) Liquid products include tar and oils that remain liquid at room temperature, such as acetone and acetic acid. (iii) Solid products are mainly composed of char, almost pure carbon, plus other inert materials.


Although most pyrolysis processes are designed for biofuel production, hydrogen can be produced directly through fast or flash pyrolysis if a high temperature and sufficient volatile-phase residence time are provided, as follows:

Biomass + heat → H2 + CO + CH4 + other products

Methane and the other hydrocarbon vapors produced can then be steam reformed for additional hydrogen production:

CH4 + H2O → CO + 3H2

In order to increase the hydrogen production further, the water–gas shift reaction can be applied:

CO + H2O → CO2 + H2

Besides the gaseous products, the oily products can also be processed for hydrogen production. The pyrolysis oil can be separated into two fractions based on water solubility: the water-soluble fraction can be used for hydrogen production, while the water-insoluble fraction can be used for adhesive formulation. Experimental studies have shown that when a Ni-based catalyst is used, the maximum yield of hydrogen can reach 90 %. With additional steam reforming and water–gas shift reaction, the hydrogen yield can be increased significantly. Temperature, heating rate, residence time, and the type of catalyst used are important pyrolysis process control parameters. To favor gaseous products, and especially hydrogen production, a high temperature, a high heating rate, and a long volatile-phase residence time are required. These parameters can be regulated by the choice of reactor type and heat transfer mode, such as gas–solid convective heat transfer and solid–solid conductive heat transfer. The fluidized bed reactor exhibits higher heating rates and thus appears to be a promising reactor type for hydrogen production from biomass pyrolysis. Some inorganic salts, such as chlorides, carbonates, and chromates, exhibit a beneficial effect on the pyrolysis reaction rate. As tar is difficult to gasify, extensive studies on catalytic tar elimination have been carried out to convert more of the tar into product gases (Han and Kim 2008; Shen and Yoshikawa 2013). The effect of inexpensive dolomite and CaO on the decomposition of hydrocarbon compounds in tar has been studied (Simell et al. 1997). The catalytic effects of other catalysts (Ni-based catalysts, Y-type zeolite, K2CO3, Na2CO3, and CaCO3) and various metal oxides (Al2O3, SiO2, ZrO2, TiO2, and Cr2O3) have also been investigated. Among the different metal oxides, Al2O3 and Cr2O3 exhibit better catalytic effects than the others; among the carbonates, Na2CO3 is better than K2CO3 and CaCO3. Although the noble metals Ru and Rh are more effective than Ni catalysts and less susceptible to carbon formation, they are not commonly used due to their high costs (Garcia et al. 2000). In order to evaluate hydrogen production through pyrolysis of various types of biomass, extensive experimental investigations have been conducted in recent years. Agricultural residues; peanut shells; postconsumer wastes such as plastics, trap grease, mixed biomass, and synthetic polymers; and rapeseed have been widely tested for hydrogen production by pyrolysis. In order to solve the problem of decreasing reforming performance caused by char and coke deposition on the catalyst surface and in the bed itself, fluidized catalyst beds are usually used to improve hydrogen production from biomass-pyrolysis-derived biofuel. Yeboah et al. (2002) constructed a demonstration plant for hydrogen production from peanut shell pyrolysis and steam reforming in a fluidized bed reactor, and a production rate of 250 kg H2/day was achieved.
Padro and Putsche (1999) estimated the hydrogen production cost of biomass pyrolysis to be in the range of US$ 8.86/GJ to US$ 15.52/GJ, depending on the facility size and biomass type. For comparison, the costs of hydrogen production by wind-electrolysis systems and PV-electrolysis systems are US$ 20.2/GJ and US$ 41.8/GJ, respectively. It can be seen that biomass pyrolysis is a competitive method for renewable hydrogen production.
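Costs per gigajoule can be translated into the more familiar cost per kilogram of hydrogen using the LHV of hydrogen (about 120 MJ/kg, an assumed round value). The short sketch below applies this conversion to the cost figures quoted above.

```python
# Convert hydrogen production costs from US$/GJ to US$/kg H2,
# assuming an LHV of ~120 MJ/kg (0.120 GJ/kg) for hydrogen.
LHV_H2_GJ_PER_KG = 0.120

costs_usd_per_gj = {
    "biomass pyrolysis (low)": 8.86,
    "biomass pyrolysis (high)": 15.52,
    "wind electrolysis": 20.2,
    "PV electrolysis": 41.8,
}

for route, cost_gj in costs_usd_per_gj.items():
    print(f"{route:25s}: {cost_gj * LHV_H2_GJ_PER_KG:.2f} US$/kg H2")
# Roughly 1.1-1.9 US$/kg for pyrolysis versus about 2.4 and 5.0 US$/kg for
# wind- and PV-based electrolysis, on an energy-content basis.
```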


Demirbas (2006) carried out pyrolysis and gasification experiments in a self-designed device; the highest yields (%, dry and ash-free basis) were obtained from the pyrolysis (46 %) and steam gasification (55 %) of wheat straw, while the lowest yields were obtained from olive waste. Yang et al. (2006) studied the pyrolysis of palm oil wastes in a countercurrent fixed bed. The total gas yield was greatly enhanced as the temperature increased from 500 °C to 900 °C and reached a maximum value (70 wt.%, on the raw biomass sample basis) at 900 °C, with large fractions of H2 (33.49 vol.%) and CO (41.33 vol.%). An optimum residence time (9 s) was found for obtaining a higher H2 yield (10.40 g/kg (daf)). The effect of adding chemicals (Ni, γ-Al2O3, Fe2O3, La/Al2O3, etc.) on the gas product yield was investigated, and adding Ni showed the greatest catalytic effect, with a maximum H2 yield of 29.78 g/kg (daf).

Gasification
Gasification is the conversion of biomass into a combustible gas mixture via partial oxidation at high temperatures, typically 800–900 °C. It is applicable to biomass having a moisture content of less than 35 %. In an ideal gasification, biomass is converted completely to CO and H2, although in practice some CO2, water, and other hydrocarbons, including methane, are also formed. The char formed by the fast pyrolysis of biomass can be gasified with gasifying agents; air, oxygen, and steam are widely used. The reaction conditions and typical heating values are as follows: (i) Oxygen gasification: It yields a better quality gas with a heating value of 10–15 MJ/Nm3. In this process, temperatures between 1000 °C and 1400 °C are achieved. The O2 supply may bring simultaneous problems of cost and safety. (ii) Air gasification: This is the most widely used technology, as it is cheap, a single product is formed at high efficiency, and no oxygen is required. A low-heating-value gas is produced, containing up to 60 % N2 and having a typical heating value of 4–6 MJ/Nm3, with by-products such as water, CO2, hydrocarbons, tar, and nitrogen gas. Reactor temperatures between 900 °C and 1100 °C are achieved. (iii) Steam gasification: Biomass steam gasification converts carbonaceous material to permanent gases (H2, CO, CO2, CH4, and light hydrocarbons), char, and tar. This method has some problems, such as corrosion, poisoning of catalysts, and the need to minimize tar components. Hydrogen can be produced from the gaseous gasification products through the same steam reforming and water–gas shift procedure discussed in the pyrolysis section. As the products of gasification are mainly gases, this process is more favorable for hydrogen production than pyrolysis. In order to optimize the process for hydrogen production, a number of efforts have been made to test hydrogen production from biomass gasification with various biomass types and at various operating conditions. Using a fluidized bed gasifier along with suitable catalysts, it is possible to achieve a hydrogen concentration of about 60 vol.%. Such high conversion efficiency makes biomass gasification an attractive hydrogen production alternative. In addition, the costs of hydrogen production by biomass gasification are competitive with natural gas reforming. Taking the environmental benefits into account as well, hydrogen production from biomass gasification should be a promising option based on both economic and environmental considerations.
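The heating-value ranges quoted for air and oxygen gasification can be sanity-checked from a gas composition. The sketch below computes the LHV of a producer gas from an assumed composition; the volume fractions and per-component heating values are illustrative assumptions, not measurements from this chapter.

```python
# Approximate LHV of a producer gas from its volumetric composition.
# Per-component LHVs in MJ/Nm3 (approximate): H2 ~10.8, CO ~12.6, CH4 ~35.8.
LHV_MJ_PER_NM3 = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "CO2": 0.0, "N2": 0.0}

def gas_lhv(vol_fractions: dict) -> float:
    """LHV of the mixture in MJ/Nm3, from volume (mole) fractions."""
    return sum(LHV_MJ_PER_NM3[g] * x for g, x in vol_fractions.items())

# Illustrative air-blown producer gas: heavily diluted with N2.
air_blown = {"H2": 0.15, "CO": 0.20, "CH4": 0.02, "CO2": 0.12, "N2": 0.51}
# Illustrative oxygen/steam-blown gas: little or no N2 dilution.
oxygen_blown = {"H2": 0.35, "CO": 0.40, "CH4": 0.05, "CO2": 0.20, "N2": 0.0}

print(f"Air-blown gas   : {gas_lhv(air_blown):.1f} MJ/Nm3")     # falls in the ~4-6 range
print(f"Oxygen-blown gas: {gas_lhv(oxygen_blown):.1f} MJ/Nm3")  # falls in the ~10-15 range
```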
One of the major issues in biomass gasification is dealing with the tar formed during the process. Unwanted tar may lead to tar aerosols and polymerization into more complex structures, which are unfavorable for hydrogen production through steam reforming. Currently, three approaches are available to minimize tar formation: (i) proper design of the gasifier, (ii) proper control and operation, and (iii) suitable additives/catalysts. Operating parameters, such as temperature, gasifying
agent, and residence time, play an important role in the formation and decomposition of tar. It has been reported that tar can be thermally cracked at temperatures above 1,273 K (Milne et al. 1998). The use of some additives (dolomite, olivine, and char) inside the gasifier also helps tar reduction; when dolomite is used, 100 % elimination of tar can be achieved (Sutton et al. 2001). Catalysts not only reduce the tar content but also improve the gas product quality and conversion efficiency. Dolomite, Ni-based catalysts, and alkaline metal oxides are widely used as gasification catalysts, and research on iron-based catalysts and novel carbon-supported catalysts has been reported recently (Xu et al. 2010). Process modifications such as two-stage gasification and secondary air injection in the gasifier are also useful for tar reduction. Another problem of biomass gasification is the formation of ash, which may cause deposition, sintering, slagging, fouling, and agglomeration. To resolve these ash-associated problems, fractionation and leaching have been employed to reduce ash formation inside the reactor. Although fractionation is effective for ash removal, it may deteriorate the quality of the remaining ash; leaching, on the other hand, can remove the inorganic fraction of the biomass as well as improve the quality of the remaining ash. More recently, gasification of leached olive oil waste in a circulating fluidized bed reactor was reported for gas production, demonstrating the feasibility of leaching as a pretreatment technique (García-Ibáñez et al. 2004).

Supercritical Water Gasification (SCWG)
The properties that water displays beyond the critical point play a significant role in chemical reactions, especially in gasification. Below the critical point, the liquid and gas phases exhibit different properties, although these properties become increasingly alike as the temperature rises. At the critical point (temperature >374 °C, pressure >22 MPa), the properties of liquid and gas become identical. Above the critical point, the properties of supercritical water (SCW) vary between liquid-like and gas-like, and SCW is completely miscible with organic substances as well as with gases. When biomass has a moisture content above 35 %, it is preferable to gasify it under supercritical water conditions, where biomass can be rapidly decomposed into small molecules or gases within a few minutes at high efficiency. Supercritical water gasification is a promising process for gasifying biomass with high moisture content because of the high gasification ratio (100 % achievable), the high hydrogen volumetric fraction (50 % achievable), and the avoidance of biomass drying. In the past 25 years, the US Pacific Northwest Laboratory, the Hawaii Natural Energy Institute, Forschungszentrum Karlsruhe in Germany, the National Institute for Resources and Environment in Japan, the State Key Laboratory of Multiphase Flow in Power Engineering in China (Guo et al. 2007), and other research institutions have carried out in-depth research on hydrogen production by SCWG of organic compounds without catalysts. Studies covered glucose, methanol, cellulose, lignin, and some real biomass compounds and organic waste/wastewater. As successful demonstrations have accumulated, detailed reaction mechanisms, kinetics, and thermodynamics have built a solid foundation for subsequent investigations (Guo et al. 2010).
In recent years, extensive research has been carried out to evaluate the suitability of various wet biomass feedstocks for gasification under supercritical water conditions. However, this work has mostly been at laboratory scale and is at an early development stage. The solubility of biomass components in hot-compressed water was first studied by Mok and Antal (1992). Their results show that in hot-compressed water about 40–60 % of the biomass sample is soluble, even though the reaction is maintained slightly below the critical condition of water. Minowa et al. (1998) reported hydrogen production from cellulose gasification in hot-compressed (subcritical) water using a nickel catalyst. Resende and Savage (2010) gasified cellulose and lignin in supercritical water using quartz reactors and quantified the catalytic effect of metals by adding them to the reactors in different forms. Yu et al. (1993) reported that the gasification of glucose under supercritical water conditions, such as 873 K and 34.5 MPa, differed from gasification under nonsupercritical conditions. One advantage is that,
during gasification, neither tar nor char formation occurs. This early finding stimulated extensive interest in supercritical water research. Using glucose as a model compound, a hydrogen yield of more than 50 vol.% can be achieved with suitable catalysts under supercritical water conditions. Tubular reactors are widely used in supercritical water gasification because their robust structure withstands high pressure. Calzavara et al. (2005) evaluated the energy efficiency of biomass gasification in supercritical water; their thermodynamic calculations show an energy efficiency of about 60 % in the ideal case when hydrogen, carbon monoxide, and methane are counted as valuable species. Including energy recovery from the water at 280 bar and 740 °C, the overall energy yield reaches 90 % if heat losses are ignored. Although supercritical water gasification is still at an early development stage, the technology has already shown economic competitiveness with other hydrogen production methods: Spritzer and Hong (2003) estimated the cost of hydrogen produced by supercritical water gasification at about US$ 3/GJ (US$ 0.35/kg). Hydrogen production from biomass thermochemical processes has thus been shown to be economically attractive and technically feasible. It should be noted, however, that hydrogen gas is normally produced together with other gas constituents, so separation and purification of the hydrogen are required. Several methods, such as CO2 absorption, drying/chilling, and membrane separation, have been successfully developed for hydrogen purification. It is expected that biomass thermochemical conversion processes will become available for large-scale hydrogen production in the near future.
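The cost figures in this chapter mix energy-based (US$/GJ) and mass-based (US$/kg) units. A quick, hedged conversion using an approximate lower heating value of hydrogen of about 120 MJ/kg reproduces the correspondence quoted above for the supercritical water gasification estimate; the snippet below is only a unit-conversion aid, not a cost model.

```python
# Convert a hydrogen production cost between US$/GJ and US$/kg using the LHV of H2.
LHV_H2_MJ_PER_KG = 120.0  # approximate lower heating value of hydrogen

def usd_per_kg(cost_per_gj):
    return cost_per_gj * LHV_H2_MJ_PER_KG / 1000.0   # 1 GJ = 1000 MJ

def usd_per_gj(cost_per_kg):
    return cost_per_kg * 1000.0 / LHV_H2_MJ_PER_KG

print(f"US$ 3/GJ    -> US$ {usd_per_kg(3.0):.2f}/kg")    # ~0.36 US$/kg
print(f"US$ 2.25/kg -> US$ {usd_per_gj(2.25):.1f}/GJ")   # ~18.8 US$/GJ
```

The same conversion links the per-kilogram and per-gigajoule figures quoted later for the biological water–gas shift route.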

Biological Conversion
Another route to biomass-based hydrogen is biological conversion. The main approaches are the photosynthesis-based processes, fermentative hydrogen production, and hydrogen production by the biological water–gas shift reaction (BWGS); all of them depend on hydrogen-producing enzymes.

Photosynthesis Process
Many phototrophic organisms, such as purple bacteria, green bacteria, cyanobacteria, and several algae, can be used to produce hydrogen with the aid of solar energy. Microalgae, such as green algae and cyanobacteria, absorb light energy and generate electrons, which are then transferred to ferredoxin (Fd) using the solar energy absorbed by the photosystem. The mechanism varies from organism to organism, but the main steps are similar.

Direct Biophotolysis
Direct biophotolysis is a biological process that uses microalgal photosynthetic systems to convert solar energy into chemical energy in the form of hydrogen:

2H2O + solar energy → 2H2 + O2

Two photosynthetic systems are responsible for the photosynthesis process: (i) photosystem I (PSI), which produces reductant for CO2 reduction, and (ii) photosystem II (PSII), which splits water and evolves oxygen. In the biophotolysis process, two photons from water can drive either CO2 reduction by PSI or hydrogen formation in the presence of hydrogenase. In green plants, only CO2 reduction takes place because hydrogenase is absent. In contrast, microalgae such as green algae and cyanobacteria (blue-green algae) contain hydrogenase and thus have the ability to produce hydrogen. In this process, electrons are generated when PSII absorbs light energy and are then transferred to ferredoxin (Fd) using the solar energy absorbed by PSI.

Since hydrogenase is sensitive to oxygen, the oxygen content must be kept below about 0.1 % for hydrogen production to be sustained. This condition can be obtained with the green alga Chlamydomonas reinhardtii, which can deplete oxygen during oxidative respiration; however, because a significant amount of substrate is respired and consumed in the process, the efficiency is low. Recently, mutants derived from microalgae were reported to have good O2 tolerance and thus higher hydrogen production, and the efficiency can be increased significantly by using such mutants. Benemann (1998) estimated the cost of hydrogen from direct biophotolysis at US$ 20/GJ, assuming a capital cost of about US$ 60/m2 and an overall solar conversion efficiency of 10 %. Hallenbeck and Benemann (2002) performed a similar cost estimation and reported a capital cost of US$ 100/m2; however, their estimate neglected some practical factors, such as gas separation and handling.

Indirect Biophotolysis
The concept of indirect biophotolysis involves four steps: (i) biomass production by photosynthesis; (ii) biomass concentration; (iii) aerobic dark fermentation yielding 4 mol hydrogen/mol glucose in the algal cell, along with 2 mol of acetate; and (iv) conversion of the 2 mol of acetate into hydrogen. In a typical indirect biophotolysis, cyanobacteria are used to produce hydrogen via the following reactions:

12H2O + 6CO2 → C6H12O6 + 6O2
C6H12O6 + 12H2O → 12H2 + 6CO2

Markov et al. (1997) investigated indirect biophotolysis with the cyanobacterium Anabaena variabilis exposed to light intensities of 45–55 and 170–180 μmol m−2 s−1 in the first and second stages, respectively. Photoproduction of hydrogen at a rate of about 12.5 ml H2/(g cdw·h) (cdw, cell dry weight) was found. In a study on indirect biophotolysis with the cyanobacterium Gloeocapsa alpicola, Troshina et al. (2002) found that maintaining the medium at a pH between 6.8 and 8.3 yielded optimal hydrogen production, and that increasing the temperature from 30 °C to 40 °C doubled the hydrogen production. The hydrogen production rate of indirect biophotolysis is comparable to hydrogenase-based hydrogen production by green algae. The estimated overall cost is US$ 10/GJ of hydrogen (Hallenbeck and Benemann 2002); however, indirect biophotolysis is still under active research and development, and the estimated cost is subject to significant change depending on technological advances.

Fermentative Hydrogen Production
Biohydrogen production can be realized by anaerobic (dark fermentation) and photoheterotrophic (light fermentation) microorganisms using carbohydrate-rich biomass as a renewable resource. The first step is the acid or enzymatic hydrolysis of biomass to a highly concentrated sugar solution, which is then fermented by anaerobic organisms to produce volatile fatty acids (VFA), hydrogen, and CO2. The organic acids are further fermented by photoheterotrophic bacteria (Rhodobacter sp.) to produce CO2 and H2, which is known as light fermentation. Combined utilization of dark and photo-fermentation has been reported to improve the yield of hydrogen from carbohydrates.
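As noted above, combining dark and photo-fermentation raises the theoretical yield: dark fermentation of glucose gives up to 4 mol H2 plus 2 mol acetate, and photo-fermentation of each mole of acetate can give a further 4 mol H2. The short sketch below simply works out that combined stoichiometric limit per gram of glucose; the molar mass and molar volume are standard values, and the result is a theoretical maximum, not a reported experimental yield.

```python
# Theoretical H2 yield of a two-stage (dark + photo) fermentation of glucose.
M_GLUCOSE = 180.16   # g/mol
MOLAR_VOL = 22.414   # L/mol at STP (0 degC, 1 atm)

h2_dark = 4               # mol H2 per mol glucose (acetate end product)
acetate = 2               # mol acetate per mol glucose
h2_photo_per_acetate = 4  # mol H2 per mol acetate (photo-fermentation)

h2_total = h2_dark + acetate * h2_photo_per_acetate   # 12 mol H2 / mol glucose
per_gram = h2_total / M_GLUCOSE                       # mol H2 per g glucose

print(f"theoretical yield: {h2_total} mol H2/mol glucose")
print(f"                 = {per_gram * MOLAR_VOL:.2f} L H2 (STP) per g glucose")
```

This matches the overall 12 mol H2/mol glucose implied by the two indirect-biophotolysis reactions above.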

Dark Fermentation
Fermentation of carbohydrate-rich substrates by anaerobic bacteria, as well as by some microalgae such as green algae, can produce hydrogen at 30–80 °C, especially under dark conditions. Unlike a biophotolysis process, which produces only H2, the products of dark fermentation are mostly H2 and CO2 combined with other gases, such as CH4 or H2S, depending on the reaction process and the substrate used. With glucose as the model substrate, a maximum of 4 mol H2 is produced per mole of glucose when the end product is acetic acid:

C6H12O6 + 2H2O → 2CH3COOH + 4H2 + 2CO2

When the end product is butyrate, 2 mol H2 is produced:

C6H12O6 → CH3CH2CH2COOH + 2H2 + 2CO2

In practice, however, 4 mol H2/mol glucose cannot be achieved because the end products normally contain both acetate and butyrate. The amount of hydrogen produced by dark fermentation depends strongly on the pH value, the hydraulic retention time (HRT), and the gas partial pressure. For optimal hydrogen production, the pH should be maintained between 5 and 6. The partial pressure of H2 is another important parameter: when the hydrogen concentration increases, the metabolic pathways shift toward more reduced products, such as lactate, ethanol, acetone, butanol, or alanine, which in turn decreases hydrogen production. Besides pH and partial pressure, HRT also plays an important role. Ueno et al. (1996) reported that an optimal HRT of 0.5 day gave maximum hydrogen production (14 mmol/g carbohydrate) from wastewater by anaerobic microflora in chemostat culture. When the HRT was increased from 0.5 to 3 days, the hydrogen production rate fell from 198 to 34 mmol/(l·day), while the carbohydrates in the wastewater were decomposed with an efficiency increasing from 70 % to 97 %. Because solar radiation is not required, hydrogen production by dark fermentation does not demand much land and is not affected by weather conditions, which gives the technology growing commercial value.

Photo-Fermentation
Photosynthetic nonsulfur (PNS) bacteria have the ability to convert VFAs to H2 and CO2 under anoxygenic conditions. PNS bacteria can also use carbon sources such as glucose, sucrose, and succinate, rather than VFAs, for H2 production. The most widely known PNS bacteria used in photo-fermentative H2 production are Rhodobacter sphaeroides O.U001, Rhodobacter capsulatus, R. sphaeroides-RV, Rhodobacter sulfidophilus, Rhodopseudomonas palustris, and Rhodospirillum rubrum (Argun and Kargi 2011). As shown in the equation below, theoretically 4 mol of H2 can be produced from 1 mol of acetic acid when acetic acid is the only VFA present in the fermentation medium:

CH3COOH + 2H2O → 4H2 + 2CO2,  ΔG = +104 kJ

Hydrogen can be produced by photo-fermentation of various types of biomass waste. However, these processes have three main drawbacks: (i) the use of the nitrogenase enzyme with its high energy demand, (ii) low solar energy conversion efficiency, and (iii) the need for elaborate anaerobic photobioreactors covering large areas. Hence, at present, photo-fermentation is not a competitive method for hydrogen production.
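Because real dark fermentations end in a mixture of acetate and butyrate, the realized yield falls between the stoichiometric limits of 2 and 4 mol H2/mol glucose given above for dark fermentation. The sketch below evaluates the expected yield for an assumed product split; the 60/40 split is purely illustrative and not taken from any of the studies cited.

```python
# Expected dark-fermentation H2 yield for a mixed acetate/butyrate product spectrum.
H2_PER_GLUCOSE = {"acetate": 4.0, "butyrate": 2.0}  # stoichiometric limits (mol/mol)

def expected_yield(product_split):
    """product_split: fraction of glucose routed to each end product (sums to 1)."""
    return sum(H2_PER_GLUCOSE[p] * x for p, x in product_split.items())

# Illustrative case: 60 % of the glucose ends as acetate, 40 % as butyrate.
split = {"acetate": 0.6, "butyrate": 0.4}
print(f"expected yield ~ {expected_yield(split):.1f} mol H2/mol glucose")  # ~3.2
```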

The design of photobioreactors enabling efficient H2 production is still a challenge. Light distribution inside the photobioreactor is the most important parameter affecting the H2 production rate; optimizing light distribution over a high reactor surface area has therefore been reported as essential for enhancing the light efficiency of photo-fermentation. Operating parameters also affect process efficiency. The concept of the net energy ratio (NER), the ratio of the total energy produced to the energy required for plant operations such as mixing, pumping, aeration, and cooling, is used to evaluate process efficiency.

Biological Water–Gas Shift Reaction (BWGS)
The BWGS is a relatively new method for hydrogen production. Certain photoheterotrophic bacteria, such as Rubrivivax gelatinosus, are capable of performing the water–gas shift reaction at ambient temperature and atmospheric pressure. Such bacteria can survive in the dark by using CO as the sole carbon source to generate adenosine triphosphate (ATP), coupling the oxidation of CO with the reduction of H+ to H2:

CO + H2O ⇌ CO2 + H2,  ΔG = −20.1 kJ/mol

At equilibrium, the dominant products are CO2 and H2, so this process is favorable for hydrogen production. Organisms growing at the expense of this reaction include gram-negative bacteria, such as R. rubrum and Rubrivivax gelatinosus, and gram-positive bacteria, such as Carboxydothermus hydrogenoformans. Under anaerobic conditions, CO induces the synthesis of several proteins, including CO dehydrogenase, an Fe–S protein, and a CO-tolerant hydrogenase. Electrons produced from CO oxidation are conveyed via the Fe–S protein to the hydrogenase for hydrogen production. Biological water–gas shift hydrogen production is still at laboratory scale, and only a few studies have been reported; their common objectives were to identify suitable microorganisms with high CO uptake and to estimate the hydrogen production rate. Kerby et al. (1995) observed that under dark, anaerobic conditions and in the presence of sufficient nickel, the doubling time of R. rubrum was less than 5 h through the oxidation of CO to CO2 coupled with the reduction of protons to hydrogen. However, R. rubrum requires light to grow, and hydrogen production is inhibited when the CO partial pressure in the medium exceeds 0.2 atm. An alternative chemoheterotrophic bacterium, Citrobacter sp. Y19, was tested by Jung et al. (2002) for hydrogen production via the water–gas shift reaction; the maximum hydrogen production activity was found to be 27 mmol/(g cell·h), about three times higher than that of R. rubrum. Recently, Wolfrum et al. (2003) conducted a detailed comparison of the biological water–gas shift reaction with conventional water–gas shift processes. Their analysis showed that the biological process is economically competitive when the methane concentration is under 3 %. The hydrogen production cost of the biological water–gas shift reaction ranged from US$ 1.75/kg (US$ 14.6/GJ) to around US$ 2.25/kg (US$ 18.8/GJ) for methane concentrations between 1 % and 10 %. Compared with thermochemical water–gas shift processes, the cost of the biological route is lower because the reformer and associated equipment are eliminated.
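The statement that CO2 and H2 dominate at equilibrium follows directly from the negative standard Gibbs energy quoted above. A minimal check, using K = exp(−ΔG°/RT), is sketched below; it assumes the −20.1 kJ/mol value refers to roughly ambient temperature (298 K).

```python
import math

# Equilibrium constant of CO + H2O <-> CO2 + H2 from the quoted standard Gibbs energy.
R = 8.314       # J/(mol K)
T = 298.15      # K, assumed ambient temperature
dG = -20.1e3    # J/mol, standard Gibbs energy change quoted in the text

K = math.exp(-dG / (R * T))
print(f"K ~ {K:.0f}")   # on the order of 3x10^3, i.e. products strongly favored
```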

Hydrogen Production from Water
Water is an abundant resource and is widely available almost everywhere, so hydrogen production from water is a convenient option and the potential supply is essentially unlimited. Extensive research efforts have focused on this promising hydrogen production route; in fact, its commercial use dates back to the 1890s. Hydrogen production from water splitting falls into three categories: electrolysis, thermolysis, and photoelectrolysis.

Fig. 3 A diagram of a typical water electrolysis process

Water Electrolysis
Water electrolysis is essentially the conversion of electrical energy to chemical energy in the form of hydrogen, with oxygen as a useful by-product. Figure 3 shows a diagram of a typical water electrolysis process: an electrical current passed through two electrodes splits water into hydrogen and oxygen. The most common water electrolysis technology is alkaline based, but proton exchange membrane (PEM) electrolysis and solid oxide electrolysis cell (SOEC) units are increasingly being developed.

Alkaline Electrolyzer
Alkaline systems are the most developed and have the lowest capital cost; however, they also have the lowest efficiency and therefore the highest electrical energy costs. Alkaline electrolyzers are typically composed of electrodes, a microporous separator, and an aqueous alkaline electrolyte of approximately 30 wt.% KOH or NaOH. In alkaline electrolyzers, nickel with a catalytic coating, such as platinum, is the most common cathode material. For the anode, nickel or copper metals coated with metal oxides, such as those of manganese, tungsten, or ruthenium, are used. The liquid electrolyte is not consumed in the reaction but must be replenished over time because of other system losses, primarily during hydrogen recovery. In an alkaline cell, water is introduced at the cathode, where it is decomposed into hydrogen and OH−. The OH− travels through the electrolyte to the anode, where O2 is formed, while the hydrogen remains in the alkaline solution and is subsequently separated from the water in a gas–liquid separation unit outside the electrolyzer. The typical current density is 100–300 mA/cm2, and alkaline electrolyzers typically achieve efficiencies of 50–60 % based on the lower heating value of hydrogen. The reactions at the cathode and anode, and the overall reaction, are:

Cathode: 2H2O + 2e− → H2 + 2OH−

Anode: 2OH− → ½O2 + H2O + 2e−

Overall reaction: H2O → H2 + ½O2,  ΔH = 288 kJ/mol
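The 50–60 % LHV efficiencies quoted above follow directly from the cell voltage: two electrons are passed per H2 molecule, so the electrical input per mole of hydrogen is 2FVcell, while the LHV of H2 is about 242 kJ/mol. The sketch below makes that relation explicit; the listed cell voltages are assumed, illustrative operating points, and the result is a cell-level figure (system efficiencies are lower because of auxiliary loads and gas handling).

```python
# LHV-based efficiency of a water electrolyzer as a function of cell voltage.
F = 96485.0          # C/mol, Faraday constant
LHV_H2 = 241.8e3     # J/mol, lower heating value of hydrogen

def lhv_efficiency(cell_voltage):
    """Fraction of the electrical input recovered as H2 lower heating value."""
    electrical_energy = 2 * F * cell_voltage  # J per mol H2 (2 electrons per H2)
    return LHV_H2 / electrical_energy

for v in (1.8, 2.0, 2.2):  # assumed cell voltages, V
    print(f"{v:.1f} V -> {lhv_efficiency(v) * 100:.0f} % (LHV basis)")
```

The same relation explains why PEM and SOEC units, which operate at lower cell voltages, reach higher efficiencies.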

PEM Electrolyzer
PEM electrolyzers build upon the recent advances in PEM fuel cell technology. They are more efficient than alkaline systems and do not have the corrosion and sealing issues of SOECs, but they cost more than alkaline systems. PEM-based electrolyzers typically use Pt black, iridium, ruthenium, and rhodium as electrode catalysts and a Nafion membrane that not only separates the electrodes but also acts as a gas separator. In PEM electrolyzers, water is introduced at the anode, where it is split into protons and oxygen. The protons travel through the membrane to the cathode, where they are recombined into hydrogen, while the O2 gas remains behind with the unreacted water, so no separation unit is needed. Depending on the purity requirements, a drier may be used to remove residual water after a gas–liquid separation unit. PEM electrolyzers have low ionic resistances, and therefore high current densities of >1,600 mA/cm2 can be achieved while maintaining high efficiencies of 55–70 %. The reactions at the anode and cathode are:

Anode: 2H2O → O2 + 4H+ + 4e−

Cathode: 4H+ + 4e− → 2H2

Overall reaction: H2O → H2 + ½O2,  ΔH = 288 kJ/mol

Solid Oxide Electrolysis Cells
Solid oxide electrolysis cells (SOEC) are essentially solid oxide fuel cells operating in reverse. These systems replace part of the electrical energy required to split water with thermal energy: the higher temperatures increase the electrolyzer efficiency by decreasing the anode and cathode overpotentials that cause power losses in electrolysis. It has been reported that an increase in temperature from 375 to 1,050 K can reduce the combined thermal and electrical energy requirements by close to 35 % (Utgikar and Thiesen 2006). Another advantage of SOEC units is the use of a solid electrolyte, which, unlike the KOH of alkaline systems, is noncorrosive and does not suffer from liquid and flow distribution problems. A SOEC operates similarly to the alkaline system in that an oxygen ion travels through the electrolyte (typically ZrO2-based), leaving the hydrogen in the unreacted steam stream. The reactions are as follows:

Cathode: 2H2O + 4e− → 2H2 + 2O2−

Anode: 2O2− → O2 + 4e−

Overall reaction: H2O → H2 + ½O2,  ΔH = 288 kJ/mol

SOEC electrolyzers are the most electrically efficient but are the least developed of the three technologies. A major challenge of SOEC technology is that high-temperature operation requires costly materials and fabrication methods in addition to a heat source. The materials are similar to those being developed for solid oxide fuel cells (SOFC): an yttria-stabilized zirconia (YSZ) electrolyte, a nickel-containing YSZ anode, and metal-doped lanthanum metal oxides. These systems also share the SOFC sealing problems, which are being investigated.

Water Thermochemical Splitting
Water thermochemical splitting, also called water thermolysis, uses heat alone to decompose water into hydrogen and oxygen. It is well known that water decomposes at about 2,500 °C, but materials that are stable at this temperature, and sustainable heat sources at this level, are not easily available. Chemical reagents have therefore been proposed to lower the temperature. Research in this area was prominent from the 1960s through the early 1980s, then essentially stopped after the mid-1980s until recently. More than 300 water-splitting cycles are referenced in the literature (Hydrogen 2005), and all of them significantly reduce the operating temperature. In choosing a process, five criteria should be met: (1) within the temperatures considered, the ΔG (Gibbs free energy change) of the individual reactions must approach zero; this is the most important criterion. (2) The number of steps should be minimal. (3) Each individual step must have fast reaction rates, similar to those of the other steps in the process. (4) The reactions should not generate chemical by-products, and any separation of the reaction products must be minimal in terms of cost and energy consumption. (5) Intermediate products must be easily handled. Currently, several processes meet these criteria, such as the UT-3 process and the iodine–sulfur process (based on sulfuric acid decomposition). The mechanisms of these two processes are as follows:

1. Iodine–sulfur process

2H2O(l) + I2(g) + SO2(g) → 2HI(l) + H2SO4(l)  (100–120 °C, exothermic)
2HI(g) → H2(g) + I2(g)  (400–500 °C, exothermic)
H2SO4(l) → H2O(g) + SO2(g) + ½O2(g)  (850 °C, endothermic)

Fig. 4 A diagram of the iodine–sulfur water thermochemical splitting process

Overall reaction: H2O → H2 + ½O2

Figure 4 gives a diagram of a typical water thermochemical splitting process for hydrogen production using the iodine–sulfur process.

2. UT-3 process

CaBr2(s) + H2O(g) → CaO(s) + 2HBr(g)  (700–750 °C, endothermic)
CaO(s) + Br2(g) → CaBr2(s) + ½O2(g)  (500–600 °C, exothermic)
Fe3O4(s) + 8HBr(g) → 3FeBr2(s) + 4H2O(g) + Br2(g)  (200–300 °C, exothermic)
3FeBr2(s) + 4H2O(g) → Fe3O4(s) + 6HBr(g) + H2(g)  (550–600 °C, endothermic)

Overall reaction: H2O → H2 + ½O2

However, water thermochemical splitting is still not competitive with other hydrogen generation technologies in terms of cost and efficiency, which are therefore the major focus of research on these processes (Norbeck et al. 1996a). In addition, these processes require large inventories of highly hazardous, corrosive materials, and the combination of high temperature, high pressure, and corrosion creates a need for new materials. The US DOE has active projects investigating several of these processes, focused on improving materials, lowering cost, and increasing efficiency (Hydrogen 2005). Research and development on hydrogen from water thermochemical splitting is also ongoing in Canada on technologies that couple synergistically with Canada's present and future nuclear reactors. Several other countries (Japan, USA, France) are likewise advancing nuclear technology and the corresponding thermochemical cycles. Sandia National Laboratories in the USA and CEA in France are developing a hydrogen pilot plant based on the sulfur–iodine (S–I) cycle. The KAERI Institute in Korea is collaborating with China to produce
hydrogen with the HTR-10 reactor. The Japan Atomic Energy Agency plans to complete a large sulfur–iodine plant producing 60,000 m3/h of hydrogen by 2020, an amount sufficient for about one million fuel cell vehicles. It is believed that scaling up these processes may improve their thermal efficiency, overcoming one of the principal challenges faced by this technology. In addition, a better understanding of the relationship between capital costs, thermodynamic losses, and process thermal efficiency may lead to decreased hydrogen production costs (Funk 2001). The current processes all use four or more reactions, and it is believed that an efficient two-reaction process, as shown in the following equations, could make this technology viable (Funk 2001):

ZnO(s) → Zn(g) + ½O2  (2,300 K, endothermic)
Zn(l) + H2O → ZnO(s) + H2  (700 K, exothermic)
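A useful sanity check on any proposed multistep cycle is that the recycled intermediates cancel and the steps sum to the net water-splitting reaction. The sketch below performs that species bookkeeping for the two-step ZnO/Zn cycle shown above; representing each step as a dictionary of signed stoichiometric coefficients is simply an illustrative convention, not part of any cited method.

```python
from collections import Counter

# Each step: net stoichiometric coefficients (products positive, reactants negative).
step1 = {"ZnO": -1, "Zn": +1, "O2": +0.5}           # ZnO -> Zn + 1/2 O2
step2 = {"Zn": -1, "H2O": -1, "ZnO": +1, "H2": +1}  # Zn + H2O -> ZnO + H2

def net_reaction(*steps):
    total = Counter()
    for step in steps:
        for species, coeff in step.items():
            total[species] += coeff
    # Drop species that cancel out over the cycle (the recycled intermediates).
    return {s: c for s, c in total.items() if abs(c) > 1e-12}

print(net_reaction(step1, step2))
# -> {'O2': 0.5, 'H2O': -1, 'H2': 1}, i.e. H2O -> H2 + 1/2 O2
```

The same check applied to the four UT-3 steps or the three iodine–sulfur steps likewise reduces to H2O → H2 + ½O2.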

Water Photoelectrolysis
Photoelectrolysis uses sunlight to decompose water directly into hydrogen and oxygen, using semiconductor materials similar to those used in photovoltaics. In photovoltaics, two doped semiconductor materials, a p-type and an n-type, are brought together to form a p–n junction at which a permanent electric field is established when the charges in the p- and n-type material rearrange. When a photon with energy greater than the semiconductor's bandgap is absorbed at the junction, an electron is released and a hole is formed. Because an electric field is present, the hole and electron are forced to move in opposite directions, which creates an electric current if an external load is connected. The same situation occurs in photoelectrolysis when a photocathode (p-type material with excess holes) or a photoanode (n-type material with excess electrons) is immersed in an aqueous electrolyte; instead of generating an electric current, however, water is split to form hydrogen and oxygen. For a photoanode-based system, the process can be summarized as follows: (1) a photon with energy greater than the bandgap strikes the anode, creating an electron–hole pair; (2) the holes decompose water at the anode's front surface to form hydrogen ions and gaseous oxygen, while the electrons flow through the back of the anode, which is electrically connected to the cathode; (3) the hydrogen ions pass through the electrolyte and react with the electrons at the cathode to form hydrogen gas (Turner et al. 2008); and (4) the oxygen and hydrogen gases are separated, for example by a semipermeable membrane, for processing and storage. Current photoelectrodes used in photoelectrochemical (PEC) cells that are stable in aqueous solutions have a low efficiency for using photons to split water; the target is >16 % solar-to-hydrogen efficiency. This implies three material system characteristics necessary for efficient conversion: (i) the bandgap should fall in a range sufficient to provide the energetics for electrolysis while still allowing maximum absorption of the solar spectrum (1.6–2.0 eV for single-photoelectrode cells and 1.6–2.0/0.8–1.2 eV for the top/bottom cells in stacked tandem configurations); (ii) the material should have a high quantum yield (>80 %) across its absorption band to reach the efficiency necessary for a viable device; and (iii) the conduction and valence band edges should straddle the redox potentials of the H2 and O2 half-reactions, respectively. The efficiency is directly related to the semiconductor bandgap (Eg), i.e., the energy difference between the bottom of the conduction band and the top of the valence band, as well as to the band edge alignments, since the material or device must have the correct energetics to split water. The energetics are determined by the band edges, which must straddle water's redox potentials with sufficient margins to account for inherent energy losses. Cost-efficient, durable catalysts with appropriate Eg and band edge positions must therefore be developed. To achieve the highest efficiency possible in a tandem
configuration, "current matching" of the photoelectrodes must be performed. Electron transfer catalysts and other surface enhancements may be used to increase the efficiency of the system; these enhancements can minimize the surface overpotentials with respect to water and facilitate the reaction kinetics, decreasing the electrical losses in the system. Fundamental research is ongoing to understand the mechanisms involved and to discover and develop suitable candidate surface catalysts for these systems (Licht 2005). In addition, it is possible to use suspended metal complexes in solution as photochemical catalysts (Norbeck et al. 1996b); typically, nanoparticles of ZnO, Nb2O5, and TiO2 (the material of choice) have been used (Norbeck et al. 1996b). The advantages of these systems include the use of low-cost materials and the potential for high efficiencies. Current research aims to overcome their low light absorption and unsatisfactory long-term stability.
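The 1.6–2.0 eV bandgap window quoted above maps directly onto an absorption-onset wavelength through E = hc/λ (about 1240 eV·nm). The snippet below does that conversion for a few candidate bandgaps; it is only a unit conversion, not a device model.

```python
# Relate a semiconductor bandgap to the longest wavelength it can absorb.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def onset_wavelength_nm(bandgap_ev):
    return HC_EV_NM / bandgap_ev

for eg in (1.6, 2.0, 3.2):  # 3.2 eV ~ TiO2, which therefore absorbs only in the UV
    print(f"Eg = {eg:.1f} eV -> absorption onset ~ {onset_wavelength_nm(eg):.0f} nm")
```

This illustrates why wide-gap but stable oxides such as TiO2 suffer from low light absorption, as noted above.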

Sorption-Enhanced H2 Production with In Situ CO2 Capture Using Carbon-Containing Resources
It is now widely acknowledged that "decarbonizing" the energy supply will be essential in the near future because of global warming. Although the utilization of H2 is clean and pollution-free, the production of H2 from fossil fuels does generate CO2 emissions: a typical SMR hydrogen plant with a capacity of one million m3 of hydrogen per day produces 0.3–0.4 million standard cubic meters of CO2 per day, and if hydrogen is produced by coal gasification, the CO2 emissions are roughly doubled compared to SMR. Furthermore, with regard to end-use applications of H2, additional costs and process complexity are incurred for gas cleaning. In fuel cell applications, for example, the CO content of the product gas must be closely managed: a CO concentration of less than 10 ppm is required for low-temperature proton exchange membrane and alkaline fuel cells. Separating H2 from a H2-rich gas containing impurities such as CO, CH4, and tar incurs major cost penalties. The increasing attention to global warming and the demand for pure H2 have together generated great interest in sorption-enhanced gasification systems, in which high-purity H2 production and in situ CO2 capture are realized in a single reactor. Figure 5 shows a simple diagram of the system: the core is the dual gasification–regeneration reactor pair, and its defining feature is the addition of CaO to the gasifier; the resulting effects are listed below.

Fig. 5 A simple diagram of the sorption-enhanced gasification system

The addition of CaO has three main effects: (i) the water–gas reaction and the water–gas shift reaction are both enhanced to produce more hydrogen because CO2 is absorbed by the CaO carbonation reaction; (ii) part of the external energy required for the steam gasification of hydrocarbons can be supplied by the heat released by carbonation; and (iii) the formation of pyrolysis tars can be reduced in the presence of the CaO additive. The reaction mechanisms of the system are as follows.

Reactions in the gasifier:

Water–gas reaction:
C(s) + H2O(g) → CO(g) + H2(g)  (for coal)
CH4(g) + H2O(g) → CO(g) + 3H2(g)  (for natural gas)
CH1.5O0.7(s) + 0.3H2O(g) → CO(g) + 1.05H2(g)  (for typical biomass)

Water–gas shift reaction:
CO(g) + H2O(g) → CO2(g) + H2(g)

Carbonation reaction:
CaO(s) + CO2(g) → CaCO3(s)

The global reaction in the gasifier can be summarized as:
C(s) + 2H2O(g) + CaO(s) → CaCO3(s) + 2H2(g)  (for coal)
CH4(g) + 2H2O(g) + CaO(s) → CaCO3(s) + 4H2(g)  (for natural gas)
CH1.5O0.7(s) + 1.3H2O(g) + CaO(s) → CaCO3(s) + 2.05H2(g)  (for typical biomass)

Reactions in the regenerator:

Combustion reaction:
C(s) + O2(g) → CO2(g)

Calcination reaction:
CaCO3(s) → CaO(s) + CO2(g)

It should be noted that, besides Ca-based oxides, a number of candidate CO2 sorbents have also been studied, including potassium-promoted hydrotalcite (K-HTC) and mixed metal oxides of Li and Na (Harrison 2008). HTCs are members of the family of double-layered hydroxides that, when doped with K2CO3, can serve as high-temperature CO2 sorbents; they react rapidly and can be regenerated with less external energy input, but they have a much lower CO2 capacity than Ca-based sorbents and are also considerably more expensive. Mixed metal oxide sorbents of Li and Na, such as Li2ZrO3, Li4SiO4, and Na2ZrO3, were developed from the desire to find a replacement for Ca-based sorbents that could be regenerated at lower temperature while offering considerably higher CO2 capacity than HTC. However, because of the less favorable thermodynamic properties of these sorbents, the equilibrium CO2 pressures are higher and the attainable product H2 concentrations are lower than those obtained with Ca-based sorbents at equivalent reaction conditions.
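The global gasifier reactions above fix the stoichiometric maximum H2 output and the CaO demand per unit of feed. The sketch below evaluates those limits for the three feedstocks; the molar masses are standard values, the biomass formula CH1.5O0.7 is the same nominal composition used above, and actual yields reported for real gasifiers are well below these ideal figures.

```python
# Stoichiometric H2 yield and CaO demand of the sorption-enhanced gasifier reactions.
M = {"H2": 2.016, "CaO": 56.08, "C": 12.011, "CH4": 16.043,
     "biomass": 12.011 + 1.5 * 1.008 + 0.7 * 15.999}   # CH1.5O0.7 ~ 24.7 g/mol

# (feed, mol H2 per mol feed, mol CaO per mol feed) from the global reactions above.
reactions = [("C", 2.0, 1.0), ("CH4", 4.0, 1.0), ("biomass", 2.05, 1.0)]

for feed, n_h2, n_cao in reactions:
    h2_per_kg = n_h2 * M["H2"] / M[feed] * 1000.0    # g H2 per kg feed
    cao_per_kg = n_cao * M["CaO"] / M[feed]          # kg CaO per kg feed
    print(f"{feed:8s}: {h2_per_kg:5.0f} g H2/kg feed, {cao_per_kg:.1f} kg CaO/kg feed")
```

The large CaO circulation rates implied here are one reason sorbent durability, discussed later in this section, is so important.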

Fig. 6 The near-zero emission system proposed by Zhejiang University
Ca-based sorbents are therefore considered the most promising option, and current studies on sorption-enhanced H2 production are mostly conducted with CaO. Accordingly, this section discusses sorption-enhanced gasification using Ca-based sorbents, covering feedstocks that include both solid fuels (coal, biomass) and natural gas.

Sorption-Enhanced H2 Production from Solid Fuels
A new near-zero emission coal (and biomass) utilization technology with combined gasification and combustion has been proposed by Zhejiang University in China (Qinhui et al. 2003; Wang et al. 2006; Guan et al. 2007; Han et al. 2010). Figure 6 displays a diagram of the system. In this system, solid fuels are partly gasified with steam in a pressurized circulating fluidized bed gasifier, producing H2, CO, and CO2. Because CaO is used as the CO2 acceptor, absorbing CO2 and releasing heat for the gasification processes in the gasifier, CO is depleted from the gas phase by the water–gas shift reaction. The H2-rich gas stream produced in the gasifier is oxidized in a solid oxide fuel cell. The remaining char, which has low reactivity, is transferred to a circulating fluidized bed combustor together with the carbonated CaCO3. The char and the unreacted H2 in the hot off-gas from the fuel cell are oxidized with oxygen in the combustor to supply the heat for CaCO3 calcination. The CO2-rich gas stream produced in the combustor is suitable for disposal after its heat is recovered by a gas–steam combined cycle. The authors first examined the influences of gasifier operating temperature, pressure, fuel type (coal and biomass), and H2O/C ratio on hydrogen production based on chemical equilibrium calculations (Wang et al. 2006; Guan et al. 2007). The results showed that increasing the CaO addition clearly increases the H2 mole fraction in the C/H2O reaction products, and that the process may achieve a high conversion efficiency from coal energy to electrical energy (around 65.5 %) with near-zero gaseous emissions. Our study (Han et al. 2010) also showed that the CaO additive not only absorbs CO2 but also enhances tar reduction reactions in biomass steam gasification with in situ CO2 capture. Sorption-enhanced coal/biomass gasification in a pressurized fluidized bed reactor is also being performed at Zhejiang University. Biomass and coal gasification experiments were carried out to investigate the influences of operating variables such as the CaO to carbon mole ratio (CaO/C), the H2O to carbon mole ratio (H2O/C), the reaction temperature (T), and the pressure (P) on hydrogen (H2) production (Han et al. 2010, 2013; Wang et al. 2014). Pressurized operation not only promoted the gasification reactions but also clearly enhanced CaO carbonation. Within the experimental ranges investigated in the biomass gasification work, the H2 fraction and H2 yield were both elevated with increasing reaction pressure, CaO/C, H2O/C, and
T. Pressurized operation also increased the carbon conversion and cold gas efficiency of CaO sorption-enhanced sawdust gasification. A maximum H2 output, with a fraction of 67.7 % and a yield of 68 g/kg sawdust, was achieved at CaO/C = 1.2, H2O/C = 0.89, T = 680 °C, and a pressure of 4 bar. With a Chinese bituminous coal as feedstock, the highest H2 concentration of 77.98 vol.% was achieved at 4 bar (the highest pressure the system can reach), 750 °C, [H2O]/[C] = 2, and [Ca]/[C] = 1. In Japan, hydrogen production by the reaction-integrated novel gasification (HyPr-RING) process is under development (Lin et al. 2001); its mechanism is very similar to that of the system proposed by Zhejiang University. The HyPr-RING process has been applied to both coal and biomass. For coal, gasifier conditions of 873–973 K and 3 MPa are reported to result in slightly over 50 % carbon conversion with about 90 % H2 in the product gas; the remainder of the product gas is predominantly CH4 with less than 0.4 % (CO + CO2), and the regenerator operates at 1,073 K and 0.1 MPa. For biomass, Lin and coworkers (Hanaoka et al. 2005) examined H2 production from woody biomass by steam gasification using CaO as a CO2 sorbent. First, in the absence of CaO the product gas contained CO2, whereas in the presence of CaO ([Ca]/[C] = 1, 2, and 4) no CO2 was detected in the product gas, and the maximum H2 yield was obtained at a [Ca]/[C] of 2. Second, the H2 yield and the conversion to gas were found to depend strongly on the reaction pressure, exhibiting maximum values at 0.6 MPa, a much lower pressure than reported for other carbonaceous materials in steam gasification with a CO2 sorbent, such as coal (>12 MPa) and heavy oil (>4.2 MPa). The authors therefore concluded that, taking the reaction pressure into account, woody biomass is one of the most appropriate carbonaceous materials for H2 production by steam gasification using CaO as a CO2 sorbent. A further kinetic study conducted at 923 K and 6.5 MPa in a batch reactor of 50 cm3 capacity also demonstrated complete absorption of CO2 from the gasification syngas (Fujimoto et al. 2007). Another significant sorption-enhanced gasification process is the absorption-enhanced reforming (AER) process developed within the framework of the EU project AER-Gas II. The atmospheric dual fluidized bed technology developed at Vienna University of Technology realizes steam gasification through circulation of hot bed material. The technology has been realized at pilot plant scale with 100 kW fuel input (at Vienna University of Technology) as well as at industrial scale at the combined heat and power (CHP) plant in Güssing, Austria, with 8 MW fuel input. A comparison of dual fluidized bed gasification of biomass with and without selective transport of CO2 from the gasification to the combustion reactor was performed in the 100 kW facility. With conventional gasification, the hydrogen content of the gasifier product gas is about 40 vol.% (dry basis), whereas with carbonate addition to the bed material a much higher hydrogen content of up to 75 vol.% (dry basis) can be achieved at lower gasification temperatures (Pfeifer et al. 2009). The first application of the AER process in the 8 MW industrial facility also realized continuous CO2 removal by cyclic carbonation of CaO and calcination of CaCO3.
Results obtained in the industrial facility are reported to be comparable with those obtained at pilot plant scale (Koppatz et al. 2009). In addition, other similar sorption-enhanced gasification processes for solid fuels are under development. One is the ZEC process developed at Los Alamos National Laboratory (Ziock et al. 2001), which is designed to first hydrogasify coal to produce CH4, which is then reformed to H2 using the calcium-based sorption-enhanced process; a system analysis performed by Nexant Corp. (Nawaz and Ruby 2001) estimated a coal-to-electricity conversion efficiency on the order of 70 %. Research on this concept is continuing in a joint study at Cambridge University and Imperial College in the UK (Gao 2009). Another is the innovative fuel-flexible advanced gasification–combustion (AGC) process developed by General Electric Energy and Environmental Research Corporation (GE EER) (Rizeq et al. 2001). R&D on the AGC technology is being conducted under a Vision-21 award from the US DOE NETL, with co-funding from GE EER, Southern Illinois University at Carbondale (SIU-C), and
the California Energy Commission (CEC). The AGC technology converts coal and air into three separate streams: pure hydrogen, sequestration-ready CO2, and high-temperature/high-pressure oxygen-depleted air used to produce electricity in a gas turbine. The program integrates lab-, bench-, and pilot-scale studies to demonstrate the AGC concept. In addition, lab-scale research on H2 production from sorption-enhanced solid fuel gasification has been performed by Mahishi and Goswami (2007) and Wei et al. (2008). Fan and his research group at Ohio State University (Fan et al. 2008) developed the Calcium Looping Process (CLP) concept for clean coal and biomass conversion and hydrogen production; comprehensive simulations allow a direct comparison of the CLP with other processes developed for post-combustion carbon dioxide removal, and the comparison indicates that the CLP always provides a lower energy penalty under similar operating conditions.

Sorption-Enhanced H2 Production from Natural Gas
The effectiveness of sorption-enhanced steam methane reforming (SE-SMR) and of calcium-based CO2 sorbents has been demonstrated in previous work. In particular, Rostrup-Nielsen (1984) reports that the first description of the addition of a CO2 sorbent to a hydrocarbon steam-reforming reactor was published in 1868. Williams (1933) was issued a patent for a process in which steam and methane react in the presence of a mixture of lime and reforming catalyst to produce hydrogen, and a fluidized bed version of the process was patented by Gorin and Retallick (1963). Brun-Tsekhovoi et al. (1988) published limited experimental results and reported a potential energy saving of about 20 % compared to the conventional process. More recently, Kumar et al. (1999) reported on a process known as unmixed combustion (UMC), in which the reforming, shift, and CO2 removal reactions are carried out simultaneously over a mixture of reforming catalyst and CaO-based CO2 sorbent. In related work, Hufton et al. (2000) reported on H2 production through SE-SMR using a K2CO3-treated HTC sorbent, although the extremely low CO2 working capacity of this sorbent was noted. The average H2 purity was about 96 %, the CO and CO2 contents were less than 50 ppm, and the methane conversion to H2 product reached 82 %; the conversion and product purity are substantially higher than the thermodynamic limits for a catalyst-only reactor operated under the same conditions (28 % conversion, 53 % H2, 13 % CO/CO2). In an earlier work, Balasubramanian et al. (1999) showed that a gas with an H2 content of up to 95 % (dry basis) could be produced in a single reactor containing reforming catalyst and CaO formed by calcination of high-purity CaCO3; the reported methane conversion was 88 %. Arstad et al. (2012) studied continuous hydrogen production by SE-SMR in a CFB reactor with calcined natural dolomite as the CO2 sorbent and Ni/NiAl2O4 as the catalyst; the sorbent and catalyst materials appeared to have quite good mechanical properties over the time scale studied (8 h), but only a fraction of the sorbent's CO2 capacity appeared to be utilized. Johnsen et al. (2006) used dolomite as the CO2 sorbent in an SE-SMR investigation in which a 100 mm-diameter bubbling fluidized bed reactor was operated alternately under reforming/carbonation conditions and higher-temperature calcination conditions to regenerate the sorbent; an equilibrium H2 concentration above 98 % on a dry basis was reached at 600 °C and 1.013 × 10^5 Pa. Ochoa-Fernández et al. (2007) compared conventional steam reforming plus CO2 capture with the SE-SMR system and found that SE-SMR gives competitive H2 yields and thermal efficiencies; the best efficiencies were obtained using CaO as the acceptor, owing to its more favorable thermodynamics and high reaction rates, although the stability of CaO has to be improved, while Na2ZrO3 is a promising alternative because of its good CO2 removal kinetics and stability. A large number of numerical and simulation studies on SE-SMR have also been carried out in recent years. Li and Cai (2007) developed mathematical models of multiple cycles of SE-SMR and Ca-based sorbent regeneration in a fixed bed reactor, whose results agree with experimental data; the effect of the reactivity decay of dolomite, CaO/Ca12Al14O33, and limestone sorbents on sorption-enhanced hydrogen production and sorbent regeneration was studied. Solsvik and
Jakobsen (2011) studied the performance of a combined catalyst/sorbent pellet design for the SE-SMR process; different levels of mathematical model complexity were examined and parameter sensitivity analyses performed, showing that the performance of the combined pellet is promising compared to the conventional two-pellet design. Reijers et al. (2009a, b) built a one-dimensional reactor model to describe the performance of an SE-SMR and water–gas shift reactor, verified it against an analytical solution for the increase in CH4 conversion over the bed, and finally validated it against SE-SMR laboratory-scale experiments. Solieman et al. (2009) presented an Aspen Plus analysis of the relation between different process conditions and parameters during both adsorption and desorption modes; a relatively high methane reforming conversion of 85 % could be achieved at 600 °C, 17 bar, and a steam-to-carbon ratio (S/C) of 3, and compared to Li2ZrO3 and BaO, CaO was found to be the most suitable sorbent for achieving the targeted 85 % carbon capture ratio. Wang et al. (2011) developed a three-dimensional (3D) Eulerian two-fluid model with an in-house code and combined it with reaction kinetics to simulate the Ca-based SE-SMR process. Jakobsen and Halmøy (2009) built an SE-SMR reactor model comprising a simplified mathematical representation of the flow regime, differential equations for mass and heat transfer, a sub-model for chemical reaction kinetics, and absorption equilibria; the model was used to investigate various operational modes of the reformer and to compare reformer performance with various sorbents (Li4SiO4, Na2ZrO3, CaO). Di Carlo et al. (2010) investigated the SE-SMR process numerically with a computational fluid dynamics Eulerian–Eulerian code; a dry hydrogen mole fraction of >0.93 is predicted at temperatures of 900 K and a superficial gas velocity of 0.3 m/s with a dolomite/catalyst ratio >2. Fernandez et al. (2012) present a dynamic pseudo-homogeneous model describing the operation of a packed bed reactor in which the SE-SMR reaction is carried out under adiabatic conditions; the results demonstrate that the sorption-enhanced reforming process can yield a CH4 conversion and H2 purity of up to 85 % and 95 %, respectively, at 923 K and 3.5 MPa, a steam/carbon ratio of 5, and a space velocity of 3.5 kg/(m2 s). One process that utilizes natural gas, designated the Zero Emission Gas Power Project (ZEG), is being led by the Institute of Gas Technology in cooperation with Christian Michelsen Research AS and Prototech AS in Norway. A brief discussion of the process may be found on the Internet, and an update on the status of the project was recently presented by Johnsen (2007). A number of candidate sorbents have been considered, with Arctic dolomite, which does not require pretreatment for sulfur removal, receiving the most attention. The H2 is to be used to produce electricity in a high-temperature solid oxide fuel cell, with the exhaust heat used for sorbent regeneration; electrical efficiencies from 50 % to 80 %, based on the net power output (LHV) of four process configurations with varying degrees of heat integration, are reported. The other sorption-enhanced H2 production process from natural gas is the Pratt and Whitney Rocketdyne (PWR) process, which is now in the pilot stage.
While few details have been released, the company claims a 90 % size reduction, 30–40 % reduction in capital costs, 5–20 % higher H2 yield, and reduced product purification requirements that will lead to a smaller PSA system. The comparisons are relative to a standard steam methane reforming process with PSA purification. Upon completion of the current pilot tests, PWR plans to construct a 5 MM scf/d commercial demonstration plant (Stewart PAE and WR, 2007, Personal Communication).

Reactivity of CaO Sorbents Throughout Cyclic Calcination–Carbonation (CC) Reactions
A critical challenge for applications of the sorption-enhanced gasification process is the durability of the CaO sorbent activity (Florin and Harris 2008). It has been estimated that the CO2 capture process would not be economical unless the CaO conversion after 20 cycles increased to a value of at least 0.45. However, previous studies show that CaO sorbents lose activity dramatically during cyclic CC reactions,
which would increase both the consumption of fresh sorbent and the storage of spent sorbent, thereby reducing the process economics and creating environmental problems. The reasons for the reactivity loss of calcium-based sorbents can be summarized as follows: (i) Thermodynamic equilibrium limitation. Higher temperatures favor H2 generation; however, increasing the temperature at constant total pressure limits the capture of CO2 by CaO sorbents. (ii) Tar and coke formation. Interaction between CaO and the tar and coke is expected to hamper CO2 capture (Delgado et al. 1996); there is a trade-off between the optimal temperatures for eliminating tar and decomposing coke and for maximizing CO2 capture by CaO. (iii) Sintering of sorbents. Sintering reduces both surface area and pore volume, which in turn affects the rate and extent of the gas–solid reactions. (iv) Decay in reactivity through multiple CO2 capture-and-release cycles. Abanades and Alvarez (2003) concluded that the decay in activity over CC cycles is due to a decrease in microporosity and an increase in mesoporosity. They proposed a simple equation to estimate the CaO conversion XN after the Nth CC cycle, with values of fm = 0.77 and fw = 0.17 fitting most experimental data, both their own and that of previous researchers:

XN = fm^N (1 − fw) + fw  (3)
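A quick evaluation of Eq. (3) with the fitted constants quoted above shows how rapidly the conversion decays toward the asymptote fw, and why the 0.45-after-20-cycles target mentioned earlier is hard to meet with an unmodified sorbent. The snippet below is simply a direct evaluation of that empirical equation.

```python
# CaO conversion after N calcination-carbonation cycles, Eq. (3) of the text.
FM, FW = 0.77, 0.17   # fitted constants reported by Abanades and Alvarez (2003)

def cao_conversion(n_cycles, fm=FM, fw=FW):
    return fm ** n_cycles * (1.0 - fw) + fw

for n in (1, 5, 10, 20, 50):
    print(f"cycle {n:2d}: X_N = {cao_conversion(n):.2f}")
# By ~20 cycles X_N has already decayed to ~0.17-0.18, well below the 0.45 target.
```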

In order to improve the reactivity of calcium-based sorbents, various methods have been proposed, including (i) mild calcination conditions, (ii) steam/water hydration or addition, (iii) the use of nanosized sorbent particles, and (iv) thermal pretreatment. Barker (1974) hypothesized that if the particle size (diameter) of the CaO is smaller than the product layer thickness that can form on a single particle, then 100 % conversion could be achieved, and reported a conversion of 0.93 after 24 h of carbonation, maintained for 30 reaction cycles. The use of mild calcination conditions, i.e., inert atmospheres (N2 or Ar) and low temperatures (700 °C), was reported to produce a more reactive sorbent (Hughes et al. 2004); however, it may then be necessary to use steam as a diluent gas in the regenerator to lower the CO2 partial pressure while still obtaining high-purity CO2. The introduction of a water hydration step, or the use of steam as a "carbonation catalyst," has been reported to enhance CO2 capture through multiple reaction cycles (Hughes et al. 2004; Kuramoto et al. 2003; Manovic and Anthony 2007). Rong et al. (2013) studied the effects of hydration temperature, steam concentration, and hydration frequency on sorbent reactivity during 10 carbonation–calcination cycles using a pressurized thermogravimetric analyzer, with reagent-grade CaCO3 as the precursor, at atmospheric pressure. In comparison to other steam reactivation strategies, such as steam addition during the carbonation and calcination steps, separate steam hydration after calcination showed excellent reactivation performance. In conclusion, the development of a CO2 sorbent that resists physical deterioration and maintains high chemical reactivity through multiple CO2 capture-and-release cycles is the limiting step in the scale-up and commercial operation of the sorption-enhanced H2 production process.
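The benefit of steam dilution during mild, low-temperature calcination can be seen from the CaCO3 calcination equilibrium, whose equilibrium CO2 partial pressure rises steeply with temperature. The sketch below gives a rough estimate from approximate reaction enthalpy and entropy values (about 178 kJ/mol and 160 J/(mol·K)); these assumed constants, the neglect of their temperature dependence, and the ideal-gas treatment make this an order-of-magnitude illustration only, not part of any cited study.

```python
import math

# Rough equilibrium CO2 partial pressure over CaO/CaCO3 (CaCO3 -> CaO + CO2).
DH = 178.0e3   # J/mol, approximate calcination enthalpy (assumed T-independent)
DS = 160.0     # J/(mol K), approximate calcination entropy change
R = 8.314      # J/(mol K)

def p_co2_eq_atm(temp_k):
    dG = DH - temp_k * DS                 # approximate Gibbs energy at temp_k
    return math.exp(-dG / (R * temp_k))   # K = p_CO2 (atm) for this reaction

for t_c in (700, 800, 900):
    t_k = t_c + 273.15
    print(f"{t_c} C: p_CO2,eq ~ {p_co2_eq_atm(t_k):.2f} atm")
# At ~700 C calcination requires a low CO2 partial pressure (hence steam dilution);
# near 900 C it proceeds even under roughly 1 atm of CO2.
```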

Future Directions
Given the advantages inherent in fossil fuels, such as their availability, relatively low cost, and the existing infrastructure for delivery and distribution, they are likely to play a major role in energy and H2 production in the near- to medium-term future. However, H2 production from fossil fuels releases large CO2 emissions to the atmosphere, which may diminish the environmental appeal of H2 as an ecologically clean fuel. As a result, H2 production from fossil fuels must address the CO2 capture problem in the long term.


Biomass is potentially a reliable energy resource for hydrogen production. Biomass is renewable, abundant, and easy to use. Over the life cycle, net CO2 emission is nearly zero owing to the photosynthesis of green plants. Although the yield of H2 from biomass is low, since the hydrogen content of biomass is low to begin with (approximately 6 % vs. 25 % for methane) and the energy content is low due to the roughly 40 % oxygen content of biomass, the thermochemical pyrolysis and gasification routes to hydrogen are economically viable and are expected to become competitive with conventional natural gas reforming. Biological dark fermentation is also a promising hydrogen production method for future commercial use. With further development of these technologies, biomass will play an important role in the development of a sustainable hydrogen economy. Hydrogen production from water electrolysis is commercially available. Regarding CO2 emissions, electricity produced from renewable resources (such as wind, solar, hydro, biomass, and tidal power) is favored for water electrolysis. Thermochemical water decomposition is an alternative process competitive with water electrolysis. Nuclear power systems have great potential to be integrated with H2 production from water decomposition. The Advanced High-Temperature Reactor (AHTR) concept, proposed for the US Department of Energy’s Generation IV nuclear plant development program, is specifically designed for H2 production (via high-temperature water electrolysis or thermochemical cycles). Thermochemical water-splitting cycles, such as the UT-3 cycle and the sulfur–iodine cycle, can potentially achieve higher overall energy efficiencies (around 50 %) than electrolysis-based systems (around 24 %). However, a major shift away from the negative public perception of nuclear energy would be necessary in order to base a long-term energy scenario on the nuclear–hydrogen option. In addition, H2 production by direct water splitting via the solar photocatalysis route could become favorable if conversion efficiencies were increased by a factor of 2–3. It is anticipated that low-cost, environmentally friendly photocatalytic water splitting will play an important role in hydrogen production and contribute much to the coming hydrogen economy; however, it is still very far from practical utilization. Sorption-enhanced H2 production with in situ CO2 capture, followed by CO2 sequestration in geologic formations (e.g., deep coal seams, depleted oil and gas reservoirs, and salt domes), the ocean, aquifers, terrestrial ecosystems, etc., provides a promising solution to the CO2 released during H2 production from fossil fuels. For future development, challenges for CO2 sequestration, such as bringing its cost down and understanding the reservoir options (e.g., size, permanence, and, most importantly, environmental effects), should receive significant attention, in addition to improving the cyclic reactivity of CaO sorbents to practical levels.

References
Abanades JC, Alvarez D (2003) Conversion limits in the reaction of CO2 with lime. Energy Fuel 17(2):308–315 Argun H, Kargi F (2011) Bio-hydrogen production by different operational modes of dark and photofermentation: an overview. Int J Hydrog Energy 36(13):7443–7459 Arstad B, Prostak J, Blom R (2012) Continuous hydrogen production by sorption enhanced steam methane reforming (SE-SMR) in a circulating fluidized bed reactor: sorbent to catalyst ratio dependencies. Chem Eng J 189–190:413–421 Bailey R (2001) Projects in development Kentucky pioneer energy lima energy. Gasification Technologies Balasubramanian B et al (1999) Hydrogen from methane in a single-step process. Chem Eng Sci 54(15–16):3543–3552
Barelli L et al (2008) Hydrogen production through sorption-enhanced steam methane reforming and membrane technology: a review. Energy 33(4):554–570 Barker R (1974) The reactivity of calcium oxide towards carbon dioxide and its use for energy storage. J Appl Chem Biotech 24(4–5):221–227 Benemann JR (1998) Process analysis and economics of biophotolysis of water. IEA Hydrogen Program, Paris Brun-Tsekhovoi A et al (1988) The process of catalytic steam-reforming of hydrocarbons in the presence of carbon dioxide acceptor. In: Hydrogen energy progress VII, Proceedings of the 7th world hydrogen energy conference Calzavara Y et al (2005) Evaluation of biomass gasification in supercritical water process for hydrogen production. Energy Convers Manag 46(4):615–631 Collot A-G (2006) Matching gasification technologies to coal properties. Int J Coal Geol 65(3–4):191–212 Delgado J, Aznar MP, Corella J (1996) Calcined dolomite, magnesite, and calcite for cleaning hot gas from a fluidized bed biomass gasifier with steam: life and usefulness. Ind Eng Chem Res 35(10):3637–3643 Demirbas MF (2006) Hydrogen from various biomass species via pyrolysis and steam gasification processes. Energy Sources Part A 28(3):245–252 Di Carlo A et al (2010) Numerical investigation of sorption enhanced steam methane reforming process using computational fluid dynamics eulerian–eulerian code. Ind Eng Chem Res 49(4):1561–1576 Ewan BCR, Allen RWK (2005) A figure of merit assessment of the routes to hydrogen. Int J Hydrog Energy 30(8):809–819 Fan LS, Li FX, Ramkumar S (2008) Utilization of chemical looping strategy in coal gasification processes. Particuology 6(3):131–142 Fc D, Yf Y (2006) Hydrogen production and storage technologies. Chemical Industry Press, Beijing Fernandez JR, Abanades JC, Murillo R (2012) Modeling of sorption enhanced steam methane reforming in an adiabatic fixed bed reactor. Chem Eng Sci 84:1–11 Florin NH, Harris AT (2008) Enhanced hydrogen production from biomass with in situ carbon dioxide capture using calcium oxide sorbents. Chem Eng Sci 63(2):287–316 Fujimoto S et al (2007) A kinetic study of in situ CO2 removal gasification of woody biomass for hydrogen production. Biomass Bioenergy 31(8):556–562 Funk JE (2001) Thermochemical hydrogen production: past and present. Int J Hydrog Energy 26(3):185–190 Gao L (2009) A study of the reaction chemistry in the production of hydrogen from coal using a novel process concept. Imperial College London Garcia LA et al (2000) Catalytic steam reforming of bio-oils for the production of hydrogen: effects of catalyst composition. Appl Catal A Gen 201(2):225–239 Garcı́a-Ibañez P, Cabanillas A, Sánchez JM (2004) Gasification of leached orujillo (olive oil waste) in a pilot plant circulating fluidised bed reactor. Preliminary results. Biomass Bioenergy 27(2):183–194 Gorin E, Retallick WB (1963) Method for the production of hydrogen. US Patents. p. 3,108,857 Guan J et al (2007) Thermodynamic analysis of a biomass anaerobic gasification process for hydrogen production with sufficient CaO. Renew Energy 32(15):2502–2515 Guo Y, Fang W, Lin R (2005) Zhejiang daxue xuebao (gongxue ban). J Zhejiang Univ (Eng Sci) 39:538–541 Guo LJ et al (2007) Hydrogen production by biomass gasification in supercritical water: a systematic experimental and analytical study. Catal Today 129(3–4):275–286
Guo Y et al (2010) Review of catalytic supercritical water gasification for hydrogen production from biomass. Renew Sustain Energy Rev 14(1):334–343 Hallenbeck PC, Benemann JR (2002) Biological hydrogen production; fundamentals and limiting processes. Int J Hydrog Energy 27(11–12):1185–1193 Han J, Kim H (2008) The reduction and control technology of tar during biomass gasification/pyrolysis: an overview. Renew Sustain Energy Rev 12(2):397–416 Han L et al (2010) Influence of CaO additives on wheat-straw pyrolysis as determined by TG-FTIR analysis. J Anal Appl Pyrolysis 88(2):199–206 Han L et al (2013) H2 rich gas production via pressurized fluidized bed gasification of sawdust with in situ CO2 capture. Appl Energy 109:36–43 Hanaoka T et al (2005) Hydrogen production from woody biomass by steam gasification using a CO2 sorbent. Biomass Bioenergy 28(1):63–68 Harrison DP (2008) Sorption-enhanced hydrogen production: a review. Ind Eng Chem Res 47(17):6486–6501 Holladay JD, Wang Y, Jones E (2004) Review of developments in portable hydrogen production using microreactor technology. Chem Rev 104(10):4767–4790 Holladay JD et al (2009) An overview of hydrogen production technologies. Catal Today 139(4):244–260 Hufton J et al (2000) Sorption enhanced reaction process (SERP) for the production of hydrogen. In: Proceedings of the 2000 US DOE hydrogen program review Hughes RW et al (2004) Improved long-term conversion of limestone-derived sorbents for in situ capture of CO2 in a fluidized bed combustor. Ind Eng Chem Res 43(18):5529–5539 Hydrogen FC (2005) Infrastructure technologies program: multi-year research, development and demonstration plan. US Department of Energy, Energy Efficiency and Renewable Energy, Washington, DC Jakobsen JP, Halmøy E (2009) Reactor modeling of sorption enhanced steam methane reforming. Energy Procedia 1(1):725–732 Johnsen K (2007) Sorption enhanced steam methane reforming- reactor configurations and sorbent development. In: The third international workshop on in-situ CO2 removal Johnsen K et al (2006) Sorption-enhanced steam reforming of methane in a fluidized bed reactor with dolomite as -acceptor. Chem Eng Sci 61(4):1195–1202 Jung GY et al (2002) Hydrogen production by a new chemoheterotrophic bacterium Citrobacter sp. Y19. Int J Hydrog Energy 27(6):601–610 Kalinci Y, Hepbasli A, Dincer I (2009) Biomass-based hydrogen production: a review and analysis. Int J Hydrog Energy 34(21):8799–8817 Kerby RL, Ludden PW, Roberts GP (1995) Carbon monoxide-dependent growth of Rhodospirillum rubrum. J Bacteriol 177(8):2241–2244 Koppatz S et al (2009) H2 rich product gas by steam gasification of biomass with in situ CO2 absorption in a dual fluidized bed system of 8 MW fuel input. Fuel Process Technol 90(7–8):914–921 Krummenacher JJ, West KN, Schmidt LD (2003) Catalytic partial oxidation of higher hydrocarbons at millisecond contact times: decane, hexadecane, and diesel fuel. J Catal 215(2):332–343 Kumar RV, Cole JA, Lyon RK (1999) Unmixed reforming: an advanced steam reforming process. In: Preprints of symposia, 218th. ACS national meeting Kuramoto K et al (2003) Repetitive carbonation–calcination reactions of Ca-based sorbents for efficient CO2 sorption at elevated temperatures and pressures. Ind Eng Chem Res 42(5):975–981 Levin DB, Chahine R (2010) Challenges for renewable hydrogen production from biomass. Int J Hydrog Energy 35(10):4962–4969 Li Z-S, Cai N-S (2007) Modeling of multiple cycles for sorption-enhanced steam methane reforming and sorbent regeneration in fixed bed reactor. 
Energy Fuel 21(5):2909–2918
Licht S (2005) Solar water splitting to generate hydrogen fuel – a photothermal electrochemical analysis. Int J Hydrog Energy 30(5):459–470 Lin SY et al (2001) Hydrogen production from hydrocarbon by integration of water-carbon reaction and carbon dioxide removal (HyPr-RING method). Energy Fuel 15(2):339–343 Loo SV, Koppejan J (2008) The handbook of biomass combustion and co-firing. Earthscan, London Mahishi MR, Goswami DY (2007) An experimental study of hydrogen production by gasification of biomass in the presence of a sorbent. Int J Hydrog Energy 32(14):2803–2808 Manovic V, Anthony EJ (2007) Steam reactivation of spent CaO-based sorbent for multiple CO2 capture cycles. Environ Sci Technol 41(4):1420–1425 Markov SA et al (1997) Photoproduction of hydrogen by cyanobacteria under partial vacuum in batch culture or in a photobioreactor. Int J Hydrog Energy 22(5):521–524 Milne TA, Abatzoglou N, Evans RJ (1998) Biomass gasifier“ tars”: their nature, formation, and conversion. National Renewable Energy Laboratory, Golden Minowa T, Zhen F, Ogi T (1998) Cellulose decomposition in hot-compressed water with alkali or nickel catalyst. J Supercrit Fluids 13(1–3):253–259 Mok WSL, Antal MJ (1992) Uncatalyzed solvolysis of whole biomass hemicellulose by hot compressed liquid water. Ind Eng Chem Res 31(4):1157–1161 Nawaz M, Ruby J (2001) Zero emission coal alliance project conceptual design and economics. In: 26th international technical conference on coal utilization & fuel systems, (The Clearwater Conference) Norbeck JM et al (1996a) Hydrogen fuel for surface transportation, vol 160. SAE, Warrendale Norbeck J et al (1996b) Hydrogen fuel for surface transportation. Society of Automotive Engineers, Warrendale Ochoa-Fernández E et al (2007) Process design simulation of H2 production by sorption enhanced steam methane reforming: evaluation of potential CO2 acceptors. Green Chem 9(6):654–662 Padró CEG, Putsche V (1999) Survey of the economics of hydrogen technologies. National Renewable Energy Laboratory, Golden Pfeifer C, Puchner B, Hofbauer H (2009) Comparison of dual fluidized bed steam gasification of biomass with and without selective transport of CO2. Chem Eng Sci 64(23):5073–5083 Qinhui W et al (2003) New near-zero emissions coal utilization technology with combined gasification and combustion. Power Eng 23(5):2711–2715 Reijers HTJ et al (2009a) Modeling study of the sorption-enhanced reaction process for CO2 capture. I model development and validation. Ind Eng Chem Res 48(15):6966–6974 Reijers HTJ et al (2009b) Modeling study of the sorption-enhanced reaction process for CO2 capture. II. Application to steam-methane reforming. Ind Eng Chem Res 48(15):6975–6982 Resende FLP, Savage PE (2010) Effect of metals on supercritical water gasification of cellulose and lignin. Ind Eng Chem Res 49(6):2694–2700 Rizeq R, Lyon R, Zamansky V (2001) Fuel-flexible AGC technology for H2, power, and sequestrationready CO2. In: The proceedings of the 26th international technical conference on coal utilization & fuel systems, Clearwater Rong N et al (2013) Steam hydration reactivation of CaO-based sorbent in cyclic carbonation/calcination for CO2 capture. Energy Fuel 27:5332 Rostrup-Nielsen JR (1984) Catalytic steam reforming. Springer, Berlin Shafirovich E, Varma A (2009) Underground coal gasification: a brief review of current status. Ind Eng Chem Res 48(17):7865–7875 Shen Y, Yoshikawa K (2013) Recent progresses in catalytic tar elimination during biomass gasification or pyrolysis – a review. 
Renew Sustain Energy Rev 21:371–392
Simell PA et al (1997) Catalytic decomposition of gasification gas tar with benzene as the model compound. Ind Eng Chem Res 36(1):42–51 Solieman AAA et al (2009) Calcium oxide for CO2 capture: operational window and efficiency penalty in sorption-enhanced steam methane reforming. Int J Greenhouse Gas Control 3(4):393–400 Solsvik J, Jakobsen HA (2011) A numerical study of a two property catalyst/sorbent pellet design for the sorption-enhanced steam–methane reforming process: modeling complexity and parameter sensitivity study. Chem Eng J 178:407–422 Spritzer MH, Hong GT (2003) Supercritical water partial oxidation. In: Proceedings of the 2002 US DOE hydrogen program review. NREL/CP-570-30535 Sutton D, Kelleher B, Ross JRH (2001) Review of literature on catalysts for biomass gasification. Fuel Process Technol 73(3):155–173 TeGrottehuis W, King D, Brooks K (2002) Optimizing microchannel reactors by trading-off equilibrium and reaction kinetics through temperature management. In: 6th international conference on microreaction technology Troshina O et al (2002) Production of H2 by the unicellular cyanobacterium Gloeocapsa alpicola CALU 743 during fermentation. Int J Hydrog Energy 27(11–12):1283–1289 Turner J et al (2008) Renewable hydrogen production. Int J Energy Res 32(5):379–407 Ueno Y, Otsuka S, Morimoto M (1996) Hydrogen production from industrial wastewater by anaerobic microflora in chemostat culture. J Ferment Bioeng 82(2):194–197 Utgikar V, Thiesen T (2006) Life cycle assessment of high temperature electrolysis for hydrogen production via nuclear energy. Int J Hydrog Energy 31(7):939–944 Wang Z et al (2006) Thermodynamic equilibrium analysis of hydrogen production by coal based on Coal/ CaO/H2O gasification system. Int J Hydrog Energy 31(7):945–952 Wang Y, Chao Z, Jakobsen H (2011) Numerical study of hydrogen production by the sorption-enhanced steam methane reforming process with online CO2 capture as operated in fluidized bed reactors. Clean Techn Environ Policy 13(4):559–565 Wang Q et al (2014) Enhanced hydrogen-rich gas production from steam gasification of coal in a pressurized fluidized bed with CaO as a CO2 sorbent. Int J Hydrog Energy 39:5781 Wei LG et al (2008) Hydrogen production in steam gasification of biomass with CaO as a CO2 absorbent. Energy Fuel 22(3):1997–2004 Wilhelm DJ et al (2001) Syngas production for gas-to-liquids applications: technologies, issues and outlook. Fuel Process Technol 71(1–3):139–148 Williams R (1933) Hydrogen production. US Patents. p. 1,938,20 Wolfrum EJ, et al (2003) Biological water gas shift development. DOE hydrogen, fuel cell, and infrastructure technologies program review Xu C et al (2010) Recent advances in catalysts for hot-gas removal of tar and NH3 from biomass gasification. Fuel 89(8):1784–1795 Yang H et al (2006) Pyrolysis of palm oil wastes for enhanced production of hydrogen rich gases. Fuel Process Technol 87(10):935–942 Yeboah Y et al (2002) Hydrogen from biomass for urban transportation. In: Proceedings of the US DOE hydrogen program review Yu D, Aihara M, Antal MJ (1993) Hydrogen production by steam reforming glucose in supercritical water. Energy Fuel 7(5):574–577 Ziock H-J, Lackner KS, Harrison DP (2001) Zero emission coal power, a new concept. In: Proceedings of the first national conference on carbon sequestration


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_30-2 # Springer Science+Business Media New York 2015

Nuclear Energy and Environmental Impact
K. S. Rajaa*, B. Pesica and M. Misrab
a Chemical and Materials Engineering, University of Idaho, Moscow, ID, USA
b Department of Metallurgical Engineering, University of Utah, Salt Lake City, UT, USA

Abstract
Nuclear energy is attracting revived interest as a potential alternative for electric power generation in light of increased concerns about global warming. Compared to the energy produced by combustion of a carbon atom in coal, fission of a U-235 atom produces about 10 million times more energy. However, storage of nuclear waste is an environmental issue. This chapter has four sections, with a major focus on an introduction to nuclear power plants and the reprocessing of spent nuclear fuels. Different nuclear fuel cycles and nuclear power reactors are introduced in the first section, and the cost–benefits of different energy sources are compared. Fuel burnup and formation of fission products are discussed along with operational impacts and risk analyses in the second section. The third section discusses the design of nuclear structural components and various degradation modes. Section four discusses reprocessing issues of spent nuclear fuels. Reprocessing of spent nuclear fuel may be an economically viable option and also reduces the high-level radioactive load in nuclear waste repositories. However, there is concern about proliferation of weapons-grade plutonium separated during reprocessing. Containment of radionuclides in different waste forms is also discussed in this section.

Introduction to Nuclear Energy
The energy released by the radioactive decay of heavy metals such as uranium, plutonium, and thorium can be converted into a useful form. Radioactivity occurs by emission of charged particles (such as α and β) and electromagnetic waves (γ rays). For heavier nuclei (elements with atomic number > 40), more neutrons are required for a stable configuration so that the electrostatic repulsion force between the protons can be overcome (Jevremovic 2005). When the nucleus has too many or too few neutrons, it is in a nonequilibrium condition. In order to reach a stable configuration, the nucleus undergoes a spontaneous transformation by rearranging its constituent particles. This is accomplished by the emission of an alpha particle, a beta particle (either β− or β+), a neutron, or a proton. Depending on energy conservation, gamma radiation may or may not accompany the radioactive decay. In brief, when atoms containing nuclei in a nonequilibrium condition move toward a stable condition, the excess energy of the nuclei is emitted as radiation. In this process, the material disintegrates. According to Einstein’s relation (E = mc²), the disintegrated matter is converted into energy. For example, burning 1 kg of uranium in a nuclear reactor converts about 0.87 g of matter into energy, which amounts to (0.87 × 10⁻³ kg) × (3.0 × 10⁸ m/s)² ≈ 7.8 × 10¹³ J. For comparison, combustion of 1 kg of gasoline releases only about 5 × 10⁷ J of energy, six orders of magnitude less than 1 kg of uranium (Murray 2001). In addition to high specific energy, nuclear energy has the advantage of not releasing carbon dioxide into the atmosphere. Combustion of 1 kg of gasoline releases about 3.2 kg of carbon dioxide to the environment. An anthracite coal–based power plant releases about 1.2 kg of CO2 for every kWh of
electricity generated, whereas the lifetime CO2 emission of nuclear power plants, considering the electricity used for mining and processing operations from fossil fuel power plants, is about 100–140 g of CO2 per kWh of electricity generated (Storm van Leeuwen and Smith). The major advantages of nuclear energy are:
• High specific energy
• No CO2 emission
• Spent fuel can be reprocessed and reused, thus conserving natural resources
• Possibility to produce more nuclear fuel than is consumed by using fast breeder reactors
• Lower operating cost in terms of fuel cost compared to fossil fuel power plants
Disadvantages are:

• Large capital cost and longer construction time of power plants
• Long-term storage of nuclear waste
• Exposure to radioactivity in case of accidents
• Potential proliferation of weapons-grade fuel during reprocessing

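To make the mass–energy comparison above concrete, the following minimal Python sketch reproduces the figures quoted in the text (0.87 g of matter converted per kilogram of uranium burned, versus roughly 5 × 10⁷ J per kilogram of gasoline); the gasoline value is taken directly from the passage above rather than derived.

```python
C = 3.0e8  # speed of light, m/s

# Matter converted to energy when 1 kg of uranium is burned in a reactor (value from the text)
mass_converted_kg = 0.87e-3
energy_uranium = mass_converted_kg * C**2    # E = m * c^2, in joules

energy_gasoline = 5.0e7                      # J released per kg of gasoline (value from the text)

print(f"Energy from 1 kg of uranium : {energy_uranium:.2e} J")   # ~7.8e+13 J
print(f"Energy from 1 kg of gasoline: {energy_gasoline:.1e} J")
print(f"Uranium/gasoline ratio: {energy_uranium / energy_gasoline:.1e}")  # ~1.6e+06, i.e., six orders of magnitude
```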
Nuclear power plants attract more safety and environmental concerns from the public than other power plants. This chapter addresses some of the environmental issues associated with nuclear power generation. The first three sections introduce nuclear fuel cycles, nuclear power reactors, and issues of operational safety. Information on spent nuclear fuel reprocessing, waste management, and long-term storage is given in the last section.

Nuclear Fuel Cycles

Conversion of nuclear energy can be achieved by fission or fusion reactions. Most commercial nuclear power reactors operate on the basis of the nuclear fission reaction. The average energy of the neutrons used for power generation is about 0.1 eV; these are called thermal neutrons. Neutrons that have energies on the order of 2 MeV are called fast neutrons. Uranium is the most common fissile material used in nuclear reactors. Naturally mined uranium contains 99.24 % U-238, 0.72 % U-235, and 0.0054 % U-234. U-235 is a fissile isotope. Fissile isotopes are those that undergo fission upon absorption of slow (thermal) neutrons.

It is well established by molecular dynamics simulations and experimental results that at room temperature water (in both the gas and liquid phases) adsorbs preferentially on Si sites by a dissociative chemisorption process (Cicero et al. 2004; Liu et al. 2012). The reactivity of the Si-terminated SiC surface with water manifests itself as a corrosion process (by formation of Si–H and Si–OH bonds, reactions 10 and 11). Recently, nanocrystalline 3C–SiC has been used as an electrode for high-efficiency electrochemical hydrogen evolution (He et al. 2012). On the other hand, water dissociation on the C-terminated surface is reported to be energetically unfavorable even at high temperature (Liu et al. 2010). The relaxed H···H distance between H3O+ and the Si–H site is reported to be 0.125 nm, whereas for the C–H site the corresponding H···H distance is 0.275 nm. The larger H2O–SiC distance of the C-terminated surface and its relatively small binding energy are consistent with the lower reactivity of this surface toward water.

Higher-chromium ferritic steels (>17 % Cr) are considered to have better SCC resistance than austenitic stainless steels. This is true only when the Ni, Cu, and Co contents are below certain levels (Bond and Dundar 1977). However, 8–12 % Cr steels are subject to both SCC and hydrogen embrittlement. Apart from environmental factors such as dissolved oxygen and the presence of sulfate and chloride ions, the microstructural condition of the material also controls the cracking behavior. Untempered martensite and acicular bainite phases are found to be more prone to hydrogen cracking than tempered martensite and bainite + ferrite phases (Kerr et al. 1987). Generally, pitting is observed to be associated with the initiation of SCC or corrosion fatigue in this type of material. Intergranular cracking is mostly observed along the prior austenite grain boundaries. However, it is not very clear why the prior austenite grain boundaries are the most preferred sites for cracking and not other boundaries such as interlath boundaries or interfaces between two martensite packets. Possibly, certain solute elements segregated at the prior austenite grain boundaries have a greater affinity for hydrogen, as discussed by Leslie (1977). However, Auger electron spectroscopy carried out on these fracture surfaces did not shed much light on this aspect. The hydrogen cracking resistance of ferritic/martensitic steel is significant for fusion wall applications because direct transmutation, water–lithium interactions, radiolysis of water, and corrosion could charge hydrogen into the steel. Hydrogen cracking could be enhanced by other irradiation damage mechanisms such as RIS and increased defect density.


Environmental Aspect
Radiolysis
Radiolysis is a complex issue affected by water chemistry, neutron flux (not fluence), flow rate, temperature, etc. Radiation causes decomposition of water into many species, which affect the corrosion potential. At high hydrogen levels (>1 ppm), radiolysis is sufficiently suppressed that it has very little effect on the corrosion potential (Maziasz and McHargue 1987). The interior of cracks was not found to be polarized by radiation, as the corrosion potentials of cracks and tight crevices were not altered.
Flux Dependence
The structural materials are exposed to temperatures of 290–350 °C in water reactors. In the case of a BWR, the temperature is constant at 288 °C, whereas in a PWR, the temperature varies with location up to a maximum of 400 °C in the baffle plates. The fast flux in a BWR is around 7 × 10¹⁷ n/m²·s (E > 1 MeV), and in a PWR it is 20–30 % higher than in a BWR. Radiation damage in materials is quantified in terms of displacements per atom (dpa), as calculated by approved methods. Empirically, 1.4 dpa per 10²⁵ neutrons (n)/m² (E > 1 MeV) is used for LWRs. From this, the fast flux can be back-calculated to be about 10⁻⁷ dpa/s in the core of LWRs and 1.5–4 × 10⁻⁷ dpa/s in test reactors. In fast reactors, the fast flux is approximately 10⁻⁶ dpa/s, and the temperature also is higher (>370 °C). So, the data generated in fast reactors cannot be compared directly with those of LWRs. The thermal-to-fast flux ratio is also an important issue. Thermal neutrons are those which are in thermal equilibrium with neighboring atoms, with energies below 0.5 eV.
Radiation Water Chemistry and Corrosion Potential
Radiation causes breakdown of water into primary species (H+, e⁻aq) and molecules such as H2O2, O2, H2, etc. The concentration of species is proportional to the square root of the radiation flux. Fast neutron radiation has a stronger effect on water chemistry than other types of radiation such as thermal neutrons, beta particles, and gamma radiation (Suzuki et al. 1991). This feature is because of the higher linear energy transfer (LET) and the higher flux of fast neutrons. It is generally believed that the corrosion potential has more influence than the concentration of oxidizing and reducing species in controlling SCC. The initial concentrations of oxygen and hydrogen are found to be important in determining the final corrosion potential after irradiation. Though a large increase in the concentration of some species occurs after irradiation, the change in corrosion potential is not drastic. When hydrogen is present at more than 200 ppb and at 0 ppb O2, there is no radiation-induced elevation of the corrosion potential, whereas the presence of H2O2 increases the corrosion potential.
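The empirical dpa conversion quoted above can be checked with a few lines of arithmetic; the sketch below assumes the values given in the text (1.4 dpa per 10²⁵ n/m² and a BWR fast flux of about 7 × 10¹⁷ n/m²·s) and simply multiplies them out.

```python
DPA_PER_FLUENCE = 1.4 / 1e25   # dpa per (n/m^2), E > 1 MeV, empirical LWR value from the text

def dpa_rate(fast_flux_n_per_m2_s):
    """Displacement damage rate (dpa/s) for a given fast neutron flux."""
    return fast_flux_n_per_m2_s * DPA_PER_FLUENCE

bwr_flux = 7e17                                                       # n/m^2 s, typical BWR fast flux from the text
print(f"BWR core damage rate: {dpa_rate(bwr_flux):.1e} dpa/s")        # ~1e-7 dpa/s, as stated above
print(f"Annual damage: {dpa_rate(bwr_flux) * 3.15e7:.1f} dpa/year")   # ~3 dpa per full-power year
```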

Crack Initiation and Propagation
It is generally observed that SCC initiation preferentially occurs at sites such as pits and second-phase particles. Preferential dissolution of secondary phases or inclusions creates a crevice where the local electrolyte chemistry and local strain level become more favorable for SCC initiation by a slip dissolution mechanism. In the case of IASCC, irradiated microstructural features (such as Cr depletion and Si and P segregation) and the presence of hard phases such as oxides make the crack initiation process much easier. Oxide particles effectively participate in IASCC initiation by two proposed mechanisms, as follows: (1) Oxides are hard to deform, so, under load, the shear stress at the oxide–matrix interface increases to very high levels as the ductile matrix around the particle deforms. This results in failure of the bonding, creating a crevice where the local chemistry of the electrolyte changes to a more conducive
condition for promoting SCC. (2) Alternatively, the oxide could fracture, creating a microcrack which can either extend into the matrix or create a very high stress intensity for easy SCC initiation. Strain at crack initiation (SCI) was proposed as the definition for IASCC initiation in slow strain rate tensile testing (SSRT) at a strain rate of 10⁻⁷ s⁻¹. It was defined as the strain at which the stress–strain curve of SSRTs began to depart from that of tensile tests, when plotted using the same coordinates. A higher SCI means that SCC initiation starts at a higher strain. Though the intergranular (IG) fracture ratio decreases with decreasing dissolved oxygen (DO), it increases again below 10 ppb of DO. These phenomena may indicate a continuum of IASCC initiation from BWR conditions to PWR conditions.
Crack Propagation: Gamma ray irradiation is not expected to affect the microstructure or microchemistry of the material. However, it decomposes water into many kinds of radiolytic products, of which hydrogen peroxide (H2O2) is very important to IASCC. In the 288 °C BWR environment, gamma irradiation accelerated the crack growth to varying degrees depending on the water chemistry, flux, etc. For example, the average crack growth rates in the unirradiated condition and under gamma irradiation at fluxes of 5 × 10⁶ and 9 × 10⁶ R/h were 7.2 × 10⁻¹⁰, 1 × 10⁻⁹, and 1.3 × 10⁻⁹ m/s, respectively. From these values, the crack growth rates in low-conductivity pure water appear to be only marginally affected by gamma ray irradiation. The effect of dissolved oxygen (DO) on crack velocity with additions of Na2SO4 is similar in both irradiated and unirradiated test conditions. Addition of sulfate ions had a greater effect in accelerating crack growth than did irradiation; DO had a similar effect, and suppressing the DO content decreased the crack growth rate. Though crack velocity increased with sulfate ions as in the unirradiated condition, DO also had a major effect in controlling the crack behavior in the irradiated condition. Nitrate additions were found to be less aggressive than sulfate additions in a BWR environment for 304 SS. Dissolved hydrogen showed a greater beneficial effect in suppressing crack growth. The mechanism of crack growth mitigation by hydrogen injection can be explained by analyzing the corrosion potential of the system. The presence of molecules like H2O2 and O2 raises the free corrosion potential into the cracking range, and hence the crack velocity is enhanced, following the slip dissolution model and Faraday’s law. When hydrogen is introduced into the environment, it assists the recombination of species and thus reduces the corrosion potential well below the cracking range. IASCC tests were carried out on irradiated stainless steel samples under BWR conditions using the slow strain rate testing method, with average crack growth data obtained by dividing the maximum crack depth by the total test duration. The maximum crack depth divided by the test time was suppressed by hydrogen water chemistry (HWC) below a fluence of 3 × 10²¹ neutrons (n)/cm², but not above 3 × 10²¹ n/cm². It was observed that variations in either fluence level (3 × 10²⁰–9 × 10²¹ n/cm²; E > 1 MeV) or flux level (1.5 × 10¹³–7.6 × 10¹³ n/cm²·s) did not affect the crack velocity drastically (a maximum of a factor of two).
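To put the crack growth rates quoted above into more familiar engineering units, the short sketch below converts the m/s values reported for the unirradiated and gamma-irradiated conditions into mm per year; the rates themselves are taken from the text.

```python
SECONDS_PER_YEAR = 3.15e7

# Average crack growth rates (m/s) from the text: unirradiated, and gamma fluxes of 5e6 and 9e6 R/h
growth_rates = {
    "unirradiated": 7.2e-10,
    "gamma 5e6 R/h": 1.0e-9,
    "gamma 9e6 R/h": 1.3e-9,
}

for condition, rate_m_per_s in growth_rates.items():
    mm_per_year = rate_m_per_s * SECONDS_PER_YEAR * 1e3
    print(f"{condition:>14s}: {mm_per_year:5.1f} mm/year")
```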

Critical Issues on Selection of Candidate Materials for Advanced Nuclear Reactors
Advanced systems selected for Generation IV reactors require high operating temperatures, in the range of 500–1000 °C depending on the coolant, and longer service lives. The fuels of the advanced reactors will have very high burnup capabilities and fast neutron spectra. The construction materials of Generation IV reactors will be exposed to severe environmental conditions in combination with increased radiation damage. Therefore, selection of structural materials for advanced reactors requires a thorough understanding of the materials’ behavior under the extreme service conditions. The structural materials of advanced nuclear reactors will undergo degradation primarily due to three factors, viz., (1) exposure to high temperature and service stresses (high-temperature degradation), (2) irradiation damage, and (3) interaction with service environments. The first two factors are common among all types of reactors, and therefore the data generated at high temperatures and irradiation levels relevant to the service conditions can be used for material qualification for different types of reactors,
as the operating temperature of most of the advanced reactors is in the range of 500–800 °C. However, the third factor, interaction with the environment, is reactor specific. The material should possess high resistance to corrosion attack in the service environment. Among the various types of advanced reactors, liquid metal (particularly liquid sodium and lead–bismuth eutectic)-cooled fast reactors are considered in this study. Some of the critical issues pertaining to each major degradation mode are discussed in this section. The materials considered for advanced reactor structural applications can be classified into three major categories, viz., (1) ferritic–martensitic-type Fe–Cr alloys, (2) austenitic alloys (stainless steels and Ni–Cr–Mo alloys), and (3) oxide dispersion strengthened (ODS) alloys. Refractory metal-based alloys are not considered in this work. Merits and disadvantages of the first two categories of materials are analyzed based on the critical degradation issues.
High-Temperature Degradation
Major Issues and Temperature Limits
The major issues of high-temperature degradation are phase stability, oxidation, and creep–fatigue interaction. It is widely believed that thermal effects will offset the irradiation effects at high temperatures because of increased diffusivities and stress relaxation effects. This may be true for annihilation of point defects; however, the effect of radiation-induced segregation could be aggravated at high temperatures. Available literature data indicate that the maximum service temperatures of different alloys are limited by chemistry and microstructure. For example, ferritic/martensitic steel with a maximum Cr content of 12 % can serve up to 650 °C, austenitic stainless steels up to 800 °C, nickel-based alloys up to 900 °C, and ODS alloys up to 1050 °C. The creep–fatigue interaction is considered to be of primary importance.
Fatigue, Creep, and Creep–Fatigue Interaction
Creep or creep–fatigue interaction of structural materials at elevated temperatures over long periods of time in advanced reactor environments is a critical issue. High temperatures and the temperature gradients during start-ups, in-service operation, and shutdowns induce both static and cyclic thermal stresses. These constitute the stress factors that generate creep and creep–fatigue interaction. In addition, components such as thread roots in steam turbine casing bolts, pipes, and branch connections in reactors endure multiaxial stresses. Earlier studies (Brinkman and Korth 1973) investigated the effect of heat-to-heat variation on the fatigue and creep–fatigue resistance of type 304 stainless steel at 593 °C. Carbide precipitation was considered the reason for the increasing low-cycle fatigue (LCF) resistance. Additionally, a fairly uniform distribution of inter- and intragranular M23C6 carbides was considered to increase the resistance to the tensile hold time effect. Generally, zero hold time tests revealed transgranular fracture surfaces, while intergranular features were obtained even with hold times as short as 0.01 h. This is also illustrated by the studies of Van Der Schaaf (1988) (Fig. 5). The creep–fatigue failure can be categorized into three modes: fatigue-dominated failure with almost entirely transgranular features, creep–fatigue interaction (both transgranular and intergranular), and creep-dominated failure with mainly intergranular cracks.

Fig. 5 The three failure modes: fatigue dominated (transgranular cracking), creep–fatigue interaction, and creep dominated (intergranular cracking) (Van Der Schaaf 1988)
In recent years, the creep–fatigue properties of liquid metal fast breeder reactor (LMFBR) candidate structural materials, such as austenitic 304L, 304NG, 316LN, and AISI 321, were investigated at 600 °C (Rho and Nam 2002; Nilsson 1988; Min and Nam 2003). It was observed that nitrogen addition improved fatigue life under creep–fatigue conditions. The density of Cr-rich carbides formed at the grain boundaries of 304NG (0.08 % N) was lower than that of 304L (0.03 % N). Planar slip bands initiated in 316LN under creep–fatigue interaction probably enhanced the stress concentration immediately next to
grain boundaries and promoted intergranular fatigue fracture. In the case of AISI 321, it was observed that the creep–fatigue life of the TiC-aged specimen was 40 % longer than that of the Cr23C6-aged one, although the two carbide densities at the grain boundaries were similar. It is suggested that the interfacial free energy between TiC and the grains is lower than that between Cr23C6 and the grains in AISI 321. In addition, irradiation creep accumulates in reactor materials. It is known that irradiation creep has a very weak temperature dependence; however, creep remains high at temperatures as low as 60 °C (Grossbeck et al. 1990). It is postulated that migration of vacancies and migration of interstitials are two independent mechanisms of irradiation creep. The effect of irradiation is to lower the endurance of the plastic strain range. So far, most of the experimental studies on creep–fatigue interaction have been conducted using low-cycle fatigue tests, with and without tensile strain holds, in air at temperatures ranging from 400 to 600 °C. The data accumulated in simulated reactor environments at high temperatures up to 800 °C are inadequate for a better understanding of the creep–fatigue interaction mechanism. For example, oxidation and the solubility of alloying elements in high-temperature liquid metal have to be considered as possible factors affecting creep–fatigue behavior. Also, carbide precipitation at component weld joints and heat-affected zones (HAZ) may behave differently from the base metal.
Creep–Fatigue Life Prediction
In this section, selected creep–fatigue life prediction methods are reviewed without considering irradiation effects. Suauzay et al. (2004) analyzed their experimental results on the creep–fatigue behavior of 316LN at 500 °C using a linear damage accumulation model. This model is based on Miner’s rule, expressed as

N_F / N_F^pf + t_F^relax / t_F^creep = 1    (15)

where
N_F: number of cycles to failure for a hold time t_h (t_h > 0)
N_F^pf: number of cycles to failure in pure fatigue, based on the Coffin–Manson relation (t_h = 0)
t_F = N_F t_h
t_F^creep: failure time in the pure creep condition, given as t_F^creep = H / σ^r, where H and r are creep coefficients

t_F^relax = N_F ∫₀^t_h dt / t_F^creep(σ(t))    (16)


Tsuji and Nakajima (1994) evaluated the damage accumulation of Hastelloy-XR in an HTGR environment at 700–950 °C by applying the life fraction rule and the ductility exhaustion rule. The creep damage during the strain holding time was given as

D_cL = Σ_i n (Δt_i / t_Ri)    (17)

where
D_cL: creep damage by the life fraction rule
Δt_i: strain holding period for a particular temperature and stress
t_Ri: rupture time based on the Larson–Miller parameter
n: number of cycles to failure in the experimental condition with a trapezoidal strain wave form (fatigue–creep components)

The ductility-exhaustion rule is given as

D_cd = Σ_i n (ε̇_min Δt_i / ε_Ri)    (18)

where
D_cd: creep damage by the ductility exhaustion rule
Δt_i: strain holding period for a particular temperature and stress
ε̇_min: minimum creep rate calculated from the Larson–Miller parameter
ε_Ri: strain at rupture

It was observed that the ductility exhaustion rule predicted the fatigue life under the effective creep condition more successfully than the life fraction rule. Most creep–fatigue life prediction models are based on the phenomenology of failure. For example, ferritic/martensitic steels and nickel-based superalloys showed damage accumulation at the crack tip or crack process zones; in these materials, even compressive stress hold times were found to affect the damage accumulation. In the case of austenitic stainless steels, creep–fatigue damage occurs by grain boundary cavitation, and the tensile hold time is considered to be more important. The proposed damage accumulation function based on the grain boundary cavitation phenomenon is given as (Nam 2002)

D_CF = Δε_p^m …
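As a small numerical illustration of how the life fraction and ductility exhaustion rules (Eqs. 17 and 18 above) are applied, the sketch below sums the per-cycle creep damage for a set of hypothetical hold periods; all of the numerical values are made-up placeholders rather than data from the studies cited above.

```python
def creep_damage_life_fraction(n_cycles, hold_times, rupture_times):
    """Life fraction rule, Eq. 17: D_cL = sum_i n * (dt_i / t_Ri)."""
    return sum(n_cycles * dt / t_r for dt, t_r in zip(hold_times, rupture_times))

def creep_damage_ductility_exhaustion(n_cycles, hold_times, min_creep_rates, rupture_strain):
    """Ductility exhaustion rule, Eq. 18: D_cd = sum_i n * (eps_dot_min * dt_i / eps_Ri)."""
    return sum(n_cycles * rate * dt / rupture_strain
               for dt, rate in zip(hold_times, min_creep_rates))

# Hypothetical example: 500 trapezoidal cycles with two hold conditions per cycle
n = 500
hold_times = [0.1, 0.01]            # h, strain hold periods (placeholder values)
rupture_times = [2.0e4, 5.0e4]      # h, from a Larson-Miller correlation (placeholder values)
min_creep_rates = [1e-6, 4e-7]      # 1/h, minimum creep rates (placeholder values)
rupture_strain = 0.20               # strain at rupture (placeholder value)

print(f"D_cL (life fraction rule)   = {creep_damage_life_fraction(n, hold_times, rupture_times):.4f}")
print(f"D_cd (ductility exhaustion) = {creep_damage_ductility_exhaustion(n, hold_times, min_creep_rates, rupture_strain):.4f}")
```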

At higher temperatures (>300 °C), vacancy clusters in austenitic stainless steels become thermally unstable. The presence of voids and swelling is observed at higher temperatures. Under certain conditions, small gas-filled bubbles can grow to form voids, referred to as swelling, as the volume of material increases beyond the size limitation dictated by the thermodynamic equilibrium of the gas. Both hydrogen and helium play an important role in the swelling of a material. A swelling rate of 1 % per dpa is maintained at temperatures above 425 °C. The lower temperature limit for swelling is observed to be affected by the displacement rate.

Fig. 7 Schematic illustration of the generation of a primary knock-on atom (PKA) (Maziasz 1993)


Radiation-Induced Microchemistry
In austenitic stainless steels, depletion of Cr and Fe and enrichment of Ni have been observed. Cr and Fe have higher diffusivities than Ni; therefore, they migrate away from the interface, enriching the boundary with Ni. This can be attributed to inverse Kirkendall segregation. Segregation of Si and P at grain boundaries is observed by an uphill diffusion process. Along with Cr and Fe, minor alloying elements such as Mn, Ti, and Mo also become depleted at grain boundaries. Mn levels drop to 0.5 at% at grain boundaries in type 304 SS. In type 316 SS, more than 50 % depletion of Mo after irradiation to 3 dpa has been reported (Cookson and Was 1995). For the same level of irradiation, enrichment of Si occurred to levels of about 6–8 at%. Nickel-silicide precipitation has also often been reported to form at dislocation loops at temperatures >380 °C and at higher doses (>20 dpa) (Kimura et al. 1996). At higher doses (PWR relevant, >10 dpa), sulfur segregation can be expected due to the burnup of Mn in MnS inclusions and the subsequent release of S. Radiation-induced Cr depletion could retard carbide formation at grain boundaries. Radiation-induced segregation of Ni and Si could lead to formation of γ′ or G phase at higher temperatures (Shiba et al. 1996).
Mechanical Properties
In general, it is observed that with increasing irradiation dose, the yield strength of the material increases. The ultimate tensile strength also increases, but the increase is not as great as for the yield strength. The formation of higher densities of vacancies and interstitials is regarded as the cause of this increase. Suzuki et al. (Holt 1974) reported increases in strength for various grades of austenitic stainless steels with increasing neutron fluence, as shown in Fig. 8. However, a saturation level is reached at a fluence of 3 × 10²⁵ n/m² (E > 1 MeV), beyond which no significant increase in strength could be observed. The increase in yield strength (Δσ) of 304 SS irradiated in a BWR environment at 288 °C followed a relation of Δσ = 1.1 × 10⁻³ × (neutron fluence, n/m²)^0.27. It was observed that type 304 SS was more prone to irradiation hardening than was type 316. Composition has two effects, viz., (1) certain alloy elements help nucleate Frank loops and (2) the stacking fault energy (SFE) is altered. A low SFE results in more hardening. A low SFE can also lead to nucleation of twins as an alternative deformation mechanism to dislocation glide. Alloying elements such as Ni, Mo, and C increase the SFE in austenitic stainless steel, and Cr, Si, Mn, and N tend to decrease it.

Fig. 8 Typical relation between the increase of the 0.2 % yield stress of austenitic stainless steels and neutron fluence (E > 1 MeV) after irradiation in a BWR environment at 288 °C (Koyama et al. 2007)


Loss of work hardening and uniform elongation is observed after irradiation. The elongation decreases significantly with increasing dose. This kind of loss in work hardening, and hence uniform ductility, can be attributed to the irradiated microstructure, where annihilation of barriers occurs through their interaction with dislocations. In unirradiated material, dislocations interacting with obstacles multiply, which results in the development of back stresses and hence work hardening of the material. However, in the irradiated condition, obstacles such as loops and voids can be destroyed when they interact with moving dislocations, resulting in work softening. This behavior causes flow localization, and hence the slip band spacing increases, ultimately reducing the macroscopic deformation. At higher temperatures (above 600 °C), the ductility is observed to be severely affected by He embrittlement. When a large void population develops near 400 °C, the fracture mode is observed to be transgranular channel fracture. The reduction in fracture toughness of irradiated SS can be attributed to the higher population of voids, so that fracture occurs at an early stage by dislocation channeling or highly heterogeneous deformation–decohesion ahead of the crack tip [23]. RIS of Ni at voids also results in brittle behavior of a material. This preferential segregation of Ni at voids results in matrix depletion of Ni and hence destabilizes the austenite. The strain-induced martensite transformation, possible in the destabilized austenite, acts as a low-energy path for crack propagation [24]. This cracking mechanism resulted in quasi-cleavage fracture with an overall fracture toughness of 80 MPa·m^1/2 after the austenitic material had been irradiated to a high dose (1.6 × 10²³ n/cm²) at 425 °C. Irradiation hardening and softening are important factors in determining fusion reactor life limits, as creep properties are affected by these changes. In ferritic steels, the irradiation hardening is attributed to the formation of small defect clusters and dislocation loops, with associated precipitation of small carbides such as M2C, M6C, etc. Kimura et al. (1996) studied the irradiation hardening behavior of 9Cr-2WV steel and reported saturation of irradiation hardening at a dose level of about 10–15 dpa. Irradiation above 430 °C resulted in softening at dose levels of 40–60 dpa. Swelling was found to be associated only with hardening in this study. Shiba et al. (1996) investigated the response of F82H steel to irradiation at low damage levels.

Steam generator tubes are commonly fabricated from Alloy 600 (Ni, >72 %; Cr, 14–17 %; and Fe, 6–10 % as major constituents), and 42 steam generators use Alloy 690 (Ni, >58 %; Cr, 27–31 %; and Fe, 7–11 % as major constituents) as tubing material. To improve mechanochemical properties, these materials are subjected to mill annealing (Alloy 600) or thermal treatment (Alloy 600 or 690), which, apart from the alloy composition, is an important factor in determining their degradation. The tube support plates are typically fabricated from type 405 ferritic stainless steel. While the primary reason for degradation and failure of the tubes used to be thinning of the tubing material due to water flow, recent failures and inspections indicate that accelerated degradation is becoming an issue of concern. At the center of this is the failure of steam generator tubes in January 2012, after less than 3 years of operation, at the San Onofre plant in California, which led to the leakage of radioactive material from inside the tubes to the outside water.
While the migration from Alloy 600 to Alloy 690 was driven primarily by the improved corrosion resistance of Alloy 690 (provided by its higher chromium content), the mechanical properties of Alloy 690 are not superior to those of Alloy 600. Therefore, Alloy 690 would be expected to be more susceptible to mechanically induced failure such as fretting and fatigue. Moreover, since the steam generator transfers excess heat from the reactor core to the outside, these tubes are exposed to extreme temperatures (320 °C) and pressures (150 bar). Preliminary reports from the San Onofre nuclear plant indicated that the accelerated degradation was in part due to increased fretting from flow-induced vibration. This type of cyclic loading, in addition to the normal load (contact stress) due to fretting conditions, results in damage accumulation beneath the contacting surface of Alloy 690. The mechanism of fretting in LWR environments is complex because failure occurs through a combination of several synergistic processes such as fretting fatigue, fretting corrosion, and fretting wear. The material removal occurs in the following stages: (1) formation of a highly plastically deformed surface layer, (2) fracture of the work-hardened layer, and (3) removal of wear debris and propagation of cracks in the deformed subsurface. The localized material loss due to fretting has two consequences in the LWR environment, namely, (1) accelerated corrosion, with small worn-out areas becoming anodes and large unaffected areas acting as cathodes, and (2) fatigue crack initiation from the
worn-out area that acts as a stress concentrator. Another important aspect is the microstructure of the alloy. Greater resistance to wear was observed with the large grain structures and coarse carbides along the grain boundaries of nickel-based alloys. Carbide morphology also influenced the wear resistance. Continuous grain boundary carbides showed increased propensity to crack formation (and hence low wear resistance) as compared to discrete grain boundary carbides.

Cast Stainless Steel Components
Cast stainless steels are extensively used in light water reactors (LWRs) as alloys for coolant piping and auxiliary piping components such as pump casings, valve bodies and fittings, elbows, and nozzles. Similar to the weld microstructure of austenitic stainless steels, the cast microstructure also contains delta ferrite. The ferrite content varies from 3 % to 12 % in welds and up to 40 % in cast austenitic stainless steel components. The delta ferrite is required to mitigate hot cracking during solidification and to control intergranular corrosion. Mechanical strength and stress corrosion cracking resistance are improved by the ferrite phase present in the austenite matrix. Depending on the chemical composition, the primary solidification phase can be austenite or ferrite. When the primary solidification phase is austenite, the ferrite is present interdendritically. Partitioning of the solute elements occurs in the interdendritic regions, which affects the chemical and mechanical properties when compared with equiaxed wrought microstructures. The heterogeneity in chemical composition also results in detrimental microstructural changes, such as spinodal decomposition and precipitation of topologically close packed (TCP) phases during long-term exposure to service temperatures, that lead to thermal embrittlement. The popular grades of cast austenite + ferrite duplex structure stainless steels in nuclear service are the CF3 and CF8 series of alloys. Among these, CF3, CF3A, CF3M, CF8, CF8A, and CF8M are the most widely used alloys (equivalents of the 304 and 316 wrought grades). These alloys typically have 17–21 wt% Cr and 8–13 wt% Ni. The digit following the letters CF refers to the carbon content of the alloy: “3” for 0.03 % and “8” for 0.08 %. The fourth letter “A” denotes higher ferrite control, which raises the strength above that of the normal CF grades, and the letter “M” denotes addition of Mo to the nominal compositions of the CF grade alloys. The macroscopic cast structure is generally divided into two categories depending on the casting process, namely, (1) static cast structure, which contains a columnar grain structure at the ends and equiaxed (randomly speckled) grains at the center (Calonne et al. 2004), and (2) centrifugally cast structure, which contains long columnar grains at the outer wall and a mixture of equiaxed and columnar structures in the inner regions (Anderson et al. 2007). Embrittlement due to thermal aging of cast stainless steels at service conditions in the temperature range of 280–320 °C has been a major concern (Chung and Leax 1990). The main transformations are the spinodal decomposition of α into α and a chromium-rich phase α′, and the precipitation of a G phase (Ni16Ti6Si7), ε, and π (a nitride phase). Primarily, the formation of the Cr-rich α′ (martensite) phase strengthens the ferrite and decreases the toughness. At higher temperatures (>550 °C), other embrittling phases such as σ, χ, Laves (Z), M23C6 carbide, and γ2 austenite form, aided by the presence of the ferrite/austenite interfaces. The sigma phase is a tetragonal crystal composed of (Cr,Mo)x(Ni,Fe)y. The chi (χ) phase is body-centered cubic with a typical composition of Fe36Cr12Mo10. The typical stoichiometry of the Laves (Z) phase is Fe2Mo, with a hexagonal structure. These topologically close packed (TCP) phases have large lattice parameters and a large number of atoms per unit cell, and they show directional properties.
Since these TCP phases nucleate at high-surface-energy sites (grain boundaries and phase boundaries), the cohesive strength of the grains is significantly reduced and brittle failure is often observed. It is important to note that cold working accelerates the formation of the TCP phases through increased diffusion. Therefore, formation of the Laves phase in cold-worked structures is a distinct possibility even at reactor service temperatures. Corrosion fatigue data for cast stainless steels in water containing 200 ppb and 8 ppm of dissolved oxygen (DO) at 289 °C have been generated and compiled by Shack and Kassner of the Argonne National Laboratory (Shack and Kassner).


In general, the corrosion fatigue crack growth rate is assumed to be related to the air fatigue crack growth rate through a power law:

(da/dt)env = A (da/dt)air^m    (23)

For stress ratio R < 0.9, A = 4.5 × 10⁻⁵ for DO = 200 ppb and A = 1.5 × 10⁻⁴ for DO = 8 ppm, with m = 0.5. Kawaguchi et al. (1997) studied the thermal embrittlement behavior of centrifugally cast CF8M duplex stainless steel after aging at 300–450 °C for up to 40,000 h. The aging treatment was quantified by a temper parameter P given as P = log(t) + 0.4343(Q/R)(1/T1 − 1/T2), where t = aging time, Q = activation energy for the embrittlement (typically 100 kJ/mol), T = temperature, and R = gas constant. The ferrite content of the samples varied from 15 % to 17.5 %. Spinodal decomposition of the δ-ferrite to the Cr-rich α′ phase (size, 5 nm) was observed after the following aging conditions: 300 °C for 10⁴ h, 350 °C for 3000 h, and 450 °C for 300 h. Precipitation of the larger (50 nm) G phase was observed only at higher temperatures and at aging times longer than those required for spinodal decomposition. For example, aging at 300 °C for 40,000 h did not show the presence of the G phase, whereas thermal aging at 350 °C for 10⁴ h and at 450 °C for 3000 h did. Spinodal decomposition was considered the main reason for the thermal embrittlement of the CF8M cast stainless steel, based on the Charpy V-notch energy, which decreased from 300 J for the as-cast samples to 230 J. The use of subsize CT samples for the evaluation of fracture toughness, and the validation of the results against 1T-CT samples, was investigated by Jayet-Gendrot et al. (1998). Mini-CT specimens (5 mm thick) were extracted from the skin of cast stainless steel elbows of a PWR unit that had undergone 86,898 h of service at around 323 °C. The J-integral values of the mini-CT specimens (82 kJ/m² at a 0.2 mm Δa offset) were in good agreement with those derived from the 1T-CT specimens. The effect of thermal aging on the low-cycle fatigue (LCF) behavior of cast stainless steel in room-temperature air was evaluated by Kwon et al. (2001). The samples were evaluated in the as-cast and aged conditions (430 °C for 300 and 1800 h), and the LCF behavior was described by the relation

Δεt/2 = (σ′f/E) Nf^b + ε′f Nf^c    (24)

where Δεt = total strain range, σ′f = fatigue strength coefficient, E = Young's modulus, b = Basquin's exponent, ε′f = fatigue ductility coefficient, c = fatigue ductility exponent, and Nf = number of cycles to failure. The values of (σ′f/E), b, ε′f, and c for the 300 h aged samples were higher than those of the un-aged samples. However, increasing the aging time to 1800 h resulted in values lower than those of the un-aged samples. Jeong et al. (2009) evaluated the effect of strain hardening on the environmental fatigue behavior of CF8M under PWR conditions. The material was tested in the as-cast condition with 25 % ferrite. The tests were carried out at 316 °C and 15 MPa with 30 ml of dissolved hydrogen per kg of H2O and <5 ppb of DO. Cyclic hardening was observed during the initial 200 cycles, with peak loads that increased with increasing strain amplitude. The fatigue test data points were scattered between the ASME design curve and the ASME mean curve. The same group also evaluated the effect of strain rate on the fatigue behavior (Jeong et al. 2011). The strain rate was varied from 0.004 % s⁻¹ to 0.04 % s⁻¹. The number of cycles to failure increased with increasing strain rate by almost an order of magnitude. Increasing the strain amplitude from 0.4 % to 0.8 % decreased the number of cycles to failure (from 2750 to 150 cycles at 0.004 % s⁻¹ and from 13,500 to 1500 cycles at 0.04 % s⁻¹).
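As an illustration of Eq. (23), the short sketch below evaluates the environmental enhancement for the two dissolved-oxygen levels using the constants quoted above. The air-fatigue growth rate used in the example is a hypothetical value (the correlation's units are not restated in the text), so only the relative comparison is meaningful.

```python
# Minimal sketch of Eq. (23): (da/dt)_env = A * (da/dt)_air**m,
# using the constants quoted in the text for cast stainless steels at 289 C.
A_BY_DO = {"200 ppb DO": 4.5e-5, "8 ppm DO": 1.5e-4}  # constants from the text
M = 0.5                                               # exponent from the text

def env_growth_rate(da_dt_air: float, A: float, m: float = M) -> float:
    """Environmental crack growth rate computed from the air growth rate."""
    return A * da_dt_air ** m

da_dt_air = 1.0e-6  # hypothetical air-fatigue growth rate, for illustration only
for label, A in A_BY_DO.items():
    print(label, env_growth_rate(da_dt_air, A))
```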



Table 1 Cost comparison of electricity generation in the USA using different fuel sources (for the year 2008)

Fuel source    Cost ($/kWh)
Oil            0.18
Gas            0.082
Coal           0.033
Nuclear        0.02

Cicero et al. (2009) analyzed a CASS CF8M component (a motor-operated valve of the reactor water cleanup (RWCU) system of a BWR unit) that had been in service for 40 years, using the FITNET-FFS procedure and the ASME code. The ferrite content of the component was about 15 %. Because the ferrite content exceeded 10 %, the aging effect due to the service temperature had to be considered in the structural integrity analysis. The RWCU system had been subjected to more than 60 major thermal cycles in the temperature range of 30–300 °C and to a stable operating temperature of 250 °C over 14 years, along with other minor temperature excursions around 250 °C. The maximum service stress calculated at the neck of the valve was about 86 MPa, and the critical flaw size was much larger than what could be detected by inspection techniques. Wang et al. (2010) used the nano-indentation technique to evaluate the thermal aging damage mechanism of CASS. The specimens were aged at 400 °C for 100–3000 h, representing a service life of 0.7–21.48 years according to the corresponding Arrhenius relation. The observed embrittlement was attributed to dislocation pileup at the Cr-rich clusters of the spinodally decomposed α′ phase.
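Both the accelerated-aging equivalence used by Wang et al. and the temper parameter of Kawaguchi et al. rest on the same Arrhenius time–temperature trade-off. The sketch below evaluates the temper parameter P defined earlier with Q = 100 kJ/mol for the three aging conditions reported for the onset of spinodal decomposition; the choice of the reference temperature T1 (taken here as 400 °C) is an assumption for illustration, since the text does not state which temperature plays that role.

```python
# A minimal sketch of the temper parameter P = log10(t) + 0.4343*(Q/R)*(1/T1 - 1/T2),
# with Q = 100 kJ/mol as quoted in the text. T1 = 400 C is an assumed reference.
import math

Q = 100e3          # J/mol, activation energy for embrittlement (from the text)
R = 8.314          # J/(mol K)
T_REF = 400 + 273  # K, assumed reference temperature

def temper_parameter(t_hours: float, T_aging_k: float, T_ref_k: float = T_REF) -> float:
    """Temper parameter for aging time t (h) at absolute temperature T_aging_k."""
    return math.log10(t_hours) + 0.4343 * (Q / R) * (1.0 / T_ref_k - 1.0 / T_aging_k)

# Aging conditions reported above for the onset of spinodal decomposition in CF8M
for T_c, t_h in [(300, 1e4), (350, 3000), (450, 300)]:
    print(f"{T_c} C / {t_h:g} h -> P = {temper_parameter(t_h, T_c + 273):.2f}")
```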

Cost–Benefit Analysis
Nuclear power is highly competitive with other forms of power generation, such as fossil fuel power and renewable-energy-based power generation. The cost of nuclear fuel is much less than that of fossil fuels. However, the capital cost is high because of the increased margin of safety precautions and the cost of storing spent fuel. When calculating the cost of nuclear power, the costs of waste management and decommissioning are fully considered (Economics of Nuclear Power). In 2010, the cost of 1 kg of uranium as finished UO2 reactor fuel was calculated as $2555. At a burnup of 45,000 MW-day/ton, 360,000 kWh of electrical energy can be generated per kg of fuel, so the fuel cost is about 0.77 cents per kWh. The US electricity production cost using different fuel sources in the year 2008 is given in Table 1. This includes the costs of fuel, operation, and maintenance; capital cost is not considered. The capital cost includes:
• Bare plant – engineering, procurement, and construction (EPC)
• The owner's cost (land, cooling infrastructure, administration and associated buildings, site works, switch yard, transmission, project management, license, etc.)
• Cost escalation due to increased labor and materials
• Inflation
• Financing and interest on financing
The typical construction period of a nuclear power plant is about 48–54 months. The decommissioning cost is about 9–15 % of the initial capital cost, which corresponds to about 0.1–0.2 cents per kWh of energy generated in the USA. The EPC cost in the year 2008 was about $3000/kW.
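A back-of-the-envelope check of the fuel-cost figure quoted above is sketched below. All inputs are taken from the text; with these numbers the result comes out near 0.7 cents/kWh, the same order as the quoted 0.77 cents/kWh, the small difference presumably reflecting rounding of the underlying fuel-cycle cost components.

```python
# Rough check of the per-kWh fuel cost from the figures quoted in the text.
fuel_cost_per_kg_usd = 2555.0       # cost of 1 kg of uranium as finished UO2 fuel (2010)
electricity_per_kg_kwh = 360_000.0  # electrical energy per kg of fuel at 45,000 MWd/t

cost_per_kwh_cents = fuel_cost_per_kg_usd / electricity_per_kg_kwh * 100.0
print(f"Fuel cost: {cost_per_kwh_cents:.2f} cents/kWh")  # ~0.71 cents/kWh
```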



Table 2 Typical composition of nuclear fuel and spent nuclear fuel

Constituent        Fresh nuclear fuel   Spent nuclear fuel
235U               3.3                  0.81
238U               96.7                 94.30
236U               –                    0.51
239Pu              –                    0.52
240Pu              –                    0.21
241Pu              –                    0.10
242Pu              –                    0.05
Fission products   –                    3.5
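To make the Table 2 entries easier to reuse, the sketch below re-expresses them as data and pulls out two derived quantities (U-235 depletion and total plutonium content). Treating the entries as weight percent of heavy metal is an assumption, since the table does not state the units.

```python
# Table 2 as data (values from the table; units assumed to be percent).
fresh = {"U-235": 3.3, "U-238": 96.7}
spent = {"U-235": 0.81, "U-238": 94.30, "U-236": 0.51,
         "Pu-239": 0.52, "Pu-240": 0.21, "Pu-241": 0.10, "Pu-242": 0.05,
         "fission products": 3.5}

u235_consumed = fresh["U-235"] - spent["U-235"]
total_pu = sum(v for k, v in spent.items() if k.startswith("Pu"))
print(f"U-235 consumed:        {u235_consumed:.2f} %")   # ~2.49 %
print(f"Total Pu in spent fuel: {total_pu:.2f} %")        # ~0.88 %
```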

Spent Fuel and Reprocessing
When a spent fuel assembly is removed from the reactor, it is stored at the reactor site and allowed to cool before reprocessing or disposal. Typical compositions of fresh and spent fuels are listed in Table 2. Most commercial reactor spent fuel is stored in water-filled, swimming-pool-type structures. This arrangement is chosen because water is inexpensive, has a good convective heat transfer coefficient, and provides shielding, and because visibility through the water makes it possible to detect undesired events, if any. The limitation of water as a cooling medium for spent nuclear fuel is that water is a neutron moderator and an active electrolyte for corrosion reactions. The typical PWR operating cycle is about one year, after which 1/3 of the core is replaced with new fuel. After one year of operation, the fuel assembly, which weighs about 1300 lbs, is removed from the core and transferred to an interim storage facility. The radiation level of an unshielded fuel assembly is millions of rems per hour. The spent fuel assemblies are placed in vertical stainless steel racks. In order to prevent the spent fuel assemblies from reaching critical conditions, they are stored well separated from one another. Furthermore, neutron-absorbing materials such as boron carbide or boron rods are inserted to inhibit neutron multiplication. The pool storage facility is designed only for interim storage, until the spent fuel has cooled to low temperature and the remnant radioactive decay has subsided. Afterward, the spent fuel is taken for reprocessing or, in the absence of reprocessing, to a long-term storage facility.

Dry Storage
As an alternative to wet pool storage, dry storage using metal casks and concrete modules is practical. In modular concrete vault storage, the heat generated by radioactive decay of the spent fuel is removed by forced convection of air. Metal casks are provided with fins for faster heat transfer. These metal casks, if properly designed, can also be used for transportation of spent nuclear fuel. For transportation of spent nuclear fuel, the metal casks provide (1) protection against direct radiation exposure of workers and the public, (2) provision for removal of radioactive decay heat, and (3) neutron absorbers to prevent criticality. A metal cask can contain about 7 PWR assemblies or 18 BWR assemblies. The body of the cask is made of stainless steel, about 5 m long and 1.5 m wide. Shielding is provided by depleted uranium or lead. The cask has an outer stainless steel shell and a corrugated stainless steel jacket that circulates water as neutron shielding; fins are provided for external forced-air cooling and to minimize impact damage. The spent fuel casks for transportation are constructed so sturdily that they can withstand the impact of being dropped from a height of 10 m onto an unyielding surface (metal anvil) and pass the crash test of a 130 km/h locomotive striking a stationary cask-loaded tractor-trailer rig. They can also withstand fire, up to a 125 min burn in JP-4 fuel at 980–1150 °C.


[Fig. 9 flow: spent fuels → mechanical disintegration → dissolution in nitric acid (with off-gas treatment and acid recovery) → solvent extraction using tri-n-butyl phosphate (TBP) in kerosene (high-level liquid waste removed; solvent treatment) → addition of U(IV) and partitioning of Pu and U → conversion to PuO2 (reprocessed plutonium) and to UO2 (reprocessed uranium).]

Fig. 9 Flow diagram of PUREX process of reprocessing spent nuclear oxide fuels

Transmutation
Transmutation of transuranic elements such as plutonium, neptunium, americium, and curium can be carried out by irradiating them with fast neutrons. In this process, the original actinide isotopes are transformed into radioactive and nonradioactive fission products. This process is important for nuclear waste management, since the actinide isotopes have half-lives of thousands of years and are alpha emitters. Transmuting these isotopes into short-lived fission products helps eliminate the radiological hazards associated with long-lived radionuclides.

Reprocessing

The spent fuel contains about 3.5 % fission products, which include neutron poisons such as Xe-135 and I-137. Accumulation of fission products and depletion of fissile U-235 in the nuclear fuel make it very difficult to sustain the nuclear chain reaction; therefore, the nuclear fuel is removed from the reactor core. Currently, about 10,500 tons (of heavy metal) of spent fuel is discharged every year from nuclear reactors. The purpose of reprocessing is to separate the actinides from the fission products so that they can be reused as nuclear fuel. This decreases the burden on uranium mining and results in a more sustainable use of nuclear energy. Reprocessing can be carried out using aqueous or nonaqueous processes.
Aqueous Reprocessing
The aqueous process is based on solvent extraction. Figure 9 illustrates the process flow. First, the spent nuclear fuel is dissolved in nitric acid; the Zircaloy cladding is removed separately. The aqueous solution containing the dissolved spent fuel is contacted with an organic solution of kerosene containing tributyl phosphate (TBP). When the aqueous solution comes into contact with the organic TBP phase, hexavalent uranium (U6+) and tetravalent plutonium (Pu4+) are extracted by the TBP. Almost all of the fission products remain in the nitric acid solution, which is withdrawn as high-level liquid waste. In the solvent extraction partitioning step, Pu4+ is reduced to Pu3+ by adding U(IV) as a reductant.


The Pu3+ is removed by back-extraction into the nitric acid solution. The recovered Pu can be used as a raw material for fast breeder reactor fuels in the future. The uranium species remaining in the solution can be recovered by processing through a series of scrubbing and purification columns. The purified uranium can be enriched and used as fuel after conversion to UO2. The ability to separate plutonium from uranium is considered a potential proliferation concern. Therefore, modifications have been made to the PUREX process to avoid separating plutonium. In the modified processes, uranium is separated while Pu, the minor actinides, and the fission products are kept in the waste solution; later, the actinides are separated as a group. Another modification of the PUREX process is coprocessing. If the intent of reprocessing the spent fuel is to use the recovered actinides for producing mixed oxide fuel (MOX), then coprocessing is the appropriate method. In this process, partitioning of U and Pu does not take place, so proliferation of Pu for weapons is not a concern. In the coprocessing method, 30 vol% TBP in n-dodecane is used as the solvent and a 2.5 M HNO3 solution is used as the scrub solution. The aqueous feed solution, containing 4.2 M HNO3, 2 M UO2(2+) + Pu, and 1.25 M fission products, is fed through a solvent extraction column of TBP in n-dodecane. Uranium and plutonium are complexed with the TBP, and thus the fission products are separated. The U + Pu complexed with the organic phase is washed with dilute nitric acid. The resulting nitrate solution of U + Pu is treated with peroxides or oxalates to form precipitates of U + Pu peroxide or oxalate. These precipitates are calcined to form UO3 or U3O8 and reduced in a hydrogen atmosphere to form UO2. There are several variations of the PUREX process; Table 3 lists these modified PUREX processes.
Pyroprocessing
Pyrochemical or pyrometallurgical processing using LiCl–KCl molten salt systems is considered one of the most feasible alternatives to the PUREX process for safe and proliferation-resistant recovery of nuclear fuel elements from spent fuels. This technology may also be useful for separating actinides from the high-level waste generated by the PUREX process. Pyrometallurgical processing is preferred because of the stability of the molten salts to high radiation and the shorter cooling times required (OCDE/NEA Report). Reprocessing of metallic fuels involves separation of the actinides from the fission products by electro-transport in a molten salt electrolyte. Since the rare earth elements (part of the fission products) have chemical properties similar to those of the actinides and act as neutron poisons, separation of the fission products is important for efficient recycling of the actinides. Spent oxide fuels can also be reprocessed by the pyrometallurgical electrorefining method. In this case, the spent oxide fuel is reduced to metallic form by lithium (Koyama et al. 2007) or chlorinated in the presence of a reductant such as carbon (Yang et al. 1997) before anodic dissolution, or it is dissolved directly into the molten salt in the presence of an oxidizer such as CdCl2 (Koyama et al. 1997). The major advantages of pyroprocessing spent fuel are as follows:
• The process is proliferation resistant, since Pu is not separated from the minor actinides.
• Interim storage of spent nuclear fuel may not be required, since pyroprocessing can handle spent fuel in hot conditions, the process taking place at temperatures greater than 500 °C.
• No liquid waste is generated for disposal; therefore, waste management becomes easy.
• The process can be adopted for in-line reprocessing at the reactor site.
• The process can accept several forms of fuel, such as uranium oxide, carbide, nitride, mixed oxides, and pure heavy metals.
• A very short turnaround time results in cost savings.
• Minimal transuranic waste is generated.



Table 3 Variations of aqueous–organic reprocessing of spent nuclear oxide fuels (adapted from the Nuclear Technology Review Supplement, International Atomic Energy Agency, Vienna, 2008)

• DIAMEX – Extraction of minor actinides and lanthanides from HLLW. Special aspects: diamide extraction process; the solvent, based on amides as an alternative to phosphorus reagents, generates minimal organic waste because it is totally combustible.
• TODGA – Same purpose as DIAMEX. Special aspects: tetra-octyl-diglycol-amide, an amide similar to that of DIAMEX.
• TRUEX – Transuranic (TRU) element extraction from HLLW. Special aspects: extraction using carbamoyl methyl phosphine oxide (CMPO) together with TBP.
• SANEX-N – Selective actinide extraction process for group separation of actinides from lanthanides. Special aspects: separation of actinides from lanthanides in HLLW using neutral N-bearing extractants, viz., bis-triazinylpyridines (BTPs).
• SANEX-S – Same purpose as SANEX-N. Special aspects: use of acidic S-bearing extractants, for example, a synergistic mixture of Cyanex-301 with 2,2′-bipyridyl.
• TALSPEAK – Same purpose as SANEX-N. Special aspects: trivalent actinide–lanthanide separation by phosphorus reagent extraction from aqueous komplexes; use of HDEHP as the extractant and DTPA as the selective actinide complexing agent.
• ARTIST – Same purpose as SANEX-N. Special aspects: amide-based radio-resources treatment with interim storage of transuranics; made up of (1) phosphorus-free branched alkyl monoamides (BAMA) for separation of U and Pu, (2) TODGA for actinide and lanthanide recovery, and (3) an N-donor ligand for actinide–lanthanide separation.
• SESAME – Selective extraction and separation of americium by means of electrolysis. Special aspects: separation of Am from Cm by oxidation of Am to Am(VI) and subsequent extraction with TBP for separation from Cm.
• CSEX – Cs extraction. Special aspects: uses calix-crown extractants.
• CCD-PEG – Extraction of Cs and Sr from raffinate. Special aspects: chlorinated cobalt dicarbollide and polyethylene glycol (CCD-PEG) in a sulfone-based solvent, planned for extraction of Cs and Sr from UREX raffinate.
• SREX – Sr extraction. Special aspects: uses dicyclohexano-18-crown-6 ether.
• GANEX – Uranium extraction plus other processes for further separation. Special aspects: a series of five solvent extraction flow sheets that perform the following operations: (1) recovery of Tc and U (UREX); (2) recovery of Cs and Sr (CCD-PEG); (3) recovery of Pu and Np (NPEX); (4) recovery of Am, Cm, and rare earth fission products (TRUEX); and (5) separation of Am and Cm from the rare earth fission products (Cyanex-301).

The limitations of the process are the requirements for facilities with an oxygen- and moisture-free environment and for construction materials that withstand very high temperatures and the highly corrosive molten halide environment.
Reprocessing of Spent Metallic Fuel
Metallic fuels are used in experimental fast breeder reactors with liquid sodium as the coolant. Reprocessing of this spent fuel (U–Zr and U–Pu–Zr alloys) is carried out by first chopping it into small pieces, loading the pieces into an anode basket made of stainless steel, and dissolving them by applying an anodic potential in an electrorefining cell. The electrolyte is typically a LiCl–KCl eutectic at 500 °C. By applying an anodic potential to the stainless steel basket containing the chopped fuel, the pellets are oxidized and dissolved into the molten salt, where the dissolved actinides are present as chlorides.



[Fig. 10 layout: an anode fuel basket (+) in which the chopped fuel dissolves as U3+, Pu3+, minor actinide (MA3+), and rare earth (REE3+) chlorides into the LiCl–KCl eutectic; a solid steel cathode (−) on which uranium deposits; and a liquid Cd cathode (−) for recovery of Pu and minor actinides.]

Fig. 10 Schematic arrangement of electrorefining cell for pyroprocessing of spent nuclear fuel in molten LiCl–KCl

Table 4 Redox potentials and activity coefficients of actinides in LiCl–KCl eutectic melt at different temperatures (Roy et al. 1996, pp 2487–2492); potentials in V vs. Cl−/Cl2, γ = activity coefficient

U(III)/U:   −2.53 (673 K), −2.49 (723 K), −2.45 (773 K), −2.42 (823 K); reported γ values 2 × 10⁻³ and 3.1 × 10⁻³
Pu(III)/Pu: −2.845 (673 K), −2.808 (723 K), −2.775 (773 K); reported γ values 1 × 10⁻³, 2.3 × 10⁻³, and 4.1 × 10⁻³
Am(II)/Am:  −2.843 (723 K)

Lanthanides in the fission products are converted to lanthanide chlorides and dissolved in the molten salt. Addition of CdCl2 to the LiCl–KCl mixture helps transfer most of the actinides and lanthanides as chlorides into the molten salt bath. Gaseous fission products are off-gassed. Undissolved cladding materials and noble metal fission products are recovered as solids from the reprocessing cell. During the electrorefining process, uranium is recovered from the molten salt by applying a constant cathodic current density to a steel cathode in the shape of a cylindrical rod, as shown in Fig. 10. The resulting cathodic potential is just sufficient to electrodeposit only uranium onto the steel cathode. After the uranium has been deposited, once the ratio of plutonium to uranium in the salt exceeds 2 (Pu/U > 2), the electrodeposition process is continued with liquid cadmium as the cathode. In this step, plutonium is recovered along with americium (Am) in the form of a Pu1−xAmxCd6 compound. More than 10 wt% of Pu is collected using this method. A high separation factor between actinides and rare earths in an MClx–LiCl–KCl system has also been reported when liquid bismuth is used as the liquid cathode. After the actinide recovery, the molten salt is solidified, and the fission products are scrubbed out by passing the salt through a zeolite column. The redox potentials of actinides and lanthanides are given in Tables 4 and 5, respectively. The lanthanides show more negative potentials than the actinides. Among the actinides, uranium shows a less negative reduction potential than plutonium and americium. Therefore, under sufficient cathodic polarization, uranium is reduced first.
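The switch between cathodes described above can be summarized as a simple decision rule. The sketch below is a toy illustration only: the Pu/U threshold of 2 is from the text, while the inventory numbers in the example are hypothetical.

```python
# Toy sketch of the cathode-selection rule used in electrorefining:
# uranium is first collected on a solid steel cathode; once Pu/U in the salt
# exceeds about 2, the run switches to a liquid cadmium cathode so that Pu
# (together with minor actinides) can be recovered.
def select_cathode(pu_moles: float, u_moles: float, threshold: float = 2.0) -> str:
    """Return which cathode the electrorefining step should use."""
    if u_moles > 0 and pu_moles / u_moles > threshold:
        return "liquid Cd cathode (Pu + minor actinides)"
    return "solid steel cathode (U only)"

print(select_cathode(pu_moles=1.0, u_moles=4.0))  # early in the run -> solid steel
print(select_cathode(pu_moles=3.0, u_moles=1.0))  # U depleted -> liquid Cd
```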

Page 35

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_30-2 # Springer Science+Business Media New York 2015

Table 5 Redox potentials of lanthanides dissolved in LiCl–KCl eutectic at 450 °C

Redox couple   Reduction potential at 450 °C (V vs. Cl−/Cl2)
La3+/La        −3.1 (Kuznetsov et al. 2005)
Ce3+/Ce        −3.26 (Castrillejo et al. 2002)
Nd3+/Nd        −3.02 (Hamel et al. 2004)
Dy3+/Dy2+      −3.32 (Castrillejo et al. 2005a)
Dy2+/Dy        −3.36 (IAEA 2001)
Gd3+/Gd        −3.15 (Caravaca et al. 2007)
Pr3+/Pr        −3.41 (Castrillejo et al. 2005b)

Table 6 Activity coefficients of actinides in liquid cadmium at 450 °C

Element   Activity coefficient in liquid cadmium at 450 °C
U         15
Np        2.8 × 10⁻³
Pu        3.1 × 10⁻⁵
Am        1.1 × 10⁻⁴

Electrodeposition of the uranium on the solid steel cathode decreases the concentration of the uranium(III) ions in the melt. Therefore, the redox potential of U(III) moves to more negative values as the electrorefining process continues. The electrorefining process is switched to the liquid cadmium cathode for the following reasons: (1) liquid cadmium as the cathode decreases the activity of the actinides other than uranium, as shown in Table 6; (2) the lower activity coefficients bring the redox potentials of all the actinides closer together, so that these elements can be deposited together; and (3) recovery of Pu along with the other minor actinides gives better proliferation resistance. The five-orders-of-magnitude smaller activity coefficient of Pu compared with that of U can be attributed to the formation of PuCd6 compounds in the liquid Cd cathode (Shirai et al. 2000). When Pu is electrodeposited onto the liquid cadmium cathode, the reduction potential is shifted by about 0.3 V in the positive direction compared with electrodeposition onto a solid surface. This shift brings the reduction potential of Pu closer to that of U(III). The shift in the reduction potential of Pu(III) at the liquid cadmium cathode can be explained by using the Nernst equation:

Pu3+ + 3e− → Pu    (25)

E = E0 + (2.3RT/3F) log([Pu3+]/(γPu[Pu]))    (26)

Since the value of γPu is 3.1 × 10⁻⁵ in liquid cadmium, the redox potential is shifted by almost 0.25 V in the positive direction. Sustained operation of the electrometallurgical reprocessing cell results in accumulation of fission products in the electrolyte and depletion of the uranium ions in the salt. The variation in the composition of the electrolyte could alter the operating conditions of the cell because of significant changes in the thermophysical properties and interfacial electrochemical behavior of the molten salt system. For better process control, a detailed database of the electrochemical properties of the molten salt system is required. When multiple fission product elements are present in the electrolyte, the reduction behavior of the actinides could be significantly altered because of possible underpotential reduction of lanthanides and slower diffusion kinetics of actinides.
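The magnitude of the shift quoted above follows directly from Eq. (26). A minimal numerical check, using the γPu value from Table 6, gives roughly 0.21–0.23 V at 723–773 K, consistent with the ~0.25 V cited; the temperatures chosen for the check are an assumption.

```python
# Rough check of the positive potential shift due to the low Pu activity
# coefficient in liquid cadmium: shift = (2.303*R*T/(3F)) * log10(1/gamma).
import math

R = 8.314        # J/(mol K)
F = 96485.0      # C/mol
n = 3            # electrons in the Pu(III)/Pu couple
gamma_pu_in_cd = 3.1e-5  # from Table 6 (450 C)

def nernst_shift(T_kelvin: float, gamma: float) -> float:
    """Positive shift of the apparent reduction potential caused by gamma < 1."""
    return (2.303 * R * T_kelvin / (n * F)) * math.log10(1.0 / gamma)

for T in (723.0, 773.0):
    print(f"T = {T:.0f} K: shift = {nernst_shift(T, gamma_pu_in_cd):.3f} V")
```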



This is important for determining limits on the use of the molten salt electrolyte before it needs to be purified or disposed of. Thermodynamic and transport properties of binary LnX3–MX systems have been investigated widely (Gaune-Escard et al. 1994; Takagi et al. 1997; Gong et al. 2005), where Ln = La, Ce, Pr, Nd, Gd, Tb, and Eu; M = Li, K, Na, Cs, and Rb; and X = F, Cl, I, and Br. Addition of a lanthanide chloride to an alkali metal chloride results in the formation of a variety of stoichiometric compounds such as M3LnCl6, MLn2Cl7, M2LnCl5, M3Ln5Cl18, etc. Formation of compounds and complexes in the molten salt system affects the electrical conductivity and other thermophysical properties. Stoichiometric compounds show minimum electrical conductivity, while structural disordering increases the number of current carriers and improves the conductivity. The specific electrical conductivity of LnCl3 melts ranged from 0.11 to 0.4 S m⁻¹ at 1000–1250 K, and the activation energy for electrical conduction was about 28–30 kJ/mol. Polymerization of the melt was reported to play a significant role in increasing the electrical conductivity of the molten salt system. The existence of octahedral complex anions (LnCl6)3− in LnCl3 melts and the formation of dimers have been proposed through the following reaction (Ikeda et al. 1988):

2(LnCl6)3− → (Ln2Cl11)5− + Cl−    (27)
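The quoted activation energy implies a modest temperature dependence of the conductivity. Since the pre-exponential factor is not given in the text, only the ratio between two temperatures is meaningful; the sketch below, assuming a simple Arrhenius law, shows roughly a factor-of-two increase between 1000 and 1250 K.

```python
# Conductivity ratio implied by an Arrhenius law sigma = sigma0 * exp(-Ea/RT),
# with Ea = 28-30 kJ/mol as quoted in the text (sigma0 unknown, so only ratios).
import math

R = 8.314  # J/(mol K)

def conductivity_ratio(Ea_j_mol: float, T1: float, T2: float) -> float:
    """sigma(T2)/sigma(T1) for an Arrhenius temperature dependence."""
    return math.exp(-Ea_j_mol / R * (1.0 / T2 - 1.0 / T1))

for Ea in (28e3, 30e3):
    print(f"Ea = {Ea/1e3:.0f} kJ/mol: sigma(1250 K)/sigma(1000 K) = "
          f"{conductivity_ratio(Ea, 1000.0, 1250.0):.2f}")
```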

Since free Cl− ions are produced by the above dimerization reaction, the conductivity of the melt increases. Both polymerization of the melt and the presence of free chloride ions could affect the activity and mobility of the cations, and in turn the separation kinetics could be altered. Standard potentials of actinides in LiCl–KCl eutectic salt and the separation of actinides from rare earths by electrorefining have been widely reported by many research groups (Sakamura et al. 1998; Roy et al. 1996; Serrano and Taxil 1999). Recently, Castrillejo and coworkers (2005c) reported the electrochemical behavior of a series of lanthanide elements in LiCl–KCl eutectic melt in the temperature range of 400–550 °C. Cyclic voltammetry results for binary, ternary, and quaternary LnCl3–(LiCl–KCl)eutectic systems at 500 °C indicate that the incipient potentials of the cathodic reduction waves shift to less negative values with increased additions of lanthanide components. The positive shift in the potential of a reduction wave is, in general, associated with two phenomena: (1) underpotential deposition, in which the interaction of the reducing species (R) with the substrate (S) is energetically more favorable than the species–species (R–R) interaction, and (2) when two species (A and B) are present in the electrolyte and formation of a compound (AnBm) is favorable, with a negative free energy (ΔG), the deposition potential is positively shifted from the redox potential of the more negative species by an amount ΔG/nF (Cohen 1983). The CV results of the binary systems (single-component lanthanide additions) do not show any underpotential deposition of pure lanthanide elements. However, in this investigation, addition of more than one lanthanide chloride to the LiCl–KCl eutectic resulted in a considerable shift in the incipient potential of the cathodic wave. According to Hume-Rothery principles, atoms having similar size (size difference …

At temperatures above about 1100 K, the Zircaloy cladding balloons because of the rapid heating and bursts. This altered geometry of the fuel rod affects the geometry of the coolant flow channels in the core; some locations have restricted access to the coolant because of the ballooning effect. If sufficient water is added, core damage can be suppressed at this stage.
Rapid Oxidation: This stage is initiated at 1500 K. When Zircaloy reacts with steam, hydrogen is produced and a large amount of heat is released according to the reaction Zr + 2H2O → ZrO2 + 2H2 + 6.5 MJ/kg of Zr. If water is added at a sufficient rate and volume, the core is quenched and the progression of damage can be stopped. If the water is not sufficient, or the rate of heat removal is less than the rate of heat generation, the damage propagates to the next stage.
Debris Bed Formation: When the temperature reaches 1700 K, the molten control materials flow to the lower part of the core (which is submerged in water), where the temperature is low, and solidify. At 2150 K, melting of Zircaloy occurs. Molten Zircaloy, along with dissolved UO2, may flow downward and solidify in the lower portion of the core. This solidified debris forms a cohesive bed, leading to restricted flow of coolant in the lower region of the core.
Relocation to the Lower Plenum: When molten core materials (at 1500–2150 K) fall to the lower region of the core, which is at about 550 K, steam is generated rapidly, leading to a steam explosion. Furthermore, this steam oxidizes any unoxidized molten Zircaloy, which generates hydrogen at a faster rate. These reactions lead to overpressurization of the system. Re-criticality may also occur in the relocated core debris if the control materials are not present in the required concentration. Understanding the sequence of core damage is necessary in order to design preventive measures against core meltdown. Future work on nuclear safety should concentrate on a reliable ECCS that can be operated even in a worst-case scenario such as that experienced in the Tohoku tsunami. Future work should also focus on a reliable system, with public acceptance, for long-term safe storage of spent nuclear fuel.
Future Fuel Cladding Materials: Zr–Sn alloys such as Zircaloy-2 and Zircaloy-4 are currently used as fuel cladding tubes in light water reactors because of their low neutron absorption cross sections for thermal neutrons, reasonable creep resistance, and corrosion resistance in high-temperature, high-pressure water (Wray and Marra 2011). These cladding materials perform well under normal operating conditions and give a reasonable safety margin under design basis accident (DBA) scenarios. However, under beyond-design-basis accident (BDBA) conditions, such as the loss-of-coolant accident that occurred at the Fukushima Daiichi power plant, zirconium-based cladding materials undergo severe degradation because the peak clad temperature (PCT) exceeds the design limit of 1204 °C (Charit and Murty 2008).



When the Zr-alloy cladding is exposed to a high-temperature steam environment, the exothermic Zr–steam reaction generates more heat than the radioactive decay does, which in turn oxidizes the entire cladding. The current US design regulation (10 CFR 50.46) limits the equivalent cladding reacted (ECR) thickness to 17 % of the initial cladding thickness under DBA conditions. Furthermore, a copious amount of hydrogen is generated during the steam oxidation of zirconium, which may result in an explosion. Therefore, one of the goals of the Fuel Cycle R&D program is to develop high-performance LWR fuel and cladding materials that are resistant to different severe accident scenarios. In addition to an enhanced safety margin, the next-generation fuel cladding should have the properties required to perform under high-burnup operating conditions (>40 MWd/kg of U). At high burnup, high fission gas pressures are reached along with higher creep deformation, and neutron damage makes the cladding more susceptible to failure. There can also be fuel–cladding chemical interaction (FCCI) involving fuel constituent redistribution (Carmack et al. 2009). Hence, improved cladding and matrix materials for pin-type and dispersion-type fuels with low FCCI potential, high strength, radiation tolerance, and high-temperature oxidation resistance are highly desirable for accident-tolerant fuel cladding. Recently, renewed interest has emerged in aluminum-bearing ferritic alloys despite their neutronic penalty in LWR applications. For example, the APMT alloy (nominal composition Fe–22Cr–5Al–3Mo–<0.05C, wt%) is being considered for its extreme high-temperature oxidation resistance, even beyond 1200 °C, due to the protective nature of its alumina-based scale (Terrani et al. 2013). This alloy is conventionally used in high-temperature furnace elements. While the alloy has shown promise in terms of oxidation resistance at elevated temperatures, it has not been adequately assessed for advanced fuel cladding applications. Furthermore, addition of "reactive" elements such as Y, Hf, Zr, etc., has been considered to improve the oxidation resistance of alumina-forming alloys (Guo et al. 2014). The details of the growth stresses that develop during steam oxidation of alumina layers, and the effect of reactive elements on the diffusion and electronic behavior of the oxide layers, have not been studied in detail. Such an understanding is pertinent for the design of new FeCrAlRE cladding materials with improved LOCA resistance. In addition to FeCrAl alloys, other materials such as Mo (Nelson et al. 2013) and ferritic ODS alloys (Klueh et al. 2005) are also being actively investigated for fuel cladding applications. The design of the new cladding alloy will be based on the following considerations (Knief 1992; Pint et al. 2013), summarized in the sketch after this list:
• The target mechanical properties under unirradiated conditions:
– Tensile strength at room temperature greater than 600 MPa.
– Yield strength at 1200 °C of about 100 MPa (versus 50 MPa for the Zr-4 alloy at 800 °C).
– 100 h creep rupture strength at 1200 °C of about 50 MPa (versus 5 MPa at 800 °C for the Zr-4 alloy).
– Elastic modulus of about 100 GPa at 1200 °C.
• Understanding irradiation effects:
– Formation of dislocation loops and the α′ phase; phase stability.
– Fracture toughness after irradiation to the 20 dpa level of about 50 MPa√m (compared with 12–15 MPa√m for Zr alloys); acceptable dimensional changes.
• Understanding high-temperature steam oxidation (>600 °C):
– Effect of reactive elements (actinides, Zr, Hf, Sc, etc.) on the diffusivity of Al3+, aluminum vacancies (VAl^3−), oxygen vacancies (VO^2+), and O2−, and on the adhesion of the oxide layer.
– Understanding the origin of oxide growth stresses during steam oxidation, the electronic properties, and the stability of the oxide layer under LOCA conditions.
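The target values above are collected below as a small data structure so that candidate alloys can be compared against them. The field names are ad hoc, and values where the source lost a comparison symbol are prefixed with "~" to flag the uncertainty.

```python
# Cladding design targets from the list above, as data (field names are ad hoc;
# "~" marks values whose comparison operator was not preserved in the source).
CLADDING_TARGETS = {
    "tensile_strength_RT_MPa":             {"target": ">600"},
    "yield_strength_1200C_MPa":            {"target": "~100", "Zr-4 at 800C": "~50"},
    "creep_rupture_100h_1200C_MPa":        {"target": "~50",  "Zr-4 at 800C": "~5"},
    "elastic_modulus_1200C_GPa":           {"target": "~100"},
    "fracture_toughness_20dpa_MPa_sqrt_m": {"target": "~50", "Zr alloy": "12-15"},
}

for prop, vals in CLADDING_TARGETS.items():
    print(f"{prop}: {vals}")
```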


It is well documented (Lim et al. 2013) that a higher concentration of Cr in ferritic steel leads to Cr-rich α′ and σ phase formation during thermal aging between 350 °C and 550 °C. Since the normal operating temperature of an LWR falls in this embrittling temperature range, the effect of spinodal decomposition should be considered. It has been observed that Al partitions to the Fe-rich α phase and that the partitioning factor increases with aging time in an Fe–20Cr–5Al ODS alloy (Capdevila et al. 2008). Under LOCA conditions, the α′ phase would dissolve back into the matrix, and therefore spinodal decomposition may not be an issue. Since the formation of α′ does not affect the distribution of scale-forming Al, the high-temperature oxidation resistance of the alloy may not be impaired by embrittlement aging at low temperatures; however, the ductility will be severely affected. The low-temperature (up to 350 °C) corrosion resistance of the FeCrAlRE alloys is imparted by a Cr-rich oxide layer formed in the high-temperature, high-pressure water under normal operating conditions. The required oxidation resistance under LOCA conditions can be attributed to the formation of an impervious α-Al2O3 film, which is stable at temperatures above 1040 °C. Transient aluminum oxides such as γ-Al2O3 and δ- or θ-Al2O3 are stable in the temperature ranges 500–800 °C and 800–1040 °C, respectively. The transformation of the transient oxides into α-Al2O3 is accompanied by a 10 % volume contraction that results in the accumulation of tensile stresses. If the oxide scale contains multiple oxide phases, the mismatch in coefficients of thermal expansion again leads to a buildup of stresses. When oxygen starvation occurs during high-temperature exposure, generation of oxygen vacancies (VO^2+) is expected at the expense of the oxygen sublattice (OO^x) following the reaction

OO^x → ½O2 + VO^2+ + 2e−    (36)

Similarly, under oxygen-rich conditions, aluminum ion vacancies can be generated by incorporation of oxygen atoms into the lattice from adsorbed oxygen molecules following the reaction

½O2 → OO^x + (2/3)VAl^3− + 2h+    (37)

These cation and anion vacancies are important in the formation of the oxide layer through the reaction

2VAl^3− + 3VO^2+ + 2AlAl^x + 3OO^x → Al2O3    (38)
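As a quick consistency check on the defect reactions (36)–(38) as reconstructed above, the sketch below verifies that effective charge is conserved on both sides of each reaction; the species names are ad hoc labels for the Kröger–Vink-style notation used in the text.

```python
# Charge-balance check of reactions (36)-(38): each species carries its effective
# charge, and the sum of coefficient*charge must match on both sides.
CHARGES = {"O_O^x": 0, "Al_Al^x": 0, "O2": 0, "e-": -1, "h+": +1,
           "V_O^2+": +2, "V_Al^3-": -3, "Al2O3": 0}

def net_charge(species: dict) -> float:
    """Sum of (stoichiometric coefficient * effective charge)."""
    return sum(coeff * CHARGES[s] for s, coeff in species.items())

reactions = {
    "(36)": ({"O_O^x": 1}, {"O2": 0.5, "V_O^2+": 1, "e-": 2}),
    "(37)": ({"O2": 0.5}, {"O_O^x": 1, "V_Al^3-": 2 / 3, "h+": 2}),
    "(38)": ({"V_Al^3-": 2, "V_O^2+": 3, "Al_Al^x": 2, "O_O^x": 3}, {"Al2O3": 1}),
}

for label, (lhs, rhs) in reactions.items():
    print(label, "charge balanced:", abs(net_charge(lhs) - net_charge(rhs)) < 1e-9)
```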

However, when the concentration of the vacancies reaches a nonequilibrium condition, the stability of the oxide layer is affected by forming porosity either at the oxide/atmosphere interface due to condensation of oxygen vacancies or at the oxide/metal interface due to condensation of cation vacancies. Since grain boundaries act as short circuit diffusion paths for the transportation of atoms and ions, the presence of aliovalent ions in the oxide layer and reactive elements at the grain boundaries of the alloy could significantly alter the diffusivities of both oxygen and aluminum species. Hindering the diffusion of species that form an oxide will significantly decrease the oxidation rate. In addition to affecting the diffusivities, the RE can also modify the electronic states of the oxide layer and thereby affect the oxidation kinetics (Heuer et al. 2011).

References
Anderson MT, Crawford SL, Cumblidge SE, Denslow KM, Diaz AA, Doctor SR (2007) NUREG/CR-6933, PNNL-16292, March 2007
Bloom EE (1998) J Nucl Mater 263:7



Bond AP, Dundar HJ (1977) In: Staehle RW, Hochmann J, MdRight RD, Slater RE (eds) Stress corrosion cracking of ferritic stainless steels. NACE, Houston, p 1136 Brinkman CR, Korth GE (1973) Heat-to-heat variations in the fatigue and creep–fatigue behavior of AISI type 304 stainless steel at 593 C. J Nucl Mater 48(3):293–306 Calonne V, Gourgues AF, Pineau A (2004) Fatigue Fract Eng Mater Struct 27:31–43 CANDU Reactors, Information from: http://www.aecl.ca/Reactors.htm Capdevila C, Miller MK, Russell KF, Chao J, Gonzalez-Carrasco JL (2008) Phase separation in PM 2000 Fe-base ODS alloy. Mater Sci Eng A 490:277–288 Caravaca C, De Cordoba G, Tomas MJ, Rosado M (2007) Electrochemical behavior of Gd in molten LiCl-KCl. J Nucl Mater 360:25–31 Carmack WJ et al (2009) Metallic fuels for advanced reactors. J Nucl Mater 392(2):139–150 Carter ML (2004) Mater Res Bull 39:1075 Castrillejo Y, Bermejo MR, Pardo R, Martinez AM (2002) Use of electrochemical techniques for study of solubilization of cerium compounds in molten chloride. J Electroanal Chem 322:124–140 Castrillejo Y et al (2005a) Electrochemistry of Dy in LiCl-KCl. Electrochim Acta 50:2047–2057 Castrillejo Y et al (2005b) Electrochemical behavior of Pr(III) in molten chlorides. J Electroanal Chem 575:61–74 Castrillejo J et al (2005c) Electrochim Acta 50:2047; (2006) 51:1941; (2008) 53:5106; (2005) J Electroanal Chem 575:61–74 Celestian AJ et al (2008) J Am Chem Soc 130:11689 Charit I, Murty KL (2008) Creep behavior of niobium-modified zirconium alloys. J Nucl Mater 374(3):354–363 Chen GZ, Fray DJ, Farthing TW (2000) Nature 407(6802):361–364 Choo KN, Pyun SI, Kim YS (1995) J Nucl Mater 226:9–14 Chung HM, Leax TR (1990) Mater Sci Technol 6:249–262 Cicero G, Catellani A, Galli G (2004) Phys Rev Lett 93:016102 Cicero S, Setien J, Gorrochategui I (2009) Nucl Eng Des 239:16–22 Cohen U (1983) J Electrochem Soc 130:1480 Cookson JM, Was GS (1995) Proceedings of the seventh international conference on environmental degradation of materials in nuclear power systems water reactors, NACE, Breckenridge, p 1109 Dahlkamp F (1993) Uranium ore deposits. Springer, Berlin. ISBN 3540532641 Domagala RF, McPherson DJ (1954) Trans AIME 200:238 “Economics of Nuclear Power” reported in http://www.world-nuclear.org/info/inf02.html Fullwood RR, Hall RE (1988) Probabilistic risk assessment in the nuclear power industry: fundamentals and applications. Pergamon Press, Oxford Galkin NP, Veryatin UD, Yakhonin IF, Lugonov AF, Dymkov YM (1982) The conversion of uranium hexafluoride to dioxide. At Energ 52(1):36–39 Gaune-Escard M, Bogacz A, Rycerz L, Szczepaniak W (1994) Thermochim Acta 236:67–80 Gogotsi YG et al (1996) J Mater Chem 6:595–604 Gong W, Gaune-Escard M, Rycerz L (2005) J Alloys Compd 396:92–99 Grobe M, Lehmann E, Steinbruck M, Kuhne G, Stuckert J (2009) J Nucl Mater 385:339–345 Grossbeck ML, Ehrlich K, Wassilew C (1990) An assessment of tensile, irradiation creep, creep rupture, and fatigue behavior in austenitic stainless steels with emphasis on spectral effects. J Nucl Mater 174(2–3):264–281 Guo H, Wang D, Gong S, Xu H (2014) Effect of reactive elements on oxidation behavior of b-NiAl at 1200  C. Corros Sci 78:369–377 Hallstadius L, Johnson S, Lahoda E (2012) Prog Nucl Energy 57:71–76 Page 46


Hamel C, Chamelot P, Taxil P (2004) Nd cathode process in molten fluoride. Electrochim Acta 49:4467–4476 Hazebroucq S, Picard GS, Adamo C (2005) A theoretical investigation of Gd(III) salvation in molten salts. J Chem Phys 122:224512 He C, Wu X, Shen J, Chu PK (2012) Nano Lett 12:1545–1548 Hejzlar P, Mattingly BT, Todreas NE, Driscoll MJ (1997) Nucl Eng Des 167:375–392 Henager CH et al (2008) J Nucl Mater 378:9–16 Heuer AH, Hovis DB, Smialek JL, Gleeson B (2011) Alumina scale formation: a new perspective. J Am Ceram Soc 94:S146–S153 Hirayama H, Kawakubo T, Goto A (1989) J Am Ceram Soc 72:2049–2053 Holt RA (1974) J Nucl Mater 51: 309; (1974) 50: 207 IAEA (2001) Safety assessment and verification for nuclear power plants – a safety guide. Safety standards series, No. NS-G-1.2. ISBN 92-0-101601-8 Ikeda M, Miyagi Y, Igarashi K, Mochinaga J, Ohno H (1988) The 20th symposium on molten salt chemistry, C303, Yokohama, 10 Nov 1988 Jayet-Gendrot S, Ould P, Meylogan T (1998) Nucl Eng Des 184:3–11 Jeong I-S, Ha G-H, Jun H-I (2009) J Loss Prev Process Ind 22:879–883 Jeong IS, Kim W, Kim TR, Jeon HI (2011) Nucl Eng Tech 43:83–88 Jevremovic T (2005) Nuclear principles in engineering. Springer, New York Jiang C et al (2009) Phys Rev B 79:132110 Kawaguchi S, Sakamoto N, Takano G, Matsuda F, Kikuchi Y, Mraz L (1997) Nucl Eng Des 174:273–285 Kerr R, Solana F, Bernstein IM, Thompson AW (1987) Metall Trans A 18A:1011 Kim WJ, Hwang HS, Park JY, Ryu WS (2003) J Mater Lett 22:581–584 Kimura A et al (1996) Irradiation hardening of reduced activation martensitic steels. J Nucl Mater 233–237(Pt A):319–325 Kiran Kumar M, Aggarwal S, Kain V, Saario T, Bojinov M (2010) Nucl Eng Des 240:985–994 Klueh RL, Alexander DJ (1996) Impact behavior of reduced-activation steels irradiated to 24 dpa. J Nucl Mater 233–237(Pt A):336–341 Klueh RL, Shingledecker JP, Swinderman RW, Hoelzer DT (2005) Oxide dispersion-strengthened steels: a comparison of some commercial and experimental alloys. J Nucl Mater 341:103–114 Knief RA (1992) Nuclear engineering: theory and technology of commercial nuclear power. Hemisphere Publishing Corporation, Washington DC Koyama T, Iizuka M, Shoji Y, Fujita R, Tanaka H, Kobayashi T, Tokiwai M (1997) An experimental study of molten salt reprocessing. J Nucl Sci Tech 34(4):384–393 Koyama T, Hijikata T, Usami T, Inoue T, Kitawaki S, Shinozaki T, Myochin M (2007) Integrated experiments on electrometallurgical processing using PuO2. J Nucl Sci Tech 44(3):382–392 Kraft T, Nickel KG, Gogotsi YG (1998) J Mater Sci 33:4357–4364 Krass AS, Boskma P, Elzen B, Smit WA (1983) Uranium enrichment and nuclear weapon proliferation. Taylor and Francis, London Kuan P, Hanson DJ (1991) INL report EGG-M-91375 Kuznetsov SA, Hayashi H, Minato K, Gauno-Escard M (2005) Determination of U and RE metals separation coefficients in LiCl-KCl melt. J Nucl Mater 344:169–172 Kwon J, Woo S, Lee Y, Park J, Park Y (2001) Nucl Eng Des 206:35–44 Leslie WC (1977) Stress corrosion cracking and hydrogen embrittlement of iron base alloys. NACE, Houston, p 52 Li J, Yang Y, Li L, Lou J, Luo X, Huang B (2013) J Appl Phys 113:023516 Lide DR (1997) Handbook of chemistry and physics, 78th edn. CRC Press, Boca Raton Page 47


Lim J, Hwang IS, Kim JH (2013) Design of alumina forming FeCrAl steels for lead cooled fast reactors. J Nucl Mater 441:650–660 Lippmann W, Knorr J, Nöring R, Umbreit M (2001) Nucl Eng Des 205:13–22 Liu Y, Su KH, Wang X, Wang Y, Zeng QF, Cheng LF, Zhang LT (2010) Chem Phys Lett 501:87–92 Liu Y, Su KH, Zeng QF, Cheng LF, Zhang LT (2012) Theor Chem Acc 131:1101 Makhijani A, Chalmers L, Smith B. Uranium Enrichment, Institute for Energy and Environmental Research, 15 Oct 2004. http://www.ieer.org/reports/uranium/enrichment.pdf Maziasz PJ (1993) Overview of microstructural evolution in neutron-irradiated austenitic stainless steels. J Nucl Mater 205:118–145 Maziasz PJ, McHargue CJ (1987) Int Metal Rev 32:190 MIN KS, Nam SW (2003) Correlation between characteristics of grain boundary carbides and creepfatigue properties in AISI 321 stainless steel. J Nucl Mater 322:91–97 Morss LR, Edelstein NM, Fuger J (eds) (2006) The chemistry of the actinide and transactinide elements, 3rd edn. Springer, Dordrecht Murray RL (2001) Nuclear energy: an introduction to the concepts, systems, and applications of nuclear processes. Butterworth Heinemann, Woburn Nam SW (2002) Assessment of damage and life prediction of austenitic stainless steel under high temperature creep-fatigue interaction condition. Mater Sci Eng A322(1–2):64–72 Nelson AT, Sooby ES, Kim YJ, Cheng B, Maloy SA (2013) High temperature oxidation of molybdenum in water vapor environments. J Nucl Mater 448(1–3):441–447 Ni N, Lozano-Perez S, Sykes J, Grovenor C (2011) Ultramicroscopy 111:123–130 Nilsson JO (1988), ASTM STP 942, 543, American Society for Testing Materials, Philadelphia OCDE/NEA report: accelerator-driven systems (ADS) and fast reactors (FR) in advanced nuclear fuel cycles. A comparative study, (2002) 1 Okamoto Y (1998) Phys Rev B 58:6760 Olander DR (1978) The Gas Centrifuge. Scientific American, August 1978, p 37 Opila EJ (2003) J Am Ceram Soc 86:1238–1248 Opila EJ, Hann RE Jr (1997) J Am Ceram Soc 80:197–205 Pint BA, Terrani KA, Brady MP, Cheng T, Keiser JR (2013) High temperature oxidation of fuel cladding candidate materials in steam-hydrogen environments. J Nucl Mater 440:420–427 RHO BS, Nam SW (2002) Heat effects of nitrogen on low-cycle fatigue properties of Type 304L austenitic stainless steels tested with and without tensile strain hold. J Nucl Mater 300:65–72 Roy JJ et al (1996) J Electrochem Soc 143:2487 Rudling P, Adamson R, Cox B, Garzarolli F, Strasser A (2008) High burn-up fuel issues. Nucl Eng Technol 40(1):1–8 Sakamura Y et al (1998) J Alloys Compd 271–273:592–596 Senor DJ, Youngblood GE, Moore CE, Trimble DJ, Newsome GA, Woods JJ (1996) Fusion Technol 30:943 Serrano K, Taxil P (1999) J Appl Electrochem 29:505 Shack WJ, Kassner TF (1994) Review of Environmental Effects on Fatigue Crack Growth of Austenitic Stainless Steels, NUREG/CR-6176, ANL-94/1, U.S. Nuclear Regulatory Commission, Washington, DC, NRC FIN L2424 Shapiro J (1990) Radiation protection, 3rd edn. Harvard University Press, Cambridge, MA Shen X, Pantelides ST (2013) J Phys Chem Lett 4:100–104 Shiba K et al (1996) Irradiation response on mechanical properties of neutron irradiated F82H. J Nucl Mater 233–237(Pt A):309–312 Shimada S, Onuma T, Kiyono H (2006) J Am Ceram Soc 89:1218–1225 Page 48


Shirai O, Iizuka M, Iwai T, Suzuki Y, Arai Y (2000) J Electroanal Chem 490:31–36 Shoesmith DW (2006) Corrosion 62:703–722 Storm van Leeuwen JW, Smith P (2005) Nuclear power: the energy balance. http://www.stormsmith.nl/ Suauzay M et al (2004) Creep-fatigue behaviour of an AISI stainless steel at 550 C. Nucl Eng Des 232:219–236 Suzuki S, Saito K, Kodama M, Shima S, Saito T (1991) SmiRt 11 transactions, vol. D, August 1991, Tokyo Takagi R, Rycerz L, Gaune-Escard M (1997) J Alloys Compd 257:134–136 Tan L, Allen TR, Barringer E (2009) J Nucl Mater 394:95–101 Terrani KA, Zinkle SL, Snead LL (2013) Advanced oxidation-resistant iron-based alloys for LWR fuel cladding. J Nuc Mater 448:374–379 Thorium fuel cycle–potential benefits and challenges, International Atomic Energy Agency, Vienna, IAEA-TECDOC-1450, May 2005 Tsuji H, Nakajima H (1994) Creep-fatigue Damage Evaluation of a Nickel-base Heat-resistant Alloy Hastelloy XR in Simulated HTGR Helium Gas Environment. J Nucl Mater 208:293–299 Van Der Schaaf B (1988) The effect of neutron irradiation on the fatigue and fatigue-creep behaviour of structural materials. J Nucl Mater 155–157:156–163 Wang ZX, Xue F, Guo WH, Shi HJ, Zhang GD, Shu G (2010) Nucl Eng Des 240:2538–2543 Wigeland RA et al (2006) Nucl Technol 154:95 Wray P, Marra J (2011) Materials for nuclear energy in the post-Fukushima era. Am Ceram Soc Bull 90(6):24–28 Yang YS, Kang YH, Lee HK (1997) Estimation of optimum experimental parameters in chlorination of UO2 with Cl2 gas and carbon for UCl4. Mater Chem Phys 50:243–247 Yilmazbahyan A, Breval E, Motta AT, Comstock RJ (2006) J Nucl Mater 349:265–281 Yokobori T, Yokobori AT Jr (2001) High temperature creep, fatigue and creep-fatigue Interaction in engineering materials. Int J Press Vessel Pip 78:903–908 Zhang H et al (2010) J Am Ceram Soc 93:1148–1155


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_31-2 # Springer Science+Business Media New York 2015

Fusion Energy
Hiroshi Yamada*
Department of Helical Plasma Research, National Institute for Fusion Science, Toki, Gifu, Japan

Abstract
Nuclear fusion is the power source of the sun and of all shining stars in the universe. Controlled nuclear fusion has been developed intensively worldwide for the past half century as an ultimate energy source for humankind. A fusion power plant is free from concerns about fuel exhaustion and CO2 production; it therefore has a very attractive potential to become a lasting, fundamental energy source and to contribute to resolving the problems of climate change. On the other hand, unresolved issues in physics and engineering still remain. It will take another several decades to realize a fusion power plant through the integration of advanced science and engineering, such as control of high-temperature plasmas exceeding 100 million °C and technology for breeding tritium from the neutrons generated. Research and development has just entered the phase of engineering demonstration, aiming to extract 500 MW of thermal energy from fusion reactions in the 2020s. The demonstration of electric power generation is targeted for the 2040s.

Introduction
Nuclear fusion is the power source of the sun and of all shining stars in the universe. An artificial sun on the Earth, that is, controlled nuclear fusion, has a very attractive potential to offer an environmentally friendly and intrinsically safe energy source. Tremendous efforts have been made globally over the past 50 years toward the realization of controlled nuclear fusion (Meade 2010; Braams and Stott 2002). Hereafter, nuclear fusion is referred to simply as fusion. At this moment, unresolved issues remain for a fusion reactor even with state-of-the-art science and technology, and it may still take another 30 years to realize the first fusion reactor. Nonetheless, fusion is no longer a dream or a mirage: the targeted goal and a roadmap to reach it can be defined clearly. Symbolically, the construction of the International Thermonuclear Experimental Reactor (ITER) (http://www.iter.org/; Green 2003), which plans to produce more than 500 MW of heat by fusion, has just been started through international collaboration. The fuel for nuclear fusion consists of the isotopes of hydrogen: deuterium and tritium. Deuterium can be extracted from water, and tritium can be bred from lithium, which is abundant, in a fusion reactor. Therefore fusion is an inexhaustible energy source. When these fuels are heated beyond 100 million °C, fusion reactions occur. In this extremely high-temperature state, the fuel becomes a plasma, an ionized gas consisting of ions and electrons (Eliezer and Eliezer 2001). High temperature means that the ions and electrons have large kinetic energies. It is necessary to bring nuclei (ions) sufficiently close to each other to drive the fusion reaction, and large kinetic energy is required to overcome the repulsive force between nuclei with positive electric charge. The product of the fusion reaction is helium. To control fusion reactions, it is necessary to integrate advanced science and technology, such as a deep understanding of complex plasma physics, the development of materials that withstand high heat and neutron loads, and critical engineering related to superconductivity, vacuum, and electricity. Nuclear fusion was discovered in 1932, earlier than nuclear fission in 1939. Although the physics studies were initiated at almost the same time, these two nuclear reactions have traced different histories.
*Email: [email protected]


Nuclear fission was used for an atomic bomb in 1942, only 3 years after its discovery. The first fission reactor started power generation in 1951, and more than 400 fission power plants now operate to provide base-load electricity worldwide. In contrast, nuclear fusion was used for a hydrogen bomb in 1952, but its peaceful use for power generation still awaits another couple of decades. These two nuclear reactions are quite different, and consequently they contrast with each other from the standpoint of engineering control. Nuclear fission occurs in heavy atoms such as uranium and plutonium; some isotopes of these heavy atoms are unstable and break apart easily or spontaneously. Although purification of fission fuel requires huge facilities and operating costs, it has been industrialized. Controlling nuclear fission means suppressing runaway reactions. In contrast, nuclear fusion does not occur easily. Since the reaction occurs between light nuclei, which have positive electric charge, extremely high energy is required to bring the nuclei close enough to fuse. This reaction occurs naturally only in the sun and stars, and the required temperature is in the millions of °C. Therefore, controlling nuclear fusion means heating the fuel to this extremely high temperature and keeping it there. The scientific assessment of a fusion reactor has been almost completed by more than 50 years of research, and the development stage is shifting to the assessment of engineering and technological feasibility. A fusion reactor is not a dream but a target within hailing distance. While another couple of decades of research and development is necessary to realize fusion energy, its realization would resolve global issues related to environment and energy and change social structures. Patient long-term research and development should be conducted with global social endorsement of this highly innovative technology. Then, steady progress will enable commercial reactors to deliver one million kW of electric power to the grid around 2050. The fusion power plant has a promising potential to provide base-load electricity in the latter half of this century. Two methodologies, magnetic confinement fusion (Lie et al. 2010) and inertial confinement fusion (Mima 2010), are being developed in parallel worldwide. This chapter is devoted to the present status and prospects of magnetic confinement fusion, which is now stepping up from successful scientific demonstration to engineering demonstration.

Why Fusion for Global Warming Suppression?
Fusion is still at the research and development stage, and it will take about another half century to commercialize a fusion reactor. Nonetheless, fusion offers attractive advantages over other energy sources in terms of waste, fuel, and safety.
1. Waste. Fusion does not emit CO2. The effect of power plants on global warming is assessed by the CO2 emission intensity, which accounts for construction and operation of the plant, the fuel consumed, the release of methane during extraction, and so on. Figure 1 shows the CO2 emission intensity of thermal power plants, a fission reactor, and a fusion reactor (Report of the Japan Atomic Energy Commission, 2005). Coal-fired, oil-fired, and LNG-fired thermal stations emit a much larger amount of CO2 than the other power stations. Although their CO2 emission intensities are reduced to about one third by employing CO2 capture, they remain the major CO2 emitters. A fusion power plant does not emit CO2 in operation, and its CO2 emission intensity is only slightly larger than that of hydroelectric and nuclear fission power plants. Fusion power is produced by a nuclear reaction, and fusion is therefore not free from nuclear waste. However, the product of the fusion reaction is helium, which is not radioactive at all, and the nuclear waste is limited to structural materials activated by neutrons. The absence of very long-lived radioactive waste means that the radiotoxicity decays away on the order of 100 years (see Fig. 2) (Jacquinot 2010). This property would ease the management of radioactive waste compared with fission reactors.

Fig. 1 Carbon dioxide emission intensity of fired (coal, oil, LNG), renewable (solar, wind), fusion, fission, and hydroelectric power plants. CCS stands for carbon capture and storage. Each bar is separated into the contributions from fuel and construction of a power plant

Fig. 2 Relative radiotoxicity of fission and fusion reactors versus time after shutdown (years). The bands correspond to differences in the fuel cycle (reprocessing) for fission and to the choice of structural material for fusion. The bottom black line is the radiotoxicity of coal (Reproduction of Fig. 1 in Jacquinot (2010))

The hazard potential due to radioactivity of a fusion reactor is about one thousandth that of a fission reactor.
2. Fuel. The fuels of fusion are abundant elements: deuterium and lithium. They are substantially inexhaustible and widely distributed on Earth. Thirty-three grams of deuterium exist in 1 m³ of water, which amounts to 4.5 × 10¹³ t in the oceans while remaining a tiny fraction of the water itself. The amount of lithium as a mineral resource is estimated at 940 million t, and the amount dissolved in the oceans at 230 billion t.


Compared with these abundant resources, a fusion power station producing one million kW of electricity consumes only 0.1 t of deuterium and 10 t of lithium a year, so the sustainability of fusion energy in terms of fuel is easy to evaluate (see the sketch after this list).
3. Safety. The fusion reaction occurs in a very high-temperature gas, the plasma, and fuel is supplied to the reactor as in a gas burner; the fuel does not stay in the reactor core for longer than about a minute. The fusion reaction is intrinsically quenched by any accident that disturbs the burning condition. Unlike the fission reaction, which is essentially a chain reaction in a massive fuel load, the fusion reaction cannot run away in principle. Since fusion itself is completely unrelated to uranium and plutonium, it does not contribute to the proliferation of nuclear weapons. An extreme temperature as high as 100 million °C is required to make fusion happen; even if the fusion fuels (deuterium and tritium) are available, fusion does not take place without such conditions. For this reason, fusion, unlike fission, is not subject to the strict safeguards of the nonproliferation treaty. It should be noted, however, that there is a fusion-fission hybrid concept which uses the neutrons generated by the fusion reaction to drive fission reactions; that concept is not free from proliferation issues.
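As a rough cross-check of the fuel figures above, the sketch below (Python) divides the quoted resource estimates by the stated annual consumption of a plant producing one million kW of electricity; the assumed fleet of 1,000 such plants is a hypothetical number chosen only for illustration.

```python
# Rough years-of-supply estimate for fusion fuels, using the round figures quoted in the text.
deuterium_in_oceans_t = 4.5e13   # t of deuterium dissolved in the oceans
lithium_mineral_t = 940e6        # t of lithium as a mineral resource
lithium_in_oceans_t = 230e9      # t of lithium dissolved in seawater

deuterium_per_plant_t = 0.1      # t consumed per year by a 1-GW(electric) plant
lithium_per_plant_t = 10.0       # t consumed per year by a 1-GW(electric) plant
plants = 1000                    # hypothetical global fleet of 1-GW fusion plants (assumption)

print(f"Deuterium:          {deuterium_in_oceans_t / (deuterium_per_plant_t * plants):.1e} years")
print(f"Lithium (mineral):  {lithium_mineral_t / (lithium_per_plant_t * plants):.1e} years")
print(f"Lithium (seawater): {lithium_in_oceans_t / (lithium_per_plant_t * plants):.1e} years")
```

Even with such a hypothetical fleet, mineral lithium alone lasts on the order of 10⁴-10⁵ years, and seawater lithium and deuterium extend the supply by many orders of magnitude.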

What Is Fusion?
Fusion Reaction
Solar energy, which not only humankind but almost all life on Earth enjoys, is delivered as light from the sun. The energy of this light originates from the fusion reactions taking place in the core of the sun, where four hydrogen nuclei are fused into one helium nucleus. This fusion reaction has been proceeding continuously: the sun has been burning stably for about five billion years and will continue to burn for another five billion years. The physical process of this fusion reaction in the sun was identified in the late 1930s, after the establishment of quantum mechanics (Bethe and Peierls 1935). Studies aiming to realize this reaction in the laboratory and to utilize it as an energy source were launched soon after this discovery. The special theory of relativity by Einstein gives the famous formula E = mc², where E, m, and c are energy, mass, and the velocity of light, respectively. This formula means that energy and mass are equivalent. The total rest mass of nuclei changes when the combination of nucleons is reorganized by a nuclear reaction; if the rest mass after the reaction is smaller than before, the lost mass is transformed into energy. This relation is not limited to nuclear reactions but also applies to chemical reactions. However, while the fractional mass loss typically amounts to about one thousandth in a nuclear reaction, in a chemical reaction it is only of the order of one hundred-millionth. This is why a nuclear reaction produces 100,000 to 1 million times more energy than a chemical reaction. Figure 3 shows the mass per nucleon (proton or neutron) composing an atomic nucleus, from the lightest element, hydrogen, to the heaviest element found in nature, uranium. Even in a nuclear reaction, the number of nucleons is conserved. The figure therefore indicates that mass is lost when the combination (fusion) of lighter elements like hydrogen generates a heavier element like helium. Mass is also lost in the breakup of heavier elements like uranium into lighter elements; this is the fission reaction already used in nuclear power plants. The mass per nucleon is lowest for iron, which is therefore the most stable element. In stars like the sun, fusion reactions proceed stage by stage and ultimately generate iron. Elements heavier than iron are generated by other processes, such as neutron capture in a supernova explosion; the fact that such elements exist on Earth means that the solar system is at least a second-generation system, formed from material that has passed through a supernova explosion since the beginning of the universe. There are a variety of fusion reactions, and each has its own specific reaction probability. Since this probability of a nuclear reaction between particles has the dimension of an area (m²), it is referred to as the cross section and denoted by σ. The probability of the fusion reaction between two particles has been well investigated and quantified by various kinds of accelerator experiments.


Fig. 3 Change of the mass per nucleon composing an atomic nucleus, plotted against mass number from hydrogen to uranium

Fig. 4 Isotopes of hydrogen: hydrogen (one proton), deuterium (one proton and one neutron), and tritium (one proton and two neutrons), each with one electron

The probability of the reaction of four hydrogen nuclei into a helium nucleus, which takes place in the core of the sun, is extremely low. The sun is so huge (its diameter is about 100 times that of the Earth) that it can keep burning through this fusion reaction despite its very low probability. Therefore, another fusion reaction of hydrogen isotopes (see Fig. 4), the one with the largest probability, is required to realize fusion at plant scale on Earth. This is the reaction between D (deuterium) and T (tritium), whose probability reaches its maximum at a relative speed of the two particles of about 3 × 10⁶ m/s. To use the fusion reaction for energy production, beyond basic accelerator experiments of particle physics, a massive number of fusion reactions must be controlled. A cluster of particles with this speed forms a very high-temperature gas: a plasma. The ensemble average of the reaction probability over the distribution functions of all particles is then the meaningful quantity for evaluating the released power.

Fig. 5 Fusion reaction rate ⟨σv⟩ (m³ s⁻¹) versus kinetic temperature (keV) for reactions between light nuclei (D-T, D-D, D-³He, T-T, T-³He, p-T) (Reproduction from Laby Online (2005))

While the cross section is a function of the energy of the individual particles, the reaction rate (the number of reactions per unit volume and unit time), expressed by ⟨σv⟩ with units of m³/s at a given temperature, is calculated by integrating the cross section over the velocity distribution. The reaction rates of representative fusion reactions are shown in Fig. 5 (Laby Online 2005). For D (deuterium) and T (tritium) the rate peaks around several tens of keV (of the order of 1,000 million °C; note that 1 eV (electron volt) corresponds to 11,600 K). The reaction is written as

D + T → ⁴He + n

Consequently, helium and a neutron are generated, and an energy of 17.6 MeV (2.8 × 10⁻¹² J) is released simultaneously. From the law of momentum conservation, the kinetic energy delivered to the helium nucleus and to the neutron is 3.5 and 14.1 MeV, respectively. The fusion power density P_fusion is expressed by

P_fusion = n_D n_T ⟨σv⟩_DT Q_DT,

(1)

where n_D, n_T, ⟨σv⟩_DT, and Q_DT are the particle density of deuterium, the particle density of tritium, the rate coefficient of the DT fusion reaction, and the energy released by one DT fusion reaction (17.6 MeV = 3.5 MeV + 14.1 MeV), respectively. For example, a typical presumed fusion reactor condition with n_D = n_T = 1 × 10²⁰ m⁻³ and a temperature of 20 keV (230 million °C) gives a fusion power density of about 11 MW/m³.
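As a quick numerical check of Eq. 1, the following sketch evaluates the power density for the quoted condition; the value of ⟨σv⟩_DT at 20 keV (roughly 4 × 10⁻²² m³/s) is an assumption read off Fig. 5, so the result is approximate.

```python
# Fusion power density from Eq. 1: P_fusion = n_D * n_T * <sigma v>_DT * Q_DT
MEV_TO_J = 1.602e-13        # J per MeV

n_D = 1e20                  # deuterium density (1/m^3)
n_T = 1e20                  # tritium density (1/m^3)
sigma_v_DT = 4e-22          # rate coefficient at ~20 keV (m^3/s), read off Fig. 5 (assumption)
Q_DT = 17.6 * MEV_TO_J      # energy released per DT reaction (J)

p_fusion = n_D * n_T * sigma_v_DT * Q_DT    # W/m^3
print(f"Fusion power density: {p_fusion / 1e6:.1f} MW/m^3")
```

With these inputs the sketch reproduces the quoted figure of about 11 MW/m³.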



Fig. 6 Major nuclear reactions in a fusion reactor

While deuterium makes up about 1/7,000 (0.015 %) of hydrogen, the natural abundance of tritium is negligibly small. A fusion reactor therefore produces its own tritium through the reaction of lithium with the neutrons generated by the fusion reaction, via the following two reactions:

n + ⁶Li → ⁴He + T + 4.8 MeV
n + ⁷Li → ⁴He + T + n − 2.5 MeV

Natural lithium is composed of 7.4 % ⁶Li and 92.6 % ⁷Li. While the reaction between a neutron and ⁶Li releases 4.8 MeV, the ⁷Li reaction occurs only with neutrons fast enough to supply 2.5 MeV of energy. Therefore, lithium enriched in ⁶Li to several tens of percent is placed around the fusion reactor core as a blanket to breed tritium (see Fig. 6). Various forms of breeding material have been proposed, such as ceramics like Li2O and Li2TiO3 and liquid metals like Li and LiPb. Techniques for the isotope separation of lithium are established: the column-exchange separation method, which uses the difference in affinity for mercury, and the vacuum distillation method, which uses the difference in the mean free path of the evaporated isotopes. The lithium contained in a single cellular phone (around 0.3 g), together with the deuterium extracted from only 3 l of ordinary water, yields about 78,000 MJ of fusion energy, equivalent to 22,000 kWh of electricity; a typical family in a developed country could be supplied with this electricity for a year. A fusion power plant producing one million kW of electric power consumes 0.1 t of deuterium and 10 t of lithium a year as fuel. Needless to say, deuterium is truly abundant in seawater, and the technology for extracting heavy water (D2O) is available as an industrial process. Since the fusion energy is about one million times larger than chemical binding energies, the cost of electrolyzing heavy water to obtain deuterium is easily recovered. Lithium is an abundant mineral resource and is also available from seawater. Collection of lithium from seawater has not been industrialized yet, but promising technologies are being developed, and the increasing demand for lithium for batteries is accelerating them. A fusion reactor is therefore free from fuel-supply issues.
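The cellular-phone example can be checked with simple bookkeeping. The sketch below assumes, for illustration, that all 0.3 g of lithium is converted to tritium and burned with the roughly 0.1 g of deuterium contained in 3 l of water (33 g per m³), each DT reaction releasing 17.6 MeV.

```python
# Back-of-the-envelope check of the cellular-phone fuel example above.
# Assumption: all of the 0.3 g of lithium is bred to tritium and burned with deuterium.
N_A = 6.022e23          # Avogadro's number (1/mol)
MEV_TO_J = 1.602e-13    # J per MeV

lithium_g = 0.3                               # lithium in one cellular phone
n_tritium = lithium_g / 6.94 * N_A            # Li atoms -> T atoms (1:1, natural Li, 6.94 g/mol)

deuterium_g = 33.0 * (3.0 / 1000.0)           # 33 g of D per m^3 of water, 3 litres ~ 0.1 g
n_deuterium = deuterium_g / 2.014 * N_A       # slightly more than n_tritium, so Li is limiting

energy_J = n_tritium * 17.6 * MEV_TO_J        # 17.6 MeV per DT reaction
print(f"Deuterium from 3 l of water: {deuterium_g:.2f} g ({n_deuterium:.1e} atoms)")
print(f"Fusion energy: {energy_J / 1e6:,.0f} MJ  ~ {energy_J / 3.6e6:,.0f} kWh")
```

The result, roughly 73,000 MJ or 20,000 kWh, agrees with the quoted 78,000 MJ and 22,000 kWh to within about 10 %, the difference coming from rounding of the input figures.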

Difference Between Fusion and Fission Reactors

While both fusion and fission release enormous energy through the loss of mass when nuclei are transformed, the two reactions show contrasting features.



The first difference lies in how the reaction is controlled. The fission reaction already employed in power plants is driven by the absorption of neutrons in uranium-235. One fission reaction releases two or three neutrons, so a chain reaction takes place; in principle, a single neutron can trigger a continuous and even explosive reaction within a sufficient amount of uranium-235. In a fission power plant, uranium fuel for several years of operation is loaded into the reactor and burned gradually, with control rods absorbing neutrons to apply the brake. In a fusion reactor, in contrast, hydrogen-isotope fuel is fed continuously into the reactor as in a gas burner, so when refueling stops, the fusion reaction stops immediately. Burning takes place in the plasma state, described in detail later, and a very high temperature of more than 100 million °C is required to sustain it. This necessary condition is easily broken, since fusion involves no chain reaction in principle. For example, injecting too much fuel lowers the temperature and quickly extinguishes the fusion reaction, because the fusion power can no longer keep the incoming fuel sufficiently hot. The second difference concerns the products of the reaction. In fission, elements with mass numbers around 90-100, such as strontium and yttrium, and elements with mass numbers around 130-140, such as iodine and barium, are produced as ash. Most of these are highly radioactive and require careful treatment as high-level radioactive waste. In addition, non-fissile uranium-238 is converted into plutonium-239; while this plutonium can be used as fission fuel in a reactor, it is a long-lived radioactive element with very high toxicity, can be used to make nuclear weapons, and must be strictly controlled under the Nuclear Non-Proliferation Treaty. The product of the fusion reaction, on the other hand, is a stable element: helium. The simultaneously produced neutrons are used to make tritium by reacting with lithium in the surrounding blanket; they are also absorbed in peripheral components of the reactor and may activate them. Tritium itself is a radioactive element with a half-life of about 12 years, decaying to helium-3 by β-decay. A fusion reactor is therefore not free from issues related to radioactivity, but they are much mitigated. Its hazard potential can be compared using a potential radioactive risk factor, which assesses the maximum-accident risk of a reactor by how much air would be required to dilute the released radioactive elements to a level tolerable to the human body. When iodine-131 and tritium, the isotopes most easily absorbed by the human body from fission and fusion reactors, respectively, are compared, the risk of a fusion reactor is lower than that of a fission reactor by a factor of about 1,500. The risk associated with the whole inventory of activated material is about one hundredth during operation, and the risk of a fusion reactor decays quickly after shutdown since the majority of the produced radioactive elements have short half-lives. Present material designs for a fusion reactor aim at the reuse of materials after a 100-year cooling phase. Both fission and fusion power stations need fuel processing; however, the risks related to proliferation and radioactive waste in that processing are much smaller for a fusion power station.
In the case of a fission power station, the used fuel contains high-level radioactive waste as fission products, and plutonium is bred from uranium-238. High-level radioactive waste is hazardous and must be managed safely for an extremely long time. Reprocessing of used fuel breeds new fuel (plutonium), which in turn raises proliferation concerns. It should also be pointed out that this fuel processing is done in a fuel-cycle facility usually located away from the fission power station, so tight security is required when transporting used and fresh fuel between the power station and the fuel-cycle facility. In the case of a fusion reactor, by contrast, tritium is bred inside the fusion power station through the reaction between lithium and neutrons described in the previous section. This process is confined within the power station, so transportation of radioactive tritium outside the fusion power station is not required.


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_31-2 # Springer Science+Business Media New York 2015

Fig. 7 Conceptual schematic view of a fusion power plant: D-T plasma surrounded by plasma-facing components, blanket (Li, T breeding), and superconducting magnets, with refueling and helium ash pumping, a heat exchanger and coolant loop, a steam turbine-generator and condenser delivering electric power, and deuterium and tritium extractors fed from seawater

Fig. 8 Four states of matter: solid (ice), liquid (water), gas (steam), and plasma (nuclei and electrons separated)

Core of Fusion Reactor: Burning Plasma
A schematic diagram of a fusion reactor is shown in Fig. 7. The energy source of a fusion reactor is the burning plasma in its core. In this section, the principle of confining the plasma so that it reaches burning conditions is described.

Characteristics of Plasma

The fusion reaction requires a temperature beyond 100 million °C, which is higher than that in the core of the sun by more than one order of magnitude. At such temperatures all materials become plasma, an ionized gas. It is well known that matter has three common states: solid, liquid, and gas. When a gas is heated to around ten thousand °C, the molecules dissociate into atoms, and then the electric binding between the positively charged nuclei and the negatively charged electrons is broken. This state is the fourth state of matter, the plasma (see Fig. 8). All the fixed stars shining in the sky, including the sun, are masses of plasma. On Earth, lightning flashes and aurorae are natural plasmas, and plasmas are used in neon lights and plasma displays. High-temperature plasma must be confined to ignite the fusion reaction and maintain burning. It should be noted that confinement does not mean absolute confinement, in the sense of releasing nothing at all. To prevent the fuel from cooling, thermal insulation is needed, much as a fireplace needs it to keep burning, and new fuel must be supplied continuously. Confinement is therefore defined here as sustaining a state that satisfies the burning condition while the fuel is continuously replaced.



Fig. 9 Motion of an ion and an electron restricted by a magnetic field line (Lorentz force F = qv × B)

The temperature must be kept above 100 million °C. Ordinary materials, such as the metal of a gas cylinder, cannot withstand the temperature of such a plasma; in other words, the plasma would simply be cooled by the cylinder wall. In addition to temperature, an appropriate density, as high as 1 × 10¹⁴ ions per cm³ (1 × 10²⁰ ions/m³), is required to keep the plasma burning. This density is about 1/200,000 that of air, which means the burning plasma is very dilute. It should be noted, however, that the pressure of the burning plasma reaches roughly 10 atmospheres because of its temperature of more than 100 million °C, and a balancing force against this pressure is required to confine it. In the sun, its own gravity balances the expansion due to the plasma pressure, but gravitation is, unfortunately, not strong enough to realize the fusion burning condition by the same scheme on Earth. There are two alternative concepts for realizing and controlling the fusion reaction: inertial confinement and magnetic confinement. Very fast compression and heating of a small D/T fuel pellet can be achieved by highly intense lasers reaching several hundred terawatts or even petawatts; this route aims to realize the required fusion conditions on a very short timescale, as long as inertia confines the fuel (Atzeni and Meyer-Ter-Vehn 2004). The outer layer of the fuel pellet (typically a few millimeters in diameter) is heated by the intense laser itself or by converted X-rays and explodes outward. This ablation produces the force that compresses the inner part of the pellet, and the implosion energy drives the D/T fuel to ignition. The other method of sustaining the burning plasma is magnetic confinement. A magnetic field forms an invisible bottle that holds the plasma away from the material wall in steady state. Since the plasma is composed of charged particles (ions and electrons), this invisible magnetic bottle can confine it: charged particles gyrate around the field lines under the Lorentz force, and their motion is consequently tied to the magnetic field lines, as shown in Fig. 9. Note that the rotation directions of positively charged ions and negatively charged electrons are opposite to each other. This is the principle of magnetic confinement of plasmas in a microscopic (particle) view. In a macroscopic (fluid) view, the expanding pressure of the plasma is pushed back by the pressure of the magnetic field, which is usually about 20 times larger than the plasma pressure.
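The pressure figures quoted above can be checked with the ideal-gas relation p = (n_i + n_e) k_B T and the magnetic pressure B²/2μ₀. The density, temperature, and field values in the sketch below are illustrative assumptions in the range discussed in this chapter, not design values.

```python
# Plasma pressure versus magnetic pressure (order-of-magnitude sketch).
import math

k_B = 1.381e-23             # Boltzmann constant (J/K)
mu_0 = 4 * math.pi * 1e-7   # vacuum permeability (H/m)

n_ion = 1e20                # ion density (1/m^3), as quoted in the text
n_e = 1e20                  # electron density (quasi-neutral plasma)
T_K = 2.3e8                 # ~230 million degC, i.e. ~20 keV (assumption)
B = 5.0                     # magnetic field (T), a typical tokamak-scale value (assumption)

p_plasma = (n_ion + n_e) * k_B * T_K     # Pa
p_magnetic = B**2 / (2 * mu_0)           # Pa

print(f"Plasma pressure:   {p_plasma / 1.013e5:.1f} atm")
print(f"Magnetic pressure: {p_magnetic / 1.013e5:.0f} atm")
print(f"Ratio (magnetic/plasma): {p_magnetic / p_plasma:.0f}")
```

With these round numbers the plasma pressure comes out at several atmospheres (the quoted ~10 atm corresponds to somewhat higher density or temperature), and the magnetic pressure exceeds it by a factor of roughly 15-20, consistent with the statement above.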

Magnetic Confinement of Plasma

If a magnetic field line intersects a material surface, the charged particles travel along the field line and hit the material. Circulating magnetic field lines without ends are therefore required to avoid interaction with the material wall. Figure 10 shows the basic concept, in which an electric current along the major axis generates the circulating magnetic field lines. An important point is that the strength of this magnetic field is inversely proportional to the distance from the major axis. The charged particles gyrate around the magnetic field lines, and their gyration radius, called the Larmor radius, is inversely proportional to the strength of the magnetic field.



Fig. 10 Generation of circulating magnetic field lines without an end: an electric current I along the major axis generates a magnetic field B whose strength falls off as B ∝ 1/R with distance R from the axis

Fig. 11 Drift motion of charged particles in a nonuniform magnetic field. E and B denote the electric field and the magnetic field, respectively. (a) Drift caused by the gradient of the magnetic field. (b) Drift caused by the electric field resulting from the gradient of the magnetic field

Therefore, the gyration radius becomes smaller as a particle approaches the major axis and larger as it moves away from it. The combination of this variation with the gyration results in a vertical drift of the particles, and, recalling that ions and electrons rotate in opposite directions, the two species are separated vertically (see Fig. 11a). This charge separation generates a vertical electric field, which accelerates or decelerates the charged particles. Since the gyration radius is proportional to the particle velocity, the gyration is affected by the electric field as shown in Fig. 11b: both ions and electrons drift away from the major axis and are eventually lost. As a result, simple circulating magnetic field lines cannot confine charged particles. By twisting the magnetic field lines around the torus, the upper and lower regions are short-circuited and the unfavorable charge separation is avoided. In practice, sophisticated modifications of the simple circulating field are required to keep a high-temperature plasma stable: one element twists the magnetic field lines, and another forms nested magnetic surfaces composed of numerous turns around the doughnut. Put most simply, the centrifugal force driven by motion along the bent magnetic field lines and the electric field generated by charge separation are compensated by the geometrical arrangement. There are two ways to form a magnetic bottle that fulfills these requirements. One is called the "tokamak" (Wesson 2004), invented by Sakharov and Tamm in the former Soviet Union in the 1950s (Sakharov and Leontovitch 1961). This concept is based on the combination of an externally generated circulating magnetic field and the magnetic field generated by currents circulating in the plasma (Fig. 12a).



Fig. 12 Concepts of magnetic confinement fusion. (a) Tokamak (toroidal field coils and plasma currents) and (b) helical system (helical coils)

The circulating, doughnut-shaped magnetic field can hold the plasma away from the material wall. A set of planar coils arranged around the doughnut generates the simple circulating magnetic field, and the circulating currents in the plasma are driven on the principle of a transformer. This concept is axisymmetric and relatively simple, both in machine construction and in the theoretical analysis of the plasma physics. The other concept is the helical system, in which twisted (helical) coils alone generate the magnetic field that confines the plasma (see Fig. 12b). The American physicist Spitzer (1954) and the Japanese physicist Uo (1961) are the pioneers of this concept; their inventions are called the stellarator and the heliotron, respectively. A helical system does not require currents in the plasma to generate the twisted magnetic field, so it is free from the issues related to plasma currents that are critical in a tokamak, and it has an intrinsic advantage for steady-state, stable operation. Although the complicated three-dimensional geometry has slowed the progress of this concept both experimentally and theoretically, its development is being accelerated by the first large-scale experiment (the Large Helical Device, LHD (http://www.lhd.nifs.ac.jp/en/), in Japan) and by large-scale simulations, and its confinement capability has been shown to be equivalent to that of a tokamak. Although the physical picture of particle confinement is well documented, the plasma also behaves as a fluid. Its dynamics is highly nonlinear, the modeling of plasma motion is still a challenging issue, and heat loss due to turbulence in the plasma is not yet fully understood. The confinement capability of a plasma can be compared with the containment of water in a bucket with holes (see Fig. 13a). The supply of water from an external faucet P is balanced by the leak from the holes L, and consequently the water level W is maintained; when the faucet is closed, the water level decreases exponentially with a specific time constant τ. In the case of a fusion plasma, the plasma stored energy W is modeled by

dW/dt = P_in − W/τ_E,

(2)

where P_in is the input heating power and τ_E is called the energy confinement time. If there is no external heating, the plasma stored energy decays as exp(−t/τ_E), as shown in Fig. 13b. The power balance in a fusion reactor is shown schematically in Fig. 14. Note that the fusion energy carried by the fusion-produced helium contributes to heating the plasma, whereas the other fusion product, the neutron, is not confined by the magnetic field because it has no electric charge. The energy multiplication factor Q of a fusion reactor is defined from this picture by

Q = P_fusion / P_in

(3)



Fig. 13 (a) Concept of confinement, with water compared to energy: the supply P from the faucet balances the leak L from the holes and maintains the level W. (b) The level (volume) of water decreases exponentially with time in units of t/τ

Fig. 14 Conceptual diagram of the power balance in a fusion reactor (P_in, P_fusion, P_alpha, P_neutron, P_loss, P_out)

This Q value should be larger than about 50 to establish a fusion reactor as an energy source, and the condition Q = 1 is called breakeven. In steady state, Eq. 2 gives

P_in = W/τ_E.

(4)

The combination of Eq. 1 in section "Fusion Reaction" and Eq. 4 yields

Q ∝ ⟨n_D n_T ⟨σv⟩_DT⟩ / (W/τ_E),

(5)

where the outer brackets denote the volume-averaged value. The rate coefficient ⟨σv⟩_DT is well approximated by the square of the temperature T in the targeted range around 10 keV, and n_D and n_T are ideally equal. The plasma stored energy W, in turn, can be expressed in terms of ⟨nT⟩, where n is the representative particle density.



Fig. 15 Comparison of the energy confinement time observed in experiments (s) with the prediction from the scaling (s)

Consequently, Q is expressed approximately by ⟨n²T²⟩τ_E/⟨nT⟩ and hence, roughly, by ⟨nT⟩τ_E. More simply, nTτ_E is called the fusion triple product and is the most important parameter describing the performance of a fusion plasma. In the early stage of fusion energy development, J. D. Lawson defined the condition for producing net energy (Lawson 1957) and showed that the breakeven condition corresponds to about 1 × 10²¹ m⁻³ keV s. More specifically, a typical target is the simultaneous achievement of a density of 1 × 10²⁰ m⁻³, a temperature of 10 keV (around 120 million °C), and an energy confinement time of 1 s. Although the plasma turbulence that dominates the energy confinement is not yet understood from first principles, empirical scalings reliable enough to extrapolate to a reactor are already available (Lawson 1957; ITER Physics Basis Editors 1999). The energy confinement time is described by power laws in the plasma and operational parameters, for example (ITER Physics Basis Editors 1999),

τ_E = 0.0562 I^0.93 B^0.15 n19^0.41 P^−0.69 R^1.39 a^0.58 κ^0.78 M^0.19,

where I is the current circulating in the tokamak plasma in MA, B is the magnetic field in T, n19 is the line-averaged density in units of 10¹⁹ m⁻³, P is the heating power in MW, R is the major radius of the torus in m, a is the minor radius of the torus in m, κ is the elongation of the poloidal cross section of the plasma (the cross section is usually a vertically elongated shape, and κ is the ratio of its height to its width), and M is the mass number (1 for hydrogen, 2 for deuterium). For helical systems another scaling expression has been proposed (Dinklage et al. 2007), and the two scalings share a large commonality in physics. As shown in Fig. 15, the scaling fits the experimental observations to within a factor of 2 over 3 orders of magnitude.
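The scaling expression above can be evaluated directly. The sketch below implements it as written and applies it to a set of illustrative ITER-like parameters (rough assumptions for demonstration, not official design values); it also forms the fusion triple product nTτ_E for comparison with the breakeven figure quoted earlier.

```python
# Energy confinement time from the empirical scaling quoted above:
# tau_E = 0.0562 * I^0.93 * B^0.15 * n19^0.41 * P^-0.69 * R^1.39 * a^0.58 * kappa^0.78 * M^0.19
def tau_e_scaling(I, B, n19, P, R, a, kappa, M):
    """I [MA], B [T], n19 [1e19 m^-3], P [MW], R [m], a [m], kappa [-], M [amu] -> tau_E [s]."""
    return (0.0562 * I**0.93 * B**0.15 * n19**0.41 * P**(-0.69)
            * R**1.39 * a**0.58 * kappa**0.78 * M**0.19)

# Illustrative ITER-like parameters (assumptions, not official design values)
tau_e = tau_e_scaling(I=15, B=5.3, n19=10, P=90, R=6.2, a=2.0, kappa=1.7, M=2.5)

n = 1e20            # density (m^-3)
T_keV = 10          # temperature (keV)
triple_product = n * T_keV * tau_e   # m^-3 keV s

print(f"tau_E ~ {tau_e:.1f} s")
print(f"n*T*tau_E ~ {triple_product:.1e} m^-3 keV s (breakeven ~ 1e21)")
```

With these rough inputs the scaling predicts a confinement time of about 3.5 s, and the resulting triple product comfortably exceeds the breakeven value of 1 × 10²¹ m⁻³ keV s quoted above.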

Engineering Elements of Fusion Reactor
Structure of a Fusion Reactor
As shown in Fig. 7, the fundamental components of the core are (1) the confined plasma as the energy source of the fusion reaction, (2) the plasma-facing components surrounding the plasma, (3) the blanket, which receives the neutrons and generates heat and tritium, and (4) the superconducting magnets, which generate the confining magnetic field.


The heat generated in the blanket is transferred by a coolant such as water; the subsequent process of electric power generation is the same as in a fission power plant or a thermal power plant. In addition, there are auxiliary facilities not seen in other power plants, namely the vacuum pumping system and the heating system that brings the plasma to ignition. A fusion reactor is basically a large-scale electromagnetic and nuclear device that requires an extremely high level of integration of engineering and physics. Steady-state control of the plasma is a primary demand, and safety and materials are also key issues. Damage to the plasma-facing components by high-energy (14 MeV) neutrons and helium irradiation must be assessed precisely to guarantee safety over their lifetime, and for safe steady-state operation, peak heat loads exceeding 10 MW/m² must be managed. Moreover, an economically competitive power station must minimize the internal circulating power consumed in the plant. A fusion reactor needs a variety of large-scale electric facilities, such as vacuum and cooling pumps, the cryogenic system, the magnets, and the heating and control systems. The corresponding internal circulating power in a present-day fission power station is only 3-4 % of the generated electric power; if the circulating power needed to operate a fusion reactor becomes significantly larger, the fusion reactor cannot be economically attractive. The three major components besides the core plasma mentioned above are explained in detail in the following sections.
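The point about internal circulating power can be illustrated with a very simple power-balance model; the efficiencies and Q values below are assumptions chosen only to show the trend, not figures from any specific reactor design.

```python
# Toy power-balance model: fraction of gross electric output recirculated
# to run the plasma heating/current-drive systems and other auxiliaries.
def recirculating_fraction(Q, eta_thermal=0.33, eta_heating=0.5, aux_fraction=0.04):
    """Q: fusion power / external heating power.
    eta_thermal: thermal-to-electric conversion efficiency (assumption).
    eta_heating: wall-plug efficiency of the heating/current-drive systems (assumption).
    aux_fraction: other auxiliaries (pumps, cryogenics, ...) as a fraction of gross output."""
    p_fusion = 1.0                                  # normalize fusion power to 1
    p_heat = p_fusion / Q                           # external heating power into the plasma
    p_gross = eta_thermal * (p_fusion + p_heat)     # gross electric output
    p_recirc = p_heat / eta_heating + aux_fraction * p_gross
    return p_recirc / p_gross

for Q in (5, 10, 20, 50):
    print(f"Q = {Q:2d}: recirculating fraction ~ {recirculating_fraction(Q):.0%}")
```

Under these assumptions the recirculating fraction falls below roughly 20 % only for Q of order 50, which is the background to the requirement, stated in section "Core of Fusion Reactor: Burning Plasma," that Q should exceed about 50.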

Plasma-Facing Component and Structure Material
When the plasma is contaminated by impurities other than deuterium and tritium, radiation losses are enhanced, cooling the plasma, and the fuel is diluted. The plasma is therefore generated in an airtight vacuum vessel. Although the plasma is held away from the wall of the vacuum vessel by the magnetic field, a fraction of the highly energetic particles, and particles neutralized by the charge-exchange process, bombard the plasma-facing components on the wall. It should be noted that the plasma, with a pressure as high as 10 atmospheres, is held in place by the magnetic field and that the space between the plasma and the vessel wall is almost a vacuum containing only very dilute neutral gas. While the temperature of the burning plasma exceeds 100 million °C, direct interaction between the burning plasma and the plasma-facing components is avoided by the magnetic field. Even with this thermal insulation, the heat load on the plasma-facing components reaches 10 MW/m² due to radiation and the fluxes of neutrons and charge-exchanged neutrals. The operational temperature of the plasma-facing components is evaluated to be up to 900 °C, and the first planned material is tungsten, which has a high melting temperature (3,380 °C). Although carbon is widely used for plasma-facing components in present fusion experiments, it is not compatible with reactor conditions because of its large erosion and its retention of tritium. The neutrons generated by the fusion reaction are not confined by the magnetic field and penetrate into the structural materials. The plasma-facing components and structural materials are therefore required to have sufficient tolerance against the heat and neutron loads; in addition, the materials employed should have good heat-removal properties and reduced activation, and should preferably retain sufficient tightness and mechanical strength over the lifetime of the plant. Alloys such as stainless steel are usually used in current experimental devices, but they do not fulfill the requirements of a reactor. Ferritic steel is a promising material for the first generation of reactors, and advanced materials based on vanadium alloys and silicon carbide are being developed. In addition to the heat and neutron loads, helium generates bubbles in the plasma-facing components and causes swelling and consequent blistering; since falling flakes deteriorate the plasma performance, materials must suppress this effect as well as maintain the soundness of the components themselves. In general, materials degrade in properties such as dimensional stability, yield strength, ductility, creep rate, fatigue life, and fracture toughness, and neutron radiation often accelerates this process (Zinkle 2005).


Fig. 16 Schematic view of the International Fusion Materials Irradiation Facility (IFMIF). PIE and RFQ stand for postirradiation examination and radio-frequency quadrupole, respectively

The standard measure of irradiation damage is displacements per atom (dpa) (Norgett et al. 1975). A dose of 1 dpa corresponds to a 14 MeV neutron wall loading of 0.1 MW year/m² in steels. The structural components of a fusion power plant are expected to receive a neutron dose of 100-150 dpa at temperatures around 500-600 °C. While stainless steel can be used at the level of the experimental reactor (ITER), where the neutron fluence is limited to about 3 dpa, the development of new materials is a prerequisite for a fusion reactor as a power plant. The most promising material for the first generation of fusion reactors is low-activation ferritic steel, which has been used in fuel tubes for fast breeder fission reactors and is evaluated to be usable up to 40 dpa of 14 MeV neutron irradiation; this tolerance corresponds to about one year of fusion reactor operation. Innovative and attractive materials such as vanadium alloy (V-4Cr-4Ti) (Muroga et al. 2002) and silicon carbide composites (SiC/SiC) (Katoh et al. 2007) are also under development. In addition to mechanical properties, physical properties such as electrical conductivity change under neutron irradiation, and these complicated phenomena depend on the neutron energy and dose and on the operating temperature. A new neutron irradiation facility is therefore planned to evaluate the irradiation properties of materials precisely for reliable fusion reactor design. This facility, the International Fusion Materials Irradiation Facility (IFMIF) (Martone 1996), simulates 14 MeV neutrons with a maximum capability of 50 dpa/year; its schematic view is shown in Fig. 16. The report of Martone (1996) defines the mission of IFMIF as providing an accelerator-based D-Li neutron source producing high-energy neutrons at sufficient intensity and irradiation volume to test samples of candidate materials up to about a full lifetime of anticipated use in fusion energy reactors. IFMIF would also provide calibration and validation of data from fission reactor and other accelerator-based irradiation tests, and it would generate an engineering base of material-specific activation and radiological properties data, as well as support the analysis of materials for use in safety, maintenance, recycling, decommissioning, and waste disposal systems. A deuterium beam of 40 MeV and 250 mA irradiates a lithium target and generates neutrons with an energy peak at 14 MeV through the D-Li stripping reaction. The Engineering Validation and Engineering Design Activities (EVEDA) for IFMIF are now being conducted under Japan-EU cooperation in Rokkasho, Japan (Garin et al. 2009).
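The rule of thumb quoted above (1 dpa per 0.1 MW·year/m² of 14 MeV neutron wall loading in steels) translates directly into a damage rate; the wall-loading values in the sketch below are assumptions spanning experimental and power-reactor-like conditions.

```python
# Displacement damage accumulated in steel, from the rule of thumb quoted above:
# 1 dpa per 0.1 MW*year/m^2 of 14 MeV neutron wall loading.
DPA_PER_MW_YEAR_PER_M2 = 1.0 / 0.1     # = 10 dpa per MW*year/m^2

def dpa(wall_load_MW_m2, years):
    return wall_load_MW_m2 * years * DPA_PER_MW_YEAR_PER_M2

# Illustrative wall loadings (assumptions): ITER-class versus power-reactor-class
print(f"0.5 MW/m^2 for 1 year : {dpa(0.5, 1):5.0f} dpa")
print(f"2.0 MW/m^2 for 1 year : {dpa(2.0, 1):5.0f} dpa")
print(f"2.0 MW/m^2 for 5 years: {dpa(2.0, 5):5.0f} dpa")
```

A wall loading of a few MW/m² thus accumulates tens of dpa per full-power year, which is why the quoted reactor dose of 100-150 dpa far exceeds the roughly 3 dpa expected on ITER and why a dedicated 14 MeV source such as IFMIF is needed.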



Blanket
The blanket surrounds the plasma, protected by the plasma-facing components. Its function is to produce tritium and to extract heat from the neutrons generated by the fusion reaction. The tritium breeding ratio (TBR), a critical parameter for a fusion reactor, is a measure of the breeding capability of the blanket and is defined as

TBR = (rate of tritium production in the blanket) / (rate of tritium burning in the plasma)

Since the natural abundance of tritium is tiny, a fusion reactor is required to produce more tritium than it burns, which means TBR > 1. The blanket consists of a tritium breeding material, a neutron multiplier, and a coolant, which are designed to fulfill three major specifications: (1) sufficient tolerance against heat, neutrons, and the electromagnetic forces due to the confining magnetic field; (2) a tritium breeding ratio greater than 1; and (3) sufficiently efficient heat removal. The breeding material produces more tritium than is consumed by using the reactions between lithium and neutrons described in section "Fusion Reaction." There are two major categories for the form of the lithium. One is a solid-breeding scheme using lithium ceramics; the other is a liquid-breeding scheme using pure lithium (Li), lithium lead (LiPb), or molten salt (FLiBe). Solid breeding is progressing faster owing to its advantages of easy handling and chemical stability. Liquid breeding offers much reduced radiation damage, a simple design allowing easy maintenance, and a potentially high TBR; however, liquid-breeding materials are generally chemically active, and careful attention must be paid to chemical reactions with water, which is the secondary coolant, and to corrosion of the cooling channels. In addition, a liquid metal is an electrically conducting fluid, and the electromagnetic force under the strong magnetic field impedes efficient flow. Research and development is therefore being conducted to resolve these issues. One neutron is generated by each fusion reaction between deuterium and tritium, and a fusion reactor must produce more than one tritium atom from this one neutron. Since some neutrons are absorbed in the surrounding structure and lost, not all neutrons can be used to breed tritium. The neutrons therefore need to be multiplied by a reaction using beryllium, such as

⁹Be + n → 2n + 2 ⁴He.

This kind of neutron multiplier is inserted between the plasma-facing components and the tritium breeding material. A shield is also located behind the breeding material to reflect neutrons back, so that they are used efficiently, and to protect the superconducting magnets located farther out. The coolant must be compatible with the breeding material and have sufficient heat-removal capability; the most conservative combination is solid breeding with water or helium as coolant, with operating temperatures of around 300 °C for water cooling and up to 500 °C for helium gas cooling. The blanket thus plays a critical compound role in a fusion reactor. In addition, constraints from the magnet configuration and from economics limit the thickness of the blanket to around 1 m. In spite of the limited neutron fluence available on ITER (around 3 dpa), the ITER project definition states that "ITER should test tritium breeding module concepts that would lead in a future reactor to tritium self-sufficiency and to the extraction of high-grade heat and electricity production" (Aymar 2001). Toward this goal, several fusion-reactor-relevant Test Blanket Modules (TBM) (Giancarli et al. 2006) have been proposed.
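The requirement TBR > 1 and the role of the neutron multiplier can be illustrated by a simple and deliberately crude neutron bookkeeping: each DT reaction supplies one neutron, the beryllium multiplier turns part of them into two, and only a fraction of the resulting neutrons is finally captured in lithium. The multiplication and capture fractions below are illustrative assumptions, not blanket design values.

```python
# Crude neutron bookkeeping behind the requirement TBR > 1.
def tritium_breeding_ratio(f_multiplied, f_captured_in_li):
    """f_multiplied: fraction of source neutrons that undergo (n,2n) in beryllium.
    f_captured_in_li: fraction of all resulting neutrons absorbed in lithium to give tritium.
    Returns tritium atoms bred per source (DT) neutron."""
    neutrons_per_source = 1.0 * (1.0 - f_multiplied) + 2.0 * f_multiplied
    return neutrons_per_source * f_captured_in_li

print(f"No multiplier, 90% capture : TBR = {tritium_breeding_ratio(0.0, 0.9):.2f}")
print(f"40% multiplied, 90% capture: TBR = {tritium_breeding_ratio(0.4, 0.9):.2f}")
```

Without multiplication the losses alone push TBR below unity, whereas a modest (n,2n) multiplication restores TBR > 1; real blanket designs rely on detailed neutron transport calculations rather than this two-parameter sketch.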

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_31-2 # Springer Science+Business Media New York 2015 DEMO

100

Magnetic Energy (GJ)

ITER 10 Normal JT-60 JET 1

TFTR

ToreSupra

0.1 1980

LHC

LHD

JT-60SA

LCT KSTAR EAST

W7-X

TRIAM-1M 1990

2000

2010

2020

Year

Fig. 17 Development of large-scale superconducting magnets in terms of magnetic energy. Three large tokamaks employing normal conductors and the superconducting magnets for the Large Hadron Collider (LHC) are also plotted as references (Reproduction of Fig. 4 in Yamada et al. (2009))

Superconducting Magnet

To confine the burning plasma described in section "Core of Fusion Reactor: Burning Plasma," a strong magnetic field exceeding 10 T at the magnets is needed. Since the plasma volume is around 1,000 m³, the magnets must produce this strong field with sufficient accuracy over a large volume. Currents in a normal conductor are accompanied by Joule heating due to resistivity, and this energy loss would be critical in a fusion power plant. Superconducting magnets are therefore indispensable, since they are free from Joule losses owing to their zero resistivity at cryogenic temperature. Superconducting magnets for a fusion power plant are characterized by their large scale, by sufficient tolerance of, and preservation of accuracy under, the large electromagnetic forces, and by tolerance of nuclear heating and activation. Superconducting magnets using alloys such as NbTi and compounds such as Nb3Sn have been developed to fulfill these specifications. Figure 17 shows the development of large superconducting magnets in fusion devices (Yamada et al. 2009). The largest operating magnet system for fusion is that of the Large Helical Device (LHD) (Imagawa et al. 2010), whose magnetic stored energy is close to 1 GJ. The Large Hadron Collider (LHC) employs two large detectors with large-scale superconducting magnets, ATLAS and CMS, each with a stored magnetic energy exceeding 1 GJ, and the total stored magnetic energy of the LHC reaches 15 GJ (Ross 2010). The stored magnetic energy of the superconducting magnet system of ITER is 50 GJ (Mitchell et al. 2010), well beyond the achievements so far. The specification of the ITER magnets requires mechanical tolerance against 1 GPa, a withstand voltage of 10 kV, and an irradiation dose on the electrical insulation of 10 MGy, which are at the present technological limits. A prototype magnet employing Nb3Sn conductors has demonstrated 13 T (Kato et al. 2001), and fabrication of the real components has started. The specifications required for a fusion reactor will be higher than those of ITER. To achieve a higher magnetic field than in ITER, the problem that the critical current density of Nb3Sn degrades under strain must be solved. A strong candidate is Nb3Al, because of the outstanding tolerance of its critical current density to strain and magnetic field (Koizumi et al. 2005). Although the basic engineering advantages of Nb3Al have already been established, R&D is still required to mitigate the difficulties of mass production and cost for its application to a fusion reactor.
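The gigajoule-scale stored energies quoted above follow from the magnetic energy density B²/2μ₀ times the volume filled by the field; the field strength and effective volume in the sketch below are rough assumptions for an ITER-scale device, intended only as an order-of-magnitude check.

```python
# Order-of-magnitude estimate of the stored magnetic energy of a fusion-scale magnet system.
import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability (H/m)

B = 10.0                       # field at the magnets (T), assumption
volume = 1.0e3                 # effective volume filled by the field (m^3), assumption

energy_density = B**2 / (2 * mu_0)           # J/m^3
stored_energy_GJ = energy_density * volume / 1e9

print(f"Energy density : {energy_density / 1e6:.0f} MJ/m^3")
print(f"Stored energy  : {stored_energy_GJ:.0f} GJ")
```

This crude estimate lands at several tens of gigajoules, the same order as the 50 GJ quoted for the ITER magnet system.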


Further development of conductors is being pursued to achieve the capability of carrying higher currents under higher magnetic fields than these established conductors allow. In particular, high-temperature superconductors, which do not need cryogenic operation with liquid helium, would have a large impact on the design of a fusion reactor.

Present Status and Future Direction of Nuclear Fusion
Fusion research started as classified military research about 60 years ago. Global scientific research toward the peaceful use of fusion energy was then launched by declassification at the second United Nations Conference on the Peaceful Uses of Atomic Energy in Geneva in 1958. Tabletop-sized experiments demonstrated proof of principle of the physical ideas, and medium-sized experiments with major diameters of up to 3 m extended the plasma parameters to the order of ten million °C. Three large-scale tokamaks, TFTR (Hawryluk et al. 1998), JET (http://www.jet.efda.org/; Pamera and Solano 2001), and JT-60U (Ohyama et al. 2009), with diameters of about 6 m and plasma volumes of several tens to more than 100 m³, were then constructed in the 1980s to demonstrate the scientific feasibility of fusion. As an alternative line, helical systems are catching up with the tokamak through the large facilities LHD (http://www.lhd.nifs.ac.jp/en/; Yamada et al. 2009; Komori et al. 2010) and Wendelstein 7-X (http://www.ipp.mpg.de/ippcms/eng/pr/forschung/w7x/; Bosch et al. 2010). In parallel with the convergence toward the first demonstration of burning plasma on ITER, a variety of experimental projects are being conducted worldwide to resolve outstanding issues and create innovation, as shown in Fig. 18. Although a fusion power plant has not yet been realized in the way fission power plants have, the progress over these 50 years is remarkable (Meade 2010). For example, the most typical index of fusion plasma performance, the fusion triple product of temperature, density, and energy confinement time, has improved at the same pace as the density of integrated circuits, famously described by Moore's law (a doubling every 18-24 months) (see Fig. 19) (Webster 2003). Figure 20 is the so-called Lawson diagram, which shows the performance of fusion plasmas in the plane of the product of central ion density and energy confinement time versus temperature. Experiments on JET (Team 1992) and JT-60U (Ishida et al. 1999) achieved the breakeven condition Q = 1 in the 1990s; it should be noted that these breakeven conditions were satisfied equivalently using only deuterium. In addition, more than 10 MW of real fusion power generation was demonstrated with deuterium and tritium on TFTR (Bell et al. 1995) and JET (Gibson 1998), albeit only for short periods of a few seconds (see Fig. 21). These two major achievements, breakeven and DT burning, have motivated the next generation: a tokamak experimental reactor. Based on the accumulated achievements of tokamaks worldwide (Ikeda et al. 2007), fusion power development is stepping up to the next stage. Seven leading parties in fusion research, China, the EU, India, Japan, Korea, Russia, and the USA, have jointly started construction of the International Thermonuclear Experimental Reactor (ITER) (http://www.iter.org/) in Cadarache, France. For this distinguished international project, the ITER Organization was formally established on October 24, 2007, after ratification of the ITER Agreement by each member party. ITER will be built largely (90 %) through in-kind contributions by the domestic agencies of the seven parties. ITER is the largest tokamak ever built: its plasma volume is close to 1,000 m³ (see Fig. 22), and its total weight reaches 23,000 t. The goal of ITER is the demonstration of the control of burning plasma and of the engineering feasibility of a fusion reactor.
ITER plans to demonstrate 500 MW of fusion power production by the DT fusion reaction at a temperature of 150 million °C for 500 s in the 2020s. This fusion power is expected to be ten times larger than the external heating power put into the plasma, which means Q = 10. Figure 23 shows the schedule of ITER (Ikeda 2010); the latest discussions suggest an updated schedule that is somewhat delayed. The first plasma, with hydrogen, is planned for 2019, and the experimental campaign of DT burning is to start in 2027.

Fig. 18 Experimental facilities for magnetic confinement of fusion plasma in the world




Fig. 19 The rapid progress toward harnessing fusion as a power source compares very favorably with progress in other high technologies such as computing performance and particle accelerators: the fusion triple product nTτ has doubled every 1.8 years (toward the ITER target of Ti = 18 keV, nTτ = 3.4 atmosphere seconds), transistor numbers double every 2 years (Moore's law), and accelerator energy doubles every 3 years. This figure was originally produced by J. B. Lister, CRPP Lausanne (crpp www.epfl.ch), and M. Greenwald, MIT (www.psfc.mit.edu) (Reproduction of Fig. 3 in Webster (2003))

Fig. 20 Lawson diagram for magnetic fusion illustrating progress over 50 years (Courtesy of the Japan Atomic Energy Agency: Naka Fusion Institute. Reproduction of Fig. 10 in Meade (2010))



Fig. 21 Progress in fusion power and energy in time, from JET and TFTR, which are capable of DT operation; the curves show fusion power (MW) versus time (s) for JET (1991), TFTR (1994), and JET (1997), with annotated gains of Q ∼ 0.2 and Q ∼ 0.65 (Reproduction from http://figures.jet.efda.org/JG01.326-10c.eps)

Fig. 22 A cutaway view of ITER (courtesy of the ITER Organization). The major diameter of the doughnut-shaped plasma is 12.4 m. Reproduced from Nature 459: 488-489, 2009. Seven parties in the world share the responsibility for construction

ITER will also be a test bed for blanket technology, as discussed in section "Blanket." The goal of ITER is defined as the engineering demonstration of fusion energy. However, it should be noted that ITER does not have a plan to generate electric power.



Fig. 23 Schedule of ITER (Reproduction of Fig. 13 in Ikeda (2010))

A demonstration reactor that fulfills the requirements of a power plant, including economic viability to some extent, will therefore come next. ITER is certainly a necessary step toward a demonstration fusion reactor, but it is not sufficient on its own; in particular, material development must be conducted in parallel using an intense neutron irradiation facility (IFMIF) (Martone 1996) to assess the properties of materials, in particular their lifetime. ITER adopts the tokamak concept described in section "Magnetic Confinement of Plasma." The tokamak is the most promising concept for demonstrating controlled burning plasma on the basis of presently available knowledge. However, when the system is assessed from the viewpoint of a fusion power plant, overcoming the issues related to the control of the huge currents in the plasma is serious and critical. In the case of ITER, an electric current of 15 MA (1.5 × 10⁷ A) flows in a very dilute gas (the plasma) weighing less than 1 g, and this plasma current must be held stably in steady state. This requirement poses two critical issues. One is the avoidance of current disruption: since a huge plasma current carries huge electromagnetic energy, an abrupt destruction of the plasma current, called a disruption, occurs when the stability of the current is lost. This phenomenon happens on a timescale of the order of 1 ms, and huge transient electromagnetic forces are generated in the machine components; the control and mitigation of disruptions is therefore a prerequisite for a tokamak fusion reactor. The other issue is current drive. In addition to transient induction as in a transformer, a reliable and efficient current-drive scheme must be established. Fortunately, a high-temperature plasma in a doughnut shape has, to some extent, a physical mechanism that drives circulating currents spontaneously, the so-called bootstrap current. However, these currents are not sufficient to sustain the burning plasma, and an external source is needed to drive the remaining current. This means that some of the electric power produced in a tokamak fusion power plant is consumed to drive the plasma currents. The simultaneous achievement of a spontaneous (bootstrap) current fraction of 70 % and a wall-plug current-drive efficiency of 50 % is required to deliver electric power to the grid economically. This requirement is very demanding, and ITER will not be able to resolve this issue; a new tokamak facility, JT-60SA (Ishida et al. 2010), to explore steady-state tokamak operation is therefore now under construction through bilateral collaboration between the EU and Japan.



Fig. 24 Photograph of the plasma vacuum vessel of LHD: the Large Helical Device (Courtesy of the National Institute for Fusion Science, Japan). The major diameter of a twisted doughnut-shaped plasma is 7.8 m

Since a helical system, an alternative to the tokamak, does not need a plasma current to confine the plasma and is therefore free from the challenging issues related to the plasma current, it is an extremely attractive concept for a stable steady-state reactor. Nonetheless, the complex shape of its magnets has caused difficulties in both experimental and theoretical approaches, and research and development has lagged behind the tokamak by one generation. However, the first large-scale helical device, LHD (see Fig. 24), has been in operation since 1998, and remarkable progress has been achieved recently (Komori et al. 2010). LHD employs superconducting magnets and has the capability of steady-state operation in both its physics and engineering aspects. LHD has achieved comparable plasma parameters, such as a temperature of 75 million °C, and has already demonstrated 1-h-long operation of a high-temperature plasma at 12 million °C. Another helical device, Wendelstein 7-X, is now under construction in Germany and will be operational in 2015 (Bosch et al. 2010). In the coming couple of decades, physical study and engineering demonstration of burning plasma will be conducted in ITER, in parallel with research and development of steady-state operation by advanced tokamaks and helical systems. Reactor engineering, in particular materials development, should also be pursued toward the establishment of an economical fusion reactor. Integration of all this knowledge will lead to the first demonstration fusion reactor, producing one million kW of electric power, in the 2040s (see Fig. 25). The establishment of fusion as an energy source is targeted for the middle of this century (Masionnier et al. 2005). The National Ignition Facility (http://lasers.llnl.gov/) in the USA plans to demonstrate ignition by the completely different inertial confinement scheme in 2011. Operation is limited to a single-shot basis due to the availability of the highly intense laser, and inertial confinement is still at the stage of scientific demonstration.
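Because plasma temperatures are quoted here in degrees Celsius while the fusion literature usually works in kiloelectronvolts, a simple unit conversion via the Boltzmann constant relates the two. The short sketch below applies it to the temperatures quoted in this section; it is a plain unit conversion, not experimental data.

```python
K_B = 1.380649e-23    # Boltzmann constant (J/K)
EV = 1.602176634e-19  # one electron volt (J)

def celsius_to_kev(t_celsius):
    """Convert a plasma temperature in degrees C to k_B*T expressed in keV."""
    t_kelvin = t_celsius + 273.15  # negligible correction at these temperatures
    return K_B * t_kelvin / EV / 1e3

for t in (12e6, 75e6, 100e6):  # temperatures mentioned in the text (deg C)
    print(f"{t:>12,.0f} deg C ~ {celsius_to_kev(t):.1f} keV")
```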

Summary
Fusion is the energy source of the sun, and controlled fusion as an energy source for human beings has been developed intensively worldwide for half a century. A fusion power plant is free from concerns about the exhaustion of fuels and the production of CO2 and has an advantage over a nuclear fission power plant in terms of high-level radioactive waste.


Fig. 25 The growth in scale of tokamak devices from JET, which produced the first DT fusion power, through ITER, aiming for Q = 10 at 500 MW thermal, to a DEMO reactor producing 1 GW electrical (Reproduction of Fig. 15 in Ikeda (2010))

It therefore has very attractive potential to help resolve global warming and to serve as an essentially inexhaustible fundamental energy source. On the other hand, unresolved issues still remain. It will take another several decades to realize a fusion power plant through the integration of advanced science and engineering, such as the control of high-temperature plasma exceeding 100 million °C and the breeding of tritium by the generated neutrons. Research and development has just entered the phase of the project that will extract 500 MW of thermal power from the fusion reaction in the 2020s. The demonstration of electric power generation is targeted for the 2040s. Even the first-generation fusion demonstration reactor will produce one million kW of electricity. The fusion reaction itself has already been demonstrated in a non-peaceful manner as the hydrogen bomb, which is ignited by an atomic bomb. For the peaceful use of fusion energy, a fusion power plant employs a completely different principle: the fusion reaction in a plasma is controlled stably in steady state. Since fusion energy is free from nuclear proliferation concerns and the uneven distribution of fuels, geopolitical issues could be much mitigated by its realization. Fusion energy, a sun on the Earth, has attractive and critical potential to resolve diverse issues related to energy and to change the global social structure. Lastly, further sources on fusion can be found in the books by McCraken and Stott (2005), Stacey (2010), Kikuchi (2011), and Chen (2011).

References
Atzeni S, Meyer-Ter-Vehn J (2004) The physics of inertial fusion. Clarendon, Oxford
Aymar R (2001) Summary of the ITER final design report. ITER document G A0 FDR 4 01-06-28 R 0.2, Garching ITER joint work site, 9 July 2001
Bell M et al (1995) Overview of DT results from TFTR. Nucl Fusion 35:1429–1436
Bethe H, Peierls R (1935) Quantum theory of the diplon. Proc R Soc Lond A 148:146–156
Bosch HS et al (2010) Construction of Wendelstein 7-X engineering a steady-state stellarator. IEEE Trans Plasma Sci 38:265–273
Braams CM, Stott PE (2002) Nuclear fusion: half a century of magnetic confinement fusion research. IOP, London
Chen FF (2011) An indispensable truth, how fusion power can save the planet. Springer, London


Dinklage A et al (2007) Physics model assessment of energy confinement time scaling in stellarators. Nucl Fusion 47:1265–1273
Eliezer S, Eliezer Y (2001) The fourth state of matter: an introduction to plasma science. IOP, London
Garin P et al (2009) Main baseline of IFMIF/EVEDA project. Fusion Eng Des 84:259–264
Giancarli L et al (2006) Breeding blanket modules testing in ITER: an international program on the way to DEMO. Fusion Eng Des 81:393–405
Gibson A (1998) Deuterium-tritium plasmas in the Joint European Torus (JET): behavior and implications. Phys Plasmas 5:1839–1846
Green BJ (2003) ITER: burning plasma physics experiment. Plasma Phys Cont Fusion 45:687–706
Hawryluk RJ et al (1998) Fusion plasma experiments on TFTR: a 20 year retrospective. Phys Plasmas 5:1577–1589
Ikeda K (2010) ITER on the road to fusion energy. Nucl Fusion 50:014002
Ikeda K et al (2007) ITER progress in the ITER physics basis. Nucl Fusion 47(E01):S1–S414
Imagawa S et al (2010) Overview of LHD superconducting magnet system and its 10-year operation. Fusion Sci Technol 58:560–570
Ishida S et al (1999) JT-60U high performance regime. Nucl Fusion 39:1211–1226
Ishida S et al (2010) Status and prospect of the JT-60SA project. Fusion Eng Des 85:2070–2079
ITER Physics Basis Editors (1999) ITER physics basis. Nucl Fusion 39:2137–2638
Jacquinot J (2010) Fifty years in fusion and the way forward. Nucl Fusion 50:014001
Kato T et al (2001) First test results for the ITER central solenoid model coil. Fusion Eng Des 56–57:59–70
Katoh Y et al (2007) Current status and critical issues for development of SiC composites for fusion applications. J Nucl Mater 367–370:659–671
Kaye and Laby Online (2005) Tables of physical & chemical constants, 16th edn. 2.1.4 Hygrometry version 1.0. Available at http://www.kayelaby.npl.co.uk/
Kikuchi M (2011) Frontiers in fusion research. Springer, London
Koizumi N et al (2005) Development of advanced Nb3Al superconductors for a fusion demo plant. Nucl Fusion 45:431–438
Komori A et al (2010) Goal and achievements of large helical device project. Fusion Sci Technol 58:1–11
Lawson JD (1957) Some criteria for a power producing thermonuclear reactor. Proc Phys Soc Sect B 70:6–10
Lie J, Zhang J, Duan X (2010) Magnetic fusion development for global warming suppression. Nucl Fusion 50:014005
Martone M (ed) (1996) IFMIF-international fusion materials irradiation facility conceptual design activity, final report. ENEA Frascati report, RT/ERG/FUS/96/11
Masionnier D et al (2005) A conceptual study of commercial fusion power plants, final report of the European fusion power plant conceptual study (PPCS). European fusion development agreement, EFDA(05)-27/4.10. Available at http://www.efda.org/eu_fusion_programme/downloads/scientific_and_technical_publications/PPCS_overall_report_final.pdf
McCraken G, Stott P (2005) Fusion: the energy of the universe. Elsevier Academic, London
Meade D (2010) 50 years of fusion research. Nucl Fusion 50:014004
Mima K (2010) Inertial fusion development: the path to global warming suppression. Nucl Fusion 50:014006
Mitchell N et al (2010) Status of the ITER magnets. Fusion Eng Des 84:113–121
Muroga T et al (2002) Vanadium alloys – overview and recent results. J Nucl Mater 307–311:547–554
Norgett MJ et al (1975) A proposed method of calculating displacement dose rates. Nucl Eng Des 33:50–54


Ohyama N et al (2009) Overview of JT-60U results towards the establishment of advanced tokamak operation. Nucl Fusion 49:104007
Pamera J, Solano ER (2001) From JET to ITER: preparing the next step in fusion research. EFDA-JETPR(01)16, EFDA, Culham Science Centre, Abington
Report of Japan Atomic Energy Commission in 2005 (in Japanese). Available at http://www.aec.go.jp/jicst/NC/senmon/kakuyugo2/siryo/kettei/houkoku051026/index.htm
Ross L (2010) Superconductivity: its role, its success and its setbacks in the large hadron collider of CERN. Supercond Sci Technol 23:034001
Sakharov AD, Leontovitch MA (eds) (1961) Plasma physics and the problem of controlled thermonuclear reactions, vol 1. Pergamon, London, p 21
Spitzer L Jr et al (1954) Problems of the stellarator as a useful power source, NYO-6047; PM-S-14, Princeton University, N.J. Project Matterhorn
Stacey WM (2010) Fusion: an introduction to the physics and technology of magnetic confinement fusion. Wiley-VCH, Weinheim
Team JET (1992) Fusion energy production from deuterium-tritium plasma in the JET tokamak. Nucl Fusion 32:187–203
Uo K (1961) The confinement of plasma by the heliotron magnetic field. J Phys Soc Jpn 16:1380–1395
Webster AJ (2003) Fusion: power for the future. Phys Educ 38:135–142
Wesson J (2004) Tokamaks, The international series of monographs on physics. Oxford University Press, Oxford
Yamada H et al (2009) 10 years of engineering and physics achievements by the large helical device project. Fusion Eng Des 84:186–193
Zinkle SJ (2005) Fusion material science: overview of challenges and recent progress. Phys Plasmas 12:058101


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_32-2 # Springer Science+Business Media New York 2015

Harvesting Solar Energy Using Inexpensive and Benign Materials
Susannah Leea, Melissa Vandivera, Balasubramanian Viswanathanb and Vaidyanathan (Ravi) Subramaniana*
a Department of Chemical and Metallurgical Engineering, Chemical and Materials Engineering Department, LME 310, MS 388, University of Nevada, Reno, NV, USA
b National Center for Catalysis Research, Indian Institute of Technology Madras, Chennai, India
*Email: [email protected]

Abstract
Historically, the growth and prosperity of human civilization have mainly been propelled by fossil energy (coal and petroleum) usage. Decades of tested and proven technologies have led to a continuous increase in demand for fossil-based fuels. As a result, we now find ourselves at the threshold of a critical tipping point where environmental consequences and global climate can be irreversibly affected and hence cannot be ignored. More than ever before, our unending and rapidly growing need for energy has necessitated urgent efforts to examine alternative forms of energy that are eco-friendly, sustainable, and economical. There are several alternatives to fossil-based fuels. These include biomass, solar, wind, geothermal, and nuclear options as prominent and possible sources. All of these options can assist us in reducing our dependence on fossil fuels. Solar energy, being one of them, has the unique potential to meet a broad gamut of current global energy demand. This includes domestic applications such as solar-assisted cooking and space heating, as well as industrial processes such as drying. Solar energy utilization in several key areas such as electricity generation (photovoltaics), clean fuel production (hydrogen), environmental remediation (photocatalytic degradation of pollutants), and reduction of greenhouse gases (CO2 conversion to value-added chemicals) is also of great interest. A key challenge that must be addressed to boost commercialization of solar energy technologies, and that is common to these applications, is material properties and solar energy utilization efficiency. To realize large-scale and efficient solar energy utilization, application-based materials with a unique combination of properties have to be developed. The material has to absorb visible light, be cost competitive, be composed of earth-abundant elements, and be nontoxic, all at the same time. This chapter consists of ten sections. The first, introductory section consists of a detailed discussion on the importance of energy in human activity, the effects of fossil fuels on climate and human lifestyle, and materials that meet many of the above criteria. The second section provides a short and critical comparison of solar energy with other alternatives. The third section provides a quick review of the basic concepts of solar energy. The commonly employed toolkits used in the characterization of materials for solar energy conversion are discussed in section four. Some of these tools can be used to evaluate specific optical, electronic, and catalytic properties of materials. Section five discusses the main categories of materials that are either commercialized or under development. The challenges in developing new materials for solar energy conversion are addressed in the section "Materials for Solar Energy Utilization." Section seven outlines some of the main strategies to test promising materials before a large-scale commercialization attempt is initiated. Section eight profiles companies and institutions that are engaged in efforts to evaluate, improve, and commercialize solar energy technologies. This segment provides information about the products from a few representative companies around the world and their niches in the commercial market. Section nine provides a general outlook on the trend in solar energy utilization, commercialization, and its future. Finally, section ten provides the authors' concluding perspective on solar energy as a pathway for reducing our dependence on fossil fuels. At the conclusion of this chapter, we have also provided over 100 references that are highly recommended for further in-depth study into various aspects of solar energy.

Table 1 2003 US energy consumption

Source         Amount              QBtu    Percent
Coal           1.08 × 10⁹ t        22.6    23.0
Natural gas    21.8 × 10¹² ft³     22.5    22.9
Petroleum      6.72 × 10⁹ bbl      39.1    39.8
Nuclear        757 × 10⁹ kWh       7.97    8.1
Renewable      578 × 10⁹ kWh       6.15    6.3
Total                              98.3    100

Introduction

Importance of Energy in Human History
Energy has been one of the basic requirements for human activity and has played a pivotal role in human history. Research has been undertaken that correlates the increased availability of energy to a society's citizens with an increase in their standard of living. The nineteenth century saw the ushering in of a technological revolution, a period that contributed to a significant improvement in the quality of human life while witnessing an increasing demand for energy to maintain these improved living standards. Fossil fuels were critical to the industrial revolution that accompanied technological development during this era. The latter half of the twentieth century saw a rapid increase in the demand for various other forms of energy in different parts of the world. Moreover, it is anticipated that this voracious demand for energy will only increase in the foreseeable future as many more countries of the world strive to improve the quality of life for their citizenry (Hultman 2007; Weiss et al. 2009; Rotmans and Swart 1990).

Present Sources of Large-Scale Energy
The US Department of Energy's 2003 Annual Energy Report divides US energy usage into four main categories, each with a percentage of the total US consumption of 98.3 QBtu/year: residential (21.23 %), commercial (17.55 %), industrial (32.52 %), and transportation (26.86 %). The same report then breaks down the 2003 US energy consumption by source, as shown in Table 1. As is evident from Table 1, fossil fuels (coal, petroleum, and natural gas) make up 85.7 % of US energy consumption, making them by far the predominant sources (Danielsen 1978). The reasons for a predominantly fossil-fuel-based economy are that (1) the technologies and infrastructure using fossil-based fuels have been well developed over several decades and (2) fossil fuels have a comparatively lower cost than other types of energy sources. The next highest US energy consumption after fossil fuels is nuclear energy. Nuclear energy has been extensively exploited as an energy source in several developed countries as an alternative to fossil fuels and is a promising eco-friendly energy source for large-scale applications (Lenzen 2008; Germogenova 2002). However, nuclear energy has a high capital cost, vulnerability to man-made disruption, and the potential to be used for destructive purposes. Furthermore, a tremendous challenge lies in changing negative public perception of nuclear technology, stemming from the perceived danger associated with prior unfortunate nuclear-related incidents at Chernobyl, USSR, and Three Mile Island, USA.


Table 2 Issues with continued reliance on fossil fuels as a primary resource for mankind's energy needs

Climate change issues
• Global warming: emission of greenhouse gases such as CO2 traps solar heat, which raises atmospheric temperature
• Sea level rise: rise in sea levels due to global warming can lead to flooding of low-lying areas
• Alteration of weather patterns resulting from temperature change: droughts, floods, hurricanes, and tornados can critically impair local/regional economic activities such as agriculture and even lead to displacement of the population

Health hazard
• Tailpipe/stack emissions: SOx, NOx, and particulates in emissions can reduce air quality by promoting smog formation, which may lead to health hazards such as lung cancer

Economic risks
• Unsteady supply of increasingly finite resources: increasing demand for finite resources can lead to spiraling prices (price fluctuations) that can hurt or even stunt economic growth
• Geopolitical instability: extreme reliance on very few sources where the political situation can become unfavorable

Others
• Man-made disruption: disruption of the stable supply of energy due to activities such as terrorism
• Land destruction: environmental impact on local animal and plant life

It is to be noted that the trend in the United States' energy consumption has been reflected in several developed countries as well.
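The 85.7 % fossil share quoted earlier in this section follows directly from the QBtu column of Table 1; a minimal sketch of that arithmetic:

```python
# QBtu values from Table 1 (2003 US energy consumption)
consumption_qbtu = {
    "Coal": 22.6,
    "Natural gas": 22.5,
    "Petroleum": 39.1,
    "Nuclear": 7.97,
    "Renewable": 6.15,
}
total = 98.3  # QBtu, stated total in Table 1

fossil = sum(consumption_qbtu[k] for k in ("Coal", "Natural gas", "Petroleum"))
print(f"Fossil fuels: {fossil:.1f} QBtu = {100 * fossil / total:.1f} % of total")
# -> about 84.2 QBtu, i.e., roughly 85.7 % of US consumption
```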

Issues with Large-Scale Usage of Fossil Fuels
In spite of the availability of several energy sources for large-scale usage, fossil fuels have been among the most cost-competitive, easily accessible, and widely available, and therefore the more attractive option. They continue to be the primary form of cheap energy in many countries with wide-ranging economic portfolios. However, the continued use of fossil fuels, coupled with growing demand and their detrimental impact on climate and the environment, has forced us to reexamine the viability of relying further on this form of energy as mankind's primary resource for the future. The major concerns are climate change, health hazards, and the potential for economic disruption. The details of some leading concerns regarding continued dependence on fossil fuels as a primary resource are listed in Table 2.

Need for an Alternate Energy Focus
A closer look at other alternatives to meet mankind's demand for energy is urgently needed. The reasons cited in Table 2 highlight the need for a serious reexamination of mankind's approach to identifying, researching, and implementing possible energy resources. The key criteria for choosing an alternative energy form are: (a) sustainability, (b) eco-friendliness, (c) availability, (d) cost (capital and operating), (e) political will to change the status quo by modifying governmental public policies, (f) population support, (g) technological reliability, and (h) safety. It has to be understood first that no single form of energy can offset fossil-fuel usage completely and continue to meet rising global demand. It is therefore prudent to avoid focusing on just one form of alternate energy and instead to explore a diversified energy portfolio. It is generally agreed that an energy portfolio containing a mix of various non-fossil-based alternatives, ranked using the above criteria, should be tailored to region- or country-specific needs.


Scheme 1 Green energy options that have potential to meet global energy needs (biomass, solar, wind, and geothermal)

Options Available to Us
There are several non-fossil-fuel-based alternatives that have been examined as possible energy sources (Tester et al. 2005). The United States and Europe are leading the effort to bring non-fossil alternatives into the mainstream energy sector. However, there is still a long way to go in this direction. For example, current US consumption of renewable energy forms – wind, biomass, geothermal, and solar – is 6.3 %, a very small fraction of the total 98.3 QBtu of energy used by the United States. Some of the pros and cons of these green alternatives are discussed next.

Wind Energy
Wind power is broadly defined as the conversion of wind energy into a useful form of energy using machinery such as sailing vessels, windmills, and wind turbines (Scheme 1). Wind energy shows promise as a replacement for fossil fuels: theoretical estimates indicate that global output from wind could be the equivalent of 5,800 quads of energy per year (AWEA 2010) (1 quad = 172 million barrels of oil = 425 million tons of coal). Moreover, wind power has certain advantages over other renewable forms of energy such as solar energy, for the wind can blow day and night, sunny or cloudy, and is often strongest during dark, overcast winter storms when energy is needed for heating and solar energy is unavailable. However, wind power also has its limitations. Many devices that convert wind energy need specific wind velocities to work efficiently; as a result, suitable wind conditions are location specific, limiting the areas in which wind energy conversion devices can be used. Furthermore, contentious issues such as potential harm to endangered birds from the rotating blades, noise concerns, health concerns, and the effect of several hundred windmills in a farm on the aesthetics of the landscape need to be resolved. Countries such as the United States (Knoll and Klink 2009) and England (Price et al. 1996) are seriously considering or have projects underway to harvest wind energy. The data from such case-study locations should be carefully examined, and appropriate changes have to be made to address the aforementioned concerns, in order to exploit wind energy on a larger scale.

Biomass Energy
Biomass is a renewable energy source because the energy it contains comes from the sun. Plants capture the sun's energy via the process of photosynthesis. Photosynthesis converts carbon dioxide from the air and water from the ground into carbohydrates, complex compounds composed of carbon, hydrogen, and oxygen. Later, when these carbohydrates are combusted, fermented, or gasified for energy utilization, they turn back into carbon dioxide and water and release the sun's energy that they contain. Through this cyclic process, biomass functions as a sort of natural and potentially infinite battery for storing solar energy. Depending on the biomass source and the method used for releasing the captured energy, biomass has the potential to supply 79 QBtu of energy (80 % of US energy consumption). However, in order to reach this output, the current 350 × 10⁶ acres of land being harvested in the United States would have to be used solely for biomass production. This leads to the main disadvantage of biomass – the land needed to produce it often competes with land for food and drives destruction of forests, and with some biomass technologies, such as ethanol, food crops are used directly (Sanderson 2007).
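To put the quoted wind and biomass potentials on a common footing with US consumption, the short sketch below uses only the figures given above, treating a quad and a QBtu as the same unit (one quadrillion Btu).

```python
US_CONSUMPTION_QBTU = 98.3      # per year, from Table 1
WIND_POTENTIAL_QUADS = 5800.0   # global theoretical wind potential per year (AWEA 2010)
BIOMASS_POTENTIAL_QBTU = 79.0   # potential US biomass supply per year

# Equivalences quoted in the text
BBL_OIL_PER_QUAD = 172e6
TONS_COAL_PER_QUAD = 425e6

print(f"Wind potential    ~ {WIND_POTENTIAL_QUADS / US_CONSUMPTION_QBTU:.0f}x US consumption")
print(f"Biomass potential ~ {100 * BIOMASS_POTENTIAL_QBTU / US_CONSUMPTION_QBTU:.0f} % of US consumption")
print(f"One quad          ~ {BBL_OIL_PER_QUAD:,.0f} bbl oil or {TONS_COAL_PER_QUAD:,.0f} t coal")
```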


However, biomass from solid municipal waste, along with new research investigating biomass produced from nontraditional sources such as coffee waste or algae-based biofuel production, may positively influence technological and commercial advances within this field (Oliveira et al. 2008; Kondamudi et al. 2008).

Geothermal Power
Geothermal power utilizes the continuous flow of heat energy from the hot interior of the earth to its surface for space heating and the generation of electricity. Unlike fossil fuels, biomass, wind, and solar, geothermal has the capacity to sustain itself in a continuous closed-loop system using heat from the earth's crust. Moreover, the world's geothermal energy reserve is estimated at 10⁸ QBtu, a million times the total yearly US energy consumption. Unfortunately, current geothermal energy is limited to locations where natural reserves occur and the heat energy can be tapped in a commercially viable manner; however, new technologies are being researched, such as the Normal Geothermal Gradient and Hot Dry Rock technologies, which would expand geothermal usage tremendously.

Solar Power
It is to be noted that solar energy can be considered the indirect source of wind energy (solar-driven temperature changes cause wind movement) and biomass (chlorophyll pigments absorb sunlight to grow plants/biomass). However, that aspect is not usually the focus when we talk about solar power. Solar power harnesses the radiant light and heat given off by the sun and is unquestionably the most universally available and least utilized form of renewable energy resource. It is estimated that the earth receives 162,000 TW from the sun (Ginley et al. 2008). If one assumes that land covers approximately 20 % of the earth's surface, the power reaching land is 32,400 TW, only a small fraction of which would be needed to meet the world's yearly energy consumption. If it were possible to build systems that harness this solar energy, it could solve mankind's energy problems. However, the biggest challenge is the development of materials that can economically and efficiently convert solar energy into useful forms at commercially viable efficiencies. This chapter focuses on solar energy and some of the factors that are pivotal to using solar energy as a resource for meeting global energy needs. For further details on the other topical areas, readers are referred to the four chapters on biomass and the one chapter on wind energy in this text.
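The scale of this solar resource can be compared with the average rate of US energy use implied by Table 1. In the sketch below, the 20 % land fraction is the assumption stated above, and the Btu-to-joule conversion is a standard factor.

```python
SOLAR_INPUT_TW = 162_000            # power received by the earth from the sun (TW)
LAND_FRACTION = 0.20                # assumption used in the text
US_CONSUMPTION_QBTU_PER_YR = 98.3   # from Table 1

BTU_TO_J = 1055.06                  # joules per Btu
SECONDS_PER_YEAR = 365.25 * 24 * 3600

solar_on_land_tw = SOLAR_INPUT_TW * LAND_FRACTION
us_avg_power_tw = US_CONSUMPTION_QBTU_PER_YR * 1e15 * BTU_TO_J / SECONDS_PER_YEAR / 1e12

print(f"Solar power on land  ~ {solar_on_land_tw:,.0f} TW")
print(f"US average power use ~ {us_avg_power_tw:.1f} TW")
print(f"Ratio                ~ {solar_on_land_tw / us_avg_power_tw:,.0f}x")
```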

Solar Energy
The following sections assume that the reader already has a fundamental knowledge of solar energy. For a review of these concepts, there are numerous publications in circulation that cover these fundamentals in greater detail.

What Is Solar Energy?
Solar radiation consists of light of different wavelengths (energies). The energy associated with each wavelength can be estimated using the equation E = hc/λ, where E is the photon energy, h is Planck's constant (6.6 × 10⁻³⁴ J s), λ is the wavelength of light, and c is the velocity of light (3 × 10⁸ m/s). As the wavelength of light increases, the energy associated with that wavelength decreases. Solar energy received at the surface of the earth also depends on the location (zenith angle) and the effects of atmospheric interference (pollution or turbidity). In general, the irradiance at the surface decreases toward the poles and with increasing atmospheric pollution. The solar spectrum can be divided into several regions.
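The relation E = hc/λ translates directly into code; the example below evaluates the photon energy in electron volts for a few illustrative wavelengths (the chosen wavelengths are arbitrary examples, not values from this chapter).

```python
H = 6.626e-34   # Planck's constant (J s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, returned in electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for wl in (300, 500, 700, 1400):   # nm, illustrative values
    print(f"{wl:5d} nm -> {photon_energy_ev(wl):.2f} eV")
```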


Table 3 Advantages and disadvantages of solar energy

Advantages:
1. Universally available, infinite energy source, and free. Complementary technologies ensure continuous availability
2. Clean, eco-friendly, very low maintenance, supports the local economy ("green jobs"), and does not contribute to global warming
3. Sustainable and free of geopolitical instabilities. No security issues
4. Political support and incentives to switch to solar energy systems are favored in many countries
5. Solar energy usages range from food processing (solar cookers) to large-scale electricity generation

Disadvantages:
1. Large area required to produce sizeable power (may not be possible to harvest solar energy in densely populated areas)
2. Some materials used currently in solar energy conversion can be expensive and toxic and may require carefully planned disposal protocols
3. Weather patterns can be a source of unpredictable interference
4. Public awareness about incentives (rebates) and education is still low and needs a significant boost
5. Solar conversion efficiencies in most applications are low. Efficiency improvement via materials development is a key challenge

These include the far-UV, UV, visible, and IR (>1,400 nm) regions. The distribution of energy associated with sunlight can be apportioned among these regions, with the visible and IR regions carrying most of the energy.

High conversion efficiencies (>35 %) have been reported for multijunction devices (Baur et al. 2007), but one of the issues is the transparency required to activate the underlying layers. Photoactive polymers with fullerenes as the electron transport agent are significantly simpler to process than Si-based devices but are limited in their stability during long-term operation (Cravino 2007; Liang et al. 2008).

Materials for Photovoltaics, Water Splitting, and CO2 Reduction
The following section provides a list of materials for photovoltaic, water splitting, and CO2 reduction applications. The selection of materials is based on meeting one or more of the following criteria: cost effectiveness, eco-friendliness, and ease of synthesis.

Photovoltaics
Solar cells can be distinguished on the basis of their overall solar-to-electric conversion efficiency into several categories. Several reviews have discussed different aspects of PV (Bube 1990; Gratzel 2005; Thomas et al. 1999; Green 2007; Guenes and Sariciftci 2008; Catchpole et al. 2001).


Table 5 Materials for photovoltaic applications Material CdTe CuInSe2

TiO2

a-Si CuInS2

Thin crystalline silicon ZnO/Al2O3 CuGaInSe2 Nafion ZnPc/C60 PbSe Carbon nanotube (w/TiO2) P3HT/PCBM

FeS2

FeS and FeS2

Reason Low-cost preparation technique, high conductivity, appropriate band gap, recycling methods developed Low-cost, non-vacuum preparation technique

Continuous non-vacuum process by simple printing techniques Combination rapid thermal process and layer-by-layer spin coating preparation TiO2 nanotube array in ionic liquid electrolyte cell TiO2 nanorod assembly Low-cost encapsulation method Low-cost vapor deposition preparation Low-cost, non-vacuum preparation technique with solution coating and reduction-sulfurication technique Synthesized hollow nanospheres from common inorganic metal salts using surfactant-assisted chemical route Thin-film reduced cost

Refs. Oktik (1988), Bosio et al. (2006), Miles et al. (2005) Oktik (1988), Kaelin et al. (2004), Eberspacher et al. (2001) Kay and Gratzel (1996) Tao et al. (2010) Kuang et al. (2008) Wei et al. (2006) Kondo et al. (1997) Hou and Choy (2005) Todorov et al. (2006) Zhang et al. (2008) Catchpole et al. (2001), Shah et al. (2006)

Cheaper hybrid PV cells High-efficiency, low-cost thin film Charge transport material to be used with ZnO or CdTe Organic PV cell, low-cost, experiment with rubrene doping Lower-cost, high-efficiency semiconductor material Alternative to platinum as a counter-electrode in DSSCs

Damonte et al. (2010) Miles et al. (2005) Feng et al. (2009) Taima et al. (2009) Hanrath et al. (2009) Muduli et al. (2009), Lee et al. (2009)

BJH cell that is lightweight, flexible, low-cost production Preparation by low-cost quick-drying technique, improved efficiency over other techniques P3HT nanowires and PC61BM or PC71CM Lower cost due to abundance and production than silicon and > or = efficiency Nanosheet films from reaction of iron foil and sulfur powder, for photocathodes in tandem solar cell with TiO2 as photoanode

Honda et al. (2009) Ouyang and Xia (2009) Xin et al. (2010) Wadia et al. (2009) Hu et al. (2008)

Table 5 provides a comparison of the different technologies available today and how these technologies rank with respect to each other. Many of the materials identified here either have been commercialized or offer promise for commercialization due to aspects such as cost competitiveness, eco-friendliness, or ease of processability. In general, all solar cell technologies known today are designed for niche applications and come with advantages and disadvantages. Therefore, the choice of a solar cell technology is usually made based on the type of application and the length of time the cell is expected to be in service. A summary of the advantages and disadvantages of the different types of cells is provided in the following sections. (Chemical formulas are shown in the table for brevity; readers are referred to the citations for details.)


Table 6 Materials for photocatalytic water splitting to produce hydrogen Material a-Si

Properties Relatively high conversion efficiency, no catalyst degradation, low-cost hydrogen production Inexpensive, efficient, and renewable hydrogen source

TiO2

Surface engineering to increase active sites for reaction Carbon-doped TiO2 increases efficiency of water splitting Nano-size photocatalyst, low-cost, environmentally friendly Nanostructured photocatalyst to reduce material cost Nanotube and nanowire arrays for improved efficiency Carbon modified n-type TiO2 photoelectrodes to increase conversion efficiency

Si/TiO2 Fe2O3

Efficient photocatalyst prepared by environmentally friendly microwave-assisted hydrothermal process Si doping improves efficiency, low-cost solar-to-chemical conversion

ZnO

Require smaller overpotential to oxidize water, single solar cell power, lower production costs Ag-Fe2O3 nanocomposite photocatalyst as efficient, low-cost PEC Doping to improve efficiency Thin layer of Fe2O3 using nanostructured host scaffold of WO3 Low-cost oxide semiconductor

SrTiO3

Low-cost oxide semiconductor

WO3

Fe3+/Fe2+ redox over WO3, efficient photocatalyst, low-cost option Nanoporous WO3 for improved efficiency High H2 evolution in presence Na2S/Na2SO3 as sacrificial electron donors under visible light radiation Cheaper synthesis than similar photocatalyst Cu2O powders in coupled with WO3 in suspension had good H2 evolution High absorption efficiency, nontoxic, elements abundant

CuInS2 Cu2O

In2O3 SnO2/a-Fe2O3 CdS (CdS/TiO2)

Nitrogen doping shows better photoelectrochemical activity for water splitting than N-doped TiO2 High purity, low-cost, environmentally friendly production CdS glass composite to reduce photocorrosion of powder form CdS/TiO2 nanotubes showed greater efficiency than either material alone

Refs. Rocheleau et al. (1998) Kelly and Gibson (2006) Nowotny et al. (2006) Park et al. (2006) Ni et al. (2007) Hu et al. (2010) Shankar et al. (2009) (Shaban and Khan 2008) Somasundaram et al. (2007) Takabayashi et al. (2004) Nowotny et al. (2006) Jang et al. (2009a) Jang et al. (2009b) Sivula et al. (2009) Aroutiounian et al. (2005) Aroutiounian et al. (2005) Miseki et al. (2010) Guo et al. (2007) Zheng et al. (2009) Ma et al. (2008) Kawai et al. (1992) Somasundaram et al. (2007) Reyes-Gil et al. (2007) Niu et al. (2010) Liu et al. (2010) Li et al. (2010)

Water Splitting
Water splitting can be performed in the presence or absence of sacrificial agents. Based on the approach employed, several reviews have discussed the materials (Kudo and Miseki 2009; Aroutiounian et al. 2005; Best and Dunstan 2009; Wang et al. 2009; Rajeshwar 2007; Woodhouse and Parkinson 2009). The following segments list some of the popular materials that have been used successfully for water splitting; properties of the materials are listed in column 2 of Table 6. Oxides, oxide composites, and non-oxides are common materials for driving water splitting reactions. Other materials such as perovskites, sillenites, and pyrochlores are also promising families of compounds that demonstrate water splitting; however, these materials may be difficult to synthesize, and more research has to be performed to determine their applicability for water splitting reactions (Table 6).
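As a rough guide to why band gap matters for the photocatalysts listed in Table 6, the sketch below uses the standard thermodynamic minimum for water splitting (about 1.23 eV per electron, a textbook value not stated explicitly in this chapter) to estimate the longest usable wavelength; practical photocatalysts typically need somewhat larger band gaps to cover overpotentials.

```python
H = 6.626e-34   # Planck's constant (J s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electron volt

def max_wavelength_nm(min_energy_ev):
    """Longest wavelength (nm) whose photon energy still reaches min_energy_ev."""
    return H * C / (min_energy_ev * EV) * 1e9

E_WATER_SPLITTING = 1.23  # eV per electron, standard thermodynamic minimum
print(f"Thermodynamic limit: {max_wavelength_nm(E_WATER_SPLITTING):.0f} nm")

# With typical overpotentials, practical band gaps of ~1.6-2.4 eV are often cited:
for e in (1.6, 2.0, 2.4):
    print(f"Band gap {e:.1f} eV   : absorption onset ~ {max_wavelength_nm(e):.0f} nm")
```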


Table 7 Materials for photocatalytic reduction of CO2 Material TiO2

Reason TiO2 anchored on glass act as active photocatalyst for reduction of CO2 with H2O Highly dispersed anchored TiO2 to reduce CO2 to CH4, Cu loading increased CH3OH TiO2 nanoparticles, found 14 nm to be optimum photocatalyst Simple synthesis methods to form highly active nanocomposite photocatalyst TiO2 pellets reduced CO2 in the presence of water vapor under UV irradiation Cu-loaded TiO2 increases photoreduction CO2, shown Cu(I) as primary active site Cu–TiO2 optical fibers transform CO2 to hydrocarbons at higher efficiencies Highly dispersed TiO2 within zeolite cavities for efficient CO2 reduction TiO2 on a SnO2 glass substrate to form bilayer catalyst, high photocatalytic activity

CdS

Effect of metal depositing on TiO2, improved efficiency CdSe/Pt/TiO2 photocatalyst producing high yield of CH4 with CH3OH, H2, and CO as minor products Effective photocatalytic reduction, increased efficiency with excess Cd2+

Ti-Si

Ti-containing silicon thin films higher reduction than powdered photocatalyst

Titanium silicalite

UV irradiation reduction of CO2 with H2 to CH4, Ti believed to provide active site

Poly (3-alkylthiophene) BiVO4 CaFe2O4

Photocatalyst in the presence of phenol to produce salicylic acid Photocatalytic ethanol production under visible light Nonpoisonous, cheap, p-type semiconductor with small band gap

Ga2O3

Photoreduction of CO2 with H2 at room temperature and ambient pressure

InTaO4

Common water splitting semiconductor, now tested CO2 reduction. Reduction potential increased by adding NiO cocatalyst Photocatalytic reduction of CO2 to CO in presence of H2

CdSe

(K, Na, Li)TaO3

Refs. Anpo (1995) Anpo et al. (1995) Koci et al. (2009) Li et al. (2008) Tan et al. (2006) Tseng et al. (2004) Wu et al. (2005) Yamashita et al. (1998) Tada et al. (2000) Xie et al. (2001) Wang et al. (2010) Fujiwara et al. (1997) Ikeue et al. (2002) Yamagata et al. (1995) Kawai et al. (1992) Liu et al. (2009) Matsumoto et al. (1994) Teramura et al. (2008) Pan and Chen (2007) Teramura et al. (2010)

CO2 Conversion
Due to global environmental concern, research on the utilization of solar energy for CO2 conversion and/or control is gaining momentum. Several articles (Hinogami et al. 1998; Koci et al. 2009; Li et al. 2008; Wang et al. 2010; Tseng et al. 2004; Wu et al. 2005) have addressed this topic, and readers are directed to them for further information. Table 7 lists some of the articles that demonstrate the application of a few leading and representative materials for CO2 conversion.


Challenges and Limitations to Materials
Mathematical models that consider thermodynamic limits and the near impossibility of converting solar energy to other forms of energy without generating entropy pin the maximum attainable theoretical conversion efficiency at 85 % (Wurfel 2002).

Specific to photovoltaics, silicon (Si) solar cells (both single and polycrystalline) have been by far the most studied devices, have the greatest market penetration, and demonstrate the highest efficiencies (10–25 % for wafers, 4–20 % for modules) (Green 2007; Miles et al. 2007). However, increasing demand for Si, material processing, and device manufacturing costs have created an opportunity for other, non-Si-based technologies to enter the commercial market (Green 2007; Guenes and Sariciftci 2008). Thin-film processing technologies that use amorphous silicon (a-Si) are less expensive if single-junction solar cells are of interest. However, single-junction a-Si solar cells have low efficiencies (3–4 %); employing amorphous thin films in a multijunction-type cell (e.g., using a-Si and a-SixGe1-x) can improve efficiencies to 6–8 % (Green 2007) and make them commercially viable (Guha and Yang 2006), but again increases cost. Comparative efficiencies of silicon-based and non-silicon-based solar cells are discussed at length in the literature (Goetzberger et al. 2003).

An alternative to Si cells is compound semiconductor solar cells; GaAs, InGaP, and copper indium gallium selenides (CIGS) are popular examples that have tremendous commercial potential but are presently limited by processing cost and hence used only in niche areas such as space applications (Bosi and Pelosi 2007). Using these in a multijunction format to boost efficiencies to the order of 6–8 % and possibly reducing processing cost could bring the technology to terrestrial use (application in on-demand and on-site power generation) (Bosi and Pelosi 2007). Alternate concepts for overcoming efficiency limitations using tandem cells, intermediate-band-gap solar cells, and quantum dot (QD) solar cells, as discussed in the review by Solanki and Beaucarne (2007), also have to be explored.

Dye-sensitized solar cells (DSSC) may be a cost-effective option, a significant limitation being dye cost and stability as well as corrosion of the metal components of the cell due to the use of the popular iodine–iodide-based charge-shuttling electrolyte (Toivola et al. 2009). Recombination of photogenerated charges, mainly due to irregularity in the periodicity of the materials, has to be addressed to improve the performance of a solar cell (Frank et al. 2004). To broaden the application of low-cost, high-efficiency solar cells, low-cost ink technologies need to be developed so that spray-paint-type methodologies can be used to prepare bulk high-efficiency solar cells (Hillhouse and Beard 2009). International standardization of cost for solar cell fabrication is being developed and tested (Chamberlain 1980). Organic-material-based solar cells are relatively new and far from becoming state-of-the-art devices. However, they are gaining popularity, and there is some market activity with devices offering efficiencies of 4–6 % (Hoppe and Sariciftci 2004).

Because solar systems are open to the elements and the sun moves across the sky, issues such as tracking to maintain system efficiency, protection against dust, and minimizing the impact of cloud interference have to be considered for reliable operation of the system.
One has to explore the development of new materials and applications for solar energy utilization and minimize the use of environmentally toxic materials such as Cd (Bauer 1993). Other emerging areas such as band gap engineering and multilayered systems (high-efficiency tandem cells) for solar energy utilization have to be examined as well (Khaselev and Turner 1998; Goswami et al. 2004).
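Given the module efficiencies discussed above, a quick back-of-the-envelope sketch shows how much collector area a target power implies under the standard 1,000 W/m² test irradiance; the 3 kW target and the efficiency values below are arbitrary illustrations, not figures from this chapter.

```python
STC_IRRADIANCE = 1000.0  # W/m^2, standard test-condition irradiance

def module_area_m2(target_power_w, efficiency):
    """Collector area needed to deliver target_power_w at the given efficiency under STC."""
    return target_power_w / (STC_IRRADIANCE * efficiency)

for eff in (0.04, 0.10, 0.20):   # spanning the module efficiencies cited above
    print(f"{eff:.0%} efficient modules: ~{module_area_m2(3000, eff):.0f} m^2 for a 3 kW array")
```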


Integrating Tested Concepts of Solar Energy Utilization to Produce Fuels in an Effective Way
One method to improve solar energy utilization is to develop "smart and integrated systems" that can perform several solar-driven processes that are complementary in nature. The benefits of such an approach are as follows:
1. Maximizing solar energy utilization
2. A one-stop system for multiple applications
3. Improved utilization of land (this can be a significant advantage in places with costly real estate and where limited land may be available for solar energy)
4. Potential for improved energy efficiency, reduced ecological impact, and greater benefits for human activity
Four examples are presented below that illustrate these aspects.

Example 1: Integrated Organic Waste Treatment and Fuel Cell System
An interesting concept that combines two traditional applications of photocatalysis, environmental remediation and energy generation, to form a photo-fuel cell device is discussed below (Antoniadou et al. 2010). Photocatalytic degradation of organic environmental waste results in the formation of hydrogen ions, which can be tapped to produce hydrogen molecules for use as a clean fuel. The organic "fuel" wastes can be part of a photoelectrochemical device comprising (1) a photoanode that essentially consists of the photocatalyst, where holes oxidize the organics to liberate H+ ions, (2) a cathode, where the ions are reduced to form hydrogen, and (3) an electrolyte consisting of water, organics, and some salt (essential for ionic conductivity). A schematic of the setup and a prototype of the device are shown in Fig. 5; TiO2 coated on a fluorine-doped tin oxide (FTO) substrate is used as the photoanode for oxidation of the organics. One can take this concept a step further by (1) matching the pollutants in a manner that maximizes photooxidation on the basis of the redox properties of the materials involved, (2) considering the mechanism of degradation, or (3) selecting wastes with high potential for H+ generation to improve the yield of hydrogen.

Example 2: A Hybrid Photocatalytic-Photovoltaic System (HPPS)
A research group from Switzerland has pioneered the development of an autonomous, eco-friendly HPPS that utilizes solar energy to perform photodegradation of pollutants while a PV system simultaneously generates the power needed to operate the system (Sarria et al. 2005). This three-tiered system consists of (1) a sun-facing top layer where UV-assisted photodegradation of pollutants is performed, (2) an intermediate water layer that functions as an IR filter to regulate temperature, and (3) a visible-light-absorbing PV device at the bottom that produces electricity to power a recirculation pump associated with the system. A schematic of the system is shown in Fig. 6. The system thus does not draw any external power for performing the waste treatment. It consists of four PV modules and has an overall volume of 25 L. This is an example of a smart integrated system that utilizes the UV, visible, and IR parts of the solar spectrum by combining photodegradation of pollutants with the production of electricity. A possible direction for further improving the efficiency of such systems may be to focus on harvesting IR photons to produce electricity using new photocatalysts.


Fig. 5 Schematic representation of a two-compartment PEC cell. The openings at the upper part represent gas inlets and outlets. The chemical reactions shown are only indicative examples. The system can be used with other combinations of pollutants to produce energy (Reprinted with permission from Elsevier)

Fig. 6 Schematic representation of a hybrid photocatalytic-photovoltaic system powered using internally generated energy (Reprinted with permission from Elsevier)

Example 3: Bioprocesses to Convert Waste to Energy Using Algae
Man-made emissions such as CO2 from industries have adverse effects on the environment; the realization of these negative effects has led to international protocols and policy changes, such as cap-and-trade agreements, to control environmental impact (Kunjapur and Eldridge 2010; Pittman et al. 2011; Walke 2009). On the other hand, the shortage of transportation fuels has necessitated the development of alternative sources of energy. These two challenges can potentially be addressed simultaneously using algae. Algae-based systems can assist in greenhouse gas control by consuming CO2 to produce a variety of useful products. In the presence of sunlight, water, CO2, and nutrients, algae produce biofuels (for transportation), solid biomass (burned to produce heat or electricity), hydrogen, or oxygen. A schematic of the pathway for some of these products is shown in Fig. 7. This approach is considered a promising solution to global environmental and energy needs. Photobioreactors and raceway ponds are two common methods of contacting algae with light, CO2, and nutrients. An example of a raceway pond is shown in Fig. 7. In general parlance, these examples help reinforce the old adage – one man's junk is another man's treasure.

Example 4: Solar-Powered Biomass Gasification
Biomass gasification is the process of converting organic material to syngas, primarily carbon monoxide and hydrogen, which can be used to produce various forms of energy and fuels (Sundrop Fuels Inc 2010; Biomassmagazine.com 2010).


Fig. 7 The steps involved in biodiesel production using wastewater, solar energy, and CO2 (high-rate algal pond, harvesting pond, biomass collection and chemical conversion, lipid extraction and transesterification, and biodiesel/biogas outputs), and a picture of an actual raceway pond with paddlewheel and baffles implementing the above process (Reprinted with permission from Elsevier and ACS)

Fig. 8 Schematic of Sundrop Fuels ® system to concentrate solar energy onto the thermochemical reactor for gasification and the ground view of heliostat mirrors used to concentrate solar energy

Organic biomass is reacted at high temperatures with a specific amount of oxygen and water to produce syngas. The syngas is then purified and can be used for electricity generation, production of liquid fuels, or production of hydrogen gas. The problem with traditional gasification processes is that a large amount of energy is required to generate the high temperatures necessary for gasification. This energy is typically supplied by coal-fired power plants or by burning part of the biomass feedstock.


Table 8 Commercial companies involved in the design of solar energy conversion systems

Company                  Headquarters location         Solar cell technology                 Website
Konarka Technologies     Lowell, MA                    Power plastics                        http://www.konarka.com
Dyesol                   Queanbeyan, NSW, Australia    High purity dye solar cell            http://www.dyesol.com
Inventux Technologies    Berlin, Germany               Solar micromorph thin-film modules    http://www.inventux.com

Fig. 9 Illustration of Power Plastic layers: transparent packaging, transparent electrode, printed active material, primary electrode, and substrate (modified from the Konarka® website)

Researchers at several Colorado universities, in collaboration with the National Renewable Energy Laboratory, have developed a rapid solar-thermal reactor that can be used for biomass gasification. In this process, a number of mirrors are used to concentrate solar energy onto a single point, producing extremely high reactor temperatures in excess of 2,000 °C. Sundrop Fuels has applied this technology at their solar-driven biomass gasification facility in Louisville, Colorado. Sundrop Fuels uses thousands of heliostat mirrors on the ground to direct concentrated solar energy to a thermochemical reactor atop a high tower. Feedstock entering the reactor is converted to syngas at 1,300 °C. Figure 8 shows a schematic representation of the solar-driven gasification process. The syngas is then cleaned and processed to create "green" gasoline, diesel, and aviation fuels. Biomass gasification is a promising technology for producing a number of fuels, and the use of concentrated solar energy eliminates traditional energy losses during thermal energy generation.
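A rough sizing sketch for a heliostat field of the kind described above: the mirror area, direct normal irradiance, optical efficiency, and thermal power target used here are illustrative assumptions, not Sundrop Fuels specifications.

```python
import math

def heliostats_needed(thermal_power_w, mirror_area_m2, dni_w_per_m2, optical_efficiency):
    """Number of heliostats needed to deliver a given thermal power to the receiver."""
    per_mirror = mirror_area_m2 * dni_w_per_m2 * optical_efficiency
    return math.ceil(thermal_power_w / per_mirror)

# Illustrative assumptions (not actual plant data)
TARGET_THERMAL_MW = 10.0   # receiver thermal power target
MIRROR_AREA = 15.0         # m^2 per heliostat
DNI = 900.0                # W/m^2, clear-sky direct normal irradiance
OPTICAL_EFF = 0.6          # combined reflectivity, spillage, and attenuation losses

n = heliostats_needed(TARGET_THERMAL_MW * 1e6, MIRROR_AREA, DNI, OPTICAL_EFF)
print(f"~{n} heliostats for {TARGET_THERMAL_MW:.0f} MW thermal under these assumptions")
```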

Commercial Ventures
The progress in the development of materials for solar energy utilization over the last few decades has permitted a wide variety of solar-cell-based commercial ventures to fill specific contemporary niches and markets. Furthermore, solar companies are constantly researching and refining their manufacturing processes to discover more economical and eco-friendly solar cells that will satisfy emerging markets and client needs.


Within this section, we profile three solar companies: Konarka Technologies, Dyesol, and Inventux Technologies. Table 8 briefly introduces these companies.

Konarka® Technologies
Konarka Technologies is an international solar company receiving recognition worldwide for developing a third-generation, organic-photovoltaic-technology-based solar cell (Konarka 2009). An organic photovoltaic cell utilizes conductive polymers or small carbon-based molecules for light absorption and charge transport, respectively, while traditional electronics use inorganic conductors such as copper. Konarka's chief technology, Power Plastic, was invented by the company's cofounder and Nobel Prize laureate, Dr. Alan Heeger. Power Plastic is a photoreactive polymer material that can be printed or coated inexpensively onto flexible substrates using roll-to-roll manufacturing; it comprises several thin layers: a photoreactive printed layer, a transparent electrode layer, a plastic substrate, and a protective packaging layer, as illustrated in Fig. 9.

Unique Features
Konarka's Power Plastic has several advantages over other organic photovoltaic technologies. These include:

• Tunable cell chemistry to absorb specific wavelengths of light as well as the broad spectrum
• The ability to capture both indoor and outdoor light and convert it into direct energy in the form of electric current
• The ability to perform its function using all recyclable materials
• Being thin, lightweight, and flexible

Applications
Konarka's Power Plastic has four major end-product applications. These include:

• Microelectronics: powering sensors, smart cards, and low-power applications
• Portable power: solar-powered sensors, backpacks, and cell phone chargers
• Remote power: accessing renewable power at stadiums, carports, and airports
• Building-integrated applications (BIPV): custom-manufactured applications for roofs, windows, and walls

Cost per Watt
Konarka's Power Plastic technology has already reduced the cost of manufacturing solar cells to less than $1 per watt; moreover, Konarka states that through mass production this cost will be further reduced to approximately $0.10/W.

Future Plans
Konarka, together with Arch Aluminum & Glass, is currently developing two future applications for Power Plastic:
• Manufacturing transparent and opaque solar cells for integrated curtain wall and window components
• The ability of this technology to work off-angle, permitting it to expand to other niches

Konarka's ongoing research involves advanced work on power fibers, bifacial cells, and tandem architecture. The resulting products would permit Konarka to expand solar power technology to woven textiles via power fibers. Bifacial cells, being transparent in nature, would permit the solar cells to generate electricity from inside and outside light while allowing the technology to double as a see-through window.



Fig. 10 The fundamental DSC structure: glass and conductive layer, dye-sensitized titania (TiO2) working electrode (−ve), iodide/tri-iodide electrolyte, catalyst-coated counter electrode (+ve), conductive layer, and glass (taken from the Dyesol® website)


Dyesol®
Dyesol manufactures and sells high-purity dye solar cell (DSC) materials, titania pastes, sensitizing dyes, electrolytes, and electrode catalysts (Dyesol 2010). As seen in Fig. 10, the DSC structure consists of a layer of nanoparticulate titania (titanium dioxide) that is formed on a transparent, electrically conducting substrate and photosensitized via a single ruthenium (Ru)-based dye layer. An iodide/tri-iodide-based electrolyte redox system is placed between the layer of photosensitized titania and a second electrically conducting catalytic substrate.
Unique Features
Some advantages of DSC technology versus contemporary silicon-based photovoltaic technologies are that DSCs:

• Are much less sensitive to the angle of incidence of radiation – the cell is a "light sponge" soaked with dye.
• Perform over a wide range of light conditions.
• Have low sensitivity to ambient temperature changes.
• Are much less sensitive to shadowing and can be diode-free.
• Are an option for transparent modules, thus enabling wider applications.
• Are truly bifacial: they absorb light from both faces and can be inverted.
• Are versatile: DSC power can be amplified by tandem and optical techniques without the use of concentrators.

The resulting DSC panels are more versatile because they are less sensitive to the angle of the solar radiation, allowing them to be installed on vertical walls and in low-light areas; moreover, they can be transparent and can be designed in various color schemes permitting more attractive architectural integration options than those available for silicon. Applications Dyesol’s patented products and their applications are listed below:



Fig. 11 Thin-film photovoltaic module layer composition: front glass, TCO front contact, a-Si, µc-Si, TCO back contact, and back glass (taken from the Inventux® website)

• Interconnected Glass Module: This design is for applications where the longest lifetime is needed for exposed mounting and where the module must be isostructural with, and able to replace, the existing structure. The electrical interface is typically via a short DC bus to a local area network for distribution or inversion to AC.
• SureVolt Solar Range: These cells maintain voltage at all light levels and have high resistance to damage through impact, bending, tension, torsion, and compression. The range is ideally suited for use in portable consumer electronics, military, and indoor applications, as well as developed landscape infrastructure.
Dyesol's dye solar cells (DSC) are marketed to the low-light, dappled-light, and indoor-light markets, which only DSC can address.
Cost per Watt
The May/June 2009 issue of Photovoltaic World magazine stated that, at Dyesol's anticipated 7 % efficiency, the resulting cost of DSC technology was $1.00/W.
Future Plans
Dyesol appears to have two major future goals:
1. To move toward collaborative and outsourced work on wireless technology and tandem products
2. To extend its power output devices toward direct chemical production and complete building solutions

Inventux Technologies
Inventux is a solar energy company that specializes in the development, production, and marketing of environmentally friendly, silicon-based thin-film solar (micromorph thin-film) modules (Inventux 2010). The combination of an amorphous cell with a microcrystalline cell is termed a micromorph cell; micromorph cells thus represent the most consistent advancement of amorphous silicon-based tandem cell technology. Figure 11 illustrates the layer composition of the thin-film photovoltaic modules. The glass serves both as a substrate for the thin-film PV cell and as a component of the later encapsulation of the element. The various layers are successively deposited on the front glass. Plasma-enhanced chemical vapor deposition (PECVD) using gaseous silicon-hydrogen compounds has become the generally accepted method for producing the absorber layers. The front and back contact layers (transparent conductive oxide, TCO) are produced by low-pressure chemical vapor deposition (LPCVD).



Unique Features
Inventux's solar modules contain absorbers made of amorphous and microcrystalline silicon. The two materials are well suited to being combined in a tandem solar cell, since their different band gaps facilitate an enhanced utilization of the solar radiation and manufacturing can be done using the same technology. Several of the advantages and benefits of Inventux PV modules are:
• The extremely thin 0.002-mm absorber layer requires only a minimum amount of raw material (silicon); the layer thickness is just one-hundredth of that of conventional photovoltaic technology.
• Exploitation of a broader light spectrum and fewer shading losses than with crystalline modules, up to 30 % higher yield during inhomogeneous light conditions compared to crystalline modules, and a wider range of possible applications.
• Far better temperature behavior under good solar radiation conditions than with crystalline technology, giving higher yields at full-load conditions.
• A monolithic module configuration, as opposed to a crystalline cell configuration with its electrically required spacing, so that only very little inactive module surface exists.
• Monolithic wiring during the process makes subsequent manual production steps superfluous.
• Series connection of the solar cells leads to a relatively high open-circuit voltage of the modules, minimized conduction losses, and reduced cabling work.
• Due to the very high spectral acceptance, the modules have the highest efficiency potential in the area of silicon-based thin-film photovoltaics.
Applications
Inventux thin-film photovoltaic modules are particularly suitable for large, grid-connected photovoltaic systems; moreover, due to the wide light-spectrum absorption capability of the micromorph tandem structure, Inventux technologies can be used under inhomogeneous light and climate conditions.
Cost per Watt
Inventux Solar Technologies implements Oerlikon Solar's micromorph technology in the company's manufacturing processes. Oerlikon claims that, through the incorporation of advanced fabrication designs, the company's turnkey tandem junction technology would be capable of producing modules for $0.70/W by the end of 2010.

Commercial Venture that Employs Tested Concepts of Solar Energy Utilization to Produce Fuels in an Effective Way

Algenol ® (2010) is a US-based firm that proposes sequestering CO2 from power plant exhausts and producing transportation fuel from it. The company plans to use water and sunlight to produce value ethanol (an additive to gasoline). A schematic of their proposed flow sheet is shown in Fig. 12. The firm focuses on the production of ethanol from algae without destroying the algae. A unique highlight of their system is the possibility of growing the algae in land areas that may be deemed unfit for agricultural activities, for example, a desert-like environment. They are planning to build commercial facilities in the United States and Mexico in the near term for large-scale ethanol production. The company plans to sell ethanol at a price of $3.00/gal and is expected to be a major player in the value ethanol market in the near future.
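To put the proposed CO2-to-ethanol route in perspective, a simple stoichiometric bound can be computed from the idealized overall photosynthetic reaction 2 CO2 + 3 H2O → C2H5OH + 3 O2. The sketch below is an idealized mass balance only; it says nothing about the yields Algenol actually achieves, which are not given in the text.

```python
# Idealized CO2 demand per US gallon of ethanol (stoichiometric bound only).
M_ETHANOL = 46.07      # g/mol
M_CO2 = 44.01          # g/mol
RHO_ETHANOL = 0.789    # kg/L, density of ethanol
GAL_TO_L = 3.785       # liters per US gallon

mass_ethanol_kg = RHO_ETHANOL * GAL_TO_L             # ~2.99 kg of ethanol per gallon
mol_ethanol = mass_ethanol_kg * 1000.0 / M_ETHANOL   # ~65 mol
mass_co2_kg = 2.0 * mol_ethanol * M_CO2 / 1000.0     # 2 mol CO2 fixed per mol ethanol

print(f"CO2 fixed per gallon of ethanol (ideal): {mass_co2_kg:.1f} kg")  # ~5.7 kg
```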



Nutrients

Solar Radiation

Exhaust Gas from Power Plant CO2 Gas Treatment

Seawater

Blue-green Algae

Separation Process

Oxygen

Bio-ethanol

Unproductive Land

(Fresh Water)

Fig. 12 An artistic rendition of a plant showing CO2 capture from exhaust of a power plant and its utilization in a biofuel production facility. http://www.treehugger.com/files/2008/06/algenol-algae-biofuel-race-process-economics-advantage.php

Other Types of Solar Companies
A list of other solar companies from different regions around the world is provided next. These companies are involved in manufacturing products that utilize solar energy in different processes. The reader is referred to the links provided for further information.

Company                          Headquarters location        Solar cell technology                              Website
Auria Solar                      Taiwan                       Amorphous silicon thin film                        http://www.auriasolar.com/
First Solar                      Phoenix, AZ                  Cadmium telluride                                  http://www.firstsolar.com
Evergreen Solar Inc              Marlboro, MA                 String ribbon crystalline silicon                  http://evergreensolar.com/en/
OriginOil                        Los Angeles, CA              Biofuel with solar                                 http://www.originoil.com/
Schüco                           US location: Union City, CA  Solar cooling; solar hot water                     http://www.schueco.com/web/com
Solar Biofuels                   Queensland, Australia        Solar biofuels (biomass)                           http://www.solarbiofuels.org/
Northern Lights Solar Solutions  Winnipeg, MB, Canada         Solar heating                                      http://www.solartubs.com/
Silicon Solar                    Ithaca, NY                   Portable solar power solutions                     http://www.siliconsolar.com/index.html
DesignLine                       Christchurch, New Zealand    Solar-powered bus (Tindo in Adelaide, Australia)   http://www.designlinecorporation.com/

Future Work
Solar research is no longer considered a small blip in the energy field. The last three decades of work have in fact brought it to a position where it is accepted as a serious competitor to many traditional and nontraditional sources of energy. This is evident from the skyrocketing growth rate of solar-based technologies around the world. However, much needs to be done. Several critical areas require further development to cement the status of solar as a reliable, large-scale, and everlasting source of energy for mankind. To make solar



commercial, economical, and lasting, development has to be simultaneously realized along three fronts: scientific, pilot-scale testing and processing, and rapid evaluation and commercialization of promising technologies. A brief insight into future directions along these lines is provided. The scientific step is the most important aspect of all the three stages. Material properties, or rather constraints, are an area that clearly is the limiting factor and greatest challenge to solar energy commercialization. Since solar energy harvesting requires a close interaction of materials with elements and in some instances requires operation under extreme conditions, materials stability is of paramount importance. Solar-to-electric conversion technologies have the potential to transform mankind’s energy needs. However, materials that demonstrate stable performance without undergoing chemical transformations and consisting of earth-abundant elements are urgently needed. Researchers have to focus on developing low-cost, wide-spectrum solar energy harvesters. The improvement of solar conversion or utilization efficiency is critical. To achieve efficiency improvements, fundamental understating of material properties such as charge transport, recombination dynamics, and thermal management is needed. One approach to accelerate materials identification and its testing is to employ a combinatorial analysis method. There have been some efforts in this direction, but more is needed. Past research has abundantly shown that multi-element/multicompound systems as light harvesters are the correct way to move toward realizing efficient solar harvesters. Stacking different materials built on-site or assembling prefabricated compound(s) is required to improve light absorbance as well as efficient transformation of absorbed energy. Combinatorial techniques have to be developed to analyze libraries of possible compounds and to expedite the identification of formulations for maximizing light absorbance and its testing. In stage II, one really needs to focus on the techniques to synthesize solar energy harvesters and convertors on a large scale. Issues such as reliable scale-up material properties (identical and reproducible product) and cost competitiveness have to be addressed at the pilot scale. Approaches such as screen or ink-jet printing have been considered to be very promising techniques to reproduce lab-scale results on a commercial scale. These techniques have to be tested and perfected before further large-scale ventures. Performance data of extended use of solar energy convertors is still limited. Therefore, such data along with weather patterns and its influence on solar energy transformation has to be obtained. Realistic modeling predictions and long-term solar energy outputs have to be generated before using solar as the alternative source of energy for human activity. Stage III will require a large-scale effort on the part of governments (policy makers), green technology companies (research entities and venture capitalists), and public at large to come together to make solar energy a lasting and impacting form of alternate energy source. There are several evidences of governments getting sensitized to such needs, and short-term incentives are provided to test the people’s acceptance of the technologies and market reactions. However, there is still no leading technology within the solar energy conversion technologies that can be considered as the one solution for mankind’s energy needs. 
Therefore, the search has to go on.

Conclusion
Solar energy utilization has immense potential due to the range of applications in which it can be employed. The core issue with solar energy is materials development, and there has been significant progress in this area thanks to cutting-edge research in different parts of the world. Favorable, market-penetration-driven incentives now have to drive the commercialization of promising technologies. The authors are of the opinion that it is no longer necessary to follow a wait-and-watch approach for solar energy systems, as many of these systems have passed that stage. Concerted efforts are needed to (1) customize systems to geographical needs, (2) weigh cost considerations, (3) set long-term goals, and (4) secure institutional support.



Solar energy systems are going to play a significant role in future energy portfolios regardless of the applications.

Acknowledgments The authors thank Prof. Wei-Yin Chen for the opportunity to make this contribution. Vaidyanathan Subramanian would like to thank the representatives of Konarka ®, Dyesol ®, and Inventux Technologies ® for their time and contributions. He would also like to thank Prof. Misra and York Smith for their insights as well as the Department of Energy (Grant # DE-EE0000272) for the financial support.

References Algenol (2010) vol 2010. http://www.algenolbiofuels.com/ Anpo M (1995) Solar Energy Mater Solar Cells 38:221 Anpo M, Yamashita H, Ichihashi Y, Ehara SJ (1995) J Electroanal Chem 396:21 Antoniadou M, Kondarides DI, Labou D, Neophytides S, Lianos P (2010) Solar Energy Mater Solar Cells 94:592 Aroutiounian VM, Arakelyan VM, Shahnazaryan GE (2005) Solar Energy 78:581 AWEA (2010) vol 2010. American Wind Energy Association, Washington, DC Bahnemann D (2004) Solar Energy 77:445 Bahnemann DW, Hilgendorff M, Memming R (1997) J Phys Chem B 101:4265 Bard AJ (1979) J Photochem 10:59 Bauer GH (1993) Appl Surf Sci 70–71:650 Baur C, Bett AW, Dimroth F, Siefer G, Meuw M, Bensch W, Kostler W, Strobl G (2007) J Solar Energy Eng Trans ASME 129:258 Best JP, Dunstan DE (2009) Int J Hydrogen Energy 34:7562 Bezdek RH, Hirshberg AS, Babcock WH (1979) Science 203:1214 Bosi M, Pelosi C (2007) Prog Photovolt 15:51 Bosio A, Romeo N, Mazzamuto S, Canevari V (2006) Prog Cryst Growth Char Mater 52:247 Bube RH (1990) Annu Rev Mater Sci 20:19 Catchpole KR, McCann MJ, Weber KJ, Blakers AW (2001) Solar Energy Mater Solar Cells 68:173 Chamberlain RG (1980) Eur J Oper Res 5:405 Cheng P, Gu MY, Jin YP (2005) Prog Chem 17:8 Choi WY, Termin A, Hoffmann MR (1994) J Phys Chem 98:13669 Cozzoli PD, Fanizza E, Comparelli R, Curri ML, Agostiano A, Laub D (2004) J Phys Chem B 108:9623 Cravino A (2007) Polym Int 56:943 Cui Y, Du H, Wen LS (2008) J Mater Sci Technol 24:675 Damonte LC, Donderis V, Ferrari S, Meyer M, Orozco J, Hernandez-Fenollosa MA (2010) Int J Hydrogen Energy 35:5834 Danielsen AL (1978) Rev Bus Econ Res 13:1 De Falco M, Giaconia A, Marrelli L, Tarquini P, Grena R, Caputo G (2009) Int J Hydrogen Energy 34:98



Dhere NG, Kulkarni SS, Jahagirdar AH, Kadam AA (2005) J Phys Chem Solids 66:1876 Dyesol (2010) Solar cell technology. Dyesol, Queenbeyan Eberspacher C, Fredric C, Pauls K, Serra J (2001) Thin Solid Films 387:18 Feng ZF, Zhou JZ, Xi YY, Lan BB, Guo HH, Chen HX, Zhang QB, Lin ZHJ (2009) J Power Sources 194:1142 Frank AJ, Kopidakis N, van de Lagemaat J (2004) Coord Chem Rev 248:1165 Fujishima A, Rao TN, Tryk DA (2000) J Photochem Photobiol C 1:1 Fujiwara H, Hosokawa H, Murakoshi K, Wada Y, Yanagida S, Okada T, Kobayashi H (1997) J Phys Chem B 101:8270 García-Valladares O, Pilatowsky I, Ruíz V (2008) Solar Energy 82:613 Germogenova TA (2002) Prog Nucl Energy 40:1 Ginley D, Green MA, Collins R (2008) MRS Bull 33:355 Goetzberger A, Hebling C, Schock HW (2003) Mater Sci Eng R Rep 40:1 Gogate PR, Pandit AB (2004) Adv Environ Res 8:501 Goswami DY, Vijayaraghavan S, Lu S, Tamm G (2004) Solar Energy 76:33 Gratzel M (1991) Coord Chem Rev 111:167 Gratzel M (2001) Nature 414:338 Gratzel M (2005) MRS Bull 30:23 Green MA (2007) J Mater Sci Mater Electron 18:S15 Guenes S, Sariciftci NS (2008) Inorganica Chim Acta 361:581 Guha S, Yang J (2006) J Non Cryst Solids 352:1917 Guo YF, Quan X, Lu N, Zhao HM, Chen S (2007) Environ Sci Technol 41:4422 Han J, Mol APJ, Lu Y (2010) Energy Policy 38:383 Hanrath T, Veldman D, Choi JJ, Christova CG, Wienk MM, Janssen RAJ (2009) ACS Appl Mater Interfaces 1:244 Harmim A, Boukar M, Amar M (2008) Solar Energy 82:287 Hillhouse HW, Beard MC (2009) Curr Opin Colloid Interface Sci 14:245 Hinogami R, Nakamura Y, Yae S, Nakato Y (1998) J Phys Chem B 102:974 Hoffmann MR, Martin ST, Choi WY, Bahnemann DW (1995) Chem Rev 95:69 Honda S, Nogami T, Ohkita H, Benten H, Ito S (2009) ACS Appl Mater Interfaces 1:804 Hoppe H, Sariciftci NSJ (2004) Mater Res 19:1924 Hou XH, Choy KL (2005) Thin Solid Films 480:13 Hou Z, Zheng DX (2009) Appl Therm Eng 29:3169 Hu Y, Zheng Z, Jia HM, Tang YW, Zhang LZ (2008) J Phys Chem C 112:13037 Hu XL, Li GS, Yu JC (2010) Langmuir 26:3031 Hultman NE (2007) Curr Hist 106:376 Ikeue K, Nozaki S, Ogawa M, Anpo M (2002) Catal Lett 80:111 In Biomassmaganzine.com (2010) http://www.biomassmagazine.com/article.jsp?article_id=1674& q=&page=all Inventux Technology (2010) vol 2010, Berlin Ito S, Murakami TN, Comte P, Liska P, Gr€atzel C, Nazeeruddin MK, Gr€atzel M (2008) Thin Solid Films 516:4613 Jang JS, Yoon KY, Xiao XY, Fan FRF, Bard AJ (2009a) Chem Mater 21:4803 Jang JS, Lee J, Ye H, Fan FRF, Bard AJ (2009b) J Phys Chem C 113:6719 Kaelin M, Rudmann D, Tiwari AN (2004) Solar Energy 77:749 Kalogirou SA (2009) Solar space heating and cooling: processes and systems. Elsevier, Amsterdam Kalyanasundaram K, Gratzel M (1997) Proc Indian Acad Sci Chem Sci 109:447 Page 32 of 35


Kamat PV, Flumiani M, Dawson A (2002) Colloids Surf A Physicochem Eng Asp 202:269 Kar A, Sohn Y, Subramanian V (2008) Chapter 10. Synthesis of oxide semiconductors, metal nanoparticles and semiconductor–metal nanocomposites. Research Signpost, Trivandrum (Invited) Kaushal A, Varun (2010) Renew Sustain Energy Rev 14:446 Kawai T, Kuwabara T, Yoshino K (1992) J Chem Soc Faraday Trans 88:2041 Kay A, Gratzel M (1996) Solar Energy Mater Solar Cells 44:99 Kazmerski LL (2006) J Electron Spectros Relat Phenomena 150:105 Kelly NA, Gibson TL (2006) Int J Hydrogen Energy 31:1658 Khalifa AJN, Hamood AM (2009) Solar Energy 83:1312 Khaselev O, Turner JA (1998) Science 280:425 Knoll A, Klink K (2009) Renew Energy 34:2493 Koci K, Obalova L, Matejova L, Placha D, Lacny Z, Jirkovsky J, Solcova O (2009) Appl Catal B Environ 89:494 Konarka Technology (2009) Lowell, vol 2010 Kondamudi N, Mohapatra SK, Misra M (2008) J Agric Food Chem 56:11757 Kondo M, Takenaka A, Ishikawa A, Kurata S, Hayashi K, Nishio H, Nishimura K, Yamagishi H, Tawada T (1997) Solar Energy Mater Solar Cells 49:127 Kuang D, Brillet J, Chen P, Takata M, Uchida S, Miura H, Sumioka K, Zakeeruddin SM, Grtzel M (2008) ACS Nano 2:1113 Kudo A, Miseki Y (2009) Chem Soc Rev 38:253 Kulkarni GN, Kedare SB, Bandyopadhyay S (2007) Solar Energy 81:958 Kunjapur AM, Eldridge RB (2010) Ind Eng Chem Res 49:3516 Kurtz S, Friedman D, Geisz J, McMahon W (2007) J Cryst Growth 298:748 Lakowicz JR (2006) Principles of fluorescence spectroscopy, 3rd edn. Springer, Boston Landi BJ, Castro SL, Ruf HJ, Evans CM, Bailey SG, Raffaelle RP (2005) Solar Energy Mater Solar Cells 87:733 Lee WJ, Ramasamy E, Lee DY, Song JS (2009) ACS Appl Mater Interfaces 1:1145 Lenzen M (2008) Energy Convers Manag 49:2178 Li GH, Ciston S, Saponjic ZV, Chen L, Dimitrijevic NM, Rajh T, Gray KA (2008) J Catal 253:105 Li C, Yuan J, Han B, Jiang L, Shangguan WF (2010) Int J Hydrogen Energy 34:3621–3630 Liang YY, Xiao SQ, Feng DQ, Yu LP (2008) J Phys Chem C 112:7866 Linsebigler AL, Lu GQ, Yates JT (1995) Chem Rev 95:735 Liu YY, Huang BB, Dai Y, Zhang XY, Qin XY, Jiang MH, Whangbo MH (2009) Catal Commun 11:210 Liu M, Jing D, Zhao L, Guo L (2010) Int J Hydrogen Energy 35:7127–7133 Ma LL, Lin YL, Wang Y, Li JL, Wang E, Qiu MQ, Yu Y (2008) J Phys Chem C 112:18916 Matsumoto Y, Obata M, Hombo J (1994) J Phys Chem 98:2950 Matsuoka M, Kitano M, Takeuchi M, Tsujimaru K, Anpo M, Thomas JM (2007) Catal Today 122:51 Miles RW, Hynes KM, Forbes I (2005) Prog Cryst Growth Char Mater 51:1 Miles RW, Zoppi G, Forbes I (2007) Mater Today 10:20 Mills A, Davies RH, Worsley D (1993) Chem Soc Rev 22:417 Miseki Y, Kusama H, Sugihara H, Sayama K (2010) J Phys Chem Lett 1:1196 Mor GK, Varghese OK, Paulose M, Shankar K, Grimes CA (2006) Solar Energy Mater Solar Cells 90:2011 Muduli S, Lee W, Dhas V, Mujawar S, Dubey M, Vijayamohanan K, Han SH, Ogale S (2009) ACS Appl Mater Interfaces 1:2030 Ni M, Leung MKH, Leung DYC, Sumathy K (2007) Renew Sustain Energ Rev 11:401 Niu MT, Huang F, Cui LF, Huang P, Yu YL, Wang YS (2010) ACS Nano 4:681 Page 33 of 35


Nowotny J, Bak T, Nowotny MK, Sheppard LR (2006) J Phys Chem B 110:18492 Oktik S (1988) Prog Cryst Growth Char Mater 17:171 Oliveira LS, Franca AS, Camargos RRS, Ferraz VP (2008) Bioresour Technol 99:3244 Ouyang JY, Xia YJ (2009) Solar Energy Mater Solar Cells 93:1592 Pan PW, Chen YW (2007) Catal Commun 8:1546 Park JH, Kim S, Bard AJ (2006) Nano Lett 6:24 Perezalbuerne EA, Tyan YS (1980) Science 208:902 Peterson CL, Hustrulid T (1998) Biomass Bioenergy 14:91 Pittman JK, Dean AP, Osundeko O (2011) Bioresour Technol 102:17–25 Price T, Bunn J, Probert D, Hales R (1996) Appl Energy 54:103 Qin S, Xin F, Liu Y, Yin X, Ma W (2011) J Colloid Interface Sci 356:257 Rajeshwar K (2007) J Appl Electrochem 37:765 Rajeshwar K, de Tacconi NR, Chenthamarakshan CR (2001) Chem Mater 13:2765 REN21-Secretariat (2010) Renewable energy 2010 global status report Reyes-Gil KR, Reyes-Garcia EA, Raftery D (2007) J Phys Chem C 111:14579 Robel I, Subramanian V, Kuno MK, Kamat PV (2006) J Am Chem Soc 128:2385 Rocheleau RE, Miller EL, Misra A (1998) Energy Fuels 12:3 Rotmans J, Swart R (1990) Environ Manag 14:291 Sakai I, Takagi M, Terakawa K, Ohue J (1976) Solar Energy 18:525 Sanderson KW (2007) Cereal Foods World 52:5 Sarria V, Kenfack S, Malato S, Blanco J, Pulgarin C (2005) Solar Energy 79:353 Schropp REI (2004) Thin Solid Films 451–452:455 Server H (2010) Photovoltaics: solar electricity and solar cells in theory and practice. Germany Shaban YA, Khan SUM (2008) Int J Hydrogen Energy 33:1118 Shah A, Meier J, Buechel A, Kroll U, Steinhauser J, Meillaud F, Schade H, Domine D (2006) Thin Solid Films 502:292 Shankar K, Basham JI, Allam NK, Varghese OK, Mor GK, Feng XJ, Paulose M, Seabold JA, Choi KS, Grimes CA (2009) J Phys Chem C 113:6327 Sharma A, Chen CR, Lan NV (2009a) Renew Sustain Energy Rev 13:1185 Sharma A, Chen CR, Murty VVS, Shukla A (2009b) Renew Sustain Energy Rev 13:1599 Sivula K, Le Formal F, Gratzel M (2009) Chem Mater 21:2862 Solanki CS, Beaucarne G (2007) Energy Sustain Dev 11:17 Somasundaram S, Chenthamarakshan CRN, de Tacconi NR, Rajeshwar K (2007) Int J Hydrogen Energy 32:4661 Strobel R, Baiker A, Pratsinis SE (2006) Adv Powder Technol 17:457 Subramanian V (2007) Interface 16:32 Sundrop Fuels Inc. (2010) Louisville, vol 2010. http://www.sundropfuels.com/index.html Tada H, Hattori A, Tokihisa Y, Imai K, Tohge N, Ito S (2000) J Phys Chem B 104:4585 Taima T, Sakai J, Yamanari T, Saito K (2009) Solar Energy Mater Solar Cells 93:742 Takabayashi S, Nakamura R, Nakato Y (2004) J Photochem Photobiol A Chem 166:107 Takeda Y, Kato N, Higuchi K, Takeichi A, Motohiro T, Fukumoto S, Sano T, Toyoda T (2009) Solar Energy Mater Solar Cells 93:808



Tan SS, Zou L, Hu E (2006) Catal Today 115:269 Tao J, Sun Y, Ge MY, Chen X, Dai N (2010) ACS Appl Mater Interfaces 2:265 Teramura K, Tsuneoka H, Shishido T, Tanaka T (2008) Chem Phys Lett 467:191 Teramura K, Okuoka S, Tsuneoka H, Shishido T, Tanaka T (2010) Appl Catal B Environ 96:565–568 Tester JW, Drake EM, Driscoll MJ, Golay MW, Peters WA (2005) Sustainable energy-choosing among options. MIT Press, Cambridge, MA Thomas MG, Post HN, DeBlasio R (1999) Prog Photovoltaics 7:1 Tiwari GN, Bhatia PS, Singh AK, Sutar RF (1994) Energy Convers Manag 35:535 Tiwari GN, Kumar S, Sharma PB, Khan ME (1996) Appl Therm Eng 16:189 Tiwari GN, Singh HN, Tripathi R (2003) Solar Energy 75:367 Todorov T, Cordoncillo E, Sanchez-Royo JF, Carda J, Escribano P (2006) Chem Mater 18:3145 Toivola M, Halme J, Miettunen K, Aitola K, Lund PD (2009) Int J Energy Res 33:1145 Tseng IH, Wu JCS, Chou HY (2004) J Catal 221:432 Vorayos N, Kiatsiriroat T, Vorayos N (2006) Renew Energy 31:2543 Wadia C, Alivisatos AP, Kammen DM (2009) Environ Sci Technol 43:2072 Walke C (2009) Cap and trade, EPA, vol 2010 Wang M, Na Y, Gorlov M, Sun LC (2009) Dalton Trans 6458 Wang CJ, Thompson RL, Baltrus J, Matranga C (2010) J Phys Chem Lett 1:48 Wei QS, Hirota K, Tajima K, Hashimoto K (2006) Chem Mater 18:5080 Weiss M, Neelis M, Blok K, Patel M (2009) Clim Change 95:369 Woodhouse M, Parkinson BA (2009) Chem Soc Rev 38:197 Workman JJ (1998) Chapter 2. Ultraviolet, visible, and near infrared spectroscopy. Academic, Chestnut Hill Wu JCS, Lin HM, Lai CL (2005) Appl Catal A Gen 296:194 Wurfel P (2002) Physica E Low Dimens Syst Nanostruct 14:18 Xie TF, Wang DJ, Zhu LJ, Li TJ, Xu YJ (2001) Mater Chem Phys 70:103 Xin H, Reid OG, Ren GQ, Kim FS, Ginger DS, Jenekhe SA (2010) ACS Nano 4:1861 Yamagata S, Nishijo M, Murao N, Ohta S, Mizoguchi I (1995) Zeolites 15:490 Yamashita H, Fujii Y, Ichihashi Y, Zhang SG, Ikeue K, Park DR, Koyano K, Tatsumi T, Anpo M (1998) Catal Today 45:221 Zach M, Hagglund C, Chakarov D, Kasemo B (2006) Curr Opin Solid State Mater Sci 10:132 Zhang A, Ma Q, Lu MK, Yu GW, Zhou YY, Qiu ZF (2008) Cryst Growth Des 8:2402 Zheng L, Xu Y, Song Y, Wu CZ, Zhang M, Xie Y (2009) Inorg Chem 48:4003


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_33-2 # Springer Science+Business Media New York 2015

Concentrated Solar Thermal Power
Anjaneyulu Krothapallia* and Brenton Greskab
a Department of Mechanical Engineering, Florida State University, Tallahassee, FL, USA
b Cameron International, Houston, TX, USA

Abstract
In spite of several successful alternative energy production installations in recent years, it is difficult to point to more than one or two examples of a modern industrial nation obtaining the bulk of its energy from sources other than oil, coal, and natural gas. Thus, a meaningful energy transition from conventional to renewable sources of energy is yet to be realized. It is also reasonable to assume that a full replacement of the energy currently derived from fossil fuels with energy from alternative sources is probably impossible over the short term. For example, the prospects for large-scale production of cost-effective renewable electricity rest largely on wind energy and certain forms of solar energy, yet these renewable energies face important limitations due to intermittency, remoteness of good resource regions, and scale potential. One of the promising approaches to overcoming most of these limitations is to implement the many recent advances in solar thermal electricity technology. In this chapter, various advanced solar thermal technologies are reviewed with an emphasis on new technologies and new approaches for rapid market implementation. The first topic is the conventional parabolic trough collector, which is the most established technology and is under continuing development, with the main focus on installed cost reductions through modern materials, along with heat storage. This is followed by the recently developed linear Fresnel reflector technologies. Among two-axis tracking technologies, the advances in dish-Stirling systems are presented. More recently, solar thermal electricity applications in two-axis tracking using tower technology are gaining ground, especially with multi-tower solar array technology. A novel solar chimney technology is also discussed for large-scale power generation. Non-tracking concentrating solar technologies, when used in a cogeneration system, offer low-cost electricity, albeit at lower efficiencies – an approach that seems to be most suitable in rural communities.

Introduction
Solar thermally generated electricity is a low-cost solar energy source that utilizes complex collectors to gather solar radiation in order to produce temperatures high enough to drive steam turbines and produce electric power. For example, a turbine fed from parabolic trough collectors might require steam at 750 K and reject heat into the atmosphere at 300 K, thus having an ideal thermal (Carnot) efficiency of about 60 %. A realistic overall conversion (system) efficiency of about 35 % is feasible with intelligent management of waste heat. The solar radiation can be collected by different concentrating solar power (CSP) technologies to provide high-temperature heat. The solar heat is then used to operate a conventional power cycle, such as the Rankine (steam engine), Brayton (gas turbine engine), or Stirling (Stirling engine) cycle (Decher 1994). While generating power during the daytime, additional solar heat can be collected and stored, generally in a phase-change medium such as molten salt (Pilkington Solar International GmbH 2000).
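As a quick check of the figures quoted above, the ideal (Carnot) efficiency follows directly from the steam and heat-rejection temperatures. A minimal sketch; the comparison against the ~35 % system figure is simply the ratio of the two numbers given in the text.

```python
# Ideal (Carnot) efficiency for the temperatures quoted in the text.
T_HOT = 750.0   # K, steam temperature from parabolic trough collectors
T_COLD = 300.0  # K, ambient heat-rejection temperature

eta_carnot = 1.0 - T_COLD / T_HOT
print(f"Carnot efficiency: {eta_carnot:.2f}")  # -> 0.60, i.e., about 60 %

# The chapter quotes a realistic overall system efficiency of about 35 %,
# i.e., roughly 58 % of the Carnot limit once optical, thermal, and
# turbine/generator losses are accounted for.
print(f"Fraction of the Carnot limit at 35 % system efficiency: {0.35 / eta_carnot:.2f}")
```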

*Email: [email protected]


Fig. 1 Main components of a concentrating solar power (CSP) system: the concentrating solar thermal field, thermal energy storage, and the power block, which rejects waste heat and delivers electricity

Fig. 2 CSP system efficiency variation with operating temperature: useful energy produced by the collector subsystem, the power generation subsystem, and the combined system as a function of operating temperature

The stored heat can then be used during the nighttime for power generation. A simple schematic, shown in Fig. 1, describes the main elements of such a system. The markets and applications for CSP dictate the category of the system and its components; the general categories are typically considered by size. CSP requires high (>5.2 kWh/m2/day) direct normal irradiance (DNI), as opposed to PV technologies that can use diffuse, or scattered, irradiance as well (Duffie and Beckman 2006). The history of the Solar Electricity Generating Systems (SEGS) in the southwest desert of California (Jensen et al. 1989), where DNI is quite favorable for CSP, shows impressive cost reductions, as shown in Fig. 3. These parabolic trough plants have been operating successfully for over three decades, thus providing valuable data. As indicated in the figure, the advanced concepts, with large-scale implementation and improved plant operation and maintenance, provide a great opportunity for further reductions in the levelized electricity cost (LEC), a topic that will be discussed later. Life cycle assessment of emissions and land surface impacts of CSP systems suggests that they are well suited for greenhouse gas and other pollutant reductions. CSP systems are also well suited, because of the effortless capture of the waste heat, for multigeneration applications, such as the simultaneous production of electricity and water purification.

Fig. 3 Levelized electricity cost (cents/kWh) projections of CSP, 1985–2020, showing the initial SEGS plants, larger SEGS plants, O&M cost reductions of the SEGS plants, and advanced concentrating solar power, relative to the conventional cost of peak or intermediate power and the added value of green pricing (Source: Solar Paces)

Because of the rapid developments occurring both in technology and in electricity market strategies, CSP has the greatest potential of any single renewable energy area. It also has significant potential for further development and for achieving low cost because of its guaranteed fuel supply (the sun). In this chapter, a succinct review of the current technologies is given together with an assessment of their market potential. While describing some of the recent approaches in some detail, the activity around the world will also be included.

Solar Radiation
The potential for CSP implementation in any given geographic location is largely determined by the solar radiation characteristics (Duffie and Beckman 2006). The total specific radiant power per unit area, or radiant flux, that reaches a receiver surface is called irradiance and is measured in W/m2. When the irradiance is integrated over a certain time period, it becomes solar irradiation and is measured in Wh/m2. When this irradiation is considered over the course of a given day, it is referred to as solar insolation, which has units of kWh/m2/day (1 kWh/m2/day = 3.6 MJ/m2/day). If the daily insolation is instead divided by the number of useful solar hours in the day, the units simplify to W/m2; as such, the terms irradiance and insolation are often used interchangeably. Solar radiation consists primarily of direct beam and diffuse, or scattered, components. The term "global" solar radiation simply refers to the sum of these two components. The daily variation of the different components depends upon meteorological and environmental factors (e.g., cloud cover, air pollution, and humidity) and the relative earth-sun geometry. The direct normal irradiance (DNI) is synonymous with the direct beam radiation, and it is measured by tracking the sun throughout the sky. Figure 4 shows an example of the global solar radiation measured on a stationary flat plate and on a plate that tracks the sun. The measured DNI is also included, and its lower value can be attributed to the fact that it does not account for the diffuse radiation component (Molenbroek et al. 2008). In CSP applications, the DNI is important in determining the available solar energy, and it is for this reason that the collectors are designed to track the sun throughout the day. Figure 5 shows the daily solar insolation on an optimally tilted surface during the worst month of the year around the world (www.meteotest.ch; www.wrdc-mgo.nrel.gov). Regions represented by light and dark red colors are most suitable for CSP implementation. The annual DNI value will also greatly influence the levelized electricity cost (LEC), which will be discussed later. Typical values of DNI at different latitudes and at selected locations around the world are given in Fig. 6 and Table 1. Based on this information, desert and equatorial regions appear to provide the best resources for CSP implementation.
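A minimal sketch of the unit relationships described above; the six "useful solar hours" used to collapse a daily insolation back to W/m2 is an illustrative assumption, not a value from the chapter.

```python
# Convert a daily insolation value between the units discussed above.
insolation_kwh = 5.2            # kWh/m2/day (the DNI level quoted for CSP suitability)

insolation_mj = insolation_kwh * 3.6   # 1 kWh = 3.6 MJ

useful_solar_hours = 6.0        # assumed number of useful solar hours per day
avg_irradiance = insolation_kwh * 1000.0 / useful_solar_hours  # average W/m2 over those hours

print(f"{insolation_kwh} kWh/m2/day = {insolation_mj:.1f} MJ/m2/day")
print(f"Average irradiance over {useful_solar_hours:.0f} useful solar hours: {avg_irradiance:.0f} W/m2")
```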

Fig. 4 Solar irradiance (W/m2) variation over the hours of a day, measured on a horizontal flat plate and on a tracking flat plate, together with the direct normal irradiance (DNI) (Source: Molenbroek et al. 2008)

Fig. 5 The solar insolation (kWh/m2/day, mapped in zones from 1.0–1.9 up to 6.0–6.9 and labeled by zone midpoint) on an optimally tilted surface during the worst month of the year (Source: http://www.meteotest.ch)

CSP Technologies
Parabolic Trough Technology
This technology comprises relatively long and narrow parabolic reflectors with a single-axis tracker that keeps the sun's image in focus on a linear absorber, or receiver. The reflectors are curved around the rotation axis (which is typically oriented east-west) with a linear parabolic shape, which has the property of collecting the nearly parallel rays of the direct solar beam into a line image. A long pipe receiver can be placed at the focus for heating a heat transfer fluid (Fig. 7). The receiver is normally a tube, which contains a heat transfer fluid or water for direct steam generation. The two major components of the collector subsystem are the parabolic trough reflector, including its support structure, and the receiver, also referred to as the heat collector element.



Fig. 6 Annual global irradiation in Europe and the USA (Source: Volker Quaschning, DLR & Manuel Blanco Muriel, CIEMAT, Spain)

Table 1 Annual DNI at selected locations

Location                       Site latitude    Annual DNI (kWh/m2)
United States
  Barstow, California          35 N             2,725
  Las Vegas, Nevada            36 N             2,573
  Tucson, Arizona              32 N             2,562
  Alamosa, Colorado            37 N             2,491
  Albuquerque, New Mexico      35 N             2,443
  El Paso, Texas               32 N             2,443
International
  Northern Mexico              26–30 N          2,835
  Wadi Rum, Jordan             30 N             2,500
  Ouarzazate, Morocco          31 N             2,364
  Crete, Greece                35 N             2,293
  Jodhpur, India               26 N             2,200
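For a rough link back to the daily insolation threshold mentioned earlier, the annual DNI values of Table 1 can be divided by 365; a minimal sketch using a few of the table's entries:

```python
# Convert annual DNI (kWh/m2/year) from Table 1 into an average daily DNI.
annual_dni = {
    "Barstow, California": 2725,
    "Las Vegas, Nevada": 2573,
    "Northern Mexico": 2835,
    "Jodhpur, India": 2200,
}

for site, dni in annual_dni.items():
    print(f"{site}: {dni / 365.0:.1f} kWh/m2/day")
# Barstow works out to about 7.5 kWh/m2/day, comfortably above the
# ~5.2 kWh/m2/day DNI level quoted earlier for CSP suitability.
```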

Important factors for the most efficient parabolic trough reflector include the stability and accuracy of the parabolic profile, optical error tolerance, method of fabrication, material availability, and strength constraints. The geometry, length of the trough, the aperture, and the rim angle dictate the amount of heat collection. Since there are a large number of collector modules in a typical plant, cost optimization requires minimizing the material weight (steel or aluminum), the operations needed to manufacture the structure, and the assembly of the elements that compose the collector (Gee and Hale 2006). A typical modern structure using aluminum space frame technology to support the reflector is shown in Fig. 8; such frames are considerably lighter per unit of aperture area than standard steel structures. All utility-scale parabolic trough installations to date have utilized silvered glass mirrors as reflectors (Fig. 7). These reflectors are limited in size, with the limits typically driven by manufacturing, strength, handling, shipping, and installation issues. Parabolic trough modules have between 20 and 40 mirrors mounted to a single space frame module. The mirrors are typically 4–5 mm thick and are mounted to the structural frame with bolted connections.


Fig. 7 A typical parabolic trough system, showing the steel structure, parabolic trough reflector, and absorber pipe (Source: http://www.abengoasolar.com)

Fig. 8 Left: parabolic trough space frame structure (Source: NREL). Right: lightweight trough with reflective thin film mirror (Farr and Gee 2009)

Alternatively, a UV-stabilized mirror film (i.e., ReflecTech™) laminated onto an aluminum substrate (Fig. 8b) provides a reflectance of about 94 % (Farr et al. 2009). The weight of this modern reflective surface is about 3.5 kg/m2 versus 10 kg/m2 (2.1 lbf/ft2) for glass mirrors, and it allows for a lower initial cost.



Fig. 9 Schott PTR™ 70 receiver. Key features: a durable glass-to-metal seal using a material combination with matching coefficients of thermal expansion; an AR-coated glass tube ensuring high transmittance and high abrasion resistance; a new absorber coating achieving emittance ≤10 % and absorptance ≥95 %; vacuum insulation minimizing heat conduction losses; and an improved bellow design increasing the active aperture length to more than 96 %

Fig. 10 The variation of the thermal efficiency (%) of a parabolic trough collector with collector temperature above ambient (°C) (Dudley et al. 1995)
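The falling trend in Fig. 10 can be illustrated with a simple receiver energy balance in which convective and radiative losses, scaled down by the concentration ratio, are subtracted from the optical gain. The coefficient values in the sketch below are illustrative assumptions chosen only to reproduce the general shape of such a curve; they are not fitted to the PT-1 data discussed next.

```python
# Illustrative quasi-steady energy balance for a concentrating trough collector:
# eta = eta_opt - [U_L*dT + eps*sigma*(T_r^4 - T_amb^4)] / (C * G_b)
SIGMA = 5.67e-8   # W/m2K4, Stefan-Boltzmann constant
G_B = 1000.0      # W/m2, direct beam irradiance (assumed)
C = 20.0          # geometric concentration ratio (assumed)
ETA_OPT = 0.70    # optical efficiency (assumed)
U_L = 4.0         # W/m2K, convective/conductive loss coefficient (assumed)
EPS = 0.2         # effective receiver emittance (assumed)
T_AMB = 300.0     # K

for dT in (0, 100, 200, 300, 350):
    T_r = T_AMB + dT
    losses = U_L * dT + EPS * SIGMA * (T_r**4 - T_AMB**4)  # W per unit receiver area
    eta = ETA_OPT - losses / (C * G_B)
    print(f"dT = {dT:3d} K -> thermal efficiency ~ {eta:.2f}")
# The T^4 radiation term dominates at high temperature, pulling the
# efficiency from ~0.70 near ambient down toward ~0.5 at 350 K above ambient.
```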

The receiver must achieve high efficiency with high solar absorptance, low thermal losses, and minimum shading. The receiver typically consists of a pipe with a solar-selective coating encased in a glass tube throughout which there is a vacuum. The most commonly used thermal receiver is the SCHOTT PTR™ 70 (http://www.schottsolar.com/global/products/concentrated-solar-power/schott-ptr70-receiver/), shown in Fig. 9, which has a highly selective absorber coating on a stainless steel tube that has an outside diameter of 70 mm. The tube is enclosed in a glass cylinder with vacuum insulation to minimize the long-wave IR radiation and convection losses. The receiver tube supports are designed to minimize any receiver deflection and sunlight blockage. This particular configuration is in widespread use, but it has a number of drawbacks, which include the fact that it is difficult to maintain the vacuum seals, especially after welding, and, as has been observed, the heat transfer fluid and solar-selective coating off-gas hydrogen into the vacuum tube, thus negating the convection reducing effects of the tube. The typical thermal conversion efficiency (net heat collected/incident solar radiation over the trough aperture area) for a parabolic trough is shown in Fig. 10 for the PT-1 concentrator (Dudley et al. 1995). The efficiency is largely affected by the collector thermal and optical losses. Since the radiation losses are proportional to the fourth power of the temperature, the efficiency decreases rapidly with increasing working fluid temperature. The nominal operating temperature of many plants (e.g., SEGS) is about Page 7


Fig. 11 Typical investment cost breakdown of a parabolic trough SEGS plant: the solar field accounts for about 45 %, with the remainder split among the HTF system, power block, balance of plant, services, site work, and other items (individual shares of roughly 3–18 %)

400  C (350  C above ambient) operating at a thermal conversion efficiency of about 50 % at best. The trend over the last 25 years has been to make larger collectors with higher concentration ratios in order to improve the collector thermal efficiency. However, due to increased material manufacturing and installation costs of the large aperture (>6 m) troughs, the LEC still remains high for widespread implementation. The concentrating parabolic trough systems typically produce power based on the Rankine cycle, which is the most fundamental and widely used steam-power cycle. The cycle starts with superheated steam generated by the heat collected from the parabolic trough field. The superheated vapor expands to lower pressure in a steam turbine that drives a generator to convert the work into electricity. The turbine exhaust steam is then condensed and recycled as the feed water for the superheated steam generation to begin the cycle again. The simple steam cycle thermodynamic efficiency can be as high as 35 %. Considering that the generator sets are better than 90 % efficient in converting the shaft power into electricity, it is expected that the cycle can produce electricity at an efficiency in excess of 30 %. As such, the total combined plant efficiency (solar to electricity) is best estimated to be about 15 %. The SEGS system experience shows that the annual solar to electric efficiency varies from 10.7 % to 14.6 %, with the higher number corresponding to the case where thermal storage is included in the plant. Although the plant efficiency appears low when compared to conventional fossil fuel-based plants, the operation and maintenance (O&M) costs are negligible due to the absence of any fuel costs, thus making the LEC largely depend on the capital costs. It is useful to think in terms of the cost/efficiency ratio to determine the viability of the CSP plant. Although much of the recent effort is on increasing the efficiency of the plant, it is more useful to find ways to reduce capital costs, thereby reducing the LEC. Hence, the following is a discussion assessing the component costs for a parabolic trough plant. Figure 11 gives a breakdown of the investment costs associated with a typical parabolic trough plant utilizing the Rankine steam cycle (Brakmann et al. 2002). As the pie chart indicates, the majority of the initial investment cost is associated with the solar field. Much progress has been made recently with the introduction of lightweight space frame structure designs and the development of efficient highly reflective film (www.reflectechsolar.com), such as ReflecTech™ and 3M’s new solar mirror film (http:// solutions.3m.com). The heat transfer fluid (HTF) system moves the heat from the solar field to the power block, and it requires an HTF with the following properties: high-temperature operation with high thermal stability, good heat transfer properties, low energy transportation losses, low vapor pressure, low freeze point, low hazard properties, good material compatibility, low hydrogen permeability of the steel pipe, and economical product and maintenance costs. As a result, synthetic organic HTFs are most suitable for the parabolic trough plants. For example, SYLTHERM™ 800, a high-temperature HTF by Dow Chemical Company, can be used in liquid form up to 400  C and meets many of the requirements delineated above (http://www.dow.com/webapps/lit/litorder.asp?filepath=heattrans/pdfs/noreg/176-01469.pdf&pdf= true). 
The last of the major components is the power block, which consists of a conventional steam



Fig. 12 Left: parabolic trough field. Right: power block at Nevada Solar One power plant (Source: www.acciona-na.com)

turbine-based system, the costs of which are well established; a number of new players from China and India have made the prices quite competitive. Any significant reduction in the cost of any of these three major components will result in a lower LEC for CSP systems. The most recent 64 MW (nominal) installation in Nevada (Nevada Solar One), shown in Fig. 12, uses 5.77 m aperture parabolic troughs with PTR-70 receivers, resulting in a geometric concentration ratio of 26. The total solar field is 357,200 m2 and the plant site area is 1.62 km2. Field inlet and outlet temperatures are 300 °C and 390 °C, respectively. The solar steam turbine inlet temperature is about 371 °C at 86.1 bar. The plant uses a supplementary gas heater to provide 2 % of the total heat requirement. The plant produces about 134 × 10^6 kWh of electricity annually, which yields a plant capacity factor of about 0.24. Coal power plants have a capacity factor on the order of 0.74; a 21 MW coal plant could therefore produce the equivalent annual electricity output. The solar-to-electricity efficiency of the plant (Fig. 13 shows the plant schematic) is estimated, based on the annual DNI of 2,573 kWh/m2, to be 14.6 %.
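The plant-level figures quoted above can be cross-checked with a few lines of arithmetic; the sketch below reuses only numbers given in this section (the 0.50/0.35/0.90 factors are the approximate collector, steam-cycle, and generator efficiencies quoted earlier).

```python
# Nevada Solar One figures quoted in the text.
nameplate_mw = 64.0
annual_kwh = 134e6
field_area_m2 = 357_200.0
annual_dni = 2573.0          # kWh/m2/year (Table 1, Las Vegas)
aperture_w, receiver_od = 5.77, 0.070   # m; PTR-70 outer diameter quoted earlier

capacity_factor = annual_kwh / (nameplate_mw * 1000.0 * 8760.0)
solar_to_electric = annual_kwh / (annual_dni * field_area_m2)
equivalent_coal_mw = annual_kwh / (8760.0 * 0.74) / 1000.0
concentration = aperture_w / (3.14159 * receiver_od)

print(f"Capacity factor:            {capacity_factor:.2f}")      # ~0.24
print(f"Solar-to-electric (annual): {solar_to_electric:.3f}")    # ~0.146
print(f"Equivalent coal plant:      {equivalent_coal_mw:.0f} MW")  # ~21 MW
print(f"Geometric concentration:    {concentration:.0f}")        # ~26

# Approximate efficiency chain for a trough plant, as quoted earlier:
eta_collector, eta_cycle, eta_generator = 0.50, 0.35, 0.90
print(f"Collector x cycle x generator ~ {eta_collector * eta_cycle * eta_generator:.2f}")  # ~0.16
```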



Fig. 13 Nevada Solar One plant schematic (Source: www.acciona-na.com)

Fig. 14 Nevada Solar One electricity production (MWe) and direct normal irradiance (W/m2) over a typical summer day (June 12, 2007), from 7:00 AM to 7:30 PM (Source: www.acciona-na.com)

The CO2 emission reduction (as compared to an equivalent coal plant) is estimated to be about 100,000 MT/year. Typical electricity production over a day is depicted in Fig. 14, where the hourly DNI variation is also displayed. The total installed cost of the project was $266 million, resulting in a nominal price of about $4.15/W. With medium-temperature (250–300 °C) parabolic troughs and advanced receiver designs, it is anticipated that installed costs may reach as low as $2.50/W, thus making parabolic trough systems competitive with many other renewable energy solutions. The ability to provide near-firm power through the use of thermal energy storage is gaining prominence. This characteristic differentiates CSP from PV technology, as the utilities can tailor the use of CSP electricity as needed. Thermal storage can also provide more uniform output over the day and increase annual electricity generation, thereby increasing the plant capacity factor.



Fig. 15 A schematic of a parabolic trough plant with added thermal storage: solar collector field, heat transfer fluid loop, HTF-salt heat exchanger, cold and hot salt tanks, solar evaporator, steam turbine, condenser, cooling system, and deaerator (Kearney and Morse 2010)

Fig. 16 The effect of storage on utility load during a typical day: relative value of generation, solar radiation (W/m2), and the output of a trough plant with 6 hours of thermal energy storage, plotted against hour ending (Kearney and Morse 2010)

For example, while solar energy availability peaks at noon, demand peaks in the late afternoon when the energy from the sun is already declining. Figure 15 shows a parabolic trough plant schematic with molten salt thermal storage incorporated (Kearney and Morse 2010). A high-temperature thermal energy storage option has been developed for parabolic troughs that uses molten nitrate salt as the storage medium in a two-tank system, with an oil-to-salt heat exchanger to transfer thermal energy from the solar field to the storage system (Laing et al. 2009). A more desirable option under development is an advanced heat transfer fluid (HTF) that is thermally stable at high temperatures, has a high thermal capacity and a low vapor pressure, and remains a liquid at ambient temperatures. The ability of storage to follow the utility system demand is clearly depicted in Fig. 16. Compared to the data shown in Fig. 14, where the electricity supply closely follows the sun's energy, the storage extends the availability of electricity through the evening hours. The performance of the SEGS plants, the successful development of Nevada Solar One, and the progress made by industry innovations have greatly increased interest in utility-scale CSP projects in the USA and Europe.
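To give a feel for the scale of the two-tank molten salt storage described above, here is a hedged sizing sketch. The sustained turbine output, power-block efficiency, salt heat capacity, and tank temperature difference are all illustrative assumptions rather than values from the chapter; only the 6-hour storage duration is taken from Fig. 16.

```python
# Rough sizing of a two-tank molten nitrate salt store for evening operation.
P_ELECTRIC = 50e6        # W, assumed turbine output to be sustained from storage
HOURS = 6.0              # h of storage, as in Fig. 16
ETA_POWER_BLOCK = 0.37   # assumed steam-cycle plus generator efficiency from stored heat

E_thermal = P_ELECTRIC / ETA_POWER_BLOCK * HOURS * 3600.0   # J of heat to be stored

CP_SALT = 1500.0         # J/(kg K), typical molten nitrate salt (assumed)
DT_TANKS = 100.0         # K between hot and cold tanks (assumed)

salt_mass_t = E_thermal / (CP_SALT * DT_TANKS) / 1000.0
print(f"Stored heat: {E_thermal / 3.6e9:.0f} MWh_th -> salt inventory ~ {salt_mass_t:,.0f} t")
# Around 800 MWh of heat and roughly 20,000 t of salt, i.e., the same order of
# magnitude as the storage systems built for commercial trough plants.
```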



Fig. 17 The principle of a typical linear Fresnel collector: primary Fresnel reflectors direct the sun's rays onto an absorber tube beneath a second-stage reflector (Häberle et al. 2002)

Abengoa Solar's proposed 250 MW Solana parabolic trough plant provides an example of the potential of this technology (http://www.abengoasolar.com/corp/web/en/our_projects/solana/).

Linear Fresnel Reflector Technology
In a linear Fresnel reflector, the concentrator is composed of many long rows of flat mirror segments, which concentrate beam radiation onto a fixed receiver located a few meters above and running parallel to the mirrors' axis of rotation (Fig. 17). Linear Fresnel follows the principles of parabolic trough technology but replaces the curved mirrors with long parallel lines of flat, or slightly curved, mirrors. Unlike parabolic troughs, where the aperture is limited to a few meters, a large aperture can be achieved by the linear Fresnel reflector at low cost. Although the original idea is quite old (Francia 1961, 1968), only recently has this concept been brought to fruition by two teams, in Australia and Belgium. The concentration ratios used in this system are quite similar to those achieved using parabolic troughs (10–80). Hence, the operating temperatures are also in the same range as those of parabolic trough systems, 250–400 °C. A picture of the Solarmundo prototype system (Häberle et al. 2002) erected in Liège, Belgium, is shown in Fig. 18. The collector area is 2,500 m2 (25 m wide and 100 m long) and the absorber tube has an outer diameter of 18 cm. The prototype used a black (nonselective) absorber; however, in order to achieve satisfactory thermal performance, a highly selective absorber coating that is stable at high operating temperatures must be applied. A pilot plant of 1 MW (peak) thermal output, similar to the prototype, was built at PSA in Almería, Spain. Water flows through the absorber pipe and is heated to temperatures of up to 450 °C. This produces steam (as in a conventional power plant), which is converted into electrical energy through the use of a steam turbine. A 5 MWe compact linear Fresnel reflector (CLFR) power plant was built by Ausra in California as a demonstration plant (www.ausra.com) (Fig. 19). The solar-field aperture area is 26,000 m2, with three lines, each 385 m long with a mirror width of 2 m. The plant produces 354 °C superheated steam at 70 bar. The CLFR utilizes multiple absorbers, an alternative to the basic linear Fresnel reflector (LFR) arrangement in which only one linear absorber on a single linear tower is used, leaving no choice in the direction of orientation of a given reflector. If the linear absorbers are close enough, individual reflectors have the option of directing reflected solar radiation to at least two absorbers. This additional degree of freedom gives the potential for more densely packed arrays, since patterns of alternating reflector inclination can be set up such that closely packed reflectors can be positioned without shading and blocking. The main advantages of linear Fresnel are its lower investment and operational costs. Firstly, the flat mirrors are cheaper and easier to produce than parabolic curved reflectors and so are readily available from



Fig. 18 2,500 m2 reflected area Fresnel concentrator prototype in Belgium (Haberle et al. 2002)

Fig. 19 The Ausra 5 MW Kimberlina solar thermal demonstration plant (Source: http://www.ausra.com)

manufacturers worldwide. The structure also has a low profile, with mirrors just 1 or 2 m aboveground. This means the plant can operate in strong winds and it can use a lightweight, simple collector structure. Although the technology offers a simpler and more cost-effective solution, it has not been tested long enough to determine its viability as an alternative to parabolic trough technologies.


Fig. 20 Left: Euro Dish (Source: http://www.sbp.de). Right: SAIC-Sandia dish (Source: http://www.energylan.sandia.gov/)

Dish-Stirling Technology Dish-Stirling systems are relatively small units that track the sun and focus solar energy onto a cavity receiver at the focal point of the reflector, where it is absorbed and transferred to a heat engine/generator. The ideal concentrator shape is a paraboloid of revolution (Fig. 20, left). Some concentrators approximate this shape with multiple spherically shaped mirrors supported with a truss structure (Fig. 20, right). An engine based on the Stirling cycle is most commonly used in this application due to its use of an external heat supply that is indifferent to how the heat is generated (Saad 1997). Hence, it is an ideal candidate to convert solar heat into mechanical energy. The high-efficiency conversion process involves a closed-cycle engine using an internal working fluid (usually hydrogen or helium) that is recycled through the engine. The working fluid is heated and pressurized by the solar receiver, which in turn powers the Stirling engine. Stirling engines have decades of recorded operating history. For over 20 years, the Stirling Energy Systems (www.stirlingenergy.com) dish-Stirling system has held the world's efficiency record for converting solar energy into electricity with a record of 31.25 % efficiency. Their size typically ranges from 1 to 25 kW with a dish that is 5–15 m in diameter. Because of their size, they are particularly well suited for decentralized applications, such as remote stand-alone power systems. One of the most advanced dual-axis tracking parabolic dish-Stirling systems is manufactured by Stirling Energy Systems (SES), and it produces 25 kWe peak power (at 1,000 W/m2 DNI) (www.stirlingenergy.com). This unique design uses a radial solar concentrator dish structure that supports an array of curved glass mirror facets as shown in Fig. 21. The dish has a diameter of about 11.6 m (glass surface area 90 m2), which results in a concentration ratio of about 7,500. The heat input from the sun is focused onto solar receiver tubes (at a focal length of 7.45 m) that contain hydrogen gas. The solar receiver is an external heat exchanger that absorbs the incoming solar thermal energy. This heats and pressurizes the gas in the heat exchanger tubing, which in turn powers the Stirling engine at a typical operating temperature of about 800 °C. A generator that is connected to the engine then provides the electrical output. Waste heat from the engine is transferred to the ambient air via a radiator system similar to those used in automobiles. The gas is cooled by a radiator system and is continually recycled within the engine during the power cycle. The solar energy to electricity peak conversion efficiency is reported as 31.25 %. A much smaller 3 kWe advanced parabolic dish-Stirling system is manufactured by Infinia (Fig. 21). The single free-piston Stirling engine uses helium in a hermetically sealed system, thereby avoiding maintenance issues generally associated with moving parts. The solar to electric peak efficiency is reported to be around 24 %. Dish-Stirling systems are quite flexible in terms of size and scale of deployment. Owing to their modular design, they are capable of both small-scale distributed power output and large-scale, utility-scale projects. Although dish-Stirling systems have been tested and proven for over two decades with no


Fig. 21 Left: SES Sun Catcher™ (Source: http://www.stirlingenergy.com/). Right: Power Dish™ by Infinia (Source: http:// www.infiniacorp.com)

Fig. 22 Left: 1.5 MW Maricopa Solar installation (Source: http://www.srpnet.com/ maricopasolar). Right: 1 MW solar installation in Villarrobledo, Spain (Source: http://www.infiniacorp.com)

appreciable loss in the key performance criteria, there were no utility-scale plants in operation until very recently. Within the past year, 60 SES SunCatcher™ systems were installed as part of the Maricopa Solar demonstration plant in Arizona (Fig. 22). The plant is currently operational and it is capable of producing 1.5 MWe. Two other plants in California, totaling over 1.4 GW are slated to begin construction soon using thousands of the SES systems. A similar 1 MW system is under construction in Villarrobledo, Spain, using the Infinia 3 kW units (www.infiniacorp.com). The successful installation and operation of these dish-Stirling systems in a scale beyond a handful of units will demonstrate their technical viability for the Page 15


Fig. 23 Left: Solar One/Two central solar tower receiver plant. Right: schematic of the plant’s major components (Source: USDOE)

large-, utility-scale plants. Unlike steam cycles, this technology uses no water in the power conversion process, a key benefit compared to other CSP plants. Current installed cost for the dish-Stirling systems at demonstration scale, with few units (mostly built in semiautomated manufacturing facilities), is about $6,000/kW. This cost is approximately distributed with 40 % in the concentrator and controls, 33 % in the power conversion unit, and the remaining 27 % of the costs in the balance of plant and installation of the system. Mass production techniques, such as those employed at the automotive scale, will provide great cost benefits to these systems. With the economies of scale in their favor and because of higher solar to electricity efficiency (25–30 %), the dish-Stirling systems will become competitive with the photovoltaic and parabolic trough systems. However, unlike the parabolic trough systems, the 20-year life cycle costs of these systems are yet to be determined.
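As a rough cross-check of the SES dish figures quoted above, the short sketch below combines the stated aperture area, rated DNI, peak efficiency, and concentration ratio. The assumption of a circular receiver aperture is an illustrative simplification and is not stated in the text.

```python
import math

# Back-of-envelope check of the SES dish-Stirling figures quoted above
# (25 kWe at 1,000 W/m2 DNI, ~90 m2 of glass, concentration ratio ~7,500,
# peak solar-to-electric efficiency 31.25 %). The circular-receiver shape
# is an assumption made only for illustration.

aperture_area_m2 = 90.0        # mirror (glass) area of the dish
dni_w_m2 = 1000.0              # direct normal irradiance at rated conditions
peak_efficiency = 0.3125       # record solar-to-electric efficiency
concentration_ratio = 7500.0   # aperture area / receiver aperture area

electric_kw = aperture_area_m2 * dni_w_m2 * peak_efficiency / 1000.0
receiver_area_m2 = aperture_area_m2 / concentration_ratio
receiver_diameter_cm = 200.0 * math.sqrt(receiver_area_m2 / math.pi)

print(f"electrical output at peak efficiency: {electric_kw:.1f} kW")
print(f"implied receiver aperture: {receiver_area_m2 * 1e4:.0f} cm^2 "
      f"(~{receiver_diameter_cm:.0f} cm diameter if circular)")
# ~28 kW versus the 25 kWe rating: the gap reflects optical, receiver and
# parasitic losses at rated rather than record conditions.
```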

Power Tower Technology The solar central receiver power tower is a concept that has been under study both in the USA and Spain over the last three decades. This technique utilizes a central power tower that is surrounded by a large array of two-axis tracking mirrors – termed heliostats – that reflect direct solar radiation onto a fixed receiver located on the top of the tower. The typical concentration ratio for this approach is in excess of 400. Within the receiver, a fluid transfers the absorbed solar heat to the power block where it is used to generate steam for a Rankine cycle steam engine/generator. Until recently, the largest demonstration plant employing this technology was the 11.7 MWe “Solar One” plant in Barstow, California (Fig. 23), that was


Fig. 24 Typical advanced heliostat field (Source: Plataforma Solar de Almeria – PSA, Spain)

constructed and operated in the 1980s. Solar One operated at a nominal temperature of 510  C and it had a peak solar to electric efficiency of about 8.7 %. In the 1990s, Solar One was converted to “Solar Two” through the addition of additional heliostats and a two-tank molten salt storage system to improve the capacity factor of the system (www1.eere.energy.gov/library/pdfs/28751.pdf). Two important components of the power tower technology are the heliostats and the receiver. Heliostats are the most important cost element of the power tower plant, and they typically contribute to about 50 % of the total plant cost. Consequently, much attention has been paid to reduce the cost of heliostats to improve the economic viability of the plant. The most commonly used design is the two-axis sun-tracking pedestal-mounted system as shown in Fig. 24. A heliostat consists of a large mirror with the motorized mechanisms to actuate it, such that it reflects sunlight onto a given target throughout the day. A heliostat array is a collection of heliostats that focus sunlight continuously on a central receiver. A 148 m2 ATS glass/metal heliostat has successfully operated for over 20 years at the National Solar Thermal Test Facility in Albuquerque, USA, without much degradation of the beam quality. It has also survived high winds in excess of 40 m/s. Depending upon the production rates, the installed price of the ATS heliostat was estimated to be between $126 and 164 per square meter (Kolb et al. 2007). With increasing installations, the estimated installation price will be around $90/m2. The Sandia study also suggests that large heliostats are more cost efficient than small ones on a cost per square meter basis (Kolb et al. 2007). A relatively new facility that began operation in 2006 is the PS10 solar power tower in Spain (10 MW Solar Thermal Power Plant for Southern Spain 2006). The main goal of the PS10 project was to design, construct, and operate a power tower on a commercial basis and produce electricity in a grid-connected mode. This 11 MWe facility generates about 23,000 MWh of grid-connected electricity annually at an estimated solar to electricity efficiency of about 15 %. However, it should be noted that the plant also uses natural gas for 12–15 % of its electricity production. The solar radiation is concentrated through the use of 624 reflective heliostats, each of which has a 121 m2 curved reflective surface, arranged in 35 circular rows, as shown in Fig. 25. As a result, the total reflective surface is 75,216 m2. The heliostats concentrate the solar radiation to a cavity receiver that is located at the top of a 115 m high tower. The cavity receiver is basically a forced circulation radiant boiler designed to use the thermal energy supplied by the concentrated solar radiation flux to produce more than 100,000 kg/h of saturated steam at 40 bar and 250  C. The saturated steam is then sent to the turbine where it expands to produce mechanical work and electricity. For cloudy transient periods, the plant has a saturated water thermal storage system with a thermal capacity of 20 MWh, which is equivalent to an effective operational capacity of 50 min at 50 % turbine workload. This is a relatively short storage time, partially because the tower uses water rather than molten



Fig. 25 PS10 11 MW central receiver tower project in southern Spain. Left: plant schematic. Right: the PS10 plant aerial picture (Source: Abengoa Solar)

salt for heat storage. The water is held in thermally clad tanks and reaches temperatures of 250–255 °C (instead of around 600 °C for systems using salt). The investment cost of the PS10 plant was about 35 million euros, thus resulting in an installed cost of about 3,000 euros per kWe. Of this cost, the heliostat cost was reported to be about 140 euros/m2. From this experience, it appears that about 30 % of the total installed cost of a solar power tower goes toward the heliostat expense. A second-generation plant, referred to as PS20, has twice the PS10 output (20 MW), with 1,255 two-axis sun-tracking heliostats. The receiver is located on top of a 165 m tower, and it utilizes the same technology as that of PS10 for electricity generation. The new plant features include control and operational systems enhancements, an improved thermal energy storage system, and a higher efficiency receiver. A utility-scale 400 MW solar tower power project, referred to as the “Ivanpah Solar Power Complex,” is being built in California by a consortium led by Bright Source Energy, with operation planned for 2013 (www.brightsourceenergy.com). The heliostats in this project will consist of smaller flat mirrors, termed the LPT 550, each having a reflecting area of 14.4 m2. Fifty thousand of these LPT 550 heliostats will be required for every 100 MW of installed capacity. The receiver is a traditional high-efficiency boiler


Fig. 26 The LPT 550 central receiver tower demonstration plant in Israel’s Negev desert. Left: heliostat field with central tower. Right: 7.22 m2 heliostat (Source: Brightsource Energy)

positioned on top of the tower. The boiler tubes in the receiver are coated with a solar-selective material that maximizes energy absorbance, and there are sections within the receiver for steam generation, superheating, and reheating. This results in the generation of superheated steam at 550 °C and 160 bar (unlike the saturated steam that is produced in the PS10 and PS20). The power block consists of a conventional Siemens steam turbine generator with a reheat cycle and auxiliary functions of heat rejection, water treatment, water disposal, and grid interconnection capabilities. The technology demonstration plant, as shown in Fig. 26, has 1,641 heliostats (reflecting area 12,000 m2), with each measuring 2.25 × 3.21 m (7.22 m2). The tower height was 75 m (60 m tower plus 15 m receiver), and the thermal energy collected by the receiver was between 4.5 and 6 MWth. Because of the higher operating temperature, the solar to electrical efficiency of these plants is expected to be about 20 %. Although there is not yet any experience with utility-scale plant installations, it appears that the installation cost of these plants may be in the range of $3,000/kWe. In an attempt to bring down the installed cost of the solar power plant technology, eSolar, a California company, introduced a modular/distributed tower design with a 1 m2 reflected area heliostat (www.esolar.com). These much smaller heliostats, with a fully automated two-axis sun-tracking system, are easy to assemble and install in large numbers. Each central tower unit is capable of producing 2.5 MWe through the use of 12,000 mirrors that reflect the radiation onto a 47 m high tower. The thermal receiver in the tower has external evaporator panels for producing superheated steam at 440 °C and 60 bar. Figure 27 shows a technology demonstration plant with a two-tower system that nominally produces 5 MWe of electricity. Since the performance details of the plant are not disclosed in the public domain, it is difficult to assess the solar to electric efficiency and the installed plant cost. In principle, the smaller heliostats are easy to manufacture, install, and maintain. However, the solar energy collection may involve significant spillage losses, with part of the reflected radiation failing to reach the thermal receiver. Hence, it is important to study the pilot plant performance characteristics before a utility-scale plant design is considered.
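The PS10 figures quoted earlier (11 MWe rating, roughly 23,000 MWh/yr, 75,216 m2 of heliostat area, about 15 % solar-to-electric efficiency) can be checked with a simple ratio calculation. The annual DNI used below is an assumed value for the Seville area and is not stated in the text.

```python
# Rough consistency check of the PS10 power tower numbers quoted above.
# The annual DNI is an assumption for southern Spain; all other values are
# taken directly from the text.

rated_mw = 11.0
annual_mwh = 23_000.0
mirror_area_m2 = 75_216.0
assumed_annual_dni_kwh_m2 = 2_000.0   # assumed, typical for the Seville region

capacity_factor = annual_mwh / (rated_mw * 8760.0)
solar_to_electric = (annual_mwh * 1000.0) / (mirror_area_m2 * assumed_annual_dni_kwh_m2)

print(f"capacity factor:           {capacity_factor:.1%}")   # ~24 %
print(f"implied annual efficiency: {solar_to_electric:.1%}") # ~15 %, as quoted
# Note: 12-15 % of the annual output is backed by natural gas, so the purely
# solar efficiency is slightly lower than this simple ratio suggests.
```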

Solar Chimney Power Plant Technology A solar chimney power plant has a high chimney (tower) that is surrounded by a large collector roof made of either glass or resistive plastic supported on a framework (Fig. 28) (von Backstrom et al. 2008). Toward


Fig. 27 5 MW twin central receiver tower facility with 1 m2 heliostats in California (Source: eSolar)

its center, the roof curves upward to join the chimney, thus creating a funnel. Solar radiation (direct and diffuse) strikes the collector roof, which transmits part of the energy and thereby heats up the ground and the air underneath the collector roof. At the ground surface, part of the transmitted energy is absorbed and the rest is reflected back to the roof, where it is subsequently reflected to the ground. The multiple reflections result in a higher fraction of energy absorbed by the ground. The warm ground surface heats the adjacent air through natural convection. The buoyant air follows the upward incline of the roof until it reaches the chimney, thereby drawing in more air at the collector perimeter. The air flow set up by natural and forced convection between the ground and the collector passes at high speed through the chimney and drives wind generators at its bottom. As the air flows from the collector perimeter toward the chimney, its temperature increases, while the velocity remains constant due to the increasing collector height at the center, as shown in the schematic (Fig. 28). The pressure difference between the outside cold air and the hot air inside the chimney causes the air to flow through the turbine. The ground under the collector roof behaves as a storage medium and can even heat up the air for a significant time after sunset. The efficiency of the solar chimney power plant is below 2 % and depends mainly on the height of the tower. As a result, these power plants can only be constructed on land that is very cheap or free. Such areas are usually situated in desert regions. However, this approach is not without other uses, as the outer area under the collector roof can also be utilized as a greenhouse for agricultural purposes. A 200 m high solar chimney demonstration plant was constructed in Manzanares, Spain (http://www.youtube.com/watch?v=XCGVTYtJEFk). The peak power output of this demonstration plant was 50 kW, and it operated for over 8 years without any significant degradation in performance. However, as with other CSP plants, the minimum economical size of the solar chimney power plant is in the several MW range. Although no pilot plant has been built to demonstrate the viability of this technology in the MW range, computer simulations suggest its promise as a low-cost solar thermal technology. Figure 29 shows the results from a simulation of a large-scale solar chimney power plant with a 5,000 m collector diameter (20 km2 area), a chimney height of 1,000 m, and an inside diameter of 210 m (von Backstrom et al. 2008). With the vast expanse of unpopulated land in Australia, it may be possible to economically erect a solar chimney plant of this size.
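The strong dependence of plant efficiency on tower height can be illustrated with the commonly used approximation for the driving (chimney) efficiency, g·H/(cp·T0). The collector and turbine efficiencies below are illustrative assumptions, not values from the text.

```python
# Minimal sketch of why solar-chimney efficiency scales with tower height.
# Chimney efficiency is approximated as g*H/(cp*T0); the collector and
# turbine efficiencies are assumed values for illustration only.

G = 9.81            # gravitational acceleration, m/s^2
CP = 1005.0         # specific heat of air, J/(kg K)
T_AMBIENT = 293.0   # ambient temperature, K (assumed ~20 C)

ETA_COLLECTOR = 0.5   # assumed fraction of solar input transferred to the air
ETA_TURBINE = 0.8     # assumed turbine/generator efficiency

for height_m in (200.0, 1000.0):   # Manzanares prototype vs. large-scale study
    eta_chimney = G * height_m / (CP * T_AMBIENT)
    eta_plant = ETA_COLLECTOR * ETA_TURBINE * eta_chimney
    print(f"H = {height_m:6.0f} m: chimney {eta_chimney:.2%}, overall ~{eta_plant:.2%}")
# A 200 m tower gives an overall efficiency well below 1 %; even a 1,000 m
# tower stays below 2 %, consistent with the figure quoted above.
```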

Nonimaging Concentrator Technology All of the concentrating technologies discussed thus far require some type of active solar tracking in order to account for the change in the elevation of the sun on any given day and throughout the year. Nonimaging concentrators, such as the compound parabolic concentrator (CPC), allow for the use of a



Fig. 28 Left: an artist rendering of a 5 MW solar chimney plant (Source: http://www.sbp.de). Right: a schematic indicating the main components of the plant (von Backstrom et al. 2008)


Fig. 29 Simulated results of electrical power output of solar chimney power plant during summer and winter (von Backstrom et al. 2008)

non-tracking stationary concentrator that can account for the daily and annual excursion in solar elevation (Meinel and Meinel 1976). Figure 30 illustrates how the light rays in a commercial CPC collector are concentrated when the source is directly overhead (left), such as solar noon on the equinox, and when it is at the acceptance angle of the CPC design (right), such as would be observed during the solstice.
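The link between the acceptance angle shown in Fig. 30 and the achievable concentration can be made explicit with the standard ideal limit for a two-dimensional (trough-type) CPC, C_max = 1/sin(theta_acceptance). This textbook relation (Winston) is not stated in the text; the sketch below simply shows why a concentration ratio of about 2 is what a stationary design can offer.

```python
import math

# Ideal 2-D CPC limit: C_max = 1 / sin(acceptance half-angle).
# This relation is standard nonimaging optics and is used here only to
# illustrate the concentration ratio of ~2 quoted below for a stationary CPC.

def max_concentration_2d(acceptance_half_angle_deg: float) -> float:
    return 1.0 / math.sin(math.radians(acceptance_half_angle_deg))

for half_angle in (23.45, 30.0, 45.0):
    print(f"acceptance half-angle {half_angle:5.2f} deg -> "
          f"C_max = {max_concentration_2d(half_angle):.2f}")

# A half-angle of ~30 deg (C_max = 2) comfortably covers the +/-23.45 deg
# seasonal excursion of the solar elevation, which is why a CPC with C ~ 2
# can remain stationary all year.
```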


Fig. 30 Ray tracing diagrams for the Winston Series CPC. Left – incoming light rays directly overhead. Right – incoming light rays at the acceptance angle of the design (Source: www.solargenix.com)

Fig. 31 Cutaway of multiple parabolic trough flat-plate collector

The stationary benefit of the CPC comes at the expense of a concentration ratio of 2 for the design. This is an order of magnitude lower than what can be achieved through the use of a parabolic trough, but it is twice that of a typical flat-plate collector. As such, the CPC design is capable of producing sensible heat at temperatures well in excess of 120 °C, thus making it a good candidate for use with an absorption refrigeration system. It can also be paired with a low-temperature power cycle, such as an organic Rankine cycle, to generate electricity. The resulting system would be fairly inefficient when compared to a dish-Stirling system, but it would have a cost-to-efficiency ratio that would make it attractive for use in rural areas. Industrial process heat from solar energy is becoming important in many industries, such as the pharmaceutical and textile industries. In this context, the goal is to create a solar collector that can produce thermal energy in the medium temperature range (100–150 °C). To achieve this goal, a collector is designed to emulate features of existing flat-plate and CPC collectors while utilizing parabolic trough collector (PTC) geometry (Pandolfini et al. 2013). The Multiple Parabolic Reflector Flat Panel Collector (MPFC) combines aspects of a stationary panel with the reflector geometry of a PTC. Multiple parabolic reflectors are arranged in an array within the envelope of the panel (Fig. 31). A tubular receiver is associated with each reflector, collecting the reflected light. The receivers are able to traverse within the panel independently from the reflectors. This allows for collection of concentrated light while the entire panel remains stationary. The main feature of the MPFC is its moving receiver (which differs from conventional stationary reflector designs). The receiver moves at a constant height, passing through the focal point of the reflector. A motor controls the movement of the receiver assembly. The motor moves a rack-and-


Fig. 32 A schematic of a single section of the MPFC

Fig. 33 Reflected rays at θi = 0° (top) and θi = 45° (bottom)

pinion system that in turn moves the receiver array. The position of the receiver must be accurate because it determines the optical gain of the receiver. The placement of the receiver causes some loss of solar energy that is determined by the incident angle of the sun on the reflector, θi. However, the reflector and the receiver designs are both optimized to collect solar energy over the course of a year. Analysis of this new design is based on two parameters that affect the amount of light collected: the concentration ratio, C, and the rim angle, θrim (Fig. 32). The concentration ratio is defined as the ratio of the collector aperture area to the receiver's surface area. The rim angle defines the angle swept around the focal point, from the surface normal to the edge of the parabolic reflector. Ray tracing is used to determine an optimal rim angle. The amount of light accepted by the receiver at a concentration ratio of 6 is found to be greatest for 50° < θrim < 60°. Figure 33 shows the receiver placed at the focal height f. The receiver is allowed to be positioned at any horizontal position at this height. The ideal placement for the receiver is where the edge of the receiver is tangent to the caustic surface, with the center of the receiver within the caustic envelope. The relative position of the receiver will change as θi changes, as seen in Fig. 33.
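What the reported optimum rim angle implies for the proportions of each small reflector can be seen from the standard parabola relation f/W = 1/(4·tan(θrim/2)), where W is the aperture width and f the focal length. This relation is ordinary parabolic geometry and is not given in the text; it is used here only as an illustrative sketch.

```python
import math

# Standard parabola geometry: f / W = 1 / (4 * tan(theta_rim / 2)).
# Used here only to illustrate what a 50-60 deg rim angle means for the
# focal length of each reflector inside the MPFC panel.

def focal_length_over_aperture(rim_angle_deg: float) -> float:
    return 1.0 / (4.0 * math.tan(math.radians(rim_angle_deg) / 2.0))

for rim_angle in (50.0, 55.0, 60.0):
    print(f"theta_rim = {rim_angle:.0f} deg -> f/W = "
          f"{focal_length_over_aperture(rim_angle):.2f}")
# 50-60 deg corresponds to focal lengths of roughly 0.43-0.54 times the
# aperture width, i.e., fairly shallow parabolas that fit within a flat
# panel envelope.
```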


The MPFC remains stationary as the sun moves across the sky; the only tracking motion is that of the receiver tubes, which move with respect to the parabolic reflectors, so the panel accepts varying amounts of radiation with minimal tracking. The simplicity of the MPFC design makes it a cost-viable alternative to other collector types for producing heat at medium temperatures. For example, the levelized cost of energy (LCOE) is estimated at $0.006/kWhth at a collection temperature of about 125 °C over its lifetime of about 20 years.

Concentrating Solar Power (Thermal) Systems Economics The concentrating solar power (CSP) electricity generation technologies examined in the previous sections are the most dominant and have the greatest potential for commercialization. Current projects are targeted to meet specific needs at an economic benefit. Once success is achieved, the price points will come down, and good economics will drive CSP projects. The following discussion is included here to indicate that CSP is becoming more economically attractive. Component manufacturers, utilities, and regulators are making decisions now that will determine the scale, structure, and performance of the CSP industry. Since each country's approach to the renewable electricity industry is different, only the observations that are common globally are included here. When considering the economic viability of CSP, the levelized electricity cost (LEC) is often calculated and compared among different technologies. Therefore, in the following, a general method is given for determining the LEC. The LEC depends on many variables related to the site, the technology chosen, and the plant financing. The LEC is defined as (Pitz-Paal et al. 2005)

LEC = (CRF · KI + KOM + KF) / E

where

CRF = kd (1 + kd)^n / [(1 + kd)^n − 1] + ki

and CRF is the capital recovery factor, KI the total investment of the plant, KOM the annual operation and maintenance costs, KF the annual fuel costs (any fossil fuel, such as natural gas), E the annual net electricity generation, kd the debt interest rate, n the depreciation period in years (30), and ki the annual insurance rate (1 %). The many factors that determine the LEC vary greatly due to government subsidies, tax incentives, and annual net electricity production. One of the key parameters in the above formula is the annual electricity generation, which depends largely on the available DNI at the plant location. For example, Fig. 34 shows the impact of the annual DNI on the annual power generation and the LEC of a 50 MWe parabolic trough SEGS-type power plant with a 375,000 m2 solar field. The economic parameters (e.g., discount rate of 6.5 %, solar-field costs of 200 euro/m2, power block costs of 1,000 euro/kW, and O&M costs of 3.7 million euro per annum) have been kept constant (Quaschning et al. 2001). Although some of the financial data may be outdated, the intent here is simply to show that the annual electricity generation is approximately proportional to the DNI. This suggests that a careful analysis needs to be carried out to determine an economically optimized project site, which depends not only on the solar irradiance (DNI) but also on many other influencing parameters. The present evaluation estimates (Fig. 35), drawing on a number of sources, that the LEC for CSP systems, shown here as cost of electricity (COE), will be around $0.15–0.20/kWh, assuming a load demand between 9:00 am and 11:00 pm. However, absolute cost data on many of the CSP systems considered here, and on those planned for commercial deployment around the world, are largely unavailable, so these numbers must be considered with some caution. Cost reductions due to technological improvements, such


Fig. 34 The variation of annual electricity generation and LEC for a 50 MWe parabolic trough plant with a 375,000 m2 solar field for fifty chosen sites (Quaschning et al. 2001)


Fig. 35 The variation of the LEC for concentrating solar thermal power (Source: National Renewable Energy Laboratory, USA)

as the implementation of thermal storage, and large-scale deployment are estimated to be around 10–30 % for parabolic trough systems, 20–35 % for central receiver systems, and 20–40 % for dish-Stirling systems (Quaschning et al. 2001). Given the rapid deployment of CSP systems, it is suggested that within the next 5 years, the LEC will be $0.10–0.15/kWh. With the additional benefit of carbon credits, CSP technology is poised to become the dominant solar electricity generating plant development in places where there is good DNI.
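The LEC definition given above can be made concrete with the 50 MWe parabolic trough example parameters quoted from Quaschning et al. (2001). The annual net generation E used below is an assumed value for a reasonably sunny site; Fig. 34 shows how strongly it, and hence the LEC, varies with DNI.

```python
# Minimal sketch of the LEC calculation defined above (Pitz-Paal et al. 2005),
# using the 50 MWe trough plant parameters quoted in the text. The annual net
# output E is an assumption; all other inputs follow the text.

def capital_recovery_factor(k_d: float, k_i: float, n: int) -> float:
    return k_d * (1 + k_d) ** n / ((1 + k_d) ** n - 1) + k_i

k_d = 0.065            # debt interest (discount) rate
k_i = 0.01             # annual insurance rate
n = 30                 # depreciation period, years

K_I = 375_000 * 200 + 50_000 * 1_000   # solar field + power block, euro
K_OM = 3_700_000                        # annual O&M costs, euro
K_F = 0                                 # solar-only operation assumed
E = 150_000_000                         # assumed annual net output, kWh

crf = capital_recovery_factor(k_d, k_i, n)
lec = (crf * K_I + K_OM + K_F) / E
print(f"CRF = {crf:.4f}")              # ~0.087
print(f"LEC = {lec:.3f} euro/kWh")     # ~0.10 euro/kWh at this assumed output
```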

Summary and Conclusions Concentrating solar thermal power (CSP) is a proven technology, which has significant potential for further development and achieving low cost. The history of the Solar Electricity Generating Systems


(SEGS) in California demonstrates impressive cost reductions achieved up to now, with electricity costs ranging today between $0.10 and $0.15/kWh. Advanced technologies, mass production, economies of scale, and improved operation will allow for a reduction in the cost of solar-produced electricity to a competitive level within the next 5–10 years. Hybrid solar-and-fuel plants, at favorable sites, making use of special schemes of finance, can already deliver competitively priced electricity today. With over two decades of experience, parabolic trough technology is mature enough that its investment cost estimates can be made with confidence. Given the rapid growth contemplated within the immediate future (mostly in the southwest USA) and medium temperature CSP systems (250–300  C), it is very likely that the LEC price target of $0.10/kWh may well be met within the next 3 years using the parabolic trough technology. When the parabolic trough technology is combined with biomass gasification in a hybrid system, the overall plant efficiency will be substantially increased, thus resulting in a relatively low LEC. This is an approach that is ideally suited for regions of moderate DNI (5.2–5.5 kWh/m2/day) and for distributed power applications (1–5 MW power plants). A greater opportunity lies in the thousands of niche markets that are primed for smaller-scale (1–10 MW) parabolic trough projects at a lower cost. The central receiver tower (CRT) systems are being pursued aggressively by a number of companies with approaches that mostly differ in the heliostat size. The distributed approach with multiple towers appears to gain prominence because of their lower installation costs. Both parabolic trough and central tower systems benefit from heat storage, especially when the power demand is during off-peak solar hours. The CRT systems are best suited in areas of good annual solar insolation (>2,000 kWh/m2/year) and utility-scale plant sizes (>50 MW). Because of the steam cycle used in the power block, the water availability can be an issue, especially in desert regions. The problem can be overcome by the use of an air-cooling system, which will have the adverse effect of reducing the overall plant efficiency. The recent advances made in dish-Stirling systems in improving their solar to electric efficiency in the range of 30 % make them attractive for utility-scale power plant implementation. Because of their small unit electricity output (100 kW) currently occupy the biggest market share and are expected to principally account for wind deployment in the near future (Ackermann and Söder 2002). This section therefore specifically focuses on their designs that can be divided into three parts (NREL 2006): • A tower on top of which the nacelle is mounted • The rotor that includes blades • The generator that includes an electrical generator, control electronics, and most of the time a gearbox

The Tower The tower is an important part of a wind turbine primarily because it supports the nacelle and the rotor. The tubular steel tower design is the most widespread technological choice, even though there exist other alternatives like lattice towers or concrete towers. Towers are conical, with their diameter decreasing toward the nacelle, to enhance their strength on the one hand and reduce their material intensity on the other.


In areas with a high surface drag, it is better to erect tall towers since the wind blows faster farther away from the ground. More specifically, wind speed follows in daytime the wind profile power law, which foresees that wind speed rises proportionally to the seventh root of altitude (Peterson and Hennessey 1978). Consequently, doubling the altitude of a turbine theoretically increases the expected wind speeds by 10 % and the expected power by 34 %. However, to avoid buckling, increasing the tower height generally entails enlarging the diameter of the tower as well.
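The figures quoted above follow directly from the one-seventh power law; the reference height and speed in the sketch below are arbitrary example values.

```python
# Quick check of the 1/7th wind profile power law quoted above: doubling the
# hub height raises the expected wind speed by ~10 % and, since available
# power scales with the cube of wind speed, the expected power by ~34 %.

def wind_speed_at(h: float, v_ref: float, h_ref: float, alpha: float = 1 / 7) -> float:
    """Wind profile power law: v(h) = v_ref * (h / h_ref) ** alpha."""
    return v_ref * (h / h_ref) ** alpha

v_ref, h_ref = 6.0, 40.0            # example reference: 6 m/s at 40 m height
v_doubled = wind_speed_at(2 * h_ref, v_ref, h_ref)

speed_gain = v_doubled / v_ref - 1
power_gain = (v_doubled / v_ref) ** 3 - 1
print(f"speed gain from doubling height: {speed_gain:.1%}")   # ~10 %
print(f"power gain from doubling height: {power_gain:.1%}")   # ~34 %
```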

Rotor: Blade Design and Count Turbine blades, sometimes slightly tilted up, are positioned significantly ahead of the tower and made rigid to prevent them from being shoved into the tower by high winds. Most modern large-scale wind turbine rotor blades are therefore made of glass fiber-reinforced plastics (e.g., epoxy), which, besides, allows for low rotational inertia and quick accelerations, should gusts of wind occur (variable-speed turbines). In contrast, previous generations of (fixed-speed) wind turbines whose rotational speed is imposed by the AC frequency of the power lines are manufactured with heavier steel blades and therefore higher inertia (Sahin 2004). The determination of the number of blades depends on the purpose of the wind turbine as aforementioned. Wind turbines for electricity generation usually use either two or three blades even though two-bladed designs are more the exception than the norm for large-scale grid-connected horizontal-axis wind turbines. The rotor moment of inertia of a three-bladed wind turbine is simpler to comprehend than that of a two-bladed one. In addition, three-bladed wind turbines are often better accepted for their visual aesthetics and are responsible for lower audible noise than their two-bladed counterparts (Thresher and Dodge 1998). Furthermore, during the yawing (alignment) of the nacelle in or out of the wind, a cyclic load is exercised on the root end of every blade and whose magnitude is function of the blade position. Threebladed turbines see their cyclic load symmetrically balanced when combined at the turbine drivetrain shaft, contributing to smoother maneuvers during yawing. On the flip side, two-bladed wind turbines, when equipped with a pivoting teetered hub, can also nearly filter out the cyclic loads into the turbine driveshaft and system during yawing. Moreover, the tower top weight is lighter and so consequently is the whole supporting structure, lowering associated costs. In addition, two-bladed turbines can have a higher rotational speed than their three-bladed counterparts. Indeed, the degree of rigidity necessary to avoid hindrance with the tower imposes a lower limit on the thinness of the blades and subsequently (a lower limit) on their mass. However, this is only true for upwind machines as bending of blades enhances tower clearance for downwind ones. Likewise, cheaper gearbox and generator costs can be achieved with two-bladed turbines as faster rotational speeds reduce peak torques in the turbine drivetrain. Lastly, the fewer the number of blades the higher the system reliability is, chiefly through the dynamic loading of the rotor into the tower and turbine drivetrain systems.

Electrical Generator The energy captured by the blades is subsequently passed on to the generator via a transmission system consisting of a rotor shaft with bearings, brakes, an optional gearbox, as well as a generator. Whereas the power generation industry relies almost exclusively on synchronous generators because of their variable reactive power production (voltage control), most wind turbines generate electricity through (six-pole induction) asynchronous generators that are directly connected with the electricity grid. However, some designs also use directly driven synchronous generators (Ackermann and Söder 2002). Electrical generators produce AC (alternating current) power by definition. While the previous generations of (fixed-speed) wind turbines spin at a constant speed governed by the frequency of the


grid they are connected to, new (variable-speed) ones most of the time rotate at the speed that produces electricity most efficiently given the actual wind conditions. This can be achieved either by using direct AC-to-AC frequency converters (cycloconverters) or by using DC link converters (AC to DC to AC). Although variable-speed turbines require costly power electronics that also introduce additional power losses, a substantially larger fraction of the wind energy can be harnessed by the rotor (NREL 2001).

Control Electronics Wind conditions being highly variable across sites and over time, a wind turbine is designed to operate over a large range of wind speeds (rated power is usually reached at 12–16 m/s). Therefore, to avoid any potential damage to the primary turbine structure during operation in strong winds while ensuring an optimal aerodynamic efficiency of the rotor in light ones, the rotational speed and torque of the rotor must be permanently monitored and controlled. There are several approaches to successfully achieve this (power output) control.

Stall Regulation This technique requires the rotor to spin at a constant speed (independent of the wind speed). Above a certain wind speed, the airflow separates from the blade surface and creates turbulence right behind the blades. This is called the stall effect. As a result, the aerodynamic lift forces (and the induced drag associated with lift) are reduced, and so subsequently is the power output of the rotor (Ackermann and Söder 2002). Although the stall effect is a complicated dynamic process to model, stalling is an easy power output control to implement in practice, as the faster the wind blows, the larger the stall effect is (passive regulation). However, stalling increases the (ordinary) drag by increasing the cross section of the blade facing the wind.

Pitch Control Pitching the angle of attack of the blades into (respectively out of) the wind increases (respectively reduces) the aerodynamic forces and subsequently the power output of the rotor. One of the main technical challenges associated with designing pitch-controlled wind turbines is getting the blades to furl (to swing out of the wind) swiftly enough in case of a gust of wind. In practice, these systems must be able to adjust the pitch of the blades by a fraction of a degree at a time, depending on the wind speed, to control the power output. The pitching system in medium and large size grid-connected wind turbines is usually based on a hydraulic system, controlled by a computer system. So that the blades can still be furled in the event of a hydraulic power failure, pitch regulation systems are also spring-loaded. By permanently fine-tuning the rotor blades to an optimum angle (even in low-wind conditions), pitch-controlled turbines achieve a better yield at low-wind sites than stall-regulated turbines. In addition, the thrust exerted by the rotor on both the tower and the foundation being significantly lower for pitch-controlled turbines than for their stall-regulated counterparts, the primary structure of the former is less material intensive and likely incurs lower costs. Moreover, stall-regulated (fixed-pitch) turbines must be shut down when the cutout wind speed threshold is reached, whereas pitch-controlled ones can progressively move toward a freely spinning, no-load mode at the maximum pitch angle (fully furled turbine). On the flip side, once the stall effect becomes effective (in high wind conditions), the power oscillations occurring on stall-regulated turbines and stemming from the wind oscillations are smaller than those occurring on pitch-controlled turbines in a corresponding regulated mode (Ackermann and Söder 2002).


Active Stall Regulation This regulation system is both a combination and a culmination of the pitch and stall approaches. It is a combination because, to optimize the aerodynamic efficiency of the rotor and to ensure a torque large enough to create a turning force in light winds, the rotor blades are pitched as in a pitch-controlled wind turbine, whereas after the rated capacity is reached, they are pitched in the opposite direction (to that of a pitch-controlled turbine) in order to increase their angle of attack and drive them into a deeper stall. It is a culmination because active stall regulation achieves a power output control smoother than the jerky one associated with pitch-controlled turbines, while still preserving the advantage of pitch-controlled turbines over stall-regulated ones of being able to turn the blades parallel to the airflow (the so-called low-load feathering position), thereby reducing the thrust on the turbine structure (Ackermann and Söder 2002).

Wind Farms Groups of turbines are often combined into wind farms whose installed capacity can range from a few MW to several hundred MW. The largest wind farm for commercial production of electric power, situated in Texas, USA, combines 421 turbines into a 735 MW plant. Such turbines are usually three bladed and have high blade tip speeds of about 300 km/h; the ratio between the tip speed and the actual velocity of the wind is known as the tip-speed ratio. Their supporting structures tower from 60 to 90 m above ground, while their associated blades range from 20 to 40 m in length. Wind plants have short construction lead times, even compared to those of transmission infrastructure.
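The tip-speed figures quoted above translate into modest rotational speeds, as the short sketch below shows; the 10 m/s wind speed used for the tip-speed-ratio example is an assumed value.

```python
import math

# Illustration of the tip-speed figures quoted above: a 300 km/h tip speed on
# a 40 m blade corresponds to roughly 20 rpm. The 10 m/s wind speed used for
# the tip-speed ratio is an assumed example value.

tip_speed_ms = 300.0 / 3.6      # 300 km/h converted to m/s
rotor_radius_m = 40.0           # blade length from the text, used as the radius
assumed_wind_speed_ms = 10.0    # assumption for the tip-speed-ratio example

omega = tip_speed_ms / rotor_radius_m            # angular speed, rad/s
rpm = omega * 60.0 / (2.0 * math.pi)
tip_speed_ratio = tip_speed_ms / assumed_wind_speed_ms

print(f"tip speed:        {tip_speed_ms:.1f} m/s")
print(f"rotational speed: {rpm:.1f} rpm")         # ~20 rpm
print(f"tip-speed ratio at {assumed_wind_speed_ms:.0f} m/s wind: "
      f"{tip_speed_ratio:.1f}")                   # ~8
```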

Trends Variable-speed turbines with pitch control using either direct driven synchronous ring generator or double-fed asynchronous generators are likely to become the norm, not the exception. However, cost of energy is and will remain the key driving force of wind energy growth. Therefore, if variable-speed turbines are to become a sound economic winner, additional costs incurred by power electronics required by most variable-speed designs must clearly be counterbalanced by the enhanced energy capture.

Capacity and Load Characteristics Wind energy converters are dependent on the wind, and hence turbine output varies over time, across all timescales ranging from seconds up to years. Measuring, modeling, and understanding this variability are crucial for site selection and also for integration of wind power into electricity grids. In 2008, the global capacity of wind energy converters was 121 GW, generating about 260 TWh of electricity (WWEA 2008). This yields a capacity factor of about 24.5 % (Fig. 5). Plant outages are not as problematic with wind power as they are with fossil, nuclear, or large hydro, because numerous wind plants are usually distributed over a wide geographical area (Archer and Jacobson 2007). Such decentralization in a power supply system reduces the requirements for contingency reserve, since this type of reserve is mostly tied to the largest potential source of failure, which is the largest single generator in the system (Holttinen et al. 2008). Output from wind farms can be expected to be smoother than that of a single turbine, but smoothing effects on larger scales may not be so significant and may also vary between regions. While smoothing effects are discernible when comparing single turbines with wind farms and regions (Fig. 6, and also a similar figure for the UK in Oswald et al. (2008)), combining regions as such may not necessarily lead to much additional smoothing because of strong correlations in the wind regime over large distances (Fig. 7). Østergaard (2008) artificially combines the wind output of West and East Denmark (which are not connected into a common grid) and obtains only


Fig. 5 Average capacity factor as a function of wind speed. Most turbines operate in a range between 2,000 and 3,000 full load hours, which is equivalent to capacity factors between 23 % and 34 % (After Hoogwijk et al. (2004); k is the Weibull wind speed distribution parameter). For wind farms at certain windy sites, average capacity factors of up to 45 % are reported (Archer and Jacobson 2007)
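The conversion between the full load hours shown in Fig. 5 and capacity factors is a simple ratio, illustrated below with the 2008 global figures quoted in the text.

```python
# Conversion between full load hours and capacity factor used in Fig. 5:
# CF = full_load_hours / 8,760. The 2008 global figures (121 GW, ~260 TWh)
# are taken from the text.

HOURS_PER_YEAR = 8_760

def capacity_factor(full_load_hours: float) -> float:
    return full_load_hours / HOURS_PER_YEAR

# Range quoted for most turbines in Fig. 5:
for flh in (2_000, 3_000):
    print(f"{flh} full load hours -> capacity factor {capacity_factor(flh):.0%}")

# Global fleet, 2008: 260 TWh generated by 121 GW of installed capacity.
global_flh = 260_000 / 121            # GWh / GW = hours
print(f"global 2008 fleet: {global_flh:.0f} h -> "
      f"{capacity_factor(global_flh):.1%} capacity factor")   # ~24.5 %
```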

Fig. 6 Normalized power output from a single wind turbine (Oevenum/Föhr, 225 kW; top), a group of wind farms (UW Krempel, 72.7 MW; middle), and all wind turbines in Germany (14.3–15.9 GW; bottom) over the period 21–31 December (After Focken and Lange)


Fig. 7 Variability and correlation of wind loads across Ireland, the UK, and Germany (After Oswald et al. (2008)). On February 2, the electricity demand in Britain reached its peak for 2006

Fig. 8 Wind power output and load in West Denmark, January 2006 (After Söder et al. (2007), © 2007 IEEE)

small averaging effects. Oswald et al. (2008) use weather maps to demonstrate the correlation and variability of wind regimes across a large area combining Ireland, the UK, and Germany (Fig. 7). Their findings (confirmed for Germany in Weigt (2008, Sect. 3.1 and Fig. 4)) cast doubt on the effectiveness of a trans-channel “supergrid” in smoothing out variations in wind load. Holttinen et al. (2009) present a detailed account of variability across geographical and temporal scales. Archer and Jacobson (2003) present wind speed data for a single site, and for three and eight sites in Kansas, USA, and show how the frequency of low-wind events decreases as the number of included sites increases. However, wind generators cannot – without storage – react to changes in demand because unlike hydropower they cannot follow a fluctuating demand (Fig. 8). Therefore, in the absence of supply-matched end uses, they require a flexible electricity grid with a sufficient portion of technologies that can react quickly to demand changes, such as hydropower or natural-gas-fired plants (GWEC 2008; Söder 2004). The average capacity factor of 24.5 % given above does not reflect the circumstance that electricity system planners must meet demand whenever it occurs and not on average. Where a technology is assessed with regard to its ability to supply peak load, the capacity credit describes the fraction of average


Fig. 9 Capacity credit of wind power as a function of wind penetration (After Holttinen et al. (2009)). Note that as penetration approaches 20 %, the capacity credit starts to fall consistently below wind power’s average capacity factor. The results from Mid-Norway show that geographical dispersion improves capacity credit. Decreasing capacity credits have been confirmed theoretically, for example, by Martin and Diesendorf (1980)

capacity that is reliably available during peak demand. Capacity credit is also referred to in the literature as demand capacity (Pavlak 2008), capacity value (Milligan and Porter 2008), or moderation factor (Lund 2005). The difference between the average capacity and capacity credit is proportional to the time when wind power cannot meet (peak) demand because of a lack of wind. For example, provided a filled reservoir, the capacity credit of hydropower is virtually equal to its average capacity, but this is not the case for wind power because of its variability and uncertainty. Some generators assign zero capacity credit to wind; however, this is unrealistic (Diesendorf 2007). Wind can achieve up to 40 % capacity credit when penetration is low and times of ample wind coincide with times of high demand (Holttinen et al. 2009). In general, however, the higher the penetration of wind power in a system, and the more uncorrelated the wind output is with the demand load, the lower its capacity credit (see Fig. 11 in Strbac et al. (2007) and Fig. 9). Capacity credit is usually measured by applying probability calculus to hourly data on load, generation capacity, ramp rates, and planned or forced outages and applying merit orders in which technologies that avoid fuel costs are recruited first (Milligan and Porter 2008). The loss-of-load probability LOLPi = Prob(Σj Cj < Li), with Cj being the capacity of generator j in the grid and Li the load at hour i, is the probability that a supply system is not able to meet demand in hour i. Summing the LOLP over all operating hours results in the loss-of-load expectation LOLE = Σi LOLPi, which is expressed in units of hours/year, or days/10 years, and provides a measure of system reliability. A common system LOLE target is 1 day/10 years; in the remaining loss-of-load hours, the system has to import capacity from elsewhere. This corresponds to a 1 − 1/(10 × 365) ≈ 99.97 % probability that the system will be able to meet demand without having to import capacity. A power supply system is usually made up of a technology mix. A measure that allows characterizing the incremental contribution of any one component to the reliability of the system is the effective load-carrying capability ELCC, which is the new firm (i.e., zero-variance) load that can be added to the system, including the incremental capacity increase, without deteriorating the system's reliability. Adding a new generator G as well as a hypothetical firm load ELCC to a system, the hourly LOLP becomes LOLPi = Prob(Σj Cj + G < Li + ELCC). ELCC is hence the hypothetical firm (i.e., zero-variance) load that can be added to a system as a result of the addition of a non-firm (i.e., variable) capacity G without changing the system's LOLE, and it is calculated by solving Σi Prob(Σj Cj < Li) = Σi Prob(Σj Cj + G < Li + ELCC). ELCC depends critically on the ability of a generator to meet demand at top-ranking LOLP hours, which, in the case of wind, is determined by the correlation of wind output with top-ranking LOLP hours. Capacity credit is the ratio of ELCC and rated capacity. Defined as such, capacity credit values are around or lower than the average capacity factor (Fig. 9). However, capacity credit has at times been measured as the


Fig. 10 Typology of grid impacts of wind power across temporal and spatial scales (After Holttinen et al. (2009)). Balancing reserves deal with short-term variability in the order of up to 24 h. Adequacy in peak-load situations (i.e., low LOLE) has to be secured long term and requires load-carrying reserves to compensate for shortfalls in capacity credit

ratio of ELCC and average power (Martin and Diesendorf 1980), in which case it varies between 0 % and 100 %. As a result, where grid operators are required to meet demand at usual loss-of-load expectations, reserve load-carrying capacity or storage has to be secured (Pavlak 2008, Fig. 10). Similarly, operators also strive to avoid having to curtail surplus wind power at times of high wind, raising different management issues again (Holttinen 2008). Geographical dispersion of wind turbines can help to reduce variability as well as increase predictability of output (Holttinen et al. 2008). Even during a rapidly passing storm front, power from dispersed capacity will take a few hours to change (Söder et al. 2007). Depending on the characteristics of the power system, that is, composition and diversity of technologies, demand management, size, demand profile, and degree of interconnection, low capacity credit poses barriers to the degree of integration of wind energy. In general, the more flexible, load-following capacity there is in the existing grid, the higher the potential penetration of wind power. However, operators run either the risk of not meeting demand by committing too much cheap slow-start capacity or the risk of overrunning cost by committing too much expensive fast-start capacity (DeCarolis and Keith 2006). Grid integration issues have largely been studied theoretically, except for some European regions. For example, while Denmark receives on average more than 20 % of its electricity from wind, it sometimes receives much higher percentages and sometimes very little, in which case Denmark exports or imports electricity from the European grid and thus relies on other generation technology for load balancing (Pavlak 2008; Østergaard 2003), in particular Norwegian, Swedish, and Finnish hydro reservoirs and idle peaking plants in Denmark (Sovacool et al. 2008). For higher degrees of integration, the management and/or export of excess wind loads become(s) an issue (DeCarolis and Keith 2006). Söder et al. (2007) report results from four regional systems with high wind penetration, among which two are connected to a larger outside system, and two are not. Management of wind power variability involves the requirement for flexible interconnection capacity and the ability to curtail wind power production, respectively. Hoogwijk et al. (2007) (Fig. 9) find that – subject to supply and load correlation – the amount of electricity that has to be discarded grows strongly for penetrations in excess of 20–30 %. Lund (2005) investigates a


Fig. 11 Cumulative energy requirements of wind energy converters as a function of rated power (After Lenzen and Munksgaard (2002)). The multivariate regression line takes into account different scopes and methodologies adopted in case studies. 0.05 kWhth/kWhel is found to be realistic for modern large turbines

scenario for expansion of wind power to cover 50 % of Danish demand and concludes that supply–demand balancing problems would become severe. Similarly, penetration of less than 20 % can lead to instabilities if a grid is not well interconnected with other grids, such as in the case of Spain (Hoogwijk et al. 2007).
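To make the reliability quantities introduced above (LOLP, LOLE, ELCC, and capacity credit) concrete, the following toy calculation evaluates them for an invented miniature system. Every number in it (load profile, unit sizes, outage rate, wind series) is assumed for illustration only and is not data from the text or the cited studies.

```python
import random
from math import comb

# Toy numerical illustration of the LOLP / LOLE / ELCC definitions above.
# All inputs are invented for illustration.

N_UNITS, UNIT_MW = 5, 25     # five identical 25 MW conventional units
FOR_RATE = 0.05              # forced outage rate of each unit
HOURS = 8760

random.seed(1)
load = [70 + 25 * random.random() for _ in range(HOURS)]           # MW
wind = [30 * random.betavariate(1.2, 3.0) for _ in range(HOURS)]   # MW, 30 MW rated

# Probability that exactly k of the identical conventional units are available.
p_avail = [comb(N_UNITS, k) * (1 - FOR_RATE) ** k * FOR_RATE ** (N_UNITS - k)
           for k in range(N_UNITS + 1)]

def lolp(hour: int, firm_extra: float, with_wind: bool) -> float:
    """Probability that available supply falls short of load in one hour."""
    need = load[hour] + firm_extra - (wind[hour] if with_wind else 0.0)
    return sum(p for k, p in enumerate(p_avail) if k * UNIT_MW < need)

def lole(firm_extra: float, with_wind: bool) -> float:
    """Loss-of-load expectation in hours per year."""
    return sum(lolp(h, firm_extra, with_wind) for h in range(HOURS))

base_lole = lole(0.0, with_wind=False)

# ELCC: the firm load that can be added together with the wind fleet without
# increasing LOLE, found here by bisection.
lo, hi = 0.0, 30.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lole(mid, with_wind=True) <= base_lole else (lo, mid)

print(f"LOLE without wind: {base_lole:.1f} h/yr")
print(f"ELCC of the 30 MW wind fleet: {lo:.1f} MW (capacity credit {lo / 30:.0%})")
```

In a real study, the hourly load and wind series would come from measured data and the conventional capacity outage table from the actual unit fleet; the structure of the calculation, however, is the one described above.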

Life-Cycle Characteristics Lenzen and Munksgaard (2002) review and analyze a large body of literature on the life cycle of wind energy converters, comparing bottom-up component analyses with top-down input–output analyses. In their multiple regressions, these authors take into account not only technical features such as scale, vintage year, and load factor but also the scope and methodology of the analysis (Fig. 11). A more recent study by Wagner and Pick (2004) confirms energy payback times between 3 and 7 months, which – assuming a turbine lifetime of 20 years – correspond to cumulative energy requirements between 0.035 and 0.075 kWhth/kWhel. The cumulative energy requirement η is related to the energy payback time, that is, the time it takes the wind turbine (lifetime T) to generate the primary-energy equivalent of its energy requirement, via t_payback = η · T · ε_fossil. Here ε_fossil is the conversion efficiency (assumed to be 35 %) of the conventional power plants that are to be displaced by wind turbines. Lenzen and Munksgaard (2002) found greenhouse gas intensities for the larger, modern turbines to be about 10 g/kWhel, which is among the lowest values for all electricity generation technologies. Lenzen and Wachsmann (2004) found large variations of specific life-cycle emissions of wind turbines between countries where turbine components were produced.
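As a quick consistency check of the relation t_payback = η · T · ε_fossil quoted above, the following minimal sketch reproduces the 3–7-month payback range from the stated turbine lifetime of 20 years and conversion efficiency of 35 %:

```python
def payback_months(eta, lifetime_years=20.0, eff_fossil=0.35):
    """Energy payback time t_payback = eta * T * eps_fossil, converted to months.
    eta: cumulative energy requirement in kWh_th per kWh_el delivered."""
    return eta * lifetime_years * eff_fossil * 12.0

for eta in (0.035, 0.05, 0.075):
    print(f"eta = {eta:.3f} kWh_th/kWh_el -> payback ~ {payback_months(eta):.1f} months")
# 0.035 -> ~2.9 months, 0.05 -> ~4.2 months, 0.075 -> ~6.3 months,
# consistent with the 3-7 month range quoted from Wagner and Pick (2004).
```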


Fig. 12 Maximum scale of wind energy converters over time (Compiled after Hoogwijk et al. (2004), GWEC (2008), Joselin Herbert et al. (2007))

Roth et al. (2005) and Pehnt et al. (2008) take the reduced capacity credit of wind into account in their systems LCA and conclude that CO2 emissions arising from the need of additional reserves add between 35 and 75 g CO2/kWh, thus outweighing CO2 emissions from the turbine life cycle. However, these values depend strongly on the technology mix of the overall power system. Noise and impacts on birds are likely to be small from wind farms, compared to other impacts (GWEC 2008). Snyder and Kaiser (2009) provide a detailed account of possible ecological impacts from offshore wind farms. The mitigation potential of wind in a power system represents an optimization problem, because the higher the penetration of wind power, the higher the emission reductions, and also the higher the variability cost.
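A simple addition illustrates why the balancing term dominates: the sketch below combines the roughly 10 g/kWh life-cycle intensity quoted above with the 35–75 g CO2/kWh attributed to additional reserves. The fossil-plant comparison in the comment is a rough order-of-magnitude literature value, not a figure from this chapter.

```python
def effective_intensity(lifecycle_g_per_kwh=10.0, balancing_g_per_kwh=(35.0, 75.0)):
    """Greenhouse gas intensity of wind electricity when system-level balancing emissions
    are added to the turbine life-cycle emissions (all values in g CO2 per kWh)."""
    return [lifecycle_g_per_kwh + b for b in balancing_g_per_kwh]

low, high = effective_intensity()
print(f"effective intensity ~ {low:.0f}-{high:.0f} g CO2/kWh")
# Roughly 45-85 g CO2/kWh: dominated by the balancing term, yet still far below
# typical fossil-fuelled generation, which lies at several hundred g CO2/kWh.
```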

Current Scale of Deployment Due to large economies of scale, the scale of single wind energy converters has been increasing steadily (Fig. 12), featuring taller towers and larger rotors. Larger turbines with ratings above 3.5 MW are usually dedicated to offshore power generation, while onshore installations are usually rated between 1.5 and 3 MW (GWEC 2008). In early 2009, the French manufacturer Areva deployed 5 MW turbines for operation 45 km offshore of the German North Sea island of Borkum (Jha 2009). Five-megawatt turbines are also installed at the Beatrice site (40 m depth) off the Moray Firth east of Scotland (http://www. repower.de/index.php?id=369). In 2007, the average size of operating turbines was 1.5 MW.

Contribution to Global Electricity Supply In 2008, the global capacity of wind energy converters was 121 GW, generating about 260 TWh of electricity or about 1.5 % of global electricity production (WWEA 2008). Most of the capacity (Fig. 1) is installed in the USA (25 GW, 1 % of electricity generation) and in the EU (about 65 GW, 3.7 %), followed by China (12 GW) and India (10 GW). However, regional shares of wind power can be much higher in some countries: Denmark (21 %), Spain (12 %), Portugal (9 %), Ireland (8 %), and Germany (7 %). However, it is worth noting that Denmark at times receives much higher percentages of its electricity from



wind and sometimes very little, in which case Denmark exports or imports electricity from the European grid and thus relies on other generation technology for load balancing (Pavlak 2008; Sovacool et al. 2008). Wind energy deployment has been increasing rapidly throughout the past decade, recording growth rates of around 30 % since 1996 (Fig. 2). More than half of the 2008 additions occurred in the USA and in China (Fig. 3), with the USA overtaking Germany as the leader in installed wind capacity (WWEA 2008). In the USA, wind power has represented 40 % of 2007 national capacity growth (Bolinger and Wiser 2009). Most of the wind generation is onshore; only about 1.1 GW is presently installed offshore, mainly located in Denmark (420 MW), the UK (300 MW), Sweden (135 MW), and the Netherlands (130 MW) (IEA 2008b, www.ieawind.org/Annex_XXIII.html). A further 8 GW were planned in early 2009 (Jha 2009).

Cost of Electricity Output Capital costs make up about 80 % of total wind energy cost, with the remainder for operation and maintenance, since the wind turbine does not require any fuel input. Blanco (2008) presents a detailed breakdown of these costs; in onshore installations, the turbine covers 70 % of capital cost, with the remainder for grid connection, civil works, taxes, permits, etc. Within the turbine, the tower and blades make up half of the costs. Electricity costs vary with site conditions: assuming a 20-year plant life, 5–10 % discount rate, and 23 % average capacity factor, Blanco (2008) states a levelized cost range for electricity from European 2 MW wind turbines between 6.5 and 13 US¢/kWh. Welch and Venkateswaran (2008) and Snyder and Kaiser (2009) report US cost estimates between 3 and 5 US¢/kWh and DeCarolis and Keith (2006) between 4 and 6 US¢/kWh. Levelized electricity cost is the constant (discounted to present values) real wholesale price of electricity that recoups owners' and investors' capital costs, operating costs, fuel costs, income taxes, and associated cash flow constraints. It excludes costs for transmission and distribution. Levelized cost may differ from sales prices, because of profits or losses. The figures reported here are averages over plant types and vintages and over locations with varying resource endowments and demand profiles. Actual cost for particular plants may be different from the cost given here. Levelized electricity costs are strongly determined by the competitive landscape, in particular the extent and nature of regulation, subsidization and taxation, primary fuel (coal, gas, uranium) prices, and future carbon pricing. While under government regulation operators are able to transfer costs and risk to consumers and taxpayers, this is not the case in deregulated electricity markets, where high interest rates lead to investors favoring less capital-intensive and therefore less risk-prone power options. Electricity cost figures reported here refer to the financial and regulatory environment at the time of publication of the various references. Civil works and especially the foundations are much more expensive in offshore installations, where they represent 20 % of capital cost, leading to higher levelized cost of 9–16 US¢/kWh. This is confirmed by an estimate of 10 US¢/kWh by Snyder and Kaiser (2009). However, technological learning can bring these costs down in the future (IEA 2008b; Smit et al. 2007). Wind energy costs have increased during the past 3 years, mainly driven by supply tightness and price hikes of raw materials (IEA 2008b), which are difficult to control through government fiscal policy. Bolinger and Wiser (2009) provide a detailed analysis of the most recent upward cost trends. Yet, the analysis of learning curves for the industry suggests that levelized costs will come down through increased efficiency, by about 10 % for every doubling of capacity (Blanco (2008); compare Fig. 14 in UNDP (2004)). As with other nonfossil electricity generation technologies, wind plant operators expect the competitive landscape to change in favor of wind power, once carbon is adequately priced (GWEC 2008; DeCarolis and Keith 2006).
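The levelized cost figures quoted above can be reproduced approximately with a minimal annuity-based sketch. The capital cost of 1,800 USD/kW and the 40 USD/kW-yr fixed O&M used below are illustrative assumptions only; the 20-year life, 5–10 % discount rates, 23 % capacity factor, and the roughly 10 % learning rate per capacity doubling are taken from the text.

```python
def crf(rate, years):
    """Capital recovery factor: converts an up-front investment into an equivalent constant annuity."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe_cents(capex_per_kw, fixed_om_per_kw_yr, rate, years, capacity_factor):
    """Levelized cost of electricity (US cents/kWh) for a fuel-free plant such as a wind turbine."""
    annual_cost = capex_per_kw * crf(rate, years) + fixed_om_per_kw_yr   # USD per kW per year
    annual_kwh = 8760.0 * capacity_factor                                # kWh per kW per year
    return 100.0 * annual_cost / annual_kwh

# Hypothetical onshore turbine: 1,800 USD/kW capital cost, 40 USD/kW-yr fixed O&M.
for rate in (0.05, 0.10):
    print(f"discount rate {rate:.0%}: LCOE ~ {lcoe_cents(1800, 40, rate, 20, 0.23):.1f} US cents/kWh")

# Learning-curve extrapolation: roughly 10 % lower cost for every doubling of cumulative capacity.
doublings = 3
print(f"after {doublings} capacity doublings: ~ "
      f"{lcoe_cents(1800 * 0.9 ** doublings, 40, 0.075, 20, 0.23):.1f} US cents/kWh")
```

With these assumptions the result lies between roughly 9 and 13 US cents/kWh, inside the European range quoted from Blanco (2008).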


In the future, wind energy is also expected to benefit more from not being affected by fuel price volatility. However, depending on the penetration of a power system with variable wind energy, additional indirect costs arise for maintaining LOLE, because wind energy will not be able to meet demand at its average capacity factor but at a generally reduced rate depending on its capacity credit (DeCarolis and Keith 2006). In addition, the presence of wind power in a power supply system introduces short-term variability and uncertainty and therefore requires balancing reserve scheduling and unit commitment. Grid operators need to meet peak demand to certain statistical reliability standards even when wind output falls relative to load. During these periods, which range from minutes to hours, electricity markets need to recruit demand-following units (such as gas, hydro, or storage), which at times of sufficient wind remain idle, so that costs arise essentially for two redundant systems (Pavlak 2008; Benitez et al. 2008) and for inefficient fuel use during frequent ramping (see p. 903 in Hoogwijk et al. (2007), Benitez et al. (2008), Smith et al. (2007)). Both adequacy and balancing cost (compare Fig. 10) are sometimes referred to as intermittency cost; however, in this chapter the term variability cost is used because strictly speaking wind energy is variable and not intermittent (Diesendorf 2007). Thus, wind energy reduces dependence on fuel inputs but does not eliminate the dependence on short-term balancing capacity and long-term reliable load-carrying capacity. The impact of wind power on the power supply system is critically dependent on the technology mix in the remainder of the system, because the more flexible and load-following the existing technologies are, the fewer peak reserves are needed. It is also dependent on time characteristics of system procedures (frequency and duration of forecasts, etc.) and local market rules (Holttinen 2008). In general, the higher the wind penetration, the higher the variability in the supply system, and the more long-term reserve and short-term balancing capacity has to be committed (Fig. 13 refers to short-term balancing only). The corresponding cost increases are only partly offset by a smoothing out of wind variability when many turbines are dispersed and interconnected over a wide geographical area (Hirst and Hild 2004), but they are more than offset by reduced fuel and operating cost. In specific applications, the cost of additional wind power also depends on the relative locations of turbines, load, and existing transmission lines and on whether sufficient load-carrying reserve exists in the grid or has to be built. As expected, variability costs scatter significantly depending on a large array of parameters. They cannot be derived from capacity credit estimates, since these do not contain any information about the extent to which cheap base load and expensive peak load are being displaced by wind (Martin and Diesendorf 1982).

[Figure: data points spanning roughly 1–10 % of wind capacity over penetrations of 0–30 % of gross demand, with results from the Nordic 2004, Finland 2004, Sweden, Ireland (1-hour and 4-hour), UK, Sweden (4-hour), dena Germany, and Minnesota 2006 studies.]
Fig. 13 Increase in short-term balancing requirement as a percentage of wind power as a function of wind penetration (After Holttinen et al. (2009))


[Figure: balancing cost (0–4.5 Euros/MWh of wind) versus wind penetration (0–30 % of gross demand), with results from the Nordic 2004, Finland 2004, UK, Ireland, Colorado, Minnesota 2004, Minnesota 2006, California, and Greenet (Germany, Denmark, Finland, Norway, Sweden) studies.]
[Figure: overall marginal cost of wind electricity (up to ~0.14 $/kWh) at penetrations of 1–42 %, decomposed into constant costs, depletion, depletion and learning, backup capacity, spinning reserve, and discarded electricity.]
Fig. 14 Increase in balancing requirements per kWh of wind power as a function of wind penetration (After Holttinen et al. (2009))

Fig. 15 Marginal cost of wind electricity at varying degrees of penetration (After Hoogwijk et al. (2007))

Variability costs are difficult to disentangle from overall cost in real-world grids (DeCarolis and Keith 2006), so that they have largely been estimated for theoretical settings, using statistical models for resource and load fluctuations and least-cost-optimizing generation and reserve scheduling under given output limits, startup and shutdown cost, ramp-rate restrictions, planned outages, fuel cost, and day-ahead forecasts (Holttinen et al. 2008; Hirst and Hild 2004). They have been quoted between 0.2 and 0.4 US¢/kWh for existing installations (Snyder and Kaiser 2009; GWEC 2008) and also higher at 1–1.8 US¢/kWh (DeCarolis and Keith 2006; Benitez et al. 2008; Ilex and Strbac 2002) for larger degrees of wind penetration. In a more up-to-date survey, Holttinen (2008), Strbac et al. (2007), and Smith et al. (2007) report on recent findings about increases in balancing requirements due to the presence of wind, ranging widely between 0.05 and 0.5 US¢/kWh (Fig. 14). Hence, at penetrations of up to 20 %, variability cost can be expected to be about equal to or less than 10 % of generation cost. Hoogwijk et al. (2007) (see Fig. 15) run numerical experiments at large-scale penetration rates of up to 45 % and find that beyond 30 % penetration the cost incurred by discarded excess electricity becomes comparable to base cost (6 US¢/kWh).
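One common simplification behind such balancing estimates, used for example in the standard-deviation approach of Holttinen et al. (2008), is to treat load and wind forecast errors as independent and to size reserves on their combined standard deviation. The sketch below follows that logic; the error magnitudes, the 25 % capacity factor, and the 4-sigma reserve criterion are illustrative assumptions rather than values from the cited studies.

```python
import math

def extra_reserve_fraction(penetration, sigma_load=0.03, sigma_wind=0.03, k=4.0, capacity_factor=0.25):
    """Additional operating reserve caused by wind, as a fraction of installed wind capacity.

    Load and wind forecast errors are treated as independent and normally distributed, so their
    combined standard deviation is the root sum of squares; reserves are sized at k standard
    deviations.  sigma_load is a fraction of peak load, sigma_wind a fraction of installed wind
    capacity, and penetration the wind share of electricity demand."""
    peak_load = 1.0
    wind_capacity = penetration * peak_load / capacity_factor   # approximate average load by peak load
    s_load = sigma_load * peak_load
    s_wind = sigma_wind * wind_capacity
    extra = k * (math.hypot(s_load, s_wind) - s_load)
    return extra / wind_capacity

for p in (0.05, 0.10, 0.20, 0.30):
    print(f"penetration {p:.0%}: extra reserve ~ {extra_reserve_fraction(p):.1%} of wind capacity")
```

With these assumptions the incremental reserve grows from roughly 1 % to about 6 % of wind capacity between 5 % and 30 % penetration, broadly in line with the range shown in Fig. 13.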

Fig. 16 Measured forecast error as a function of spatial range of interconnection (After Rohrig). The error reduction plotted on the y-axis is the ratio of the root-mean-square error (RMSE) of prediction at a regional scale and the single-site prediction RMSE
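The error-reduction ratio plotted in Fig. 16 can be approximated with a simple statistical argument: if the forecast errors of n sites of equal size have a mean pairwise correlation ρ, the RMSE of the aggregated forecast relative to a single site is sqrt((1 + (n − 1)·ρ)/n). The site counts and correlation values below are illustrative assumptions chosen to mimic increasing region size, not values fitted to the measured curve.

```python
import math

def error_reduction(n_sites, mean_correlation):
    """Ratio of aggregated forecast RMSE to single-site RMSE for n equally sized sites
    whose forecast errors have the given mean pairwise correlation."""
    return math.sqrt((1.0 + (n_sites - 1) * mean_correlation) / n_sites)

# Illustrative values: the correlation of forecast errors decays as the region grows.
for n, rho, size_km in [(5, 0.8, 100), (20, 0.5, 500), (50, 0.3, 1000), (200, 0.15, 2000)]:
    print(f"~{size_km:>4} km, {n:>3} sites, rho = {rho:.2f}: error reduction ~ {error_reduction(n, rho):.2f}")
```

The ratio falls toward roughly 0.4 for the largest region in this sketch, which is of the same order as the measured reduction at ~2,000 km in Fig. 16.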

The market for wind turbine manufacturing is diverse and competitive, with manufacturers spread across many countries. However, large corporations are entering the market, sometimes assimilating smaller entities (GWEC 2008). During the recent wind market boom, and the shift to larger turbines, the industry faced a number of supply chain bottlenecks related to gearboxes and large bearings (Blanco 2008), leading to waiting times for turbines of up to 30 months (Sovacool et al. 2008).

Future Directions Wind energy faces a number of technical future challenges. The variable and distributed nature of wind energy requires specific grid infrastructure in order to ensure grid stability, congestion management, and transmission efficiency. Significant investment in grid infrastructure has to occur in order to allow for substantial global penetration of wind energy (GWEC 2008). One of the most significant challenges is hence the integration of wind power into a large grid and the theoretical modeling of power system behavior at high penetration rates of wind. Recent efforts are also aimed at improving short-term forecasting of wind, which is still less accurate than forecasting of load (Holttinen et al. 2009). With increasing interconnection and geographical dispersion, forecasting errors are expected to decrease (see Fig. 16). Some researchers suggest directing wind power to where it can be most competitive or where its variability does not create problems. Some industrial applications and also combined heat and power plants can – within limits – adjust their demand to supply (Østergaard 2003). Dedicated load-leveling applications such as desalination, aluminum smelting, space and water heating, or chargeable hybrid vehicle fleets can deal with hourly variations in wind power since they only require a certain amount of energy over a period of many hours (Kempton et al. 2007; Pavlak 2008). For example, large-scale vehicle-to-grid technologies can significantly reduce excess wind power at large wind penetration and replace a significant fraction of regulating capacity, but as Lund and Kempton (2008) show in a study for Denmark, electric vehicles would not nearly eliminate excess power and CO2 emissions, even if they had long-range battery storage. Tavner (2008) and Smith et al. (2007) list improvements in resource, turbine and systems modeling and forecasting, capital cost reduction, lifetime extension, transmission upgrading, and system integration as the main future research challenges for wind power. Joselin Herbert et al. (2007) review past developments and present research needs for wind technologies, such as for resource assessment, site selection,



turbine aerodynamics, wake effects, and turbine reliability. Offshore wind deployment faces technical challenges in the form of extreme wind conditions that exceed tolerances of current onshore turbines (Snyder and Kaiser 2009; Smit et al. 2007). The IEA Wind Offshore subgroup’s tasks include research on ecological issues and deepwater installation. In order to reduce offshore wind costs, turbine concepts, submerged structures and cabling, and remote operation and maintenance will need to undergo further research (Blanco 2008). Many of the above issues are approached through theoretical modeling, be it turbine structure, system control and balancing, wind conditions, or reliability (Tavner 2008). Surprisingly, offshore wind power generation shares many of large hydropower and nuclear power’s challenges regarding public opinion. Firestone and Kempton (2007) report a case study where the majority of survey respondents opposed offshore wind power development for environmental reasons and that many of the beliefs were “stunningly at odds” with the scientific literature. Perceived landscape changes also feature in a survey by Zoellner et al. (2008), but economic considerations more strongly influenced acceptance.

Summary Wind energy deployment has witnessed a rapid increase throughout the past decade, with annual growth rates around 30 %, generating now about 1.5 % of global electricity. The technology is mature and simple, and decades of experience exist in a few countries. Due to strong economies of scale, wind turbines have grown to several megawatts per device, and wind farms have now been deployed offshore. In recent years, wind power has become competitive without subsidies, in markets without carbon pricing. The global technical potential of wind exceeds current global electricity consumption; however, taking into account the temporal mismatch and geographical dispersion of wind energy and demand loads and requirements for supply–load balance and grid stability, the maximum economic potential appears to be in the order of 20 % of electricity consumption. At such rates of wind energy penetration, and without storage and supply matched demand, the integration of wind power into electricity grids and long-distance transmission begins to present significant challenges for system reliability and loss-of-load expectation. The main issue for future deep penetrations of wind on a global scale is hence how wind plants can be integrated across very large geographical scales and with other variable power sources. For example, there are popular proposals for integrating parts of North African solar power for output smoothing of large wind supply in Europe. Some commentators remark that these proposals may be difficult to implement because of political and supply security issues; others are more optimistic. Finally, the life-cycle greenhouse gas emissions from wind power are some of the lowest among all electricity-generating technologies, but depending on the remainder of the power supply system, emissions arise because of the use of conventional technologies for supply–demand balancing.

References Ackermann T, Söder L (2002) An overview of wind energy – status 2002. Renew Sustain Energy Rev 6:67–128 Archer CL, Jacobson MZ (2003) Spatial and temporal distributions of U.S. winds and wind power at 80 m derived from measurements. J Geophys Res 108:1–20 Archer CL, Jacobson MZ (2005) Evaluation of global wind power. J Geophys Res 110:1–20 Archer CL, Jacobson MZ (2007) Supplying baseload power and reducing transmission requirements by interconnecting wind farms. J Appl Meteorol Climatol 46:1701–1717



AWEA (2009) Wind energy basics. American Wind Energy Association. www.awea.org/faq/wwt_basics. html Benitez LE, Benitez PC, Van Kooten GC (2008) The economics of wind power with energy storage. Energy Econ 30:1973–1989 Blanco MI (2008) The economics of wind energy. Renew Sustain Energy Rev 13:1372–1382 Bolinger M, Wiser R (2009) Wind power price trends in the United States: struggling to remain competitive in the face of strong growth. Energy Policy 37:1061–1071 Carolin Mabel M, Fernandez E (2008) Growth and future trends of wind energy in India. Renew Sustain Energy Rev 12:1745–1757 Changliang X, Zhanfeng S (2009) Wind energy in China: current scenario and future perspectives. Renew Sustain Energy Rev 13:1966–1974 DeCarolis JF, Keith DW (2006) The economics of large-scale wind power in a carbon constrained world. Energy Policy 34:395–410 Diesendorf M (2007) The base-load fallacy. www.energyscience.org.au Firestone J, Kempton W (2007) Public opinion about large offshore wind power: underlying factors. Energy Policy 35:1584–1598 Focken U, Lange M. German forecasting company. Energy & Meteo systems. Accessed on 23 March 2015. www.energymeteo.de Golding EW (1997) The generation of electricity by wind power. E & FN Spon, London GWEC (2008) Global wind energy outlook. Global Wind Energy Council, Brussels Hirst E, Hild J (2004) The value of wind power as a function of wind capacity. Electr J 17:11–20 Holttinen H (2008) Estimating the impacts of wind power on power systems – summary of IEA wind collaboration. Environ Res Lett 3:1–6 Holttinen H, Milligan M, Kirby B, Acker T, Neimane V, Molinski T (2008) Using standard deviation as a measure of increased operational reserve requirement for wind power. Wind Eng 32:355–378 Holttinen H, Meibom P, Orths A, van Hulle F, Lange B, Malley MO, Pierik J, Ummels B, Olav Tande J, Estanqueiro A, Matos M, Gomez E, Söder L, Strbac G, Shakoor A, Ricardo J, Charles Smith J, Milligan M, Ela E (2009) Design and operation of power systems with large amounts of wind power, VTT research notes, 2493. VTT Technical Research Centre of Finland, Espoo Hoogwijk MM, De Vries BJM, Turkenburg WC (2004) Assessment of the global and regional geographical, technical and economic potential of onshore wind energy. Energy Econ 26:889–919 Hoogwijk MM, Van Vuuren D, De Vries BJM, Turkenburg WC (2007) Exploring the impact on cost and electricity production of high penetration levels of intermittent electricity in OECD Europe and the USA, results for wind energy. Energy 32:1381–1402 IEA (2008a) Energy technology perspectives. International Energy Agency, Paris IEA (2008b) Wind renewable energy essentials. OECD/IEA, Paris Ilex X, Strbac G (2002) Quantifying the system cost of additional renewables in 2020. Ilex Energy Consulting, Oxford Jha A (2009) Brawny wind turbines set for German offshore debut. The Guardian Weekly, 30 Jan 2010 Joselin Herbert GM, Iniyan S, Sreevalsan E, Rajapandian S (2007) A review of wind energy technologies. Renew Sustain Energy Rev 11:1117–1145 Kempton W, Archer CL, Dhanju A, Garvine RW, Jacobson MZ (2007) Large CO2 reductions via offshore wind power matched to inherent storage in energy end-uses. Geophys Res Lett 34:1–5 Lenzen M (2010) Current state of development of electricity-generating technologies: a literature review. Energies 3(3):462–591



Lenzen M, Badcock J (2009) Current state of development of electricity-generating technologies – a literature review. Centre for Integrated Sustainability Analysis, The University of Sydney, Sydney. www.aua.org.au/Content/Lenzenreport.aspx Lenzen M, Munksgaard J (2002) Energy and CO2 analyses of wind turbines – review and applications. Renew Energy 26:339–362 Lenzen M, Wachsmann U (2004) Wind energy converters in Brazil and Germany: an example for geographical variability in LCA. Appl Energy 77:119–130 Lund H (2005) Large-scale integration of wind power into different energy systems. Energy 30:2402–2412 Lund H, Kempton W (2008) Integration of renewable energy into the transport and electricity sectors through V2G. Energy Policy 36:3578–3587 Martin B, Diesendorf M (1980) Calculating the capacity credit of wind power. In: Proceedings of the Fourth Biennial Conference of the Simulation Society of Australia, Brisbane, 27-29 Martin B, Diesendorf M (1982) Optimal mix in electricity grids containing wind power. Electr Power Energy Syst 4:155–161 Milligan M, Porter K (2008) Determining the capacity value of wind: an updated survey of methods and implementation. In: Conference paper, NREL/CP-500-43433. National Renewable Energy Laboratory, Golden NREL (2001) The history and state of the art of variable-speed wind turbine technology. Technical report, NREL/TP-500-28607. National Renewable Energy Laboratory NREL (2006) Wind turbine design cost and scaling model. Technical report, NREL/TP-500-40566. National Renewable Energy Laboratory Østergaard PA (2003) Transmission grid requirements wit scattered and fluctuating renewable electricitysources. Appl Energy 76:247–255 Østergaard PA (2008) Geographic aggregation and wind power output variance in Denmark. Energy 33:1453–1460 Oswald J, Raine M, Ashraf-Ball H (2008) Will British weather provide reliable electricity? Energy Policy 36:3212–3225 Pavlak A (2008) The economic value of wind energy. Electr J 21:46–50 Pehnt M, Oeser M, Swider DJ (2008) Consequential environmental system analysis of expected offshore wind electricity production in Germany. Energy 33:747–759 Peterson EW, Hennessey JP (1978) On the use of power laws for estimates of wind power potential. J Appl Meteorol 17:390–394 Resch G, Held A, Faber T, Panzer C, Toro F, Haas R (2008) Potentials and prospects for renewable energies at global scale. Energy Policy 36:4048–4056 Rohrig K. Fraunhofer Institut f€ ur Windenergie und Energiesystemtechnik. IWES. www.iwes.fraunhofer. de Roth H, Br€ uckl O, Held A (2005) Windenergiebedingte CO2-Emissionen konventioneller Kraftwerke, lfE-Schriftenreihe, Heft 50. Lehrstuhl f€ ur Energiewirtschaft und Anwendungstechnik, M€ unchen Sahin AD (2004) Progress and recent trends in wind energy. Prog Energy Combust Sci 30(5):501–543 Sesto E, Casale C (1998) Exploitation of wind as an energy source to meet the world’s electricity demand. J Wind Eng Ind Aerodyn 74–76:375–387 Shackleton J (2009) World first for Scotland gives engineering student a history lesson. The Robert Gordon University. www.rgu.ac.uk/pressrel/BlythProject.doc Smit T, Junginger M, Smits R (2007) Technological learning in offshore wind energy: different roles of the government. Energy Policy 35:6431–6444



Smith JC, Milligan M, DeMeo EA, Parsons B (2007) Utility wind integration and operating impact state of the art. IEEE Trans Power Syst 22:900–908 Snyder B, Kaiser MJ (2009) Ecological and economic cost-benefit analysis of offshore wind energy. Renew Energy 34:1567–1578 Söder L (2004) On limits for wind power generation. Int J Global Energy Issues 21:243–254 Söder L, Hofmann L, Orths A, Holttinen H, Y-h W, Tuohy A (2007) Experience from wind integration in some high penetration areas. IEEE Trans Energy Convers 22:4–12 Sovacool BK, Lindboe HH, Odgaard O (2008) Is the Danish wind energy model replicable for other countries? Electr J 21:27–28 Strbac G, Shakoor A, Black M, Pudjianto D, Bopp T (2007) Impact of wind generation on the operation and development of the UK electricity systems. Electr Power Syst Res 77:1214–1227 Tavner P (2008) Wind power as a clean-energy contributor. Energy Policy 36:4397–4400 Thresher R, Dodge D (1998) Trends in the evolution of wind turbine generator configurations and systems. Wind Energy 1:70–85 U.S. Department of Energy, Black & Veatch and AWEA (2008) 20 % Wind energy by 2030. DOE/GO-102008-2567, U.S. Department of Energy, Oak Ridge UNDP (2004) World energy assessment: 2004 update. United Nations Development Programme, New York Wagner H-J, Pick E (2004) Energy yield ratio and cumulative energy demand for wind energy converters. Energy 29:2289–2295 Weigt H (2008) Germany’s wind energy: the potential for fossil capacity replacement and cost saving. Appl Energy 86:1857–1863 Welch JB, Venkateswaran A (2008) The dual sustainability of wind energy. Renew Sustain Energ Rev 12(9):2265–2300 WWEA (2008) World wind energy report. World Wind Energy Association, Bonn. www.wwindea.org Wyatt A (1986) Electric power: challenges and choices. Book Press, Toronto Zoellner J, Schweizer-Ries P, Wemheuer C (2008) Public acceptance of renewable energies: results from case studies in Germany. Energy Policy 36:4136–4141


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_35-2 # Springer Science+Business Media New York 2015

Geothermal Energy Hirofumi Muraoka* North Japan Research Institute for Sustainable Energy, Hirosaki University, Aomori, Japan

Abstract While most renewable energies are, directly or indirectly, derived from the sun, geothermal energy originates in the interior of the earth. Geothermal energy is the most stable of the renewable energies because it can be utilized constantly, regardless of weather or season. Geothermal energy can be used not only for power generation but also for direct heat application. The development of geothermal power generation entered a phase of rapid growth in 2005, and its total installed capacity worldwide reached 10.7 GWe in 2010. The capacity of 10.7 GWe appears small when compared with solar and wind power generation; however, the high-capacity factor of geothermal power plants, which is 0.7–0.9, provides several times greater electricity from the same installed capacity than photovoltaic and wind plants. Direct heat application can be used almost anywhere on land. Geothermal resources are classified into two categories: hydrothermal convection resources and thermal conduction resources. Today’s geothermal power capacity is mainly hydrothermal-based and unevenly distributed in volcanic countries. As a borehole is drilled into deeper formations, formation temperature becomes higher but permeability becomes lower. Hydrothermal convection resources have a limit depth. Rock’s brittle-plastic transition gives a bottom depth to permeability, and it is the absolute limit depth for the hydrothermal convection resources. Enhanced or engineered geothermal systems (EGS), in which fractures are artificially created in less-permeable rocks and heat is extracted by artificially circulating water through the fractures, are still at a demonstration stage, but they will extend geothermal power generation to thermal conduction resources and to depths even deeper than the brittle-plastic transition. Assessment of worldwide geothermal resource potential is still under study. However, an estimate shows that potential is 312 GWe for hydrothermal resources for electric power generation to a depth of 4 km, 1,500 GWe for EGS resources to a depth of 10 km, and 4,400 GWth for direct geothermal use resources. Were 70 % of hydrothermal resources, 20 % of EGS resources, and 20 % of direct-use resources to be developed by 2050, it could reduce carbon dioxide emission by 3.17 Gton/year, which is 11 % of the present worldwide emission.

Introduction While most kinds of renewable energy available are, directly or indirectly, derived from the sun, geothermal energy originates in the interior of the earth. This makes geothermal energy distinct from other kinds of renewable energy, giving its use merits as well as demerits. Geothermal energy is the most stable energy among a variety of renewable energies. The capacity factor, a ratio of working time of the facility, of geothermal power plants is as high as those of thermal power plants, qualifying geothermal power generation as a base load electricity source. Geothermal energy can be supplied constantly from the earth’s interior regardless of time of the day or night, weather, or season. Assessment of the resource potentials of most kinds of renewable energy, such as the wind velocity and solar radiant energy, is not very difficult because they can be directly observed. Assessment of geothermal *Email: [email protected] Page 1 of 23


Fig. 1 Simplified thermal structure of the earth’s interior (Topographic data are taken from Lindquist et al. (2004) and drawing is made with GMT by Wessel and Smith (1998))

resource potential is not as easy, however, because geothermal resources are stored in the earth’s crust. This makes the initial investment risk for geothermal energy developments higher. For many years, geothermal energy development was only undertaken in volcanic countries such as Italy, New Zealand, Japan, the USA, the Philippines, Iceland, and Indonesia. More recently, however, less volcanic countries, such as Germany, Australia, France, and Switzerland, have begun enthusiastically developing geothermal power plants under a new concept of the enhanced or engineered geothermal system (EGS). Innovation in geothermal energy utilization technology currently aims at a goal that every country can use geothermal energy.

Heat in the Earth's Interior Enormous heat is stored in the earth's interior. The simplified thermal structure of the earth's interior is illustrated in Fig. 1. The deepest hole ever drilled is the SG-3 well on the Kola Peninsula, Russia, which reached a depth of 12,262 m in 1989 (Fuchs et al. 1990). The hole reached only 0.2 % of the radius of the earth, suggesting that direct temperature measurement of the earth's interior would be difficult. The temperature of the earth's interior is estimated from the transmissibility and velocity of seismic waves,


high-temperature and high-pressure experiments for mineral phase changes, a model calculation of temperature increases by the adiabatic compression of mineral phases, a model calculation of electric conductivity of mineral phases, and many other methods. All of these are indirect estimates and inevitably yield a large opportunity for error. The temperature at the center of the earth is commonly estimated to be 6,000 °C, but that may easily carry an error of ±1,000 °C (Fig. 1). Seismological observation delineates that the outer core of the earth consists mainly of molten-state iron and nickel, and the mantle of the earth consists of solid-state peridotite (ultramafic rocks mostly composed of olivine, Ca-poor pyroxene, and Ca-rich pyroxene). When the melting point temperatures of these are considered at the given pressure, the core-mantle boundary is estimated to be about 4,000 °C (Kanamori 1978). The abrupt change of the velocity of seismic waves at a depth of 670 km at the lower and upper mantle boundary is ascribed to the phase change from the γ phase of spinel to perovskite + magnesiowüstite at 1,600 °C (Tajika 1996). The thickness of the lithosphere, a rigid plate, is less than 30 km near the axis of ocean ridges and increases to 100 km away from the ridge. Low-velocity zones of the seismic waves, or the asthenosphere, underlie the rigid plate at a depth between 70 and 250 km. This zone is partially fused and probably reaches 1,000 °C because of the wet solidus temperature of peridotite. Therefore, only a thin veneer of the earth's surface is less than 1,000 °C, and 93 % of the volume of the earth's interior exceeds 1,000 °C (Fig. 1). Thus, the earth is some sort of a thermal engine. The derivation of the heat is mainly attributed to the accretion heat from bombardment of unsorted micro-planet materials and the heat from gravitational differentiation and compression in the initial stage of the earth's formation history some 4.6 billion years ago (Takahashi 1996). The earth is gradually losing the initial heat with time. Nevertheless, abundant heat is still stored in the earth's interior, 4.6 billion years after its incipiency. The thermal life of the earth is prolonged by the additional heat generation from radioactive decay. Based on the past 4.6 billion years, the earth's heat will probably be preserved for another 4.6 billion years. Thus, the earth is some sort of a semipermanent thermal engine.

Quantity of Thermal Energy Supplied from the Earth's Interior The earth's interior constantly supplies heat to the surface. This heat transportation phenomenon is called "terrestrial heat flow." Terrestrial heat flow can be measured in wells at a depth from 0.3 to 3 km on shore, where the solar radiant heat does not reach. Terrestrial heat flow can be measured in shallower wells on the ocean floor because of no disturbance by solar radiant heat. Terrestrial heat flow is calculated from the observed thermal gradient multiplied by the thermal conductivity of the constituent rocks. An average terrestrial heat flow is 65 ± 1.6 mW/m2 onshore and 101 ± 2.2 mW/m2 offshore (Tajika 1996). The value of the terrestrial heat flow observed on shore consists of not only heat flow from the earth's interior but also heat flow from radioactive decay of elements such as uranium, thorium, and potassium predominantly concentrated in the continental crust. About half of the terrestrial heat-flow value on shore may be derived from radioactive decay. The value of the terrestrial heat flow observed offshore has a negative correlation with the geological age of the ocean floor. The younger ocean floor tends to yield the higher terrestrial heat flow (Tajika 1996). Including both continental and oceanic regions, the global average terrestrial heat flow is 87 ± 2.0 mW/m2. This value of 2.1 × 10^−6 cal/cm2·s (87 mW/m2), multiplied by the entire global area and converted to an annual value, yields an annual global terrestrial heat-flow energy Ehf = 1.3 × 10^21 J/year or 3.2 × 10^20 cal/year (Mizutani and Watanabe 1978). This heat-flow energy causes a variety of the earth's internal dynamics, such as mantle convection, plate tectonics, earthquakes, and magma generation. Thermal energy of lava flow and volcanic ash erupted from global volcanoes is estimated to be Ee = 3 × 10^19 J/year or 7 × 10^18 cal/year (Mizutani and Watanabe 1978). This energy is one or two


orders of magnitude lower than the terrestrial heat-flow energy that causes it. Many active volcanoes form high-level magma chambers that heat up ground water to discharge fumaroles (natural vents of steam and volcanic gas) and hot springs. The discharge energy of fumaroles and hot springs on the earth is estimated to be Ew = 2 × 10^18 J/year or 5 × 10^17 cal/year (Mizutani and Watanabe 1978). This energy is also one order of magnitude lower than the volcanic eruption energy that causes it. The earth seems to be an almost semipermanent thermal engine, where abundant terrestrial heat flow is always being lost from the earth's surface. Therefore, artificial utilization of the terrestrial heat flow does not seriously affect the dynamic equilibrium of the earth. This is a concept of the environment-friendly geothermal energy utilization.
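The quoted annual heat-flow energy can be verified directly from the mean flux: multiplying 87 mW/m2 by the earth's surface area and by the number of seconds in a year reproduces the order of magnitude given by Mizutani and Watanabe (1978). A minimal sketch:

```python
EARTH_SURFACE_M2 = 5.1e14     # total surface area of the earth (land + ocean)
SECONDS_PER_YEAR = 3.156e7

def annual_heat_flow_energy(mean_flux_w_per_m2=0.087):
    """Global terrestrial heat-flow energy per year, in J/year."""
    return mean_flux_w_per_m2 * EARTH_SURFACE_M2 * SECONDS_PER_YEAR

e_hf = annual_heat_flow_energy()
print(f"E_hf ~ {e_hf:.1e} J/year")          # ~1.4e21 J/year, close to the 1.3e21 J/year quoted
print(f"in cal/year: {e_hf / 4.184:.1e}")    # ~3.3e20 cal/year, close to the 3.2e20 cal/year quoted
```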

Direct Use of Geothermal Energy Since the early history of human beings, hot springs and steaming grounds were utilized for a variety of purposes such as bathing, cooking, balneology, and healing. Direct use of geothermal energy thus has a long history over the last few millenniums. The direct use of geothermal energy is currently extended to the hot-water supply, swimming pools, space heating, snow melting, drying foods or materials, condensing sugar, greenhouse cultivation, and fish cultivation. Geothermal heat pumps for heating and cooling of buildings and houses are rapidly spreading in the world. Methods of direct use of geothermal energies are briefly described here. Bathing is one of the most traditional direct uses of geothermal energy. For example, as of March 2010, there are 27,825 hot-spring sources in Japan (Ministry of the Environment and Japan 2011), most of which were developed by drillings for bathing in hot-spring resort hotels. In China, since the China Western Development policy was launched in 2000, many hot springs were developed by 3,000-m class deep drillings to the porous Ordovician limestone strata (Zheng 2004). In New Zealand, the Maori people have traditionally cooked foods in steaming grounds in Rotorua and Taupo. Large-scale flower cultivation greenhouses heated by hot springs are found in Monte Amiata, Italy. Shrimp cultivation by hot water from the geothermal power plant is famous in Wairakei, New Zealand. In Iceland, approximately 90 % of residences are heated by hot water from geothermal power plants and hot-spring wells. Thus, Iceland has become an almost energy-independent nation. Use of geothermal heat pumps, a direct use of geothermal energies, has increased rapidly because they do not require any geothermal anomaly areas and can be utilized almost everywhere on land. The equivalent number of the 12 kW units (typical of US and Western European homes) reached 2.94 million in 2010, over double the number of units in 2005 (Lund et al. 2010). Temperature is constant underground. This can be experienced in limestone caves, which feel warm in winter and cool in summer. Only the atmospheric temperature changes from day to night and due to seasonal variation of solar radiant energy. Solar radiant energy reaches the shallower part of the ground, to a depth of 10, 20, or 30 m, depending on the rock species of strata, but does not reach the deeper formations. Therefore, heat can be wasted through a well as shallow as 50 or 100 m underground by a heat pump in summer and can be extracted in winter. Geothermal heat pumps resemble air conditioners (air-sourced heat pumps), but air conditioners waste heat to the atmosphere in summer, causing the heat island phenomenon. Increasing use of geothermal heat pumps for heating and cooling of buildings and houses will effectively mitigate the global warming and the heat island phenomena.
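The depth to which the seasonal temperature signal penetrates, quoted above as roughly 10–30 m depending on the rock, follows from the classical solution for a periodic surface temperature on a conducting half-space: the annual swing decays as exp(−z/d) with d = sqrt(2·α/ω), where α is the thermal diffusivity and ω the annual angular frequency. The diffusivity values below are rough illustrative figures, not measurements from the text.

```python
import math

def penetration_depth(diffusivity_m2_s, period_s=3.156e7):
    """e-folding depth d = sqrt(2*alpha/omega) of a periodic surface temperature wave."""
    omega = 2.0 * math.pi / period_s
    return math.sqrt(2.0 * diffusivity_m2_s / omega)

def depth_for_attenuation(diffusivity_m2_s, attenuation=0.01):
    """Depth at which the annual swing has decayed to the given fraction of its surface value."""
    return -math.log(attenuation) * penetration_depth(diffusivity_m2_s)

for name, alpha in [("saturated clay", 0.5e-6), ("typical rock", 1.0e-6), ("granite", 1.5e-6)]:
    print(f"{name:>15}: annual swing < 1 % below ~{depth_for_attenuation(alpha):.0f} m")
```

Below this quasi-constant zone the temperature is governed by the terrestrial heat flow alone, which is why shallow boreholes of 50–100 m serve heat pumps equally well in summer and winter.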



Geothermal Power Generation Geothermal power generation has a relatively short history, beginning only in the last century. Geothermal power-generation experiments were successfully initiated at Larderello in Tuscany, Italy, in 1904 (Dickson and Fanelli 2004). Since then, a variety of methods of geothermal power generation have been developed. These methods are briefly described here. Geothermal power generation is classified into two categories: steam flash power generation from high-temperature hydrothermal resources of 150–370 °C and binary cycle power generation from low-temperature hydrothermal resources of 50–200 °C (Fig. 2). In the 1970s, geothermal fluids of less than 150 °C could not be economically utilized for electric power generation. The only available method of geothermal power generation in those days was conventional-type steam flash power generation, which uses a conventional steam turbine directly rotated by the natural steam from the geothermal production wells (Fig. 3). Even if the subsurface natural fluid is in a liquid state in the high-pressure geothermal reservoirs at depth, a pressure release by the drill hole makes the fluid boil, and the steam ascends to the surface automatically. This phenomenon is called "borehole flash" and requires the temperature of the water to be at least 150 °C (Fig. 3). Therefore, the temperature threshold of 150 °C used to be important. This situation changed with the development of binary cycle power-generation technology in the 1980s.

[Figure: classification tree — steam flash power generation (suitable for 150–370 °C resources; non-zero emission but high efficiency and large scale), subdivided into single flash (no secondary flasher), double flash (secondary steam can also be utilized), and back pressure (no condenser and cooling tower) types; and binary cycle power generation (suitable for 50–200 °C resources; zero emission but low efficiency and small scale), subdivided into Rankine cycle (one-component working fluid) and Kalina cycle (two-component working fluid) types.]

Fig. 2 Classification of geothermal power generation

[Figure: components of the single flash plant — production well with intra-borehole flash, separator (steam/water), steam turbine and generator, condenser, cooling tower and cooling water pump, and reinjection well.]

Fig. 3 Illustration of a plant of the single flash geothermal power generation (Modified from Dickson and Fanelli (2004))


[Figure: components of the binary cycle plant — production well (normally pumped), heat exchanger (evaporator) transferring heat to the working fluid, turbine and generator, heat exchanger (condenser), feed pump, cooling tower and cooling water pump, and reinjection well.]

Fig. 4 Illustration of a plant of the binary cycle geothermal power generation (Modified from Dickson and Fanelli (2004))

The binary cycle power-generation method uses a secondary working fluid, such as pentane or ammonia, that has a low boiling-point temperature (Fig. 4). When the temperature of the subsurface water is around 150 °C or lower, the water is not easy to flash. Then, subsurface hot water is pumped up from the production wells. The subsurface water is only used as a heat source for the secondary working fluid. Today, binary cycle power-generation technology enables the use of moderate- (150–90 °C) to low-temperature hydrothermal resources for power generation.

σ1 − σ3 = 4.9 σ3 (for σ3 < 110 MPa), σ1 − σ3 = 3.1 σ3 + 210 MPa (for σ3 > 110 MPa)   (2)

where
• σ1 = the maximum principal stress
• σ3 = the minimum principal stress
Figure 9 is only constrained by the upper equation. The power law creep is expressed by the following equation (Brace and Kohlstedt 1980):

ε̇ = A (σ1 − σ3)^n e^(−H/RT)   (3)

where
• ε̇ = strain rate, s^−1
• A = material constant specific to the rock or mineral species
• n = stress exponent
• H = activation enthalpy, J mol^−1
• R = gas constant, J K^−1 mol^−1
• T = temperature, K

Most of these parameters are reasonably determined for the well WD-1a. The strain rate ε̇ is given to be 10^−12 s^−1 because of the active compressive tectonic region (Fournier 1991). Taking the data from quartz diorite close to the Kakkonda Granite, the stress exponent n is given to be 2.4, and the activation enthalpy H is given to be 219 kJ mol^−1. The temperature T is given by the temperature profile of the well WD-1a in Fig. 8. The material constant A varies from 10^−19 to 10^−49 according to the given material, and only this



[Figure: temperature (0–600 °C) versus depth (0–3,800 m) along well WD-1a, combining a temperature log taken 82.3 h after drilling (Jul. 21, 1995), a log taken 167 days after drilling (Feb. 14, 1996), equilibrium temperatures estimated from recovery trends, minimum homogenization temperatures of fluid inclusions, and chemical compound tablets with known melting points near the bottom at 3,800 m (one for 500 °C melted but one for 505 °C unmelted); inflection points divide the shallow (low-pressure) and deep (high-pressure) reservoirs and the hydrothermal convection and magmatic conduction zones, with the Kakkonda Granite below the pre-Tertiary system.]
Fig. 8 Temperature profile of the well WD-1a in the Kakkonda geothermal field, Japan (Muraoka et al. 1998; Muraoka 2005)

parameter cannot be reasonably determined. However, the inflection point of the strength curve caused by the temperature inflection in the plastic region should coincide with the depth of 3.1 km, restricting the material constant A to be 10^0.85. Then, the strength curve in the plastic field is drawn as shown in Fig. 9 (Muraoka et al. 1999). The four points of stress ratio measurements of the differential strain curve analysis (DSCA) on the core samples (New Energy and Industrial Technology Development Organization 1996) are well explained by the model of the strength profile drawn from Byerlee's law and the power law creep equation, as shown in Fig. 9. In particular, the closure of the σ1 value to σ3 at the deepest DSCA stress ratio measurement indicates that dramatic strength weakening occurred, as accommodated by the plastic field, as shown in Fig. 9. Three points of DSCA stress ratio measurements in the brittle field are close to the dry line (λ = 0). This is also reasonable because the sampling of cores could only have been performed from impermeable zones with no lost circulation. These observations strongly support that the well WD-1a actually penetrated the brittle-plastic transition (Muraoka et al. 1998; Muraoka 2005).
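The construction of the strength-depth curve described above can be sketched numerically: the brittle limit is given by a Byerlee-type frictional law and the plastic limit by the power law creep equation (Eq. 3), and the crustal strength at each depth is the smaller of the two. In the sketch below, the strain rate (10^−12 s^−1), n = 2.4, and H = 219 kJ/mol are the values quoted in the text, while the frictional coefficients (the widely used Brace–Kohlstedt form), the crustal density, the assumed units of A (MPa^−n s^−1, with A = 10^0.85), and the linear ~130 °C/km temperature profile are illustrative assumptions, not parameters of the Kakkonda study itself.

```python
import math

R = 8.314                  # gas constant, J K^-1 mol^-1
RHO, G = 2700.0, 9.81      # crustal density (kg m^-3) and gravity; illustrative values

def brittle_strength_mpa(depth_km, pore_pressure_ratio=0.0):
    """Frictional (Byerlee-type) limit on the stress difference for a compressive regime,
    taking sigma3 as the effective vertical stress; coefficients are standard literature
    values, not necessarily those used in the cited study."""
    sigma3 = RHO * G * depth_km * 1e3 * (1.0 - pore_pressure_ratio) / 1e6   # MPa
    return 4.9 * sigma3 if sigma3 < 110.0 else 3.1 * sigma3 + 210.0

def ductile_strength_mpa(temp_c, strain_rate=1e-12, n=2.4, h=219e3, a=10**0.85):
    """Stress difference sustained by power-law creep (Eq. 3) at the given temperature."""
    t_kelvin = temp_c + 273.15
    return (strain_rate / a) ** (1.0 / n) * math.exp(h / (n * R * t_kelvin))

def temperature_c(depth_km):
    """Hypothetical conductive profile loosely patterned on the WD-1a well (~130 C/km)."""
    return 20.0 + 130.0 * depth_km

for z in (1.0, 2.0, 2.5, 3.0, 3.5):
    b, d = brittle_strength_mpa(z), ductile_strength_mpa(temperature_c(z))
    regime = "brittle" if b < d else "plastic"
    print(f"z = {z:.1f} km, T ~ {temperature_c(z):.0f} C: strength ~ {min(b, d):.0f} MPa ({regime})")
```

Even with these crude inputs the minimum of the two curves switches from the frictional to the creep branch at roughly 2–3 km depth, qualitatively reproducing the brittle-plastic transition discussed above.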



Fig. 9 Crustal strength-depth relation along the well WD-1a in the Kakkonda geothermal field, Japan (Muraoka et al. 1999)

The brittle-plastic transition may primarily be expected at the temperature inflection point at a depth of 3,100 m and a temperature of 380 °C (Muraoka et al. 1998). However, the graphical representation makes it clear that the brittle-plastic transition in a strict sense probably lies at a shallower depth, around 2,400 m, where the maximum strength is attained at 360 °C (Fig. 9). A zone of very high concentration of low-angle fractures is observed in the depth interval from 1,770 to 2,860 m (Fig. 10; Muraoka and Ohtani 2000). This fracture zone likely reflects the maximum strength zone at the bottom of the brittle layer because this zone may play the role of the dehydration front and dehydration-induced weakening front (Muraoka and Ohtani 2000).

Distribution of Geothermal Resources in the World High-temperature hydrothermal resources (>150  C) are unevenly distributed in the world. Figure 11 shows global topography, global bathymetry (topography of ocean floors), plate boundaries, active volcanoes (Siebert and Simkin 2002), and representative geothermal power plants. High-temperature hydrothermal resources that are hot enough for the steam flash-type power generation are associated with active volcanic zones so that most of the geothermal power plants are developed in active volcanic zones. This is because sub-volcanic magma chambers or their sub-solidus equivalents (hot intrusive bodies) are serving for geothermal heat sources for the high-temperature hydrothermal reservoirs. Actually, young plutonic bodies are often penetrated by geothermal wells not only in the Kakkonda geothermal field, Japan, as described in the preceding section, but also in other geothermal fields such as The Geysers, in California, USA; Tongonan and Palinpinon, in the Philippines; and Mutnovsky, in Russia (Muraoka 1993). Active volcanic zones are associated with two types of plate boundaries: spreading and convergent. A typical spreading plate boundary is mid-ocean ridges. In Fig. 11, mid-ocean ridges are not represented as volcanoes, but they are the largest volcanoes on the earth that are lineally continued. Their geothermal development has been difficult so far because of the high cost of submarine exploitations. As a result, most



Fig. 10 Chemistry of cutting samples from the contact metamorphic aureole of the Kakkonda Granite and high concentration of low-angle fractures (Muraoka and Ohtani 2000)

of the geothermal power plants are built in the volcanic zones of the convergent plate boundaries. One exception is the Great Rift Valley in the eastern Africa, where spreading plate boundaries appear on shore. The other exception is the Salton Sea and Cerro Prieto geothermal fields in the western North America, where spreading plate boundaries or their transform faults appear on shore. The last exception is Iceland, which is situated on the mid-Atlantic ridge where the ridge appears above sea level because of the strong volcanic activity in the hot spot. Most of geothermal power plants are developed in the volcanic zones of subduction zones. Particularly, the largest subduction zones are found in the circum-Pacific regions. Although the first geothermal power generation was accomplished in Italy in 1904, the largest geothermal power country is currently the USA, the second is the Philippines, and the third is Indonesia. All of these are situated along the circum-Pacific regions. Geothermal electricity plays a very important role in Central America, where 10 % or higher ratios of electricity are supplied from geothermal power generation in many countries. It seems curious that very few geothermal power plants have been developed in South America, in spite of the huge geothermal potential in the Andes. The Copahue geothermal power plant used to operate in Argentina, but it has been terminated. No geothermal power plant is currently operating in South America. In other words, South America has reserved a large potential for geothermal power development in the near future.

Global Geothermal Resource Potentials Assessment of global geothermal resource potentials is hard to accomplish because subsurface exploration data with uniform accuracy are seldom available on a global scale. However, a rough estimate is


Fig. 11 Global topography, global bathymetry, plate boundaries, active volcanoes, and representative geothermal power plants (Topographic data are taken from Lindquist et al. (2004), drawing is made with GMT by Wessel and Smith (1998), and active volcanoes are taken from Siebert and Simkin (2002))




Table 1 Number of active volcanoes and estimated geothermal potential for electrical generation in eight representative countries. The number of active volcanoes is taken from Siebert and Simkin (2002), where submarine volcanoes are excluded

Country        Number of active volcanoes   Assessed hydrothermal power potentials (MW)   References
USA            163                          39,090                                        William et al. (2008)
Indonesia      137                          27,140                                        Darma et al. (2010)
Japan           93                          23,470                                        Muraoka et al. (2008)
Philippines     46                           6,000                                        Wright (1999)
Mexico          39                           6,000                                        Mulas de Pozo et al. (1985)
Iceland         27                           5,800                                        Palmason et al. (1985)
New Zealand     12                           3,650                                        Lawless (2002)
Italy           12                           1,500                                        Buonasorte et al. (2007)

[Figure: assessed hydrothermal potential (0–45,000 MW) versus number of active volcanoes (0–180) for the eight countries in Table 1, with the regression line E = 221.7v (R^2 = 0.9598).]
Fig. 12 Correlation between the number of active volcanoes and estimated geothermal potential in eight representative countries slightly modified from Stefansson (2005). The number of active volcanoes is taken from Siebert and Simkin (2002), where submarine volcanoes are excluded

However, a rough estimate is possible by analogical reasoning, and hydrothermal power resources, EGS power resources, and direct-use resources are estimated here in this way. Stefansson (2005) estimated the world geothermal resource base from well-known data on active volcanoes (Siebert and Simkin 2002). Hydrothermal resources hot enough for steam flash-type power generation are normally associated with active volcanoes, so his method is an excellent piece of analogical reasoning based on well-established data. Although the size of active volcanoes varies from the 100-km-long Toba caldera in Indonesia to 0.6-km-long maars (explosion craters without remarkable tuff rings in their surroundings) such as Megata in Japan, these size differences may cancel one another in the statistics. Here, the original estimate is modified because some data have been updated. Table 1 and Fig. 12 show the relationship between the number of active volcanoes and the assessed hydrothermal power resources in eight representative countries, modified from the original paper (Stefansson 2005). The numbers of active volcanoes are taken from the catalog of Siebert and Simkin (2002), where submarine volcanoes are excluded, and the assessed hydrothermal power potentials are taken from the references listed in Table 1.


A regression equation is obtained between the number of active volcanoes and the assessed hydrothermal power potential in the eight representative countries:

E = 221.7 v    (4)

where

• E = hydrothermal resource potential for geothermal power generation (MWe)
• v = number of active volcanoes, excluding submarine volcanoes

The maximum exploitation depth underlying Eq. 4 is not strictly defined, because assessment conditions differ from country to country, but it is roughly assumed to be 4 km. Equation 4 is useful for estimating hydrothermal resource potentials where subsurface exploration data are not fully available. If the global number of active volcanoes, 1,406 excluding submarine volcanoes, is inserted into this equation, about 312 GWe is obtained as the hydrothermal resource potential for geothermal power generation in the world.
Only a few countries have estimated their EGS resources. The EGS resources are, however, almost proportional to the land area considered. Therefore, the estimate for the USA (Tester et al. 2006) can be used as a reference and extrapolated to the world according to the ratio of land areas. Tester et al. (2006) estimated EGS resources to be at least 100 GWe in the USA for exploitation to a depth of 10 km. The land areas of the world and the USA are 148,890,000 km2 and 9,826,635 km2, respectively, so about 1,500 GWe is obtained as the global EGS resource potential.
Assessment of direct-use resources is far more difficult because of the variety of uses: baths, swimming pools, snow melting, hot-water supply, space heating, greenhouses, and drying foods. It therefore requires some simplification. Stefansson (2005) estimated hydrothermal resources below 130 °C for direct use to be 4,400 GWth, based on a resource frequency distribution that follows a power function. This seems an excellent estimate, because the number is almost 10 % of the annual global terrestrial heat-flow energy. As described in the earlier section, the annual global terrestrial heat-flow energy is Ehf = 1.3 × 10^21 J/year; 10 % of this is 1.3 × 10^20 J/year, which is equivalent to 4,120 GWth. Therefore, the estimate of 4,400 GWth for direct-use resources seems reasonable.
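To make the chain of estimates above easy to check, the short Python sketch below refits the through-origin regression of Eq. 4 to the Table 1 data and then reproduces the three global figures quoted in the text (about 312 GWe of hydrothermal potential, about 1,500 GWe of EGS potential, and about 4,120 GWth for 10 % of the terrestrial heat flow). It is only a numerical illustration of the arithmetic already given; all inputs are taken from Table 1, Tester et al. (2006), and the heat-flow value quoted earlier in the chapter.

# Numerical check of the analogical resource estimates (inputs from Table 1 and the text).

volcanoes = [163, 137, 93, 46, 39, 27, 12, 12]                        # active volcanoes per country
potential_mw = [39090, 27140, 23470, 6000, 6000, 5800, 3650, 1500]    # assessed hydrothermal potential (MWe)

# Least-squares slope of a line through the origin, E = a * v
a = sum(v * e for v, e in zip(volcanoes, potential_mw)) / sum(v * v for v in volcanoes)
print(f"regression slope a = {a:.1f} MWe per volcano")                # ~221.7, as in Eq. 4 and Fig. 12

# Extrapolation to the 1,406 active volcanoes of the world (submarine volcanoes excluded)
world_hydrothermal_gwe = a * 1406 / 1000
print(f"global hydrothermal potential ~ {world_hydrothermal_gwe:.0f} GWe")    # ~312 GWe

# EGS: scale the US estimate (>= 100 GWe to 10 km depth) by the ratio of land areas
world_egs_gwe = 100 * 148_890_000 / 9_826_635
print(f"global EGS potential ~ {world_egs_gwe:.0f} GWe")              # ~1,500 GWe

# Direct use: 10 % of the annual terrestrial heat flow (1.3e21 J/year) expressed as continuous power
seconds_per_year = 365.25 * 24 * 3600
direct_use_gwth = 0.10 * 1.3e21 / seconds_per_year / 1e9
print(f"10 % of terrestrial heat flow ~ {direct_use_gwth:.0f} GWth")  # ~4,120 GWth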

Present State of Geothermal Development in the World

The present state of geothermal development in the world is briefly described here for geothermal power generation (Bertani 2010) and for direct use (Lund et al. 2010). The installed capacity of geothermal power generation in the world was 10,715 MWe as of 2010, and the growth rate during the preceding 5 years was the second greatest on record, after that of the early 1980s, as shown in Fig. 13. The electricity produced during the year 2010 was 67,246 GWh (Bertani 2010). The capacity factor of geothermal power plants, i.e., the ratio of the electricity actually generated to that which would be generated at continuous full-capacity operation, is 0.72 worldwide. This capacity factor is remarkably high compared not only with other renewable electricity sources but also with thermal or nuclear power sources. The five largest geothermal power-generation countries – the USA, the Philippines, Indonesia, Mexico, and Italy – account for about 75 % of the world geothermal power capacity (Table 2).
The installed geothermal capacity for direct utilization at the end of 2009 was 50,583 MWth, and the thermal energy used was 438,071 TJ/year (121,696 GWh/year), as shown in Fig. 14 (Lund et al. 2010). Again, the growth rate during the preceding 5 years was rapid compared with the past. The capacity factor of direct use was 0.27 at the end of 2009, and it has been decreasing with time. This is because geothermal heat pumps are rapidly spreading among the variety of direct utilization methods, and the capacity factor of geothermal heat pumps is normally less than 0.2.

Fig. 13 Growing installed geothermal power capacity in the world, in MW, by calendar year (Bertani 2010)

Almost all countries utilize geothermal heat directly in some form (Table 3), and countries in colder regions tend to use more geothermal heat.
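The worldwide capacity factors quoted above follow directly from the installed capacities and the annual energy figures. The short sketch below is a minimal check using only the numbers given in this section; it reproduces the 0.72 figure for power generation and the 0.27 figure for direct use.

# Capacity factor = annual energy actually delivered / (installed capacity x 8,760 h).

HOURS_PER_YEAR = 8760

# Geothermal power generation, 2010 (Bertani 2010)
cf_power = 67_246_000 / (10_715 * HOURS_PER_YEAR)             # MWh / (MW * h)
print(f"capacity factor, power generation: {cf_power:.2f}")   # ~0.72

# Direct use, end of 2009 (Lund et al. 2010); 438,071 TJ/year is about 121,700 GWh/year
direct_use_gwh = 438_071 / 3.6                                # 1 GWh = 3.6 TJ
cf_direct = direct_use_gwh * 1000 / (50_583 * HOURS_PER_YEAR)
print(f"capacity factor, direct use: {cf_direct:.2f}")        # ~0.27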

Geothermal Cascade Utilization Technology

To date, geothermal resources have tended to be used for a single purpose, and the spent fluid has been discharged to reinjection wells or, in the case of low-temperature direct use, to rivers. A more efficient approach is the repeated use of the energy in a cascade from high-temperature applications down to low-temperature ones, as shown in Fig. 15; this can be called geothermal cascade utilization. A typical example is seen in Iceland, where the voluminous hot water spent in geothermal power plants is transported by pipelines more than 25 km long to a tank on a hill above Reykjavik City and then distributed to each residence for space heating. More extensive cascade utilization is possible with steam flash-type power generation, binary cycle power generation, space heating, and snow melting, in descending order of temperature, as shown in Fig. 15. Cascade utilization can make geothermal resources several times more productive. For the development of geothermal cascade utilization, however, heat exchange technology and scale-inhibition technology are critically important.

Mitigation of Global Warming by Geothermal Development

If 70 % of the hydrothermal power resources and 20 % of the EGS resources were developed by 2050, the corresponding capacities would be 218 GWe and 300 GWe, respectively, for a total installed capacity of 518 GWe. If the capacity factor is assumed to be 0.72 and the unit reduction of carbon dioxide emission obtained by replacing oil-fired thermal power with geothermal power is assumed to be 817 g-CO2/kWh (Mongillo 2005), this could reduce carbon dioxide emissions by 2.67 Gton/year.
If 20 % of the direct-use geothermal resources were developed by 2050, the installed capacity would be 880 GWth. If the capacity factor is assumed to be 0.27 and the unit reduction of carbon dioxide emission obtained by replacing oil fuels with geothermal energy is assumed to be 409 g-CO2/kWh (Mongillo 2005), this could reduce carbon dioxide emissions by 0.50 Gton/year.


Table 2 Installed geothermal power capacity and electricity production in the world (Bertani 2010)

Country             Installed in 2005 (MW)   Energy in 2005 (GWh)   Installed in 2010 (MW)   Energy in 2010 (GWh)
Argentina           0                        0                      0                        0
Australia           0.2                      0.5                    1.1                      0.5
Austria             1.1                      3.2                    1.4                      3.8
Canada              0                        0                      0                        0
Chile               0                        0                      0                        0
China               28                       96                     24                       150
Costa Rica          163                      1,145                  166                      1,131
El Salvador         151                      967                    204                      1,422
Ethiopia            7.3                      0                      7.3                      10
France              15                       102                    16                       95
Germany             0.2                      1.5                    6.6                      50
Greece              0                        0                      0                        0
Guatemala           33                       212                    52                       289
Honduras            0                        0                      0                        0
Hungary             0                        0                      0                        0
Iceland             202                      1,483                  575                      4,597
Indonesia           797                      6,085                  1,197                    9,600
Italy               791                      5,340                  843                      5,520
Japan               535                      3,467                  536                      3,064
Kenya               129                      1,088                  167                      1,430
Mexico              953                      6,282                  958                      7,047
Nevis               0                        0                      0                        0
New Zealand         435                      2,774                  628                      4,055
Nicaragua           77                       271                    88                       310
Papua New Guinea    6                        17                     56                       450
Philippines         1,930                    9,253                  1,904                    10,311
Portugal            16                       90                     29                       175
Romania             0                        0                      0                        0
Russia              79                       85                     82                       441
Spain               0                        0                      0                        0
Slovakia            0                        0                      0                        0
Thailand            0.3                      1.8                    0.3                      2
Netherlands         0                        0                      0                        0
Turkey              20                       105                    82                       490
USA                 2,564                    16,840                 3,093                    16,603
Total               8,933                    55,709                 10,715                   67,246

Together, geothermal electricity and direct use could thus reduce carbon dioxide emissions by 3.17 Gton/year, which is about 11 % of the present worldwide emission.
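The 2.67 Gton/year figure for the power-generation side follows from capacity × capacity factor × hours per year × emission factor. The sketch below reproduces that calculation with the assumptions stated above; the 0.50 Gton/year figure for direct use is taken from the text as given rather than recomputed, since its underlying conversion is not spelled out in this chapter.

# Avoided CO2 from geothermal power generation by 2050, under the assumptions given in the text.

installed_gwe = 0.70 * 312 + 0.20 * 1500       # 70 % of hydrothermal + 20 % of EGS potential
capacity_factor = 0.72
emission_factor = 817                          # g-CO2 per kWh of displaced oil-fired generation (Mongillo 2005)

generation_kwh = installed_gwe * 1e6 * capacity_factor * 8760    # GW -> kW, times hours per year
avoided_gt = generation_kwh * emission_factor / 1e15             # g -> Gton

print(f"installed capacity: {installed_gwe:.0f} GWe")                     # ~518 GWe
print(f"avoided CO2 from power generation: {avoided_gt:.2f} Gton/year")   # ~2.67

# Adding the 0.50 Gton/year quoted for direct use gives ~3.17 Gton/year in total.
print(f"total with direct use: {avoided_gt + 0.50:.2f} Gton/year")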



Fig. 14 Growing direct geothermal use in the world, in TJ/year (Lund et al. 2010)

Table 3 Direct geothermal utilization in the world (Lund et al. 2010)

Country                   Capacity (MWt)   Annual use (TJ/year)   Annual use (GWh/year)   Capacity factor
Albania                   11.48            40.46                  11.20                   0.11
Algeria                   55.64            1,723.13               478.70                  0.98
Argentina                 307.47           3,906.74               1,085.30                0.40
Armenia                   1.00             15.00                  4.20                    0.48
Australia                 33.33            235.10                 65.30                   0.22
Austria                   662.85           3,727.70               1,035.60                0.18
Belarus                   3.42             33.79                  9.40                    0.31
Belgium                   117.90           546.97                 151.90                  0.15
Bosnia and Herzegovina    21.70            255.36                 70.90                   0.37
Brazil                    360.10           6,622.40               1,839.70                0.58
Bulgaria                  98.30            1,370.12               380.60                  0.44
Canada                    1,126.00         8,873.00               2,464.90                0.25
Caribbean islands         0.10             2.78                   0.80                    0.85
Chile                     9.11             131.82                 36.60                   0.46
China                     8,898.00         75,348.30              20,931.80               0.27
Columbia                  14.40            287.00                 79.70                   0.63
Costa Rica                1.00             21.00                  5.80                    0.67
Croatia                   67.48            468.89                 130.30                  0.22
Czech Republic            151.50           922.00                 256.10                  0.19
Denmark                   200.00           2,500.00               694.50                  0.40
Ecuador                   5.16             102.40                 28.40                   0.63
Egypt                     1.00             15.00                  4.20                    0.48
El Salvador               2.00             40.00                  11.10                   0.63
Estonia                   63.00            356.00                 98.90                   0.18
Ethiopia                  2.20             41.60                  11.60                   0.60
Finland                   857.90           8,370.00               2,325.20                0.31
France                    1,345.00         12,929.00              3,591.70                0.30
Georgia                   24.51            659.24                 183.10                  0.85
Germany                   2,485.40         12,764.50              3,546.00                0.16
Greece                    134.60           937.80                 260.50                  0.22
Guatemala                 2.31             56.46                  15.70                   0.78
Honduras                  1.93             45.00                  12.50                   0.74
Hungary                   654.60           9,767.00               2,713.30                0.47
Iceland                   1,826.00         24,361.00              6,767.50                0.42
India                     265.00           2,545.00               707.00                  0.30
Indonesia                 2.30             42.60                  11.80                   0.59
Iran                      41.61            1,064.18               295.60                  0.81
Ireland                   152.88           764.02                 212.20                  0.16
Israel                    82.40            2,193.00               609.20                  0.84
Italy                     867.00           9,941.00               2,761.60                0.36
Japan                     2,099.53         25,697.94              7,138.90                0.39
Jordan                    153.30           1,540.00               427.80                  0.32
Kenya                     16.00            126.62                 35.20                   0.25
Korea (South)             229.30           1,954.65               543.00                  0.27
Latvia                    1.63             31.81                  8.80                    0.62
Lithuania                 48.10            411.52                 114.30                  0.27
Macedonia                 47.18            601.41                 167.10                  0.40
Mexico                    155.82           4,022.80               1,117.50                0.82
Mongolia                  6.80             213.20                 59.20                   0.99
Morocco                   5.02             79.14                  22.00                   0.50
Nepal                     2.72             73.74                  20.50                   0.86
Netherlands               1,410.26         10,699.40              2,972.30                0.24
New Zealand               393.22           9,552.00               2,653.50                0.77
Norway                    3,300.00         25,200.00              7,000.60                0.24
Papua New Guinea          0.10             1.00                   0.30                    0.32
Peru                      2.40             49.00                  13.60                   0.65
Philippines               3.30             39.58                  11.00                   0.38
Poland                    281.05           1,501.10               417.00                  0.17
Portugal                  28.10            386.40                 107.30                  0.44
Romania                   153.24           1,265.43               351.50                  0.26
Russia                    308.20           6,143.50               1,706.70                0.63
Serbia                    100.80           1,410.00               391.70                  0.44
Slovak Republic           132.20           3,067.20               852.10                  0.74
Slovenia                  104.17           1,136.39               315.70                  0.35
South Africa              6.01             114.75                 31.90                   0.61
Spain                     141.04           684.05                 190.00                  0.15
Sweden                    4,460.00         45,301.00              12,584.60               0.32
Switzerland               1,060.90         7,714.60               2,143.10                0.23
Tajikistan                2.93             55.40                  15.40                   0.60
Thailand                  2.54             79.10                  22.00                   0.99
Tunisia                   43.80            364.00                 101.10                  0.26
Turkey                    2,084.00         36,885.90              10,246.90               0.56
Ukraine                   10.90            118.80                 33.00                   0.35
UK                        186.62           849.74                 236.10                  0.14
USA                       12,611.46        56,551.80              15,710.10               0.14
Venezuela                 0.70             14.00                  3.90                    0.63
Vietnam                   31.20            92.33                  25.60                   0.09
Yemen                     1.00             15.00                  4.20                    0.48
Total                     50,583.12        438,070.66             121,695.90              0.27


Pros (continued): … 180 °C; 99 vol.% can be achieved
Cons: Process consumes considerable energy; solvent degradation and equipment corrosion occur in the presence of O2; concentrations of SOx and NOx in the gas stream combine with the amine to form irregenerable, heat-stable salts; Rectisol™ refrigeration costs can be high

Ammonia process
Pros: Lower heat of regeneration than MEA; higher net CO2 transfer capacity than MEA; stripping steam not required; offers multi-pollutant control
Cons: Ammonium bicarbonate decomposes at about 60 °C (140 °F), so the temperature in the absorber must be kept below this; ammonia is more volatile than MEA and often gives an ammonia slip into the exit gas; ammonia is consumed through the irreversible formation of ammonium sulfates and nitrates as well as by removal of HCl and HF

Membrane technology
Pros: No regeneration energy is required; simple modular system; no waste streams
Cons: Membranes can be plugged by impurities in the gas stream; preventing membrane wetting is a major challenge; the technology has not been proven industrially

Table 17 Future directions of chemical absorption technologies for CO2 capture

Technology for CO2 capture   Future direction
Amine scrubbing              Absorbent studies: (1) set an evaluation criterion for the selection of amines; (2) mixed amines; (3) ionic liquid; (4) new solvents
                             Absorber column studies: (1) new packings; (2) high-efficiency reactors
                             Process integration and optimization: (1) heat integration; (2) water balance
Ammonia process              Optimization of the regeneration process; control of ammonia slip; selection of additives for the ammonia solution
Membrane technologies        Plugging mechanism of the membrane and control measures; wetting mechanism of the membrane and control measures; selection and pretreatment of the membrane; optimization of the process


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_39-2 # Springer Science+Business Media New York 2015

Oxy-Fuel Firing Technology for Power Generation

Edward John (Ben) Anthony*
Natural Resources Canada, CanmetENERGY, Ottawa, ON, Canada
*Email: [email protected]

Abstract

In order to generate pure streams of CO2 suitable for sequestration/storage, various routes are possible, involving either precombustion strategies, such as the use of gasification technology combined with shift reactors to produce H2, or post-combustion strategies, such as CO2 scrubbing with, for example, amine-based carriers. One of the more direct approaches is to carry out the combustion in pure or nearly pure oxygen (oxy-fuel combustion) so that the combustion gases consist primarily of CO2 and H2O, resulting in almost complete CO2 capture. Until recently, the primary avenue for deploying this technology was with conventional pulverized fuel-fired boilers, and there is already one large demonstration plant operating in Europe, with more planned for the future. More recently, however, oxy-fired fluidized bed combustion (FBC) has also become increasingly important as a potential technology, offering as it does fuel flexibility and the possibility of firing local or indigenous fuels, including biomass, in a CO2-neutral manner. Both oxy-fuel combustion technologies are examined here, considering factors such as their economics and potential for improvement, as well as challenges to the technology, including the need to generate CO2 streams of suitable purity for pipeline transport to available sequestration sites. Finally, the emission issues for both classes of the technology are discussed.

Introduction

The idea that anthropogenic CO2 could cause significant global warming was first presented over 114 years ago by Svante Arrhenius (1896); by 1907, the use of fossil fuels as a potent cause of CO2 emissions had also been clearly spelled out: “The enormous combustion of coal by industrial establishments suffices to increase the percentage of carbon dioxide in the air to an appreciable amount” (Arrhenius 1907), albeit that Arrhenius saw global warming as a potential benefit. Unfortunately, it has taken a further 60 or more years for the potential for damage due to these phenomena to be universally recognized (Houghton 2004; Edwards 2010). Against this background, world populations have been burgeoning, and the use of fossil fuels to meet mankind’s energy needs has increased and shows no signs of stopping in the near future. To further complicate the picture, many countries have aging thermal power plants, most of which will have to be replaced in the next couple of decades. At this point, the major hope for continuing to use fossil fuels for widespread power generation in an environmentally benign manner lies in the technology known as carbon capture and sequestration (CCS), as renewable and nuclear technologies would not be able to fill the gap quickly if fossil fuels were simply abandoned. In practice, this means that the norm may well be to build thermal power plants that incorporate back-end technologies for CO2 capture, following a carbon-capture-ready philosophy (Irons et al. 2007). Thus, if “new” technologies are to be introduced on a wide scale, they had best either be already commercialized, such as gasification, or in a near-market-ready state. Fortunately, for oxy-fuel combustion technology, in which the fuel is converted in a stream of nearly pure oxygen (90–95 % plus) using pulverized fuel (PF) or pulverized coal (PC), research is now well underway, with various larger pilot-scale demonstrations either completed or under construction, as will be seen later in this chapter.


Fig. 1 Schematic of generic oxy-fired boiler configuration: an air separation unit supplies oxygen (with nitrogen vented) to the boiler burning coal; part of the flue gas is recycled, and the remainder passes through flue gas clean-up (removing particulates and condensed water) and a CO2 separation unit (venting nitrogen and oxygen) to yield the recovered CO2 stream

The oxy-fuel technology, in its currently typical configuration, is presented schematically in Fig. 1. As an alternative, there is also the possibility of retrofitting existing boilers, and here too oxy-fuel technology can potentially meet the challenge in an economically attractive fashion when compared with other CO2-neutral options (Farley 2006). Finally, it should also be added that there are side benefits of oxy-fuel combustion, the most obvious being that thermal NOx levels are expected to be significantly reduced, since the N2 in air is largely absent from the oxidant.
This chapter focuses on oxy-fuel PF and circulating fluidized bed (CFB) combustion. It will not discuss oxy-fuel technology in which water is used instead of CO2 to moderate flame temperatures, as any such technology is much further from commercialization than oxy-fuel technology with flue gas recycle (FGR) and/or may pose excessive technical challenges at the present time (Zheng et al. 2009). Nor will this chapter consider hybrid systems combining, for instance, amine scrubbing and oxy-fuel, although they have been proposed (Doukelis et al. 2008), for the same reason, namely, that such concepts are probably significantly further removed from commercialization than “conventional” oxy-fuel technology.

Oxy-Fuel Pulverized Fuel Technology

Oxy-fuel work in PF systems traces its origins to pioneering research carried out by Argonne National Laboratory in the 1980s (Weller et al. 1985). Subsequently, in the 1990s, significant oxy-fuel research and development (R&D) work was initiated elsewhere, with a number of small pilot plant programs, including those at CanmetENERGY (Canada), Air Liquide (USA), and the International Flame Research Foundation (IFRF) R&D program in Ijmuiden (The Netherlands), looking initially at natural gas firing (Buhre et al. 2005). In addition to the early small-scale pilot plant work, there were various economic evaluations of the technology versus back-end scrubbing, primarily for natural gas-fired systems, and there are now at least three major reviews in the open literature describing such developments (Buhre et al. 2005; Toftegaard et al. 2010; Davidson and Stanley 2010). The economic studies often suggested that oxy-firing was not a preferred technology or, if so, only slightly better, with efficiencies in the mid-40 % range and costs somewhat higher than back-end capture of CO2 for natural gas, and there has been vigorous debate about the results of such evaluations. The paper of Kvamsdal et al. (2007) is typical of such studies, looking at nine potential concepts and ranking oxy-firing of natural gas as only of moderate performance, while noting the significant energy losses associated with the cryogenic separation of air (6.6 % in that study, with a further loss of 2.4 % for compression, when expressed in percent of the net plant efficiency based on the fuel’s lower heating value).


More recent studies also confirm efficiency losses of 8–10 % (on the same basis) for both natural gas- and coal-fired systems (Liszka and Ziębik 2010). Studies that focus more on coal seem to show a fairly similar picture in terms of overall economics, that is, that oxy-fuel PC looks comparable to PC technology with back-end CO2 capture. Thus, Bouillon et al. (2009) came to the conclusion that the penalties for both post-combustion and oxy-fuel combustion-integrated processes were around 44 €/t CO2. In a later study, Hadjipaschalis et al. (2009) carried out an analysis for a 500 MW steam plant, with an assumed efficiency of 33.5 %, a generation capacity factor of 85 %, and a 90 % CO2 capture rate. The results of their study indicate that the oxy-fuel combustion plant represents a competitive technology, which currently seems to be the most economical, having the lowest electricity costs and lower CO2 avoidance costs. Similarly, a major Canadian study done for the Canadian Clean Power Coalition (CCPC) in 2007 (Xu et al. 2007) came to the following conclusions:

• The oxy-fuel combustion technology was found to have technical and environmental benefits comparable to post-combustion capture, and it was the most economic option in one of the cases studied, which was based on a greenfield site in Alberta utilizing low-S coal, where flue gas desulfurization (FGD) was not required for oxy-fuel but was for post-combustion capture. The assumption in this case was that most SO2 emissions would be captured in the CO2 compression phase. In the two other sites studied (Saskatchewan and Nova Scotia), the amine-based post-combustion options were found to be more economic, but the difference was marginal.
• It was also found that parasitic energy losses directly related to CO2 capture were the largest single cost item, closely followed in most cases by capital charges. These costs were similar for both oxy-fuel and amine-based capture if FGDs were present in both cases. The capital and operating costs of the air separation unit (ASU) represent a major problem for oxy-fuel economics, and improvements in this area will have major benefits.
• Operating and maintenance (O&M) costs were found to make up a relatively minor portion of the total charges.
• The CCPC study also found that oxy-fuel combustion was expected to capture slightly more CO2 than post-combustion technology, which tended to help its cost-per-tonne-captured figures.
• Retrofits based on oxy-fuel were found to be significantly more expensive than back-end retrofits that left the existing boiler plant intact.

Finally, Farley (2006) also quotes costs which suggest that post-combustion capture and oxy-fuel technologies are comparable, but, in apparent contrast to the 2007 CCPC study (Xu et al. 2007), suggests that a significant advantage lies in the fact that oxy-fuel technology can be retrofitted to existing plants (Farley 2006). From the above citations, it is clear that the oxy-fuel combustion concept is comparable with post-combustion capture in terms of cost, and it represents a midterm solution with considerable potential for commercialization, as opposed to many of the schemes evaluated where scale-up presented significant uncertainties. It is also clear that very significant economic gains could be made if oxygen could be produced more cheaply.
Buhre et al. (2005) summarized some key conclusions about oxy-firing as follows:

• Typically 31 % oxygen, rather than 21 % as for air-fired boilers, must be used to achieve a similar adiabatic flame temperature, and achieving this level requires recycling about 60 % of the flue gas.
• The much higher CO2 and H2O proportions in the flue gases result in increased emissivity, so a retrofitted boiler will have radiative heat transfer similar to an air-fired boiler for an O2 proportion of 30 %.


Table 1 Comparison of burner gas compositions (wt%) (From Zhou and Moyeda 2010)

Composition    Conventional air fired    Oxy-fuel with wet FGR
O2             3.2                       3.1
CO2            14.7                      69
H2O            5.85                      27.5
NOx (ppm)      154                       82

• Gas flows will be reduced to about 80 % of those in the air-fired case.
• Emissions of minor species such as SO2 and NO will be higher in the recycled gases, unless they are removed in the recycle process.

Such conclusions are useful but depend on case-specific factors such as how much air leaks into the boiler (1 % or less is desired) and details of the auxiliary systems. The choice of whether flue gas recycle should be wet or dry is also important. Zheng et al. (2009) suggest, as an approximate guideline, that coals with less than 1 % sulfur content are suitable for wet flue gas recycle, while coals with higher sulfur levels are not, because of concerns over corrosion associated with high SOx levels. Zhou and Moyeda (2010) suggest that typical wet flue gas recycle should be in the range of 70–75 %. Wet flue gas recycle lowers the adiabatic flame temperature of the gas, while dry recycle allows higher flame temperatures but reduces the overall gas velocity. Table 1 gives an expected comparison of air-fired and oxy-fuel combustion with wet flue gas recycle, taken from Zhou and Moyeda (2010). These authors also note that the efficiency of such a plant is less than that of a conventional plant without carbon capture but greater than that of an air-fired plant fitted with an amine system; as noted above, this is a conclusion on which there is considerable debate. Finally, a last comment to be made on such plants is that, since the oxygen used has to be produced cryogenically, and hence is expensive, excess oxygen levels should ideally be kept as low as possible.
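A simple way to see why recycle ratios of this magnitude are needed is a mixing balance on the oxidant stream: the burner oxidant is a blend of the (nearly pure) oxygen from the ASU and recycled flue gas, and the recycle fraction sets how far that oxygen is diluted. The sketch below is only an illustration of this balance; the 95 % oxygen purity, the 3 % residual O2 in the recycled flue gas, and the 30 % target are assumed round numbers consistent with the figures quoted above, and the result is expressed as the recycled-flue-gas fraction of the oxidant stream, which is a slightly different basis from the fraction-of-flue-gas-recycled figures quoted from Buhre et al. (2005) and Zhou and Moyeda (2010).

# Recycled-flue-gas fraction of the oxidant needed to dilute ASU oxygen to a target O2 level.
# Simple volumetric mixing balance; all compositions are illustrative assumptions.

x_o2_asu = 0.95       # O2 fraction of the ASU product (90-95 % plus, per the text)
x_o2_rfg = 0.03       # residual O2 in the recycled flue gas (typical excess-O2 level)
x_o2_target = 0.30    # target O2 fraction of the mixed oxidant (~30 %, per Buhre et al. 2005)

# Mixed oxidant: x_target = (1 - r) * x_asu + r * x_rfg, solved for the recycle fraction r
r = (x_o2_asu - x_o2_target) / (x_o2_asu - x_o2_rfg)
print(f"recycled flue gas fraction of the oxidant: {r:.0%}")   # ~71 %

With these round numbers, roughly 70 % of the oxidant stream is recycled flue gas, which is of the same order as the 60–75 % recycle figures quoted above, even though those figures are defined on the flue gas side rather than on the oxidant side.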

Emissions: SOx, NOx, and Other Micro-pollutants

As PC firing is the leading coal combustion technology, dating back to at least the 1940s when early attempts were made to use pulverized coal in fire-tube boilers, it is not surprising that an enormous amount is known about the emissions from such systems (Gunn and Horton 1989). The obvious differences between air firing and oxy-firing (with FGR) of pulverized coal are the high levels of CO2 (of the order of five times or more) and water (of the order of two to three times more) and the somewhat smaller flue gas volumes, which tend to concentrate the micro-pollutants in the flue gas stream. Other differences include the effect of the somewhat higher oxygen level in the burner, around 30 %, which together with the smaller flue gas volume allows an exit oxygen content similar to the air-fired case (Table 1), and possible effects on fuel devolatilization and char combustion due to the differences in the gaseous environment. Also, absolute emissions may well vary if the flame length, its temperature, and/or the gas concentrations change; it has been shown, for instance, that at high flue gas recycle ratios, flame temperatures may fall by as much as 100 °C or more (Smart et al. 2010). This affects the combustion process itself and reduces flame stability, although such effects are smaller for high-volatile coals, in line with the results of earlier work (Buhre et al. 2005). Much of the early work was done in thermogravimetric analyzers (TGAs) and other laboratory-scale devices with low heating rates, which are atypical of pulverized fuel firing. However, there is now more information from systems with realistic heating rates.


Thus, for instance, wire mesh tests done with two coals in an oxy-fuel environment (30 % O2/70 % CO2) showed evidence of a small rise in ignition temperature of around 20 °C for the coals used in that study (Qiao et al. 2010). Experiments in an entrained flow reactor (0.08 m diameter and 2 m length) with a bituminous coal failed to find any evidence of char–CO2 gasification reactions (Brix et al. 2010). In addition, unlike earlier studies, this one found no significant change in volatile production between air firing and oxy-firing, but the char burned faster in air than under oxy-fuel conditions, suggesting that the CO2 is important in reducing the burning rate when external mass transfer dominates the combustion process, which is also in agreement with other earlier work (Shaddix and Molina 2008).

NOx Production

The production of nitrogen oxides can occur in combustion systems by means of three mechanisms:

• Prompt NOx – where hydrocarbon radicals in the flame front react with the nitrogen in the combustion gases
• Thermal NOx – where molecular nitrogen reacts with oxygen at temperatures above about 1,000 °C to form NOx (the so-called Zeldovich mechanism)
• Fuel nitrogen reactions

Prompt NOx is more important in gaseous hydrocarbon flames, while thermal NOx would reasonably be expected to be substantially reduced, by a factor of about 20, due to the effective removal of nitrogen from the oxidant; fuel nitrogen mechanisms, however, can still be expected to be important and to give rise to significant NOx production. In a recent study using a quartz flow reactor, high CO2 levels were shown to compete for H atoms (the main source of chain branching and hence radical production) and thus to reduce the rate of oxidation of HCN (Giménez-López et al. 2010). Similar arguments concerning the change of OH/H radical concentrations in methane oxy-fuel flames have also been advanced from measurements and modeling by Mendiara and Glarborg (2009). Since HCN is an important intermediate in the production of NO, this implies that the elevated CO2 levels in oxy-fuel combustion should also decrease NO formation from fuel nitrogen. These results are interesting in that they contradict an earlier study which stated that the influence of CO2 on NOx was negligible (Okazi and Ando 1997). However, both measurements (Tan et al. 2006; Hjärtstam et al. 2009) and modeling (Cao et al. 2010) have shown reduced NOx levels, even without flue gas recycle, which itself also reduces NOx production. Another difference between the air and oxy-fuel cases is that CO levels can be higher for oxy-fuel systems, depending on flame temperature and the degree of recycle. However, in measurements of CO profiles when burning lignitic fuels in a small oxy-fuel combustor, Hjärtstam et al. (2009) showed that this did not necessarily translate into higher emissions of either CO or NO leaving the boiler.
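The factor-of-20 reduction expected for thermal NOx can be rationalized simply from the Zeldovich route's roughly first-order dependence on N2: removing nitrogen from the oxidant leaves only the N2 that enters through air in-leakage and with the fuel. The sketch below is only an order-of-magnitude illustration; the few percent of residual N2 assumed for the oxy-fired case is a hypothetical value representing air in-leakage, not a measured figure from this chapter.

# Order-of-magnitude estimate of the thermal-NOx reduction from removing N2 from the oxidant.
# Thermal (Zeldovich) NO formation is roughly first order in N2 at a given temperature and O-atom level.

x_n2_air_fired = 0.79   # N2 fraction of air supplied to an air-fired boiler
x_n2_oxy_fired = 0.04   # assumed residual N2 in the oxy-fired furnace gas (air in-leakage; illustrative)

reduction_factor = x_n2_air_fired / x_n2_oxy_fired
print(f"expected reduction in thermal NOx: ~{reduction_factor:.0f}x")   # ~20x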

SOx Production

Sulfur in fossil fuel combustion is released as SO2 and SO3, and except in the case of, for example, ashes with very high Ca contents (e.g., the French Gardanne lignite, which has an extremely high natural Ca/S molar ratio in its ash), more than 90 % of the S in the fuel will typically be found as SO2 in the flue gas (Smoot and Pratt 1979). Flue gas recycle will tend to concentrate micro-pollutants, and Fig. 2, from a CanmetENERGY small-scale pilot plant study at high temperatures appropriate to suspension firing, shows SO2 levels for oxy-fuel with recycled flue gas versus the air-fired case. SO2 is the dominant sulfur form, as even at lower temperatures the oxidation of SO2 to SO3 tends to be very slow and dependent on the presence of catalytic reactions.

Page 5 of 24

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_39-2 # Springer Science+Business Media New York 2015

Fig. 2 SO2 emissions (ppmv, lignite) as a function of distance from the burner tip (m) for air firing and O2/recycled flue gas firing in the CanmetENERGY oxy-fired pilot plant (oxy-fuel case at 35 % O2)

Table 2 Average measured SO2/SO3 concentrations in ppm (From Maier et al. 2008)

Firing mode    SO2 (ppm)    SO3 (ppm)
Air            733          8
Oxy-fuel       1,758        85

However, higher SO2 levels mean that fly ash components will tend to react more readily if they can sulfate (Sturgeon et al. 2009). In early studies, both Hu et al. (2000), who carried out experiments in a small flow reactor burning coal (1.1 % sulfur), and Croiset and Thambimuthu (2001), operating the CanmetENERGY oxy-fuel combustor with an Eastern US bituminous coal (1 % sulfur), reported a decrease in sulfur release for oxy-fuel combustion, ranging from 75 % without flue gas recycle to 65 % with recycle. Hu et al. (2000) suggested that the reduced SO2 levels they observed were due to ash reactions, the possible formation of other sulfur compounds (COS and/or H2S), or retention of sulfur in the char. The reaction of ash with sulfur in pulverized coal-fired systems is well established, especially for lignites and Western US coals (Smoot and Pratt 1979), although the ash content of the coal used in their experiments was only 2 %. The possibility that some of the sulfur is retained in the char is also reasonable for a small flow reactor, but suggestions of other sulfur compounds do not seem likely if oxidizing conditions prevail. Croiset and Thambimuthu (2001) ruled out capture in the ash and significant losses of SO2 in condensate from the flue gas recycle for their experiments and instead postulated the production of elevated SO3 levels. Subsequent experiments have not, however, suggested such dramatic reductions in SO2 release, at least where Ca levels in the ash are not high, but elevated levels of SO3 are indeed found. Thus, Tan et al. (2006) found that SO3 concentrations could be up to three or four times higher for oxy-fuel with wet flue gas recycling than for the corresponding air-fired situation, with levels consistently around 5 %, whereas normal levels would vary in the range of 1–5 %, depending on the sulfur content of the fuel, air/fuel mixing patterns, and excess air in the furnace. This picture for SO3 levels has also been confirmed in later experiments. Thus, Maier et al. (2008) found elevated levels of SO3 and indicated that the average SO3 concentrations in their experiments were in the range of 36–121 ppm, for an average SO2 concentration of 1,791 ppm and a flue gas moisture content of 28.9 vol.%. Table 2 gives average SO2 and SO3 levels for their experiments (note that SO2 levels are also elevated, as expected).


However, it is clear that an increase in SO3 up to about 5 % may occur, which is sufficient to raise the acid dew point, leading to a real potential for enhanced corrosion if flue gases fall below that temperature. Based on their data, Maier et al. (2008) predict an increase in the acid dew point of around 30 °C for oxy-fuel conditions, from 138 °C for air to 169 °C for oxy-fuel. Ahn et al. (2010) have also noted that SO3 levels are likely to be elevated, based on their experiments with a small pilot-scale combustor at the University of Utah, and that on average the SO3 levels under oxy-fuel-fired conditions are about four times higher than in the air-fired situation. Both of these studies are in agreement with the work of Fleig et al. (2009), who explored the SO3 levels to be expected for the oxy-fuel combustion of lignite with wet recycling and indicated that they should be about four times higher than in the air-fired case, with a resulting rise in the acid dew point of 20–30 °C. As a result of these higher SO2 and SO3 levels, there are concerns about corrosion and deposition on boiler walls and surfaces; however, research is still in its early days. Nonetheless, the large boiler companies have initiated research programs to generate information on these potential problems, and it can be expected that significant results will be produced in the next several years (Kung and Tanzosh 2008; Bordenet et al. 2008). It should also be added that, on the positive side, air pollution control (APC) devices may be reduced in size, leading to some cost reductions, and this is especially true for FGD if wet flue gas recycle is used. Concerning the use of FGD equipment, while there were initial questions about the effect of high CO2 concentrations on limestone effectiveness in FGD, Vattenfall’s experience showed that the SO2 removal rate and limestone usage are the same for the air and oxy-fuel cases (Strömberg et al. 2009). For a very detailed analysis of the issues relating to sulfation in oxy-fired PF systems, the interested reader is referred to a very recent review article by Stranger and Wall (2011).
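The link between higher SO3 and H2O levels and a higher acid dew point can be illustrated with a standard sulfuric acid dew point correlation. The sketch below uses one widely quoted form of the Verhoff–Banchero correlation (the coefficients are as commonly reproduced in the flue gas literature and should be treated as an assumption here, since the chapter itself does not give a correlation) together with the SO3 values of Table 2; the 8 vol.% moisture assumed for the air-fired case is a hypothetical round number, whereas the 28.9 vol.% for the oxy-fuel case is the value quoted above.

import math

def acid_dew_point_c(h2o_vol_frac, so3_ppm, pressure_mmhg=760.0):
    """Sulfuric acid dew point (deg C) from a commonly quoted Verhoff-Banchero form.

    1000/T[K] = 2.276 - 0.0294 ln(pH2O) - 0.0858 ln(pSO3) + 0.0062 ln(pH2O) ln(pSO3),
    with partial pressures in mmHg.
    """
    p_h2o = h2o_vol_frac * pressure_mmhg
    p_so3 = so3_ppm * 1e-6 * pressure_mmhg
    inv_t = (2.276 - 0.0294 * math.log(p_h2o) - 0.0858 * math.log(p_so3)
             + 0.0062 * math.log(p_h2o) * math.log(p_so3)) / 1000.0
    return 1.0 / inv_t - 273.15

# Air-fired case: SO3 = 8 ppm (Table 2), ~8 vol.% H2O (assumed)
print(f"air-fired dew point: ~{acid_dew_point_c(0.08, 8):.0f} C")     # ~133 C (Maier et al. report 138 C)

# Oxy-fuel case: SO3 = 85 ppm (Table 2), 28.9 vol.% H2O (Maier et al. 2008)
print(f"oxy-fuel dew point:  ~{acid_dew_point_c(0.289, 85):.0f} C")   # ~169 C, matching the reported value

With these inputs, the estimated rise in the acid dew point is of the order of 30 °C, consistent with the 20–30 °C range quoted from Maier et al. (2008) and Fleig et al. (2009).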

Major Pilot Plant Developments
There are now a large number of major pilot plant and pre-commercial demonstration projects worldwide, including:
• Germany – 30 MWth demonstration unit (Strőmberg et al. 2009, 2010; Vattenfall 2010)
• USA – Jupiter Oxygen Corporation, 15 MWth burner test facility, National Energy Technology Laboratory (NETL) (Ochs et al. 2009; NETL 2010); demonstration of a 30 MWth oxy-fired unit (the Clean Environmental Development Facility in Alliance, Ohio) by Babcock & Wilcox Power Generation Group, Inc. (B&W PGG) and Air Liquide (McDonald et al. 2008; Air Liquide Press 2008)
• United Kingdom – 40 MWth Doosan Babcock demonstration unit (Hesselmann et al. 2009)
• France – 30 MWth Lacq Project, Total, in partnership with Air Liquide (Total 2010)
• Australia – Callide Oxy-fuel Project (30 MW unit) and various other projects (Cook 2009)
Another possibility is retrofitting older conventional power plants to operate in an oxy-fuel mode (Tigges et al. 2009). Moreover, there is considerable research underway to develop cheaper methods of oxygen production, such as membrane technologies, which have the potential to make oxy-fuel technologies even more attractive, either by themselves (especially for smaller-scale applications) (Carbo et al. 2009) or possibly by carrying out a first-stage air separation prior to the cryogenic separation, thus reducing the size of the unit and the energy used in making oxygen at 90 %+ purity (Burdyny and Struchtrup 2010). However, for now at least, it is clear that the classic approach of cryogenic air separation can, in principle, meet the needs of a new generation of oxy-fuel power plants (Allam et al. 2005), and the use of this technology is generally assumed in this chapter.
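To give a feel for why the cost and energy of oxygen production matter so much, the sketch below estimates the auxiliary power a cryogenic air separation unit (ASU) might draw for an oxy-fuel coal plant. All of the input values (coal heating value, plant efficiency, oxygen demand, and specific separation energy) are illustrative assumptions for this sketch, not figures taken from this chapter.

```python
# Rough, illustrative estimate of ASU auxiliary load for an oxy-fuel coal plant.
# All input values below are assumptions for illustration only.
coal_lhv_mj_per_kg = 25.0       # assumed lower heating value of a bituminous coal
plant_efficiency = 0.40         # assumed gross electrical efficiency (LHV basis)
o2_demand_kg_per_kg_coal = 2.2  # assumed stoichiometric-plus-excess O2 requirement
asu_kwh_per_tonne_o2 = 225.0    # assumed specific energy of a cryogenic ASU (~95 % O2)

coal_kg_per_mwh = 3600.0 / (plant_efficiency * coal_lhv_mj_per_kg)   # kg coal per MWh gross
o2_tonne_per_mwh = coal_kg_per_mwh * o2_demand_kg_per_kg_coal / 1000.0
asu_kwh_per_mwh = o2_tonne_per_mwh * asu_kwh_per_tonne_o2

print(f"Coal use:  {coal_kg_per_mwh:.0f} kg/MWh gross")
print(f"O2 demand: {o2_tonne_per_mwh:.2f} t/MWh gross")
print(f"ASU load:  {asu_kwh_per_mwh:.0f} kWh per MWh gross "
      f"(~{asu_kwh_per_mwh / 10:.0f} % of gross output)")
```

Under these assumed values the ASU consumes on the order of one-sixth of gross output, which is why lower-energy separation routes, or reducing the separation duty, feed directly into plant economics.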


Other Issues for Oxy-Fuel Firing in Pulverized Fuel Combustion Systems
Biomass Firing in Oxy-Fuel Combustion Systems
The idea of using biomass in pulverized coal blends is well established, and there are efforts in most countries to supplement the coal fuel supply with biomass (a carbon-neutral fuel) as an approach to reducing carbon emissions. In Canada, for instance, trials have been undertaken with up to 100 % biomass firing in the 227 MWe Atikokan Generating Station in Ontario (Marshall et al. 2010). Unfortunately, there are a number of issues, the first, and probably most important, being that there is usually insufficient biomass in a given area to supply a commercial-scale power plant. Second, the properties of biomass differ from those of coal, so torrefaction or pelletization before pulverization is effectively essential if such fuels are to be used directly, and modifications of the fuel feed system are necessary. On this point, it is instructive that the Atikokan unit had a dust explosion in its coal feeding system during preparations for further biomass trials (Marshall et al. 2010). One ingenious suggestion is that biomass, which typically has a high moisture content, might be co-fired with coal, thus providing an alternative to flue gas recycle as a means of moderating combustion temperatures; however, as noted above, the problem with such an approach would be the need to obtain sufficient biomass for the concept to be applied (Haykiri-Acma et al. 2010). Another issue for biomass is fouling: depending on the nature of the coal ash (i.e., whether it contains sufficient Ca in the form of CaO to carbonate), the fouling and deposit behavior of an ash might be different in an oxy-fuel environment compared to that in an air-fired one. Fryda et al. (2010) carried out experiments in a drop tube using biomass blends with a Russian and a South African coal. They report some small changes in K, Na, and Ca in the deposits depending on operating conditions and speculate that the "lower char temperature" in oxy-fuel combustion may be producing such effects. At present, general conclusions are probably premature, but it seems likely that the greatest barrier to using biomass in oxy-fuel combustion is simply insufficient supply, although other issues, such as the inhomogeneity of biomass, the potentially high alkali metal content of some types of biomass, and the cost of good-quality biomass (especially if it must be pelletized before pulverization), are also potential concerns.

Oxy-Fuel CFBC Combustion
Oxy-fuel combustion has been thoroughly studied for pulverized coal combustion, but to date relatively little attention has been paid to oxy-fuel circulating fluidized bed combustion (CFBC), although the concept was examined over 30 years ago for bubbling FBC (Yaverbaum 1977). More recently, the boiler companies Alstom and Foster Wheeler have explored the oxy-fuel CFBC concept using pilot-scale tests (Eriksson et al. 2007; Stamatelopoulos and Darling 2008). Alstom's work included tests in a unit of up to 3 MWth in size but did not involve recycle of flue gas (Liljedahl et al. 2006). Foster Wheeler's work (Eriksson et al. 2007) also involved pilot-scale testing, using a small (30–100 kW) CFBC owned and operated by VTT (Technical Research Centre of Finland), and this work, along with CanmetENERGY's work with its own 100 kW CFBC, appears to be the first in which units were operated in oxy-fuel mode with flue gas recycle. Foster Wheeler is currently developing its Flexi-burn™ technology, which would allow a CFBC boiler to operate in either air- or oxy-firing conditions (Eriksson et al. 2009). The advantages of CFBC technology are already well known in terms of its ability to burn a wide range of fuels, both individually and co-fired, to achieve relatively low NOx emissions (because temperatures are too low for significant thermal NOx production), and to accomplish SO2 removal with limestone (Grace et al. 1997). Another advantage of CFBC technology, in the context of oxy-fuel firing, is the fact


that hot solids are kept in the primary reaction loop by means of a hot cyclone. This solid circulation, in conjunction with the recycle of flue gas, potentially provides an effective means of controlling combustion and extracting heat during the combustion process, thus allowing either a significant reduction in the amount of recycled flue gas or, alternatively, the use of a much higher oxygen concentration in the combustor. These factors allow the economics of oxy-fired CFBC to be significantly improved over PC or stoker firing by reducing the size of the CFBC boiler island by as much as 50 % (Liljedahl et al. 2006). With regard to the scale-up of CFBC units above 300 MWe, both Foster Wheeler and Alstom are now offering much larger units, and Foster Wheeler has in operation a 460 MWe supercritical CFBC boiler (Stamatelopoulos and Darling 2008; Hotta et al. 2008). Advantages that are more difficult to quantify relate to the possibility of co-firing biomass, so that, in conjunction with CCS, the overall combustion process may result in a net reduction of anthropogenic CO2, and to the potential for the technology to be used with more marginal fuels as premium fossil fuel supplies wane. The co-firing option is a particularly interesting advantage of CFBC technology, since it is well established that CFBC can burn biomass and fossil fuels in any ratio from 0 % to 100 %, offering the possibility of using locally and seasonally available biomass fuels in a CO2-"negative" manner. The long-term availability of premium coal for a period of hundreds of years has also recently been called into doubt, with suggestions that coal production may peak well before the end of this century. Mohr and Evans (2009), for example, have developed a model which suggests that coal production will peak in 2034 on a mass basis and in 2026 on an energy basis. A good general discussion of these ideas can be found in Wikipedia (2010). In the event of such solid fuel shortages, fluidized bed combustion is ideally suited to exploit the many marginal coals and hydrocarbon-based waste streams available worldwide. Currently, R&D on oxy-fired CFBC technology is being undertaken in numerous countries, including Canada, Finland, Poland, China, and the USA. However, to date most test work has been done at small scale.
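One way to see why operating the combustor at a higher oxygen concentration shrinks the boiler island is to compare, very roughly, the gas volume that must pass through the furnace per unit of oxygen supplied. The sketch below does this for air firing and for two assumed oxy-fuel oxidant compositions; the oxygen fractions chosen are illustrative assumptions, not design values from this chapter.

```python
# Rough comparison of furnace gas throughput vs. oxidant O2 fraction.
# Assumes gas volume scales inversely with the O2 mole fraction of the oxidant,
# which ignores fuel-derived gases and is intended only as an illustration.
cases = {
    "air firing": 0.21,
    "oxy-fuel, moderate recycle (assumed)": 0.30,
    "oxy-CFBC, reduced recycle (assumed)": 0.45,
}

baseline = 1.0 / cases["air firing"]
for name, o2_fraction in cases.items():
    relative_flow = (1.0 / o2_fraction) / baseline
    print(f"{name:38s} O2 = {o2_fraction:.2f} -> relative gas flow ≈ {relative_flow:.2f}")
```

At around 45 % O2 in the oxidant, the furnace gas flow is roughly half that of the air-fired case, which is broadly consistent with the boiler-island size reduction of up to 50 % cited above, although in a real design heat extraction by the circulating solids, recycle temperature, and materials limits all enter the picture.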

Table 3 (continued) Photocatalytic CO2 reduction systems. For each entry the table lists the CO2 reduction catalyst, the photosensitizer (covalently bound in the multinuclear photocatalysts 16–25, or added separately in the intermolecular photosensitized catalyst systems), the electron/proton/additive source (e.g., TEA, TEOA, BNAH, BNAH-OMe, BIH), the light source (Xe or Hg lamps with various cutoff filters, or UV), the CO and HCO2H turnover numbers (TON), the solvent (DMF or MeCN), the quantum yield (φ), and the reference. The tabulated data are drawn from Grodkowski et al. (1997), Lehn and Ziessel (1982), Matsuoka et al. (1992, 1993), Hawecker et al. (1983), Ishida et al. (1990), Yanagida et al. (1995), Ogata et al. (1995a, b), Dhanasekaran et al. (1999), Gholamkhass et al. (2005), Schneider et al. (2011), Tamaki et al. (2012a, b, 2013a, b), Thoi et al. (2013), and Bonin et al. (2014a).
*Visible light not confirmed. TON are per molecule, not per catalytic site where applicable; a significant H2 produced; b significant MeOH produced; c low concentration; d estimated

A diverse array of catalysts known to be active electrocatalysts for CO2 reduction has been studied in photocatalytic systems with a variety of added photosensitizers (38–44). Re and Ru complexes similar to those used in the covalently linked multinuclear photocatalysts have also been examined in this way, but reactivity commonly decreases significantly (twofold in many cases) when the linking tether is removed. However, these systems still typically perform an order of magnitude better, in terms of TON, than mononuclear systems. The use of non-linked photosensitizers has enabled the recent rapid evaluation of several noteworthy electrocatalysts in photocatalytic systems, including those based on manganese (28) (Takeda et al. 2014), nickel (31 and 32) (Thoi et al. 2013), iron porphyrins (Bonin et al. 2014a), and cobalt cyclams (32, 35–37) (Yanagida et al. 1995; Ogata et al. 1995b). Manganese complex 28 paired with ruthenium photosensitizer 6 demonstrated an unusual switch in CO2 product selectivity, from CO under electrocatalytic conditions to HCO2H under photocatalytic conditions, with a dramatic increase in TON (from 13 to 149). Perhaps one of the most important developments in these non-covalently bound systems is the recent use of Ir(ppy)3 (42), which has been paired with electrocatalysts 31 and 33. 42 has a remarkably high-energy reduction potential (near −1.9 V vs. NHE), which allows it to be paired with many electrocatalysts whose reduction potentials lie beyond the reach of Ru(bpy)3-type photosensitizers (reduction potential near −1.1 V). For the case of 42 paired with 31, record TON values of 98,000 were observed under very dilute conditions; at more typical concentrations, high TONs (1,500) were still observed. In this system the photosensitizer was found to decompose over time, but addition of fresh 42 restored reactivity to rates near those at the start of photolysis. 33 is a known efficient electrocatalyst that was recently paired with 42 to give good TON values (>100). Interestingly, 33 was also paired with anthracene 40 to give the highest TON values observed with an organic sensitizer. It should be noted that 33 was recently shown to operate as a photocatalyst, independent of a photosensitizer, for 30 TON (Bonin et al. 2014b). With the exception of Neumann's RWGS reaction catalyst, the above systems have used an organic electron and proton source. These are often present in solvent quantities as amine additives (TEA and TEOA, typically 1:5 with the listed solvent). However, several researchers have noted that the addition of a second electron/proton source results in substantially higher reactivities. These additional sources are often suggested to act as a co-catalyst that receives electrons from the stoichiometric amine reductant


and delivers them efficiently to the photosensitizer or catalyst. The most commonly used sources are ascorbic acid (AA), 1-benzyl-1,4-dihydronicotinamide (BNAH), and, most recently, 1,3-dimethyl-2-phenyl-2,3-dihydro-1H-benzo[d]imidazole (BIH). Care must again be taken concerning the background reactions associated with AA, which can decompose to CO. BIH has been shown to lead directly to a more than tenfold increase in catalyst turnover numbers and a threefold increase in already high φ values (Tamaki et al. 2012b, 2013a). BIH has led to some of the highest quantum yields and TONs to date and will likely find utility in further studies.
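Because the comparisons above rest on turnover number (TON) and quantum yield (φ), a minimal sketch of how these two figures of merit are obtained from raw photolysis data may be helpful. The numerical inputs below are purely illustrative and are not values from any of the cited studies.

```python
# Illustrative calculation of turnover number (TON) and quantum yield (phi)
# for a photocatalytic CO2-to-CO experiment. All numbers below are made up.
AVOGADRO = 6.022e23
PLANCK = 6.626e-34      # J s
LIGHT_SPEED = 2.998e8   # m/s

catalyst_mol = 5.0e-8          # mol of catalyst in the cell (assumed)
co_produced_mol = 5.0e-5       # mol of CO detected by GC after photolysis (assumed)
wavelength_m = 480e-9          # irradiation wavelength (assumed)
absorbed_power_w = 0.010       # absorbed photon power, W (assumed)
irradiation_time_s = 3600.0    # 1 h of irradiation (assumed)

ton = co_produced_mol / catalyst_mol                     # product per catalyst molecule
photon_energy_j = PLANCK * LIGHT_SPEED / wavelength_m
photons_absorbed = absorbed_power_w * irradiation_time_s / photon_energy_j
phi = (co_produced_mol * AVOGADRO) / photons_absorbed    # molecules of CO per absorbed photon

print(f"TON = {ton:.0f}")
print(f"phi = {phi:.3f}")
```

Conventions differ between studies (for example, in how the absorbed photon flux is measured), so cross-study comparisons of φ in Table 3 should be made cautiously.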

Future Directions
Photocatalytic water splitting and photocatalytic carbon dioxide reduction each offer the promise of cheap and plentiful sources of energy for society's future. In addition, the transformation of carbon dioxide from a harmful waste product into a usable energy source is a win–win proposition. However, the development of efficient photocatalytic materials that would make either of these processes viable on a large scale has not yet been achieved. With such a clear and potentially substantial payoff, many synthetic chemists have been attracted to this problem. In this very active field, advances are constant as the search for an ideal photocatalyst continues.

References Abe R (2010) Recent progress on photocatalytic and photoelectrochemical water splitting under visible light irradiation. J Photochem Photobiol C 11:179–209 Anfuso CL, Xiao D, Ricks AM et al (2012) Orientation of a series of CO2 reduction catalysts on single crystal TiO2 probed by phase-sensitive vibrational Sum frequency generation spectroscopy (PS-VSFG). J Phys Chem C 116:24107–24114 Armaroli N, Balzani V (2007) The future of energy supply: challenges and opportunities. Angew Chem Int Ed 46:52–66 Arrhenius S (1896) On the influence of carbonic acid in the air upon the temperature of the ground. Philos Mag A 41:237–276 Bae ST, Shin H, Kim JY et al (2008) Roles of MgO coating layer on mesoporous TiO2/ITO electrode in a photoelectrochemical cell for water splitting. J Phys Chem C 112:9937–9942 Bahnemann D (2004) Photocatalytic water treatment: solar energy applications. Sol Energy 77:445–459 Barreto L, Makihira A, Riahi K (2003) The hydrogen economy in the 21st century: a sustainable development scenario. Int J Hydrog Energy 28:267–284 Behar D, Dhanasekaran T, Neta P et al (1998) Cobalt porphyrin catalyzed reduction of CO2. Radiation chemical, photochemical, and electrochemical studies. J Phys Chem A 102:2870–2877 Benson EE, Kubiak CP, Sathrum AJ et al (2009) Electrocatalytic and homogeneous approaches to conversion of CO2 to liquid fuels. Chem Soc Rev 38:89–99 Bhattacharyya K, Varma S, Tripathi AK et al (2008) Effect of vanadia doping and its oxidation state on the photocatalytic activity of TiO2 for gas-phase oxidation of ethene. J Phys Chem C 112:19102–19112 Bonin J, Robert M, Routier M (2014a) Selective and efficient photocatalytic CO2 reduction to CO using visible light and an iron-based homogeneous catalyst. J Am Chem Soc 136:16768–16771 Bonin J, Robert M, Routier M (2014b) Homogeneous photocatalytic reduction of CO2 to CO using iron (0) porphyrin catalysis: mechanism and intrinsic limitations. ChemCatChem 6:3200–3207


Boston DJ, Xu C, Armstrong DW et al (2013) Photochemical reduction of carbon dioxide to methanol and formate in a homogeneous system with pyridinium catalysts. J Am Chem Soc 135:16252–16255 Caetano MAL, Gherardi DFM, Yoneyama T (2008) Optimal resource management control for CO2 emission and reduction of the greenhouse effect. Ecol Model 213:119–126 Chen W-Y, Shi G, Hailey AK et al (2012) Photocatalytic conversion of carbon dioxide to organic compounds using a green photocatalyst: an undergraduate research experiment. Chem Educ 17:166–171 Choi WY, Termin A, Hoffmann MR (1994) The role of metal-ion dopants in quantum-sized TiO2 – correlation between photoreactivity and charge-carrier recombination dynamics. J Phys Chem 98:13669–13679 Colombo DP, Bowman RM (1996) Does interfacial charge transfer compete with charge carrier recombination? A femtosecond diffuse reflectance investigation of TiO2 nanoparticles. J Phys Chem 100:18445–18449 Crabtree GW, Dresselhaus MS, Buchanan MV (2004) The hydrogen economy. Phys Today 57:39–44 de Richter RK, Ming T, Caillol S (2013) Fighting global warming by photocatalytic reduction of CO2 using giant photocatalytic reactors. Renew Sustain Energy Rev 19:82–106 Dhakshinamoorthy A, Navalon S, Corma A et al (2012) Photocatalytic CO2 reduction by TiO2 and related titanium containing solids. Energy Environ Sci 5:9217–9233 Dhanasekaran T, Grodkowski J, Neta P et al (1999) p-terphenyl-sensitized photoreduction of CO2 with cobalt and iron porphyrins. Interaction between CO and reduced metalloporphyrins. J Phys Chem 103:7742–7748 Domen K, Kondo JN, Hara M et al (2000) Photo- and mechano-catalytic overall water splitting reactions to form hydrogen and oxygen on heterogeneous catalysts. Bull Chem Soc Jpn 73:1307–1331 Dunn S (2002) Hydrogen futures: toward a sustainable energy system. Int J Hydrog Energy 27:235–264 Ettedgui J, Diskin-Posner Y, Weiner L et al (2011) Photoreduction of carbon dioxide to carbon monoxide with hydrogen catalyzed by a rhenium(I) phenanthroline-polyoxometalate hybrid complex. J Am Chem Soc 133:188–190 Fang J, Wang F, Qian K et al (2008) Bifunctional N-doped mesoporous TiO2 photocatalysts. J Phys Chem C 112:18150–18156 Fischer H, Wahlen M, Smith J et al (1999) Ice core records of atmospheric CO2 around the last three glacial terminations. Science 283:1712–1714 Fox MA, Dulay MT (1993) Heterogeneous photocatalysis. Chem Rev 93:341–357 Fujishima A, Honda KB (1971) Electrochemical evidence for the mechanism of the primary stage of photosynthesis. Bull Chem Soc Jpn 44:1148–1150 Fujishima A, Honda K (1972) Electrochemical photolysis of water at a semiconductor electrode. Nature 238:37–38 Gai YQ, Li JB, Li SS et al (2009) Design of narrow-gap TiO2: a passivated codoping approach for enhanced photoelectrochemical activity. Phys Rev Lett 102:036402 Gholamkhass B, Mametsuka H, Koike K et al (2005) Architecture of supramolecular metal complexes for photocatalytic CO2 reduction: ruthenium  rhenium Bi- and tetranuclear complexes. Inorg Chem 44:2326–2336 Gilfillan SMV, Lollar BS, Holland G et al (2009) Solubility trapping in formation water as dominant CO2 sink in natural gas fields. Nature 458:614–618 Graciani J, Nambu A, Evans J et al (2008) Au – N synergy and N-doping of metal oxide-based photocatalysts. J Am Chem Soc 130:12056–12063 Grodkowski J, Behar D, Neta P et al (1997) Iron porphyrin-catalyzed reduction of CO2. Photochemical and radiation chemical studies. J Phys Chem A 101:248–254 Page 35 of 39


Ha E-G, Chang J-A, Byun S-M et al (2014) High-turnover visible-light photoreduction of CO2 by a Re (I) complex stabilized on dye-sensitized TiO2. Chem Commun 50:4462–4464 Habisreutinger SN, Schmidt-Mende L, Stolarczyk JK (2013) Photocatalytic reduction of CO2 on TiO2 and other semiconductors. Angew Chem Int Ed 52:7372–7408 Hansen J, Sato M (2004) Greenhouse gas growth rates. Proc Natl Acad Sci U S A 101:16109–16114 Hawecker J, Lehn J-M, Ziessel R (1983) Efficient photochemical reduction of CO2 to CO by visible light irradiation of systems containing Re(bipy)(CO)3X or Ru(bipy)32+-Co2+ combinations as homogeneous catalysts. J Chem Soc Chem Commun 536–538 Hernández-Alonso MD, Fresno F, Suárez S et al (2009) Development of alternative photocatalysts to TiO2: challenges and opportunities. Energy Environ Sci 2:1231–1257 Hidalgo MC, Maicu M, Navio JA et al (2009) Effect of sulfate pretreatment on gold-modified TiO2 for photocatalytic applications. J Phys Chem C 113:12840–12847 Hoffmann MR, Martin ST, Choi WY et al (1995) Environmental applications of semiconductor photocatalysis. Chem Rev 95:69–96 Hong YC, Bang CU, Shin DH et al (2005) Band gap narrowing of TiO2 by nitrogen doping in atmospheric microwave plasma. Chem Phys Lett 413:454–457 Hurum DC, Gray KA, Rajh T et al (2005) Recombination pathways in the Degussa P25 formulation of TiO2: surface versus lattice mechanisms. J Phys Chem B 109:977–980 Inoue T, Fujishima A, Konishi S et al (1979) Photoelectrocatalytic reduction of carbon dioxide in aqueous suspensions of semiconductor powders. Nature 277:637–638 Ishida H, Terada T, Tanaka K et al (1990) Photochemical CO2 reduction catalyzed by Ru(bpy)2(CO)22+ using triethanolamine and 1-benzyl-1,4-dihydronicotinamide as an electron donor. Inorg Chem 29:905–911 Izumi Y (2013) Recent advances in the photocatalytic conversion of carbon dioxide to fuels with water and/or hydrogen using solar energy and beyond. Coord Chem Rev 257:171–186 Jackson RB, Schlesinger WH (2004) Curbing the US carbon deficit. Proc Natl Acad Sci U S A 101:15827–15829 Jagadale TC, Takale SP, Sonawane RS et al (2008) N-doped TiO2 nanoparticle based visible light photocatalyst by modified peroxide sol–gel method. J Phys Chem C 112:14595–14602 Janáky C, Rajeshwar K, de Tacconi NR et al (2013) Tungsten-based oxide semiconductors for solar hydrogen generation. Catal Today 199:53–64 Kaneco S, Shimizu Y, Ohta K et al (1998) Photocatalytic reduction of high pressure carbon dioxide using TiO2 powders with a positive hole scavenger. J Photochem Photobiol A 115:223–226 Keith DW (2009) Why capture CO2 from the atmosphere? Science 325:1654–1655 Kesselman JM, Weres O, Lewis NS et al (1997) Electrochemical production of hydroxyl radical at polycrystalline Nb-doped TiO2 electrodes and estimation of the partitioning between hydroxyl radical and direct Hole oxidation pathways. J Phys Chem B 101:2637–2643 Kočí K, Obalová L, Lacný Z (2008) Photocatalytic reduction of CO2 over TiO2 based catalysts. Chem Pap 62:1–9 Kondratenko EV, Mul G, Baltrusaitis J et al (2013) Status and perspectives of CO2 conversion into fuels and chemicals by catalytic, photocatalytic and electrocatalytic processes. Energy Environ Sci 6:3112–3135 Kou Y, Nakatani S, Sunagawa G et al (2014) Visible light-induced reduction of carbon dioxide sensitized by a porphyrin – rhenium dyad metal complex on p-type semiconducting NiO as the reduction terminal end of an artificial photosynthetic system. 
J Catal 310:57–66 Kudo A, Miseki Y (2009) Heterogeneous photocatalyst materials for water splitting. Chem Soc Rev 38:253–278 Page 36 of 39


Kudo A, Kato H, Tsuji I (2004) Strategies for the development of visible-light-driven photocatalysts for water splitting. Chem Lett 33:1534–1539 Kumar B, Smieja JM, Sasayama AF et al (2012) Tunable, light-assisted co-generation of CO and H2 from CO2 and H2O by Re(bipy-tbu)(CO)3Cl and p-Si in non-aqueous medium. Chem Commun 48:272–274 Lehn J-M, Ziessel R (1982) Photochemical generation of carbon monoxide and hydrogen by reduction of carbon dioxide and water under visible light irradiation. Proc Natl Acad Sci U S A 79:701–704 Li GH, Dimitrijevic NM, Chen L et al (2008) Role of surface/interfacial Cu2+ sites in the photocatalytic activity of coupled CuO-TiO2 nanocomposites. J Phys Chem C 112:19040–19044 Linsebigler AL, Lu G, Yates JT (1995) Photocatalysis on TiO2 surfaces: principles, mechanisms, and selected results. Chem Rev 95:735–758 Liu G, Hoivik N, Wang K et al (2012) Engineering TiO2 nanomaterials for CO2 conversion/solar fuels. Sol Energy Mater Sol Cells 105:53–68 Livraghi S, Chierotti MR, Giamello E et al (2008) Nitrogen-doped titanium dioxide active in photocatalytic reactions with visible light: a multi-technique characterization of differently prepared materials. J Phys Chem C 112:17244–17252 Luthi D, Le Floch M, Bereiter B et al (2008) High-resolution carbon dioxide concentration record 650,000–800,000 years before present. Nature 453:379–382 Matsuoka S, Kohzuki T, Pac C et al (1992) Photocatalysis of ollgo(p-phenylenes). Photochemical reduction of carbon dloxlde with trlethylamlne. J Phys Chem 96:4437–4442 Matsuoka S, Yamamoto K, Ogata T et al (1993) Efficient and selective electron mediation of cobalt complexes with cyclam and related macrocycles in the p-terphenyl-catalyzed photoreduction of CO2. J Am Chem Soc 115:601–609 McMichael A, Woodruff R (2004) Climate change and risk to health. Br Med J 329:1416–1417 Mikkelsen M, Jørgensen M, Krebs FC (2010) The teraton challenge. A review of fixation and transformation of carbon dioxide. Energy Environ Sci 3:43–81 Mohapatra SK, Raja KS, Mahajan VK et al (2008) Efficient photoelectrolysis of water using TiO2 nanotube arrays by minimizing recombination losses with organic additives. J Phys Chem C 112:11007–11012 Moriarty P, Honnery D (2009) Hydrogen’s role in an uncertain energy future. Int J Hydrog Energy 34:31–39 Morris AJ, Meyer GJ, Fujita E (2009) Molecular approaches to the photocatalytic reduction of carbon dioxide for solar fuels. Acc Chem Res 42:1983–1994 Navalón S, Dhakshinamoorthy A, Álvaro M et al (2013) Photocatalytic CO2 reduction using non-titanium metal oxides and sulfides. ChemSusChem 6:562–577 Navarro RM, Sanchez-Sanchez MC, Alvarez-Galvan MC et al (2009) Hydrogen production from renewable sources: biomass and photocatalytic opportunities. Energy Environ Sci 2:35–54 Ogata T, Yanagida S, Brunschwig BS et al (1995a) Mechanistic and kinetic studies of cobalt macrocycles in a photochemical CO2 reduction system: evidence of Co-CO2 adducts as intermediates. J Am Chem Soc 117:6708–6716 Ogata T, Yamamoto Y, Wadaj Y et al (1995b) Phenazine-photosensitized reduction of CO2 mediated by a cobalt-cyclam complex through electron and hydrogen transfer. J Phys Chem 99:11916–11922 Osterloh FE (2008) Inorganic materials as catalysts for photochemical splitting of water. Chem Mater 20:35–54 Portenkirchner E, Oppelt K, Ulbricht C et al (2012) Electrocatalytic and photocatalytic reduction of carbon dioxide to carbon monoxide using the alkynyl-substituted rhenium(I) complex (5,50 -bisphenylethynyl-2,20 -bipyridyl)Re(CO)3Cl. 
J Organomet Chem 716:19–25


Reithmeier R, Bruckmeier C, Rieger B (2012) Conversion of CO2 via visible light promoted homogeneous redox catalysis. Catalysts 2:544–571 Roy SC, Varghese OK, Paulose M et al (2010) Toward solar fuels: photocatalytic conversion of carbon dioxide to hydrocarbons. ACS Nano 4:1259–1278 Sato S, Morikawa T, Kajino T et al (2013) A highly efficient mononuclear iridium complex photocatalyst for CO2 reduction under visible light. Angew Chem Int Ed 52:988–992 Savéant J-M (2008) Molecular catalysis of electrochemical reactions. Mechanistic aspects. Chem Rev 108:2348–2378 Schimel D, Melillo J, Tian HQ et al (2000) Contribution of increasing CO2 and climate to carbon storage by ecosystems in the United States. Science 287:2004–2006 Schlapbach L, Zuttel A (2001) Hydrogen-storage materials for mobile applications. Nature 414:353–358 Schneider J, Vuong KQ, Calladine JA et al (2011) Photochemistry and photophysics of a Pd (II) metalloporphyrin: Re(I) tricarbonyl bipyridine molecular dyad and its activity toward the photoreduction of CO2 to CO. Inorg Chem 50:11877–11889 Schneider J, Jia H, Muckerman JT et al (2012) Thermodynamics and kinetics of CO2, CO, and H+ binding to the metal centre of CO2 reduction catalysts. Chem Soc Rev 41:2036–2051 Sekizawa K, Maeda K, Domen K et al (2013) Artificial Z – scheme constructed with a supramolecular metal complex and semiconductor for the photocatalytic reduction of CO2. J Am Chem Soc 135:4596–4599 Tahir M, Amin NS (2013) Advances in visible light responsive titanium oxide-based photocatalysts for CO2 conversion to hydrocarbon fuels. Energy Convers Manag 76:194–214 Takeda H, Koike K, Inoue H et al (2008) Development of an efficient photocatalytic system for CO2 reduction using rhenium(l) complexes based on mechanistic studies. J Am Chem Soc 130:2023–2031 Takeda H, Koizumi H, Okamoto K et al (2014) Photocatalytic CO2 reduction using a Mn complex as a catalyst. Chem Commun 50:1491–1493 Tamaki Y, Morimoto T, Koike K et al (2012a) Photocatalytic CO2 reduction with high turnover frequency and selectivity of formic acid formation using Ru(II) multinuclear complexes. Proc Natl Acad Sci U S A 109:15673–15678 Tamaki Y, Watanabe K, Koike K et al (2012b) Development of highly efficient supramolecular CO2 reduction photocatalysts with high turnover frequency and durability. Faraday Discuss 155:115–127 Tamaki Y, Koike K, Morimoto T et al (2013a) Substantial improvement in the efficiency and durability of a photocatalyst for carbon dioxide reduction using a benzoimidazole derivative as an electron donor. J Catal 304:22–28 Tamaki Y, Koike K, Morimoto T et al (2013b) Red-light-driven photocatalytic reduction of CO2 using Os (II)-Re(I) supramolecular complexes. Inorg Chem 52:11902–11909 Thoi VS, Kornienko N, Margarit CG et al (2013) Visible-light photoredox catalysis: selective reduction of carbon dioxide to carbon monoxide by a nickel N – heterocyclic carbene  isoquinoline complex. J Am Chem Soc 135:14413–14424 Tseng IH, Chang W-C, Wu JCS (2002) Photoreduction of CO2 using sol–gel derived titania and titaniasupported copper catalysts. Appl Catal B Environ 37:37–48 Usubharatana P, McMartin D, Veawab A et al (2006) Photocatalytic process for CO2 emission reduction from industrial flue gas streams. Ind Eng Chem Res 45:2558–2568 U.S. 
Energy Information Administration, International Energy Outlook 2009 Document #DOE/EIA0484 (2009) Vaneski A, Schneider J, Susha AS et al (2014) Colloidal hybrid heterostructures based on II–VI semiconductor nanocrystals for photocatalytic hydrogen generation. J Photochem Photobiol C 19:52–61 Page 38 of 39


Varghese OK, Paulose M, LaTempa TJ et al (2009) High-rate solar photocatalytic conversion of CO2 and water vapor to hydrocarbon fuels. Nano Lett 9:731–737 Wang YQ, Cheng HM, Zhang L et al (2000) The preparation, characterization, photoelectrochemical and photocatalytic properties of lanthanide metal-ion-doped TiO2 nanoparticles. J Mol Catal A Chem 151:205–216 Wang C, Xie Z, DeKrafft KE et al (2011) Doping metal-organic frameworks for water oxidation, carbon dioxide reduction, and organic photocatalysis. J Am Chem Soc 133:13445–13454 Xiaoding X, Moulijn JA (1996) Mitigation of CO2 by chemical conversion: plausible chemical reactions and promising products. Energy Fuel 10:305–325 Xie G, Zhang K, Guo B et al (2013) Graphene-based materials for hydrogen generation from light-driven water splitting. Adv Mater 25:3820–3839 Yan XL, He J, Evans DG et al (2005) Preparation, characterization and photocatalytic activity of Si-doped and rare earth-doped TiO2 from mesoporous precursors. Appl Catal B Environ 55:243–252 Yanagida S, Ogata T, Yamamoto Y et al (1995) A novel CO2 photoreduction system consisting of phenazine as a photosensitizer and cobalt cyclam as a CO2 scavenger. Energy Convers Manag 36:601–604 Yang YH, Chen QY, Yin ZL et al (2005) Progress in research of photocatalytic water splitting. Prog Chem 17:631–642 Yeredla RR, Xu HF (2008) Incorporating strong polarity minerals of tourmaline with semiconductor titania to improve the photosplitting of water. J Phys Chem C 112:532–539 Younpblood WJ, Lee SHA, Kobayashi Y et al (2009) Photoassisted overall water splitting in a visible light-absorbing dye-sensitized photoelectrochemical cell. J Am Chem Soc 131:926–927 Yuan Y-J, Yu Z-T, Chen X-Y et al (2011) Visible-light-driven H2 generation from water and CO2 conversion by using a zwitterionic cyclometalated iridium(III) complex. Chem Eur J 17:12891–12895 Zhang K, Guo L (2013) Metal sulphide semiconductors for photocatalytic hydrogen production. Catal Sci Technol 3:1672–1690 Zhang PD, Jia G, Wang G (2007) Contribution to emission reduction of CO2 and SO2 by household biogas construction in rural China. Renew Sustain Energy Rev 11:1903–1912 Zong X, Wang L (2014) Ion-exchangeable semiconductor materials for visible light-induced photocatalysis. J Photochem Photobiol C 18:32–49 Zuttel A (2004) Hydrogen storage methods. Naturwissenschaften 91:157–172


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_47-2 # Springer Science+Business Media New York 2015

Technological Options for Reducing Non-CO2 GHG Emissions
Jeff Kuo*
Department of Civil and Environmental Engineering, California State University, Fullerton, CA, USA
*Email: [email protected]

Abstract In recent years, non-CO2 greenhouse gases (NCGGs), including methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6), have gained attention due to their higher global warming potentials (GWPs) and abundance of cost-effective and readily implementable technological options available for achieving significant emission reductions. A project titled Clearinghouse of Technological Options for Reducing Anthropogenic Non-CO2 GHG Emissions from All Sectors was recently conducted. The overall objective of the project was to develop a clearinghouse of technological options for reducing anthropogenic NCGG emissions. The findings of the project help to better characterize cost-effective opportunities for emission reductions of NCGGs. Employment of an appropriate control technology for a given source would achieve a net reduction in NCGG emissions as well as its contribution to climate change. This chapter of the handbook extracts relevant data and information on the technological options for reducing non-CO2 GHG emissions from the aforementioned project report.

Introduction
In the past, climate mitigation studies focused on carbon dioxide (CO2), especially from energy-related sources. In recent years, however, non-CO2 greenhouse gases (NCGGs), including methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6), have gained attention due to their higher global warming potentials (GWPs) and the abundance of cost-effective, readily implementable technological options for achieving significant emission reductions. Studies have found that abatement options for several of the NCGG sources are relatively inexpensive. In addition, NCGG emission reductions may provide a more rapid response in avoiding climate impacts by focusing on short-lived gases (Lucas et al. 2006; de la Chesnaye et al. 2001). A project titled Clearinghouse of Technological Options for Reducing Anthropogenic Non-CO2 GHG Emissions from All Sectors was recently conducted by California State University, Fullerton under the sponsorship of the California Air Resources Board (CARB contract number 05-328, with Steve Church as the ARB Contract Manager) (Kuo 2008). The overall objective of the project was to develop a clearinghouse of technological options for reducing anthropogenic NCGG emissions from sectors that are relevant to California. To achieve this goal, specific project tasks were completed, including (1) identification of sources of NCGG emissions from various sectors in California, (2) identification of available technological options for NCGG emission reductions through a comprehensive literature search, (3) evaluation of the identified technological options for their applicability in California, and (4) report preparation. Although the emission sources can be categorized into economic sectors (i.e., residential, commercial, industrial, agricultural, transportation, and electricity generation), six potential source sectors as defined by the United Nations Intergovernmental Panel on Climate Change (IPCC) were used:


energy, industrial processes, solvent use, agriculture, land-use change and forestry, and waste (Intergovernmental Panel on Climate Change – IPCC 1997). The findings of the project help to better characterize cost-effective opportunities for emission reductions of NCGGs. Employment of an appropriate control technology for a given source would achieve a net reduction in NCGG emissions as well as in its contribution to climate change.
Table 1 presents a comparison of GHG emissions between the United States and California. These emission estimates were derived using the 1996 IPCC GWP values; the California estimates were extracted from an inventory report (California Energy Commission 2006).

Table 1 Comparison of GHG emissions in the United States and California
Gas                      USA (2004)             California (2004)       CA/USA (%)
                         MMTCO2-Eq.    (%)      MMTCO2-Eq.    (%)
Carbon dioxide           5,988         84.6     364           82.8      6.1
Methane                  557           7.9      28            6.4       5.0
Nitrous oxide            387           5.5      33            7.6       8.6
HFCs, PFCs, and SF6      143           2.0      14            3.2       9.9
Total                    7,074         100      439           100       6.2

The NCGG emissions in the United States were 1,087 million metric tons of carbon dioxide equivalent (MMTCO2-Eq.) in 2004, approximately 15 % of total GHG emissions: 7.9 % of the total came from methane, 5.5 % from nitrous oxide, and 2.0 % from HFCs, PFCs, and SF6. The NCGG emissions in California were 75 MMTCO2-Eq. in 2004, approximately 18 % of total GHG emissions: 6.4 % came from methane, 7.6 % from nitrous oxide, and 3.2 % from HFCs, PFCs, and SF6. Although California accounted for approximately 12 % of the US population in 2004, it was responsible for only 6.2 % of US GHG emissions. Some of the technological options identified from the literature search were already in use, but many of them were still at the conceptual, bench-scale, or research and development (R&D) stage. To evaluate the applicability and implementability of a technological option, it is important to have data on reduction efficiency (RE), market penetration (MP), technical applicability (TA), service lifetime, and costs (capital and O&M). Those options having sufficient and definite information on lifetime, RE, MP, TA, and costs were summarized in tables for easier comparison and use. The reduction efficiency is the percentage by which emissions can be mitigated by a technological option. The percentage of the baseline to which a technological option is applicable is called its technical applicability (California Energy Commission 2005). Market penetration is the percentage of emissions from a given source that is addressed by a given technological option (California Energy Commission 2005). The cost data are presented in year 2000 US dollars per metric ton of CO2 equivalent ($/MTCO2-Eq.). Three types of cost data were presented in the aforementioned report for a given technological option: the one-time capital cost reflects the initial investment in the option; the annual cost reflects the yearly O&M cost needed to implement it; and benefits refer to monetary savings, if any, resulting from its implementation. In addition, lifetime data were provided, showing the expected lifespan of the project (Kuo 2008). With these cost data and the expected lifespan of a given technological option, one can readily derive a cost estimate for implementing it. However, it should be noted that the data for a given option are often very general in nature and may not be applicable to all cases. Data on a given technological option could sometimes be found from various sources, such as reports of the California Energy Commission (CEC), CARB, the United States Environmental Protection Agency (USEPA), and some international agencies, including the United Nations (UN) and the International Energy Agency (IEA).
The aforementioned report used the data that were more specific to California first


(e.g., from reports of CEC and CARB). If information from these sources was not available, data specific to the United States were used, followed by data developed from a global perspective or for other countries. The aforementioned report is about 400 pages long; this chapter of the handbook extracts from it the relevant data and information on technological options for reducing non-CO2 GHG emissions. The subsequent sections address each NCGG individually: section "Methane (CH4)" is dedicated to methane and section "Nitrous Oxide (N2O)" to nitrous oxide. Due to the similarities between PFCs, HFCs, and SF6 in their characteristics and sources, they are grouped as the high-GWP gases and covered in section "High-GWP Gases." Section "Summary" presents a brief summary.
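The reduction efficiency, market penetration, and technical applicability defined above are commonly combined multiplicatively when estimating how much of a source's emissions a given option can actually address. A minimal sketch under that assumption, with made-up numbers:

```python
# Achievable reduction for one option applied to one source (illustrative numbers only).
baseline_mmtco2e = 1.0   # emissions of the source, MMTCO2-Eq./yr (assumed)
ta = 0.25                # technical applicability: fraction of the baseline the option can address
mp = 0.60                # market penetration: fraction of applicable emissions actually reached
re = 0.90                # reduction efficiency of the option where it is applied

reduction = baseline_mmtco2e * ta * mp * re
print(f"Achievable reduction ≈ {reduction:.3f} MMTCO2-Eq./yr "
      f"({reduction / baseline_mmtco2e:.1%} of the baseline)")
```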

Methane (CH4)
According to the IPCC, CH4 is approximately 23 times as effective as CO2 in trapping heat in the atmosphere over a 100-year time horizon (Intergovernmental Panel on Climate Change – IPCC 2001). The chemical lifetime of CH4 in the atmosphere is about 12 years. Since 1750, the global-average atmospheric concentration of CH4 has increased from about 700 to 1,745 parts per billion by volume (ppbV), a 150 % increase (Intergovernmental Panel on Climate Change – IPCC 2001). The top contributors to CH4 emissions in California, in decreasing order, are landfills (30.2 %), enteric fermentation (25.9 %), manure management (21.6 %), wastewater treatment (6.1 %), natural gas systems (5.0 %), stationary combustion (4.7 %), mobile combustion (2.2 %), rice cultivation (2.2 %), petroleum systems (1.8 %), and field burning of agricultural residues (0.4 %) (California Energy Commission 2006). The following subsections describe the emission sources and mitigation options for three major sectors: energy, agriculture, and waste.
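Since the inventory figures in this chapter are expressed in CO2-equivalents, the conversion behind them is worth making explicit. The sketch below uses the 100-year CH4 GWP of 23 quoted above and, as an illustration, the California landfill share; note that Table 1 itself was compiled with the 1996 IPCC GWP values, so the back-converted tonnage is only approximate.

```python
# Converting between CH4 mass and CO2-equivalents using the GWP quoted above.
GWP_CH4 = 23.0                 # 100-year GWP of CH4 (IPCC 2001), as cited in the text

ca_ch4_mmtco2e = 28.0          # California CH4 emissions in 2004, MMTCO2-Eq. (Table 1)
landfill_share = 0.302         # landfills' share of California CH4 emissions (30.2 %)

landfill_mmtco2e = ca_ch4_mmtco2e * landfill_share
# Table 1 was built with the 1996 GWPs, so this back-conversion is approximate.
landfill_mt_ch4 = landfill_mmtco2e / GWP_CH4

print(f"Landfill CH4: {landfill_mmtco2e:.1f} MMTCO2-Eq. "
      f"≈ {landfill_mt_ch4:.2f} million t of CH4")
```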

Energy
The major contributors to methane emissions in the energy sector in California are natural gas systems (36.8 %), stationary combustion (34.2 %), mobile combustion (15.8 %), and petroleum systems (13.2 %) (California Energy Commission 2006).
Petroleum Systems
Of the 25.7 MMTCO2-Eq. of methane emissions from petroleum systems in the United States in 2004, production field operations accounted for 97 %, followed by refining operations at 2 % and crude oil transportation at less than 1 % (US Environmental Protection Agency 2006a). The relative contribution of each subsector within the petroleum systems in California should be similar to the corresponding contributions in the United States. The sources of methane emissions from petroleum production field operations include pneumatic device venting, tank venting, combustion and process upsets, miscellaneous venting and fugitive emissions, and wellhead fugitive emissions (US Environmental Protection Agency 2006a). The measures to reduce methane emissions from petroleum systems (as well as from natural gas systems, discussed in section "Natural Gas Systems") can be grouped into the following mitigation strategies (Hendriks and de Jager 2001):
• Prevention – improved process efficiencies and leakage reduction
• Recovery and reinjection – recovery of off-gases and reinjection into subsystems such as oil reservoirs and natural gas transport pipelines
• Recovery and utilization – recovery and utilization of otherwise emitted gases for energy production
• Recovery and incineration – recovery, followed by incineration (flaring) without energy production


Table 2 Technological options for petroleum systems – production field operations (MP market penetration, RE reduction efficiency, TA technical applicability; lifetime in years; MP, RE, and TA in %; costs in year 2000 US$/MTCO2-Eq.)
• Flaring instead of venting (US Environmental Protection Agency 2004a; International Energy Agency 2003): lifetime 15, MP 100, RE 98, TA 5, capital cost $33.30, annual cost $1.00, benefits $0.00
• Associated gas (vented) mix with other options (US Environmental Protection Agency 2004a; International Energy Agency 2003): lifetime 15, MP 100, RE 90, TA 23–25, capital cost $69.54, annual cost $1.11, benefits $3.71
• Associated gas (flared) mix with other options (US Environmental Protection Agency 2004a; International Energy Agency 2003): lifetime 15, MP 100, RE 95, TA 14–15, capital cost $66.61, annual cost $2.21, benefits $3.71
• Option for flared gas (improved flaring efficiencies) (California Energy Commission 2005; European Commission 2001): lifetime 15, MP 100, RE 10, TA 13, capital cost $66.61, annual cost $2.21, benefits $0.00

Table 2 summarizes the information found in the literature on these technological options with regard to cost, market penetration (in 2010), emission reduction efficiency, and technical applicability (in 2010) for emission reduction from production field operations in petroleum systems. It should be noted that all of the values in this table, and in the other similar tables in this chapter, were extracted directly from the literature search. Factors such as new regulations, development of the technologies, and economic conditions may affect these projected values. In addition, assessment of the current status of these technological options is beyond the scope of work of the project. The CH4 emissions from crude oil transportation and refining operations are relatively small, and their mitigation options are very similar to those for natural gas processing and transmission (see section "Natural Gas Systems").
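Because Table 2 reports a one-time capital cost alongside annual costs and benefits, comparing options on a single $/MTCO2-Eq. basis requires annualizing the capital cost over the option's lifetime. A minimal sketch using the "flaring instead of venting" row of Table 2; the discount rate is an assumption, and treating the capital figure as a one-time cost per annual tonne abated is likewise an assumption about how the report normalizes it.

```python
# Levelized cost of the 'flaring instead of venting' option from Table 2.
capital = 33.30      # one-time capital cost, $/MTCO2-Eq. (Table 2)
annual = 1.00        # annual O&M cost, $/MTCO2-Eq. (Table 2)
benefits = 0.00      # annual monetary savings, $/MTCO2-Eq. (Table 2)
lifetime = 15        # years (Table 2)
r = 0.10             # assumed discount rate

crf = r * (1 + r) ** lifetime / ((1 + r) ** lifetime - 1)   # capital recovery factor
levelized = capital * crf + annual - benefits
print(f"Capital recovery factor: {crf:.4f}")
print(f"Levelized cost ≈ ${levelized:.2f}/MTCO2-Eq.")
```

Under the assumed 10 % discount rate this works out to roughly $5/MTCO2-Eq., which illustrates why some of these options are considered relatively inexpensive.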



oil and gas industry in the United States has achieved over 10 billion cubic meters of methane emission reductions (Fernandez et al. 2004). Similar to the petroleum sector, the measures to reduce methane emissions from natural gas systems can be categorized into the following mitigation strategies: prevention, recovery and reinjection, recovery and utilization, and recovery and incineration (Hendriks and de Jager 2001). Specific technological options to reduce CH4 emissions from natural gas field operations include the following (Kuo 2008):
• Good housekeeping practices to reduce blowouts
• Good operational procedures with regard to well testing
• Flaring of gas produced at well tests (during exploration)
• Green completion
• Installing plunger-lift systems in gas wells
• Using surge vessels for station/well venting
• Replacing high-bleed pneumatic devices with low-bleed pneumatic devices
• Replacing high-bleed pneumatic devices with compressed-air systems
• Reducing the glycol circulation rates in dehydrators
• Installing flash tank separators on dehydrators
• Replacing glycol dehydrators with desiccant dehydrators
• Minimizing strip gas in glycol dehydration
• Increasing the pressure of the condensate flash
• Rerouting glycol dehydrator vapor to a vapor-recovery unit
• Reducing purge gas streams
• Using portable evacuation compressors for pipeline venting
• Retrofitting the fuel gas line for the blow-down valve and altering emergency shutdown (ESD) practices
• Installing electric starters on compressors
• Replacing gas starters with air/nitrogen
• Replacing ignition systems/reducing false starts
• Use of automatic air/fuel ratio control
• Reducing the frequency of gas starts
• Inspection and maintenance (pipeline leaks)
• Inspection and maintenance (equipment and facilities)
• Inspection and maintenance (chemical injection pumps)
• Inspection and maintenance (enhanced)

Table 3 summarizes the information found in the literature on technological options for emission reduction from production activities in natural gas systems.
Processing
Subsequent to field production, "impurities" such as natural gas liquids and various other constituents are removed from the extracted raw gas, resulting in "pipeline quality" gas that is injected into the transmission and storage system. Fugitive emissions from compressors, including compressor seals, are the primary emission source (US Environmental Protection Agency 2006a). The mitigation options for methane emissions during the processing of natural gas are very similar to those for transmission and storage, and they will be described in section "Transmission and Storage."
Transmission and Storage
Natural gas produced from gas fields needs to be transported to distribution systems, power plants, or chemical plants through high-pressure pipelines. Compressor stations, which contain large reciprocating engines and turbine compressors, are used to move the gas throughout the


Table 3 Technological options for natural gas systems – production (MP market penetration, RE reduction efficiency, TA technical applicability; lifetime in years; MP, RE, and TA in %; costs in year 2000 US$/MTCO2-Eq.)
• Installation of plunger-lift systems in gas wells (California Energy Commission 2005; US Environmental Protection Agency 2004a): lifetime 10, MP 100, RE 4, TA 1, capital cost $3,986, annual cost $159.42, benefits $8.21
• Surge vessels for station/well venting (California Energy Commission 2005; US Environmental Protection Agency 2004a): lifetime 10, MP 100, RE 50
• Replace high-bleed with low-bleed pneumatic devices (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Replace high-bleed pneumatic devices with compressed-air systems (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Reducing glycol circulation rates in dehydrators (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Installation of flash tank separators (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Installation of electric starters on compressors (US Environmental Protection Agency 2004a; International Energy Agency 2003)
• Portable evacuation compressor for pipeline venting (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Inspection and maintenance (pipeline leaks) (California Energy Commission 2005; US Environmental Protection Agency 2004a)
• Inspection and maintenance (facilities and equipment) (US Environmental Protection Agency 2004a; International Energy Agency 2003)
• Inspection and maintenance (chemical injection pumps) (US Environmental Protection Agency 2004a; International Energy Agency 2003)
• Inspection and maintenance (enhanced) (US Environmental Protection Agency 2004a; International Energy Agency 2003)

Determinants of household adaptation: coefficient estimates for reactive and proactive measures relative to the base category of no adaptation (overall χ2 test p-value 0.00000***; 74.46 % correct prediction; 1,711 observations). Cf. Francisco et al. (2011). Note: ***, **, * = significant at the 1 %, 5 %, and 10 % level, respectively; a 1 = yes, 0 = otherwise; b 1 = more severe than what was experienced, 0 = otherwise


In terms of socioeconomic characteristics, a bigger household size was found to positively and significantly affect the probability of a household undertaking reactive adaptation measures. This indicates that, with strength in numbers, people tend to be complacent about taking a more proactive stance toward climate change. Moreover, more educated households were more likely to implement proactive adaptation strategies than reactive ones. This is supported by Tiwari et al. (2014), who stated that households with more knowledge and information can be proactive in their response toward climate change. Other studies (Maddison 2006; Nhemachena and Hassan 2008) found that the age of the household head, which represents experience, positively correlated with the uptake of adaptation measures. Finally, the analysis revealed that households were more likely to undertake reactive rather than proactive measures if they perceived the risk of future extreme climate events to be more severe than what they had previously experienced. This is unexpected and should be further investigated, as it seems to imply an attitude of resignation to fate in the face of climate change disasters. This attitude of resignation was indeed expressed by some of the respondents: their thinking is that since nature wills it, it cannot be controlled and is something people just have to live with. Surely, there are ways to reduce damages, and this is the essence of undertaking effective adaptation measures to protect lives, property, and livelihoods from extreme climate disasters. In fact, everyone should also be thinking of mitigation because, while it is true that our generation is already committed to some degree of climate change, there are efforts that can be made to slow down climate change for future generations.
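The following is a minimal sketch, not the authors' estimation code, of the multinomial logit setup behind results like these: the outcome is coded as no adaptation, reactive, or proactive, and the explanatory variables shown are hypothetical and generated synthetically purely for illustration.

```python
# Hedged sketch of a multinomial logit of household adaptation choice
# (base category: no adaptation). Data are synthetic; variable names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1711  # number of observations reported in the study
df = pd.DataFrame({
    "household_size": rng.integers(1, 10, n),
    "years_schooling": rng.integers(0, 16, n),
    "perceived_worse_risk": rng.integers(0, 2, n),  # 1 = future events expected to be more severe
})

# Synthetic choice among 0 = no adaptation, 1 = reactive, 2 = proactive,
# generated from latent utilities with Gumbel noise (the logit assumption).
utility = np.column_stack([
    np.zeros(n),
    0.20 * df["household_size"] + 0.50 * df["perceived_worse_risk"],
    0.10 * df["years_schooling"],
]) + rng.gumbel(size=(n, 3))
choice = utility.argmax(axis=1)

X = sm.add_constant(df)
fit = sm.MNLogit(choice, X).fit(disp=False)
# Estimated coefficients are log-odds of reactive and proactive adaptation
# relative to the base category "no adaptation".
print(fit.summary())
```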

Coastal Communities' Vulnerability and Adaptation Practices
Coastal areas in Asia face "an increasing range of stresses and shocks," which are intensified by climate change (Cruz et al. 2007). This is supported by a 170-country assessment by Harmeling (2011) of the impacts of extreme weather-related events such as storms, floods, and extreme temperatures. The assessment showed six Asian countries to be among the most vulnerable, namely, Bangladesh (rank 1), Myanmar (rank 2), Vietnam (rank 5), the Philippines (rank 7), Mongolia (rank 9), and Tajikistan (rank 10). That coastal communities are highly vulnerable to climate change is widely recognized (IPCC 2007, 2014; ADB 2009, 2014). Their vulnerability comes from the rising sea level that accompanies the overall warming of temperatures as well as the storm surges that accompany the increased frequency and intensity of typhoons. Low physical and financial capacity for disaster preparedness also contributes, to some extent, to these areas' vulnerability to extreme climate events (Ward and Shively 2011; Adger 1999); wealthier countries typically suffer lower social losses than poorer countries (Kahn 2005). The threat from sea level rise has been on the radar of climate discussions during the last few years, but the threat from storm surges became real only with the Philippines' experience during super Typhoon Haiyan in November 2013. In just


less than an hour, the 13-ft storm surges with strong currents experienced during Haiyan left a death toll of 6,300, with 1,785 people left unaccounted for. A few months after the super typhoon, more than 52,000 families were still living in tents in the danger zone in Tacloban City as the local government struggled to find a 100-hectare relocation site for these people (Lowe 2014; Stevens 2014). Given the critical situation that coastal communities face as a result of the changing climate, EEPSEA also supported several research projects on CCA in coastal areas. This effort was done in collaboration with the WorldFish Philippine Country Office (WF-PCO) and came in two sets of projects. The first project focused on understanding the adaptation practices and assessing the vulnerability of selected coastal communities in the Philippines, Vietnam, and Indonesia. The various studies under this first project looked into the impacts and adaptation practices used in dealing with typhoons/flooding, coastal erosion, and saltwater intrusion at the household, community, and local government levels. Several planned adaptation options were then evaluated using cost-effectiveness analysis (CEA). The second project, which is still ongoing, looks into intra-household impacts of climate change, how the various members are affected, and how they can be engaged to generate a stronger household adaptation plan. Table 11 shows the climate change impacts considered in the different study sites and the compounding environmental stresses that those communities face in addition to climate change. In particular, most communities had to deal with coastal erosion, sand quarrying, deforestation of forests and mangroves, and rampant illegal fishing, all of which could compound the impacts of climate change. What Table 11 further tells us is that efforts to address the development needs of these coastal communities need to be holistic, since theirs is a complex environment that is not affected only by their coastal location. Instead, it is important to note that coastal communities live in an environment that traverses several ecosystems, some of which are linked to one another. They are also affected by an economic and governance system that influences their livelihoods and hazard vulnerability, be it from climate change or other hazards. Moreover, they are also assisted by the government and other development agencies in coping with climate change impacts. These coping measures are further discussed in the next section.

Adaptation Practices in Selected Coastal Villages
In the Philippines, it is noticeable that all local governments have formed a municipal disaster risk management council (MDRMC) that is funded using 5 % of the 20 % Development Fund (Table 12). This fund is used to provide disaster victims with food, particularly those who stay in evacuation centers during disaster events. The fund is also used to undertake information, education, and communication (IEC) campaigns on disaster risk reduction (DRR). In addition, local governments conduct dredging and river widening activities to reduce the flooding


Table 11 Summary of climate change hazards, impacts, and compounding issues in the study sites

Batangas, Philippines
Hazards: Sea level rise
Confounding environmental issues: Coastal erosion; sand quarrying; illegal charcoal making from mangroves; illegal fishing using blasting and cyanide; fishing with fine mesh nets and superlights
Impacts: Damage to property (hotels, resorts, houses, and boats) during typhoons; coral bleaching and an increasing number of crown-of-thorns starfish; impacts on livelihood and tourism in vulnerable coastal areas; house relocation due to coastal erosion; mangrove areas, coral reefs, marine protected areas, and beaches now at risk

Palawan, Philippines
Hazards: More frequent and intense typhoons; floods
Confounding environmental issues: Mangrove cutting for charcoal, housing, and fencing materials; weak enforcement of coastal management laws; illegal fishing; burning of some upland areas for rice farming (kaingin); expansion of private beachfront property; inadequate protection of the fish sanctuary
Impacts: Change in the fish species caught; more houses and boats destroyed by typhoons; coral bleaching; decreased land area due to coastal erosion; loss of traditionally gleaned shells along the coastline; hotter seawater during the 3–4 pm gleaning activity; significant decline in bangus fry collected over the past 5–6 years

Jakarta, Indonesia
Hazards: Coastal erosion and seawater intrusion; coastal or tidal flooding; sea level rise
Confounding environmental issues: Loss of most mangrove and coastal ecosystems; large population; pollution that affects water quality; soil erosion; absence of a strong fisheries policy and overlapping jurisdictions
Impacts: Land subsidence, coastal inundation, and coastal abrasion; seawater intrusion that has reached the National Monument; increased turbidity of water affecting photosynthesis; decreasing water quality; change in the pattern of flow, bathymetry, and coastline; sediment accumulation at the entrance of harbor lanes, which increases dredging costs

Ben Tre, Vietnam
Hazards: More frequent and intense typhoons; destructive floods and tidal surges from 1996 to 2008
Confounding environmental issues: Sand mining; salinity intrusion; heavy traffic of sea vessels
Impacts: Loss of shelter and livelihood from typhoons; land encroachment; saltwater intrusion during the dry season, leading to a shortage of freshwater for domestic and production uses

Cf: Perez et al. (2013), shown as Table 3 in the original study


threat as well as to rehabilitate mangroves, which are now widely believed to provide protection from coastal erosion and flooding. The same efforts are reported in the Indonesia and Vietnam study sites. In addition, Vietnam reports technological support to farmers in the form of drought- and flood-tolerant varieties and modified farming systems suited to the new climate (Table 12). Vietnam is also implementing more structural measures, in the form of dike and pond systems, to support livelihoods. Indeed, as noted by Francisco (2008), there is a lot that countries in the region could learn from Vietnam about how it has been living with flooding. Adaptation practices at the community and household levels were also obtained during the study. It is worth noting that most local government initiatives involved the community. Community folk participated in mangrove replanting, dike repairs and construction, and other activities to improve their environment and help prepare for disasters. At the household level, adaptation practices included relocating and strengthening houses, putting up defense structures like cement dikes, engaging in alternative livelihood activities to enhance financial security, and shifting fishing/farming practices to suit new and changed environments, particularly in Vietnam.

Cost-Effectiveness Analysis of Selected Adaptation Options
The results of a CEA of selected adaptation options, which were identified by LGUs as priority projects, are presented in Tables 13, 14, and 15. For the analysis, the researchers selected a common denominator, such as cost per unit of area protected or per household saved or protected. However, this is a very crude comparison, as multiple types of benefits may be delivered by each of the adaptation options. For instance, a mangrove protection project will produce other types of benefits than what can be "produced" by installing a dike to protect a given area. As such, the comparison should be interpreted with caution. The last column in each of the next three tables provides some additional information to aid the interpretation of results. For San Juan, Batangas, in the Philippines, sea wall construction and mangrove reforestation were compared, and the results showed that it is a lot cheaper to prevent a kilometer of shoreline erosion using mangrove reforestation (Table 13). In addition, this option produces other forms of benefits from the mangrove resources, in terms of both provisioning and regulating functions; if valued, these benefits would make this measure even more attractive. The use of an early warning system and the provision of evacuation shelter were also compared with the improvement of zoning regulations and relocation. As expected, the latter was a lot more costly to implement as a way of protecting households from the negative impacts of flooding and/or typhoons. The early warning system is being put in place in many parts of the country. A similar analysis was carried out for the Palawan, Philippines, study site (Babuyan in Honda Bay). Several options were evaluated: to protect households


Table 12 CCA and disaster mitigation strategies in the study sites, 2012

San Juan, Batangas, Philippines
Government-led initiatives: Organized the MDRMC, which is financed by 5 % of the 20 % Development Fund; gave out cash support (e.g., PhP 1,000–2,000 to affected fisherfolk); river dredging and widening to prevent flooding; regular IEC campaigns; maintenance of marine protected areas, mangrove replanting, and engagement in the "Billions of Trees Project" in 400 ha of upland, lowland, and beachside areas
Community-based initiatives: Aquaculture and fish processing projects; mangrove replanting; crown-of-thorns starfish removal, as spearheaded by resort owners; typhoon warning system improvement and preparation for emergency evacuation; cleanup of drainage and flood control structures
Autonomous household adaptation practices: Relocate or strengthen house structures; plant and sell mangrove seedlings; temporarily remove light structures in beach areas; join savings/credit cooperatives; modify planting schedules

Honda Bay, Palawan, Philippines
Government-led initiatives: Passed an ordinance to conserve, protect, and restore (CPR) Puerto Princesa City's sources of life; flood control project implementation and construction of breakwaters; mangrove reforestation; established a barangay disaster risk management council (BDRMC) with fund allocation
Community-based initiatives: Establishment/maintenance of the fish sanctuary; participation in riverbank bioengineering projects (e.g., sea dike construction, mangrove reforestation) to reduce erosion and siltation; establishment of a community-based early warning system and provision of a temporary evacuation center
Autonomous household adaptation practices: Use of indigenous materials to strengthen housing structures; use of cement and rocks to build dikes

Jakarta Bay, Indonesia
Government-led initiatives: Implemented measures related to watershed management and coastal and marine resources protection; conducted capacity building and community empowerment activities to implement watershed and marine and coastal resources management; promoted policies that integrate environmental concerns in economic development; encouraged institutional strengthening for river basin management and coastal and marine bay management
Community-based initiatives: Community involvement in various initiatives to protect watershed and coastal and marine resources; construction of permanent embankments; drainage improvement and river dredging; mangrove planting
Autonomous household adaptation practices: Clean the beach fronting their houses

Ben Tre Province, Vietnam
Government-led initiatives: Coastal zone management (road construction, dike upgrading, and mangrove protection); freshwater resources management (investment in irrigation systems for water storage, construction of dikes to prevent saltwater intrusion, and watershed management to protect water sources); supported agricultural adaptation (switch to salt-tolerant crops, investment in drought-tolerant crops, improvement of the early warning system); supported CCA for aquaculture and capture fisheries (technological innovation in pond construction for improved water storage, introduction of the fish-rice model in saline areas, and research to identify rich fishing grounds); information and awareness campaign on how to prepare for climate change
Community-based initiatives: Mangrove forest protection; ensuring the supply of freshwater for agriculture, aquaculture, and domestic needs (e.g., storage structure construction and provision of containers to harvest rainwater); relocation of at-risk houses; participation in sea dike construction
Autonomous household adaptation practices: Harvest rainwater; switch from black tiger shrimp to whiteleg shrimp to adapt to saline water; change cultivation schedules to avoid saltwater intrusion; use of sandbags to build dikes around the farm to prevent saltwater intrusion and seawater inflow

Source: Perez et al. (2013)

from storm surges (i.e., breakwater construction, dike construction, and mangrove reforestation), to protect them from inland flooding (i.e., upland reforestation, IEC with provision of temporary evacuation shelter, and household relocation), and to protect production areas (i.e., dike construction, riverbank rehabilitation, and river dredging). The results show the superiority of mangrove reforestation over structural measures, the cost-effectiveness of river dredging and riverbank rehabilitation, and support for an effective early warning system supplemented by IEC as part of DRR strategies (Table 14). For the study sites in Jakarta Bay, Indonesia, several options with varying objectives were compared, as shown in Table 15. River dredging (the study did not indicate how often this has to be done) was found to be more cost-effective than the construction of new canals or embankments and even mangrove rehabilitation. The high cost of mangrove rehabilitation is attributed to


Table 13 Cost-effectiveness analysis results for San Juan, Batangas, Philippines

Objective: Protect the coastline from erosion
- Sea wall construction: USD 0.16 M/linear km of erosion prevented
- Mangrove reforestation: USD 0.01 M/linear km of erosion prevented
Notes: Mangrove reforestation is not only more cost-effective but also offers other co-benefits like additional sources of income and marine biodiversity preservation

Objective: Increase the number of households safe from typhoon/flooding
- Zoning provisions and relocation: USD 0.07 M/HH saved
Notes: The changing zoning provisions need to be accompanied by the removal of communities from areas they currently occupy, a very costly and socially unattractive option

Source:

the need to purchase land from private landowners who already have rights over the areas previously occupied by mangroves. However, mangrove restoration is likely to make an even bigger contribution to the local economy in the face of climate change and the resulting increase in typhoon frequency and intensity (Tuan and Duc 2013). McIvor et al. (2012) suggested that mangroves can potentially play a significant role in coastal defense and DRR. Overall, one can see that the CEA results tend to favor mangrove reforestation over structural measures, as well as river dredging, as ways to increase the flood control function. The scientific basis for this claim was found in the study by Mazda et al. (2006), cited in Andrade et al. (2010). Moreover, the early warning system supplemented by the provision of evacuation shelters was found to be quite cost-effective compared with the other options evaluated. This finding is consistent with other studies, which validate the use of an early warning system as one of the most cost-effective measures to reduce damage costs (Hallegatte 2012; Linham and Nicholls 2010). The next section describes efforts to link research with local government adaptation planning based on the experience from two cross-country projects implemented from 2011 to 2013.
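The CE ratios discussed above are of the form discounted cost per unit of effectiveness (e.g., per household protected or per kilometer of shoreline kept from eroding). The sketch below illustrates that calculation under an assumed discount rate and purely illustrative cost streams; it is not the computation or the figures used in the country studies.

```python
# Hedged sketch: cost-effectiveness (CE) ratio = present value of costs divided
# by units of effectiveness. Discount rate and cost streams are assumptions.
def present_value(costs, rate=0.10):
    """Discount a stream of annual costs (years 0, 1, 2, ...) to present value."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

def ce_ratio(costs, effectiveness, rate=0.10):
    return present_value(costs, rate) / effectiveness

# Illustrative 20-year comparison for one kilometer of shoreline protected
seawall = ce_ratio([120_000] + [5_000] * 19, effectiveness=1.0)
mangrove = ce_ratio([8_000] + [500] * 19, effectiveness=1.0)
print(f"sea wall : {seawall:10,.0f} USD per km protected")
print(f"mangrove : {mangrove:10,.0f} USD per km protected")
```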


Table 14 Cost-effectiveness analysis for Honda Bay, Palawan, Philippines

Objective: Protect households from storm surges and loss of property and minimize sand erosion
- Breakwater construction: USD 0.276 M/HH
- Dike/levee construction: USD 0.032 M/HH
- Mangrove reforestation: USD 0.019 M/HH
Notes: Mangrove reforestation is cost-effective in protecting households and properties and in minimizing sand erosion where mangroves are seen to thrive well

Objective: Prevent river overflow and minimize siltation, which damage coconut plantations and fishponds
- Riverbank rehabilitation using vetiver grass: USD 0.004 M/ha
- Riverbank rehabilitation using vetiver grass combined with mechanical method: USD 0.034 M/ha
- Dike construction: USD 0.032 M/ha
- River dredging: USD 0.002 M/ha
Notes: The discussion on the planned options and cost-effectiveness (CE) ratios focused on prioritizing riverbed dredging together with riverbank rehabilitation using vetiver grass alone

Objective: Protect households from inland flooding
- Upland reforestation: USD 926/HH
- IEC/early warning system establishment and provision of temporary evacuation area: USD 120/HH
- Household relocation: USD 2,234/HH
Notes: IEC is cost-effective but success depends on the maturity of the residents to react accordingly

Source:

Working with Local Governments in Adaptation Planning
Climate change will affect everyone. Differences in the impacts felt will depend on the locality's hazard exposure, the people's adaptive capacity, and the LGU's level of preparedness. This means that adaptation planning has to be locale specific to suit the conditions and capability of different local governments.


Table 15 Cost-effectiveness analysis for Jakarta, Indonesia

Rorotan (objective: reduce the no. of HH affected by flooding)
- Construction of East flood canal: USD 307 M/HH
- Dredging of Sunter river: USD 0.695 M/HH
Notes: The cost-effective option suitable in this area is the dredging of Sunter river

Marunda (objective: reduce the no. of HH affected by flooding and coastal flooding)
- Construction of permanent embankment: USD 2.5 M/HH
- Mangrove rehabilitation: USD 13.37/HH
Notes: The cost-effective option is planting mangroves

Kalibaru (objective: reduce the no. of HH affected by flooding, coastal flooding, and saltwater intrusion)
- Road elevation: USD 2.64 M/HH
- Dredging of Cakung river: USD 2.09 M/HH
Notes: The more cost-effective solution is to dredge Cakung river

Kamal Muara (objective: reduce the no. of HH affected by flooding and coastal flooding)
- Dredging of Pesanggrahan river: USD 2.09 M/HH
- Mangrove rehabilitation: USD 2.43 M/HH
Notes: The more cost-effective solution is to dredge Pesanggrahan river. A large portion of the cost of planting mangroves is the value of coastal land owned by private individuals or groups

Muara Angke (objective: reduce the no. of HH affected by flooding and coastal flooding)
- Road elevation: USD 0.311 M/HH
- Mangrove rehabilitation: USD 2.07 M/HH
Notes: The more cost-effective solution is to elevate roads. Like in Kamal Muara, a large portion of the cost of planting mangroves is the value of privately owned coastal land

Source: Agus et al. (2013)

In order to bring research to the level where it can have impact, EEPSEA supported two multi-country projects to engage local government planners in adaptation planning in 2011–2013.

CBMS-EEPSEA Project
The main funding source of EEPSEA, the International Development Research Centre (IDRC), also supports a Community-Based Monitoring System (CBMS), a


global project with a presence in the Philippines, Vietnam, and Indonesia. The CBMS-EEPSEA partnership was carried out to pilot test the application of the EEPSEA framework on climate change vulnerability assessment and mapping at the local level using CBMS data, supplemented by data from other government sources. Two outcomes were expected from this initiative: (1) LGU-level capacity building on understanding how to assess climate change vulnerability and (2) identification of adaptation strategies based on research done on this topic. The study sites included (1) in Vietnam, Kim Son district of Ninh Binh province in the North, Nghia Lo municipality of Yen Bai province in the North Mountainous Region, and Tam Ky town of Quang Nam province in the Central Region; (2) in Indonesia, two villages in Kota Pekalongan (Pasirsari village, Kecamatan Pekalongan Barat, and Panjang Wetan village, Kecamatan Pekalongan Utara); and (3) in the Philippines, the municipality of Carmona in Cavite province and Marinduque province. Experience in pilot testing the climate change vulnerability framework shows that its main advantage is its simplicity, which allows local government decision-makers to understand what factors they should consider when doing such an assessment. The ability to map information, which allows government officials to immediately see how they fare relative to their neighbors, was also found attractive. Experience in using vulnerability mapping to aid in identifying suitable adaptation strategies varies across the participating country teams. The Philippine teams managed to bring the discussion to the point that they were able to identify the adaptation practices that need to be strengthened and those that need to be implemented, as listed in Table 16 (Reyes 2012). The Vietnamese team shared that the process helped them understand the location and the sources of vulnerability but that these were not sufficient to assess what the community needs in order to adapt to the changing climate. In a way, identifying the adaptation practices that the community or the local government could undertake is indeed only the first step. Given limited resources and varying capacity, an assessment of the economics of these measures and their technical and social acceptability must be carried out as well; these were not within the scope of the CBMS-EEPSEA project. In the case of the Indonesian team, the adaptation policies and programs of the national and local governments were discussed and analyzed as a separate activity from the vulnerability mapping. The analysis revealed that current adaptation planning largely addresses the hazard component of vulnerability, which they pointed out is a major limitation in light of the findings of the project. In particular, the CBMS-EEPSEA project showed clearly that adaptive capacity and sensitivity are equally important sources of vulnerability and should therefore be addressed as well. All the project country teams recommended that similar efforts be made to assist other LGUs to better understand their vulnerability situation and to help them identify adaptation practices. To evaluate the economic viability and acceptability of these identified strategies would require a longer time and a different set


Table 16 Adaptation strategies identified in the Philippine CBMS-EEPSEA project

Carmona, Cavite
- Strengthen current efforts in river cleanup, solid waste management, and de-clogging of canals and waterways
- Conduct more orientations on DRR management, enhance DRR communication capability and early warning system with new equipment and strengthen flood forecasting, and upgrade evacuation and health facilities
- Install diversion canals, dams, and reservoirs to protect industrial and agricultural lands

Marinduque province
- Review/update and enhance the provincial DRR management plan
- Strengthen the rehabilitation of watershed areas and reforestation projects through the National Greening Project (NGP), Bamboo Greenbelt Project, and other forest rehabilitation projects
- Build LGU and community capability and capacity on the various facets of DRR (i.e., warning, search and rescue, emergency relief, logistics and supply, communication and information management, emergency operation management, evacuation planning and management, health emergency education, and post disaster management)
- Establish/construct evacuation centers in safe areas and improve and construct roads and feeder roads, drainage facilities, footbridges, spillways, and floodways in priority areas
- Produce and disseminate natural hazard and geographic info system susceptibility maps and IEC materials and install Integrated Warning/Communication and Response System
- Install automatic weather stations in major critical areas such as Boac and Sta. Cruz

Source: Reyes (2012)

of skills, as demonstrated in EEPSEA’s project with IDRC’s Climate Change and Water program, which is discussed in the next section.
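The vulnerability assessment and mapping frameworks referred to in this section combine exposure, sensitivity, and adaptive capacity indicators into a comparable index per locality. The sketch below is only an illustration of that logic with assumed equal weights and invented village-level scores; it is not the CBMS data or the exact aggregation used by the project teams.

```python
# Hedged sketch of a vulnerability index: vulnerability rises with exposure and
# sensitivity and falls with adaptive capacity. Weights and scores are assumptions.
import pandas as pd

def normalize(s: pd.Series) -> pd.Series:
    """Rescale an indicator to 0-1 so components can be combined."""
    return (s - s.min()) / (s.max() - s.min())

villages = pd.DataFrame({
    "village": ["A", "B", "C"],
    "exposure": [0.8, 0.4, 0.6],          # e.g., frequency of typhoons/floods
    "sensitivity": [0.7, 0.5, 0.2],        # e.g., population density, poverty
    "adaptive_capacity": [0.3, 0.6, 0.8],  # e.g., income, infrastructure, education
})

w_e, w_s, w_a = 1 / 3, 1 / 3, 1 / 3        # equal weights, an assumption
villages["vulnerability"] = (
    w_e * normalize(villages["exposure"])
    + w_s * normalize(villages["sensitivity"])
    - w_a * normalize(villages["adaptive_capacity"])
)
print(villages.sort_values("vulnerability", ascending=False))
```

Mapping the resulting index per village or commune is what lets officials see at a glance how their locality fares relative to its neighbors.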

CCW-EEPSEA Project
In February 2011, EEPSEA and the IDRC program on climate change and water (CCW) awarded three 3-year research grants to institutions based in the Philippines, Vietnam, and Cambodia. The project, entitled "Building Capacity to Adapt to Climate Change in Southeast Asia," aimed to enhance the capacity of university researchers, provincial officials, LGU representatives, and other mass organizations in selected SEA countries by equipping them with knowledge on how to assess climate change causes and impacts and how to undertake an economic analysis of selected adaptation options. Specifically, the collaborating partners were expected to undertake vulnerability analysis, prioritize adaptation options, and develop sound and feasible project proposals for funding. Based on the highly vulnerable sites identified in Yusuf


and Francisco (2009), the project selected the following study sites: Kampong Speu in Cambodia (highly vulnerable to drought), Laguna in the Philippines (highly vulnerable to flooding), and Thua Thien Hue in Vietnam (exposed to flooding and typhoons). The project adopted a multidisciplinary and participatory approach. Each country research team was composed of researchers with backgrounds in natural science, sociology, and economics. Their mandate was to work with their study site's local government officials in undertaking vulnerability assessment, in identifying and evaluating adaptation projects, and in developing the CCA proposal/plan for submission to donor agencies. During project implementation, the research teams implemented a series of training courses, joint field visits, and dialogues with community members. The key skills that the training courses addressed were vulnerability assessment and mapping following the framework used in Yusuf and Francisco (2009), evaluation of climate change impacts, economic analysis of adaptation options, and proposal development for adaptation funding support. Interestingly, many of the team members are now being engaged to provide consultancy services in these areas in their own countries – a sure offshoot of their engagement in this project. The engagement of the local government people in the project brought about several benefits, namely, (1) greater awareness of climate change risks; (2) generation of risk maps, which were used to develop agricultural production plans for three subregions in Thua Thien Hue; (3) integration of climate risks in the socioeconomic development plan; and (4) improved knowledge on how to conduct vulnerability assessment and economic analysis of adaptation options as well as proposal development. They were also able to network with government people from other countries as the project hosted sharing and training meetings as part of the capacity building activities designed for the collaborators. A post-project survey was implemented to assess changes in the knowledge and skills of the people involved in the project. All team members across the three countries demonstrated improved understanding of the climate change problem and a higher level of knowledge of the various tools used by the team. The biggest improvement was noted among the research team members from Cambodia. In terms of concrete actions taken, local government partners in Thua Thien Hue have developed agricultural production plans based on the results of the climate change vulnerability map that was prepared through the project. The Thua Thien Hue LGU staff developed proposals to raise funds for the construction of local CCA measures, particularly better use of water in rice production. The LGU partners in Laguna, Philippines, now have better appreciation and knowledge of how to implement vulnerability assessment and mapping and how to package proposals, but they acknowledged that they may not be able to carry these out on their own. Their reservation regarding the independent conduct of such activities is not due to a perceived capacity constraint but is more a result of their busy schedules, as they are responsible for multiple projects at the provincial office.


The project research findings also affirmed the findings of previous studies: first, that vulnerability to climate change is quite high and that it varies across areas. In Thua Thien Hue, Vietnam, households living in delta communes had higher adaptive capacity compared with households living in coastal and upland communes. Social capital was found to be generally high, but limited infrastructure support, limited access to technology, and lower financial resources contributed to lower adaptive capacity for many households. In Laguna, Philippines, higher vulnerability was noted among coastal communities compared with agriculture-based households. Interestingly, it was found that a big proportion of the vulnerable households are not knowledgeable about the threats posed by climate change. The most vulnerable households are often also the poorest, so it probably makes no difference to them where their poverty stresses are coming from. In terms of experience with climate-related hazards, the majority identified typhoons and flooding as the hazards posing the greatest threat based on damages experienced over the years, most intensely in the last few years. The biggest losses come in the form of damage to houses and furniture. About 16 % of the households regularly experience evacuation, and a tenth of those interviewed had experienced permanent relocation, as supported by the local government of Sta. Cruz, Laguna. The important role played by social capital, particularly women's organizations, in accessing DRR programs was also highlighted in the Philippine study site. The study site in Kampong Speu, Cambodia, is predominantly a farming community and is threatened mostly by drought, with flash floods occurring in certain areas only. Vulnerability was found to be highest in the eastern and central western regions of the province on account of the highest concentration of vulnerable communes in these areas. Farming households in lowland areas have been coping with drought conditions by shifting to short-duration crop varieties. Those in mountainous areas were found less willing to shift to new varieties, but they generally have more resources to cope with the impacts of a changing climate. For both sites, other adaptation strategies included constructing and renovating canals, selling household assets, and migrating to work outside the village (e.g., in a garment factory, as household help, or in construction). The teams also packaged adaptation proposals, which the LGUs can submit for funding support: (1) a technology-based flood early warning system for the Sta. Cruz River Watershed in Laguna, Philippines; (2) an improved irrigation system for Kampong Speu, Cambodia; and (3) upgrading of the An Xuan tributary banks and river dredging in Thua Thien Hue, Vietnam. These projects were found to be the most economically efficient options for the three sites based on the economic analysis of several alternative projects.
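A hedged sketch of the kind of ranking behind "most economically efficient option" is given below: assumed benefit (avoided damage) and cost streams for each candidate project are discounted and compared by net present value. Every option name and figure here is illustrative, not a value from the Laguna, Kampong Speu, or Thua Thien Hue analyses.

```python
# Hedged sketch: rank adaptation options by net present value (NPV) of assumed
# 15-year benefit and cost streams. All figures are illustrative assumptions.
def npv(benefits, costs, rate=0.10):
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

options = {
    "flood early warning system": ([0] + [90_000] * 14, [150_000] + [10_000] * 14),
    "river dredging":             ([0] + [60_000] * 14, [200_000] + [20_000] * 14),
    "household relocation":       ([0] + [80_000] * 14, [900_000] + [5_000] * 14),
}
ranked = sorted(options.items(), key=lambda kv: npv(*kv[1]), reverse=True)
for name, (benefits, costs) in ranked:
    print(f"{name:28s} NPV = {npv(benefits, costs):>12,.0f} USD")
```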

Lessons in Action Research: Working with Local Government Units
The researchers of the two projects considered working with the local governments both rewarding and productive. They were able to immediately share and discuss


their research results with local government planners as the studies progressed. The local government collaborators were involved more deeply in data analysis and presentation as they were engaged throughout the project. Hence, there is joint ownership of the research reports, which were linked to adaptation planning more directly than in the traditional research process. More importantly, there is improvement in the knowledge and skills not only of the researchers but also of their collaborating local government partners. All these are expected to result in a better understanding of the problem and a higher capacity to assess, evaluate, and decide on how it can be addressed. Nonetheless, it must be mentioned that the heavy workload of the local government partners prevented them from being more engaged in the research process. They were engaged in the project only part time, with DRR/climate change being only one of their many responsibilities in their capacity as local government staff. The same can be said of the university researchers, who also worked only part time on the research. They met and worked with their collaborators only during meetings or major activities. Perhaps the project duration for the CCW-EEPSEA project was too long (i.e., 3 years), and that of the CBMS-EEPSEA project (i.e., 1 year) too short. The CCW-EEPSEA project differs from the CBMS-EEPSEA project in that it is longer and thus has more resources. Its idea was to fully understand the various aspects of vulnerability by forming three teams (i.e., economic, social, and mapping teams) and to analyze and characterize who the most vulnerable sectors in the locality are. The team comprised university researchers, LGU representatives, and representatives from NGOs working in the community. In the second year, the potential and existing adaptation practices were identified and evaluated, and an economic analysis of those options was carried out. In the last year, the team assisted the LGU in developing a proposal to seek support from donor organizations to fund its adaptation plans. This 3-year project completely contrasts with the 1-year duration of the CBMS-EEPSEA project, both in terms of depth of analysis and resources. From a program point of view, it would appear that having 2 years for both projects would be a Pareto improvement: the CBMS-EEPSEA project would have more time to expand its adaptation analysis, while 3 years may have been too long for the CCW-EEPSEA team, as the members took on multiple assignments that drew them away from a more concerted, focused analysis of the problem at hand. Somewhere in between these two project durations could result in more intensive interaction between researchers and their local government partners as well as richer data collection and analysis.


Lessons from EEPSEA-Funded Climate Change Adaptation Research in Southeast Asia: Future Directions
Climate change is here to stay and is likely to get worse, given that serious commitments to reduce greenhouse gas (GHG) emissions will only take effect from 2020, and only if the 2015 international agreement with significant GHG reduction commitments is signed by all countries in Paris. The prospect of having such a stronger agreement in place does not look good, given how previous talks have failed on this front. We can only hope that our global leaders will come to their senses and put in place serious efforts to reduce GHG emissions to slow down climate change. On the positive side, it is good to hear about various countries' efforts to move toward a green economy by using more efficient technologies, achieving high energy savings in buildings, and investing in ecosystem reforestation or protection. However, these efforts cannot be done only at a microscale – we need government commitment to pass stronger measures to reduce carbon emissions. Surely, a carbon tax supplemented by programs to reduce impacts on the poor is a step in the right direction, but it is opposed in many countries because of strong lobbying pressure from the sectors that would be hit by this measure. With the exception of climate deniers, everyone agrees that climate change poses a real threat to people, with some countries facing (and, in fact, already experiencing) more serious challenges than others and the poor being more vulnerable to it. The various research projects supported by EEPSEA on CCA in SEA showed that the impact of extreme climate events on those affected is huge, with an extreme event costing households up to 44 % of their annual household income. Depending on the number of extreme climate events (which, unfortunately, are expected to become more frequent and intense in the future), future damage can be even bigger and will most likely drive vulnerable households into extreme poverty. Despite the severity of the situation, adaptation actions by SEA households are generally very crude and mostly take the form of reactive measures (e.g., strengthening housing units, using sandbags during flooding, storing food, evacuation) rather than preventive ones (e.g., relocation, building multistorey and stronger housing units). Largely, this is explained by the limited resources available to most vulnerable households for investment in stronger measures. Moreover, in many cases, there is still the hope that extreme events will not occur that often in the future, or the belief that households are used to these conditions already and will be able to manage. This complacency is now being addressed by the enhanced IEC campaigns on DRR/climate change that most governments are launching. A lot of this effort needs to be done more widely and intensely, but stronger adaptation investments need to be made if adaptive capacity or resilience is to be enhanced in the areas most vulnerable to climate change. The financial resources available to these communities are undoubtedly small in relation to needs, and efforts must be made to provide more resources and/or use the limited resources available more wisely. Hopefully, developing countries in the region will be able to access some of the adaptation


funds available; the economic analysis showed that some of these measures are economically viable. Local governments need support in adaptation planning. The two action research projects carried out with funding support from EEPSEA and the collaborating organizations (IDRC's CBMS and CCW) showed that local government officials are receptive to such research collaborations and are very much willing to learn science-based planning. With more resources provided to free up some of their time during the conduct of such action research projects, a higher-level and more fruitful engagement can perhaps be expected from them. In addition, our experience shows that there are existing programs (e.g., CBMS, WorldFish) in most communities that researchers can tie up with so that synergy can be achieved in bringing more resources and technical expertise into the action research. Finally, given the urgency of the situation and the relatively higher level of research information now available from different organizations working in SEA, it is high time to move toward research that feeds directly into concrete actions, that is, to not simply come up with plans and proposals but with adaptation projects/measures that can be readily implemented on the ground. This calls for action research with some funding available to pilot test adaptation projects that will be evaluated as part of the research. This is not going to be easy, as there seems to be a dichotomy between research and development: in most cases, research organizations just do research while development organizations focus on development projects. This is unfortunate in this instance, since concrete actions supported by science are what we need to help prepare local governments, communities, and households to better adapt to climate change. Research alone oftentimes translates to just pure talk; action research is walking the talk.

References

ADB (Asian Development Bank) (2009) The economics of climate change in Southeast Asia: a regional review. Asian Development Bank, Manila
ADB (Asian Development Bank) (2014) Climate change and rural communities in the greater Mekong Subregion: a framework for assessing vulnerability and adaptation options. Asian Development Bank, Bangkok
Adger WN (1999) Social vulnerability to climate change and extremes in coastal Vietnam. World Dev 27(2):249–269
Agus HP, Siti Hajar S, Ivonne MR, Klaudia OS (2013) Climate change impact, vulnerability assessment, economic and policy analysis of adaptation strategies in Jakarta Bay, Indonesia. Unpublished research report. Economy and Environment Program for Southeast Asia (EEPSEA), Philippines
Andrade AP, Fernandez BH, Gatti RC (2010) Building resilience to climate change: ecosystem-based adaptation and lessons from the field. International Union for Conservation of Nature (IUCN), Gland
Anthoff D, Nicholls RJ, Tol RSJ, Vafeidis AT (2006) Global and regional exposure to large rises in sea level: a sensitivity analysis. Working paper 96, Tyndall Centre for Climate Change Research, University of East Anglia
Asfaw S, Lipper L (2011) Economics of PGRFA management for adaptation to climate change: a review of selected literature. Commission on Genetic Resources for Food and Agriculture, Agricultural Economic Development Division (ESA), Food and Agriculture Organization (FAO), Rome


Cruz RV, Harasawa H, Lal M, Wu S, Anokhin Y, Punsalmaa B, Honda Y, Jafari M, Li C, Huu Ninh N (2007) Asia. Climate change 2007: impacts, adaptation and vulnerability. In: Parry ML, Canziani OF, Palutikof JP, van der Linden PJ, Hanson CE (eds) Contribution of working group II to the fourth assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge, MA, pp 469–506
Cumming-Bruce N, Gladstone R (2013) U.N. appeals for $301 million towards typhoon relief. The New York Times, 12 Nov 2013. http://www.nytimes.com/2013/11/13/world/asia/philippines-typhoon-haiyanresponse.html. Retrieved 13 Aug 2014
Elyda C, Dewi SW (2014) Jakarta braces for major flood. The Jakarta Post, 19 Jan 2014. http://www.thejakartapost.com/news/2014/01/19/jakarta-braces-major-flood.html. Retrieved 13 Aug 2014
European Commission (2007) Disaster preparedness in Vietnam. European Commission article. http://ec.europa.eu/echo/files/policies/dipecho/presentations/vietnam.pdf
European Commission (2014) New study quantifies the effects of climate change in Europe. JRC news release, Copenhagen, 25 June 2014. https://ec.europa.eu/jrc/sites/default/files/jrc_20140625_newsrelease_climate-change_en.pdf
Francisco HA (2008) Adaptation to climate change needs and opportunities in Southeast Asia. ASEAN Econ Bull 25(1):7
Francisco HA, Predo CD, Manasboonphempool A, Tran P, Jarungrattanapong R, The BD, Peñalba LM, Tuyen NP, Tuan TH, Elazegui DD, Shen Y, Zhu Z (2011) Determinants of household decisions on adaptation to extreme climate events in Southeast Asia. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Garnaut R (2010) The Garnaut climate change review in Australia. http://www.garnautreview.org.au/. Retrieved 12 Aug 2014
GISTDA (Geo-Informatics and Space Technology Development Agency) (2011) Radar satellite images and flood maps of the 2011 flood, May–Dec 2011
Hallegatte S (2012) A cost effective solution to reduce disaster losses in developing countries, hydro meteorological services, early warning, and evacuation. Policy research working paper 6058, World Bank
Handley P (1992) Before the flood. Climate change may seriously affect Southeast. Far East Econ Rev 65(155):65–66
Harmeling S (2011) Global climate risk index 2011: who suffers most from extreme weather events? Weather-related loss events in 2009 and 1990 to 2009. A briefing paper. Germanwatch e.V. 24 pp
Heijmans A, Victoria L (2001) Citizenry-based & development-oriented disaster response. Centre for Disaster Preparedness and Citizens' Disaster Response Centre, Quezon City
Horiguchi C (2014) Rammasun is one of the strongest typhoons to hit Southeast China in recent years. http://www.rms.com/blog/2014/07/25/rammasun-is-one-of-the-strongest-typhoons-tohit-southeast-china-inrecent-years/
IPCC (Intergovernmental Panel on Climate Change) (2007) Climate change 2007: impacts, adaptation and vulnerability. Contribution of working group II to the fourth assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge, MA
Jones RN, Preston BL (2006) Climate change impacts, risk and the benefits of mitigation. Commonwealth Scientific and Industrial Research Organisation, Canberra
Kahn M (2005) The death toll from natural disasters: the role of income, geography, and institutions. Rev Econ Stat 87(2):271–284
Linham MM, Nicholls RJ (2010) Technologies for climate change adaptation: coastal erosion and flooding, TNA guidebook series. United Nations Environment Programme, Roskilde
Loo YY, Billa L, Singh A (2014) Effect of climate change on seasonal monsoon in Asia and its impact on the variability of monsoon rainfall in Southeast Asia. Geosci Front. doi:10.1016/j.gsf.2014.02.009


Lowe A (2014) Typhoon Haiyan survivors in Tacloban face upheaval as city tries to rebuild. The Guardian, 8 May 2014. http://www.theguardian.com/world/2014/may/08/typhoon-haiyan-survivors-tacloban-philippines. Retrieved 13 Aug 2014
Maddison D (2006) The perception and adaptation to climate change in Africa. Discussion paper no. 10. Centre for Environmental Economics and Policy in Africa (CEEPA), University of Pretoria, Pretoria
Maddison D (2007) The perception of and adaptation to climate change in Africa. World Bank policy research working paper 4308. The World Bank, Washington, DC
Mazda Y, Magi M, Ikeda Y, Kurokawa T, Asano T (2006) Wave reduction in a mangrove forest dominated by Sonneratia sp. Wetl Ecol Manag 14:365–378
McIvor A, Moller I, Spencer T, Spalding M (2012) Reduction of winds and swell waves by mangroves, Natural coastal protection series. Cambridge Coastal Research Unit working paper 40
Morgan J (1993) Natural and human hazards. In: Brookfield H, Byron Y (eds) Southeast Asia's environmental future: the search for sustainability. Oxford University Press, Kuala Lumpur
Nabangchang O, Leangcharoean P, Jarungrattanapong R, Allair M, Whittington D (2013) Economic costs incurred by households in the 2011 Bangkok flood. Economy and Environment Program for Southeast Asia (EEPSEA), Los Baños
Nhemachena C, Hassan R (2007) Micro-level analysis of farmers' adaptation to climate change in Southern Africa. IFPRI discussion paper 00714. International Food Policy Research Institute (IFPRI), Washington, DC
Nhemachena C, Hassan R (2008) Determinants of African farmers' strategies for adapting to climate change: multinomial choice analysis. Afr J Agric Resour Econ 2(1):83–104
Nhu OL, Thuy NT, Wilderspin I, Coulier M (2011) A preliminary analysis of flood and storm disaster data in Viet Nam, Global assessment report on disaster risk reduction. United Nations Development Program (UNDP), Hanoi
Peñalba LM, Elazegui DD (2011) Adaptive capacity of households, community organizations and institutions for extreme climate events in the Philippines. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Perez ML, Sajise AJU, Ramirez PJB, Purnomo AH, Dipasupil SR, Regoniel PA, Nguyen KAT, Zamora GJ (2013) Economic analysis of climate change adaptation strategies in selected coastal areas in Indonesia, Philippines and Vietnam. Economy and Environment Program for Southeast Asia and WorldFish, Penang
Phong T, Tuan TH, The BD, Tinh BD, Peñalba LM, Elazegui DD, Jarungrattanapong R, Manasboonphempool A, Yueqin S, Zhu Z, Li L, Lv Q, Wang X, Wang Y, Nghiem PT, Le TVH, Vu TDH, Pamela DM, Armi S, Safwan H, Dwi RP, Mamad TMMF, Taora V, Titania S, Saskya S, Alliza A, Wulan S, Francisco HA (2011) Cross-country analysis of household adaptive capacity. Unpublished research report. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Pittock B (ed) (2003) Climate change: an Australian guide to the science and potential impacts. Australian Greenhouse Office, Canberra
Reyes CM (2012) CBMS-EEPSEA PEP-Asia CBMS network climate change vulnerability mapping in the Philippines: a pilot study. Unpublished research report. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Roncoli C, Ingram K, Kirshen P (2002) Reading the rains: local knowledge and rainfall forecasting among farmers of Burkina Faso. Soc Nat Resour 15:411–430
Stevens A (2014) CNN's Andrew Stevens returns to Tacloban more than six months after Typhoon Haiyan, 19 June 2014. http://cnnpressroom.blogs.cnn.com/2014/06/19/cnns-andrew-stevensreturns-to-tacloban-more-than-six-months-after-typhoon-haiyan/. Retrieved 13 Aug 2014
Tiwari KR, Rayamajhi S, Pokharel RK, Balla MK (2014) Determinants of the climate change adaptation in rural farming in Nepal Himalaya. Institute of Forestry, Tribhuvan University, Pokhara
Tuan TH, Duc TB (2013) Cost-benefit analysis of mangrove restoration in Thi Nai Lagoon, Quy Nhon City, Vietnam. Asian cities climate resilience working paper series 4, 2013


Tuan AT, Phong T, Tran HT (2012) Review of housing vulnerability implications for climate resilient houses. Discussion paper series, Institute for Social and Environmental Transition-International
UNEP (United Nations Environment Program) (2008) An overview of the state of the world's fresh and marine waters, 2nd edn. http://www.unep.org/dewa/vitalwater/index.html
Ward P, Shively G (2011) Vulnerability, income growth and climate change. World Dev 40(5):916–927
Wijayanti P, Tono H, Pramudita D (2014) Estimation of flood river damage in Jakarta: the case of Pesanggrahan river. Economy and Environment Program of Southeast Asia (EEPSEA), Los Baños
World Bank (2011) Vulnerability, risk reduction and adaptation to climate change: Indonesia. http://sdwebx.worldbank.org/climateportalb/doc/GFDRRCountryProfiles/wb_gfdrr_climate_change_country_pofile_for_IDN.pdf
Yueqin S, Zhu Z, Li L, Lv Q, Wang X, Wang Y (2011) Analysis of household vulnerability and adaptation behaviors to Typhoon Saomai, Zhejiang Province, China. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Yusuf AA, Francisco HA (2009) Hotspots! Mapping climate change vulnerability in Southeast Asia. Economy and Environment Program for Southeast Asia (EEPSEA), Singapore
Ziervogel G, Bithell M, Washington R, Downing T (2005) Agent-based social simulation: a method for assessing the impact of seasonal climate forecasts among smallholder farmers. Agr Syst 83(1):1–26
The Economist (2013) Typhoon Haiyan: worse than hell. 16 Nov 2013. http://www.economist.com/news/asia/21589916-one-strongeststorms-ever-recorded-hasdevastated-parts-philippines-and-relief. Retrieved 13 Aug 2014

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_68-1 # Springer Science+Business Media New York 2015

Potential Impacts of the Growth of a Mega City in Southeast Asia, a Case Study on the City of Dhaka, Bangladesh A. K. M. Azad Hossaina* and Greg Eassonb a National Center for Computational Hydroscience and Engineering (NCCHE), The University of Mississippi, University, MS, USA b Mississippi Mineral Resources Institute, The University of Mississippi, University, MS, USA

Abstract
Megacities with populations of more than ten million people in compact urban areas are the most vulnerable environments on Earth. The impacts of climate change on these megacities will be multifaceted and severe, especially in developing countries, due to fast growth rates and inefficient adaptation. It is therefore very important to understand the contributions of the growth of megacities to climate change, especially in developing countries. Dhaka, the capital of Bangladesh, is one of the fastest-growing megacities in the world; its population increased from 6.621 million (in 1990) to 16.982 million (in 2014). Today, Dhaka is the 11th largest megacity in the world and is projected to be the 6th largest, with a population of 27.374 million, by the year 2030. Remote sensing technology has been successfully used for mapping, modeling, and assessing urban growth and associated environmental studies for many years. This research investigates how the intensity of urban heat island (UHI) effects correlates with the continuous decrease in the greenness of the city of Dhaka, as measured from satellite observations. The results of this study indicate that the Landsat imagery-derived normalized difference vegetation index (NDVI) can be used to investigate the changes in greenness in the city of Dhaka from 1980 to 2014. The changes in greenness can be correlated with the increase in the intensity of UHI effects in the city of Dhaka as determined using Landsat thermal data from 1989 to 2014.
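As a quick, hedged illustration of the index referenced in the abstract (not the authors' Landsat processing chain), NDVI is computed from red and near-infrared reflectance as (NIR - Red) / (NIR + Red); the arrays below are illustrative values rather than actual scene data.

```python
# Minimal NDVI sketch with illustrative reflectance values (no real Landsat data).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); pixels with a zero denominator become NaN."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    out = np.full(nir.shape, np.nan)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# First row mimics vegetated pixels (NDVI near +0.6 to +0.7),
# second row mimics sparsely vegetated urban surfaces (low NDVI).
nir = np.array([[0.45, 0.40], [0.20, 0.25]])
red = np.array([[0.08, 0.10], [0.15, 0.18]])
print(ndvi(nir, red))
```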

Keywords Megacity; Southeast Asia; Dhaka; Bangladesh; Global climate change; Remote sensing; Digital image processing; Satellite observations; Landsat 1–3 MSS; Landsat 4–5 TM; Landsat 7 ETM+; Landsat 8 OLI; Landsat 8 TIRS; Land surface temperature (LST); Vegetation; Greenness; Normalized difference vegetation index (NDVI); Land use and land cover change; City; Urbanization; Urban management; Urban heat island (UHI); Image classification; Statistics; Mapping; Modeling; Monitoring; Comparison; Population growth; Environment; Developing world; Developed countries; Potential impact; Data time series; Thermal imagery; Visual analysis; Mean temperature; Research; Case study; Fastest growing; Prediction; Projection; Data calibration; Radiance; Reflectance

Introduction
Urban populations grew rapidly throughout the nineteenth century, more by migration from rural areas to the cities and manufacturing centers than by absolute population growth.

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_68-1 # Springer Science+Business Media New York 2015

Table 1 List of megacities (United Nations 2014)

Megacity          Country              Population 2014 (thousands)   Population 2030 (thousands)   Rank 2014   Rank 2030   Average growth 2010–2015 (%)
Tokyo             Japan                37,833                        37,190                        1           1           0.6
Delhi             India                24,953                        36,060                        2           2           3.2
Shanghai          China                22,991                        30,751                        3           3           3.4
Mexico City       Mexico               20,843                        23,865                        4           10          0.8
Sao Paolo         Brazil               20,831                        23,444                        5           11          1.4
Mumbai            India                20,741                        27,797                        6           4           1.6
Osaka             Japan                20,123                        19,976                        7           13          0.8
Beijing           China                19,520                        27,706                        8           5           4.6
New York***       USA                  18,591                        19,885                        9           14          0.2
Cairo             Egypt                18,419                        24,502                        10          8           2.1
Dhaka             Bangladesh           16,982                        27,374                        11          6           3.6
Karachi           Pakistan             16,126                        24,838                        12          7           3.3
Buenos Aires      Argentina            15,024                        16,956                        13          18          1.3
Kolkata           India                14,766                        19,092                        14          15          0.8
Istanbul          Turkey               13,954                        16,694                        15          20          2.2
Chongqing         China                12,916                        17,380                        16          17          3.4
Rio de Janeiro    Brazil               12,825                        14,174                        17          23          0.8
Manila            Philippines          12,764                        16,756                        18          19          1.7
Lagos             Nigeria              12,614                        24,239                        19          9           3.9
Los Angeles*      USA                  12,308                        13,257                        20          26          0.2
Moscow            Russian Federation   12,063                        12,200                        21          31          1.2
Guangzhou         China                11,843                        17,574                        22          16          5.2
Kinshasa          Congo**              11,116                        19,996                        23          12          4.2
Tianjin           China                10,860                        14,655                        24          22          3.4
Paris             France               10,764                        11,803                        25          33          0.7
Shenzhen          China                10,680                        12,673                        26          29          1
London            UK                   10,189                        11,467                        27          36          1.2
Jakarta           Indonesia            10,176                        13,812                        28          25          1.4

* Los Angeles-Long Beach-Santa Ana
** Democratic Republic of the Congo
*** New York-Newark

Throughout the twentieth century, the number and sizes of cities grew, along with the percentage of the total population living in cities (Schubel and Levi 2000). Since 1950, the worldwide urban population has grown from 746 million to 3.9 billion in 2014, or 54 % of the total global population (United Nations 2014). Continued population growth and urbanization are projected to add another 2.5 billion people to the world's urban population by 2050 (United Nations 2014). A large percentage of this urban growth is concentrated in the developing world, where the average urban growth rate is 3.5 % per year, compared with less than 1 % per year in developed countries (United Nations 1997; WRI 1998). Asia, despite its lower level of urbanization, is home to 53 % of the world's urban population. By 2050, close to 90 % of the increase in the world's urban population is projected to take place in Asia and Africa (United Nations 2014).


The past several decades have seen the emergence of megacities, metropolitan areas with a total population in excess of ten million people (New Scientist Magazine 2006). A megacity can be a single metropolitan area or two or more metropolitan areas that have converged. The concept of megacities was initiated in 1987 to combine theory and practice in the search for successful approaches to improve urban management and the conditions of daily life in the world's largest cities. The megacities concept was based on a collaborative effort among government, business, and community leaders of these megacities, in an attempt to shorten the time between the introduction of innovative ideas and their implementation and diffusion. The idea was coined not simply to identify, distill, and disseminate positive approaches but to strengthen the leaders and groups who are evolving these approaches and to find sources of support to multiply their efforts. The idea promotes a dual strategy that functions simultaneously at the practical and theoretical levels: (1) sharing "best practices" among the cities and putting the lessons of experience in the hands of decision makers and the public and (2) gaining a deeper understanding of the process of innovation and the consequences of deliberate social changes in the cities.

In 1950, New York and London were the world's only megacities (Schubel and Levi 2000). In 1990, the number of megacities had increased to 10, with a population of 153 million people, representing less than 7 % of the global urban population. By 2014, the number of megacities had nearly tripled to 28. The urban population in these megacities has grown to 453 million, and these areas now account for 12 % of the world's urban residents. The number of megacities is projected to increase to 41 by 2030 (United Nations 2014). Since most of the recent urban growth is concentrated in the developing world, the majority of the megacities are expected to be located in the developing world (Schubel and Levi 2000). Currently, 15 out of the 28 megacities are located in Asia, and this number is projected to increase to 23 by 2030 (United Nations 2014). Table 1 lists the current and projected megacities in the world.

The most vulnerable environments on Earth are the urban areas, especially the megacities. It is increasingly recognized that airborne emissions from major urban and industrial areas influence both air quality and climate change on scales ranging from regional to continental and global. The viability of important natural and agricultural ecosystems in regions surrounding highly urbanized areas is severely affected by deteriorating urban air quality. Megacities also influence regional atmospheric chemistry. This situation is particularly acute in the developing world, where the rapid growth of megacities is producing atmospheric pollution of unprecedented severity and extent (Gurjar et al. 2014).

The impacts of climate change due to urbanization are multifaceted and severe, and they differ dramatically between megacities in the developed and in the developing countries. In developed countries, the impacts are already being addressed with efficient technologies, policies, and regulations, whereas in developing countries, fast growth rates and inefficient adaptation make the impacts imminent and severe. There are also no signs that the governments in the developing countries will prove to be more capable in the future.
These swarming, massive urban areas will continue to grow and should concern the world (Liotta and Miskel 2012). It is therefore very important to understand the impacts of the growth of megacities on climate change. Studies on megacities at different spatial and temporal scales using various models will be required to understand their local-to-global impacts and implications (Gurjar et al. 2014). Lawrence et al. (2007) employed a global model to examine the outflow characteristics of pollutants from megacities. That model demonstrated the trade-offs between pollutant buildup in the region surrounding each megacity versus export to downwind regions or to the upper troposphere. Unfortunately, the coarse resolution of global atmospheric models and source inventories still presents difficulties in capturing the details of the impact of megacity emissions temporally and spatially (Gurjar et al. 2014).

Dhaka, the capital of Bangladesh, is one of the fastest-growing megacities in the world; its population increased from 6.621 million (in 1990) to 16.982 million (in 2014) (Table 1).


Table 2 Satellite data acquired

Date           Sensor           VNIR bands        Spatial resolution (m)   Thermal bands   Spatial resolution (m)
Dec. 05, 1973  Landsat 1 MSS    4, 5, 6, and 7    60 (a)                   NA              NA
Feb. 20, 1980  Landsat 3 MSS    4, 5, 6, and 7    60 (a)                   NA              NA
Jan. 28, 1989  Landsat 4 TM     1, 2, 3, and 4    30                       6               30 (b)
Feb. 28, 2000  Landsat 7 ETM+   1, 2, 3, and 4    30                       6               30 (c)
Mar. 24, 2003  Landsat 7 ETM+   1, 2, 3, and 4    30                       6               30 (c)
Jan. 16, 2005  Landsat 5 TM     1, 2, 3, and 4    30                       6               30 (b)
Nov. 03, 2006  Landsat 5 TM     1, 2, 3, and 4    30                       6               30 (b)
Jan. 11, 2009  Landsat 5 TM     1, 2, 3, and 4    30                       6               30 (b)
Feb. 15, 2010  Landsat 5 TM     1, 2, 3, and 4    30                       6               30 (b)
Jan. 25, 2014  Landsat 8 OLI    2, 3, 4, and 5    30                       10, 11          30 (d)

(a) Original MSS pixel size was 79 × 57 m; production systems now resample the data to 60 m
(b) TM Band 6 was acquired at 120-m resolution but is resampled to 30-m pixels (after February 25, 2010)
(c) ETM+ Band 6 is acquired at 60-m resolution but is resampled to 30-m pixels (after February 25, 2010)
(d) TIRS bands are acquired at 100-m resolution but are resampled to 30 m in the delivered data product

Today, Dhaka is the 11th largest megacity in the world and is projected to be the 6th largest megacity in the world with a population of 27.374 million by the year 2030 (United Nations 2014).

The urban heat island (UHI) effect is an important impact of urbanization. Urban and suburban areas experience elevated temperatures compared to their surrounding rural areas (EPA 2015). The annual mean air temperature of a city with one million people or more can be 1.8–5.4 °F (1–3 °C) warmer than the surrounding area (Oke 1997). On a clear, calm night, this temperature difference can be as much as 22 °F (12 °C) (Oke 1987). The UHI effect for the city of Dhaka has already been documented by several reports and articles (Ahmed et al. 2013). It is not yet, however, completely understood how the intensity of the UHI effect changes with the continuous growth of the city.

Remote sensing technology has been successfully used for urban growth and associated environmental studies for many years. This research investigates how the intensity of the UHI correlates with the continuous decrease in the greenness of the city, as measured from satellite observations and using digital image processing techniques. The specific objectives include: (1) evaluating the changes in greenness from 1973 to 2014 using Landsat imagery-derived normalized difference vegetation index (NDVI), (2) estimating land surface temperatures (LST) using Landsat thermal imagery, and (3) investigating the potential of the Landsat-derived LST to evaluate the changes in the UHI effect.

Research Data
A time series of ten Landsat images covering Dhaka, Bangladesh, acquired from 1973 to 2014 was used in this research. The time series includes data acquired by the Landsat 1 and Landsat 3 Multispectral Scanner (MSS), Landsat 4 and Landsat 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS). Table 2 lists the imagery acquisition dates and the corresponding sensors and their characteristics. All ten data sets were used for visual analysis, but only selected images were used for the vegetation and land surface temperature (LST) analysis. Table 3 shows the data usage matrix.
The spatial distribution of vegetation in Dhaka was mapped to evaluate the changes in greenness over time. A normalized difference vegetation index (NDVI) was used to detect the changes in greenness. NDVI was calculated for the imagery acquired in 1973, 1980, 1989, 2000, 2010, and 2014.


Table 3 Satellite data used

Date           Sensor           Visual inspection   NDVI   Thermal analysis
Dec. 05, 1973  Landsat 1 MSS    X                   X
Feb. 20, 1980  Landsat 3 MSS    X                   X
Jan. 28, 1989  Landsat 4 TM     X                   X      X
Feb. 28, 2000  Landsat 7 ETM+   X                   X      X
Mar. 24, 2003  Landsat 7 ETM+   X
Jan. 16, 2005  Landsat 5 TM     X
Nov. 03, 2006  Landsat 5 TM     X
Jan. 11, 2009  Landsat 5 TM     X
Feb. 15, 2010  Landsat 5 TM     X                   X      X
Jan. 25, 2014  Landsat 8 OLI    X                   X      X

Fig. 1 Location of the study site (not scaled)

The thermal sensor of the Landsat series became available with the launch of Landsat 4. The earliest thermal data available for this region were acquired in 1989, which started the time series of LST data for 1989, 2000, 2010, and 2014 (Table 3). The gradual changes in land use and land cover in and around the city of Dhaka from 1973 to 2014 are shown in Fig. 2. Figure 3 shows the net change in land cover and land use between 1973 and 2014.

Methods
This research is based on the hypothesis that satellite observation-based normalized difference vegetation index (NDVI) and land surface temperature (LST) can be used to monitor changes in urban greenness in a megacity and changes in the intensity of the urban heat island (UHI) effect. Data acquired by the Landsat satellites are the best available option to achieve these results because of the extensive archive of imagery and the consistency of the sensors.

Fig. 2 Changes in land cover in Dhaka City: a time series of Landsat 1–3 MSS, Landsat 4–5 TM, Landsat 7 ETM+, and Landsat 8 OLI false-color composites of Dhaka City, Bangladesh, for a period of 41 years (1973–2014), shown for December 1973, February 1980, January 1989, February 2000, March 2003, January 2009, February 2010, and January 25, 2014 (green: vegetation; light purple: urban areas)

Fig. 3 Land use and land cover changes as observed by Landsat data

Normalized Difference Vegetation Index (NDVI)
The normalized difference vegetation index (NDVI) is an image enhancement technique that can be used to describe the greenness, or relative density and health, of vegetation in an image.


It is one of the most widely accepted and widely used vegetation indices. NDVI was first attributed to Rouse et al. (1973), but the concept was discussed by Kriegler et al. (1969). NDVI is commonly used as an indicator of relative biomass and greenness (Boone et al. 2000). The calculation of NDVI is based on the nature of the variation of reflectance values obtained from vegetated surfaces in the near-infrared (NIR) and red regions of the electromagnetic spectrum (EMS). The reflectance values of vegetation in the NIR region are higher than those in the red region. NDVI provides a normalized ratio of the NIR and red bands (Eq. 1), which reduces discrepancies that may occur in the imagery due to sensor differences or image quality issues, such as brightness and other interference (Hossain and Easson 2011). The NDVI can be computed for a wide variety of sensors depending on the availability of measurements in the NIR and red bands.

NDVI = (NIR − R) / (NIR + R)   (1)

where NIR and R are the pixel values of the NIR and red bands, respectively.
Landsat data have been used for vegetation studies for many years. Since all the sensors used in the Landsat data acquisitions include both visible and near-infrared (VNIR) channels (Table 2), it is possible to calculate NDVI using image data from all Landsat sensors. In this study, NDVI was calculated using imagery acquired by the Landsat 1 MSS, Landsat 3 MSS, Landsat 4 TM, Landsat 5 TM, Landsat 7 ETM+, and Landsat 8 OLI sensors (Table 3). The sensor-specific NDVI equations are as follows:

NDVI(Landsat 1 MSS) = (Band7 − Band4) / (Band7 + Band4)   (2)

NDVI(Landsat 3 MSS) = (Band7 − Band4) / (Band7 + Band4)   (3)

NDVI(Landsat 4 TM) = (Band4 − Band3) / (Band4 + Band3)   (4)

NDVI(Landsat 5 TM) = (Band4 − Band3) / (Band4 + Band3)   (5)

NDVI(Landsat 7 ETM+) = (Band4 − Band3) / (Band4 + Band3)   (6)

NDVI(Landsat 8 OLI) = (Band5 − Band4) / (Band5 + Band4)   (7)
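As an illustration of Eqs. 1–7, the following is a minimal sketch of the NDVI calculation, assuming the red and NIR bands have already been read into floating-point NumPy arrays of equal shape; the band-pair dictionary simply restates the combinations above, and the variable and function names are illustrative, not from the study.

import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, Eq. 1: (NIR - R) / (NIR + R)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero where both bands are zero (e.g., fill pixels).
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# (NIR, red) band pairs per sensor, restating Eqs. 2-7 and Table 2.
NDVI_BANDS = {
    "Landsat 1-3 MSS": ("Band 7", "Band 4"),
    "Landsat 4-5 TM":  ("Band 4", "Band 3"),
    "Landsat 7 ETM+":  ("Band 4", "Band 3"),
    "Landsat 8 OLI":   ("Band 5", "Band 4"),
}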

Land Surface Temperature (LST)
Land surface temperatures (LSTs) in and around the city of Dhaka were estimated using the Level 1 thermal data acquired by the Landsat 4–5 TM, Landsat 7 ETM+, and Landsat 8 TIRS sensors. The unitless digital number (DN) values of the thermal bands were digitally processed to the corresponding radiance values. The processed radiance values were then used to calculate LST.


Conversion of DN to Radiance
Landsat 4–5 TM and Landsat 7 ETM+
During the generation of Level 1 data, pixel values from the raw unprocessed imagery (Level 0 data) were converted to units of absolute radiance using 32-bit floating-point calculations. These absolute radiance values were then scaled to 8-bit values representing calibrated digital numbers (Qcal) before output to the distribution media. Conversion of these calibrated digital numbers (Qcal) in Level 1 products back to the at-sensor spectral radiance (Lλ) requires knowledge of the original rescaling factors. The following equation (Eq. 8) was used to perform the radiance conversion for the Level 1 Landsat 4–5 TM and Landsat 7 ETM+ imagery (Chander and Markham 2003; Chander et al. 2009).

Lλ = ((LMAXλ − LMINλ) / (Qcalmax − Qcalmin)) × (Qcal − Qcalmin) + LMINλ   (8)

where
Lλ = spectral radiance at the sensor's aperture in W/(m2·sr·µm)
Qcal = quantized calibrated pixel value in DNs
Qcalmin = minimum quantized calibrated pixel value corresponding to LMINλ (DN = 0)
Qcalmax = maximum quantized calibrated pixel value corresponding to LMAXλ (DN = 255)
LMINλ = spectral radiance that is scaled to Qcalmin in W/(m2·sr·µm)
LMAXλ = spectral radiance that is scaled to Qcalmax in W/(m2·sr·µm)

The required parameters were obtained from the Level 1 product metadata to process the acquired thermal data using Eq. 8. Equation 8 was modified to Eqs. 9, 10, and 11, which were used to obtain the at-sensor radiance values for the imagery acquired in 1989, 2000, and 2010, respectively.

Lλ(L4 TM_1989) = ((15.303 − 1.238) / (255 − 1)) × (DN_Band6 − 1) + 1.238   (9)

Lλ(L7 ETM_2000) = ((12.650 − 3.20) / (255 − 1)) × (DN_Band62 − 1) + 3.20   (10)

Lλ(L5 TM_2010) = ((15.303 − 1.238) / (255 − 1)) × (DN_Band6) + 1.238   (11)
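The rescaling of Eq. 8 can be expressed compactly. The sketch below is a minimal illustration using the LMIN/LMAX values quoted in Eqs. 9–11; the DN arrays, variable names, and the assumption that Qcalmin = 1 for all three scenes follow the equations above rather than the product metadata itself.

import numpy as np

def dn_to_radiance(dn, lmax, lmin, qcalmax=255.0, qcalmin=1.0):
    """Eq. 8: L = (LMAX - LMIN) / (Qcalmax - Qcalmin) * (Qcal - Qcalmin) + LMIN."""
    dn = np.asarray(dn, dtype=float)
    return (lmax - lmin) / (qcalmax - qcalmin) * (dn - qcalmin) + lmin

# Rescaling factors quoted in Eqs. 9-11 for the three TM/ETM+ thermal scenes.
THERMAL_RESCALING = {
    "L4 TM 1989":   {"lmax": 15.303, "lmin": 1.238},
    "L7 ETM+ 2000": {"lmax": 12.650, "lmin": 3.20},
    "L5 TM 2010":   {"lmax": 15.303, "lmin": 1.238},
}

# Example (Eq. 9, hypothetical DN array):
# radiance_1989 = dn_to_radiance(dn_band6_1989, **THERMAL_RESCALING["L4 TM 1989"])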

Landsat 8 TIRS
Landsat 8 TIRS data have two different thermal bands (Band 10 and Band 11), unlike Landsat 4–5 TM and Landsat 7 ETM+. The center wavelength and bandwidth of Band 10 are 10.9 and 0.6 µm, respectively, whereas the center wavelength and bandwidth of Band 11 are 12.0 and 1.0 µm, respectively. In this study, Band 11 was used to estimate LST so as to be more comparable with the Landsat TM and ETM+ thermal data. As proposed by USGS (2014), the conversion of DN values (Qcal) to the at-sensor spectral radiance (Lλ) was done using a different approach (compared to Landsat TM and ETM+); Eq. 12 was used in this case. This approach has also been used in several recent research projects (e.g., Maher and Kubaisy 2014).

Lλ = M × Qcal + B   (12)


Table 4 Landsat 8 TIR parameters

TIR band   Radiance multiplier (M)   Radiance add (B)   Thermal constant K1, W/(m2·sr·µm)   Thermal constant K2, Kelvin
Band 10    0.0003342                 0.1                774.89                              1,321.08
Band 11    0.0003342                 0.1                480.89                              1,201.14

Table 5 Landsat TM and ETM+ thermal band calibration constants

Sensor type      K1, W/(m2·sr·µm)   K2, Kelvin
Landsat 4 TM     671.62             1,284.30
Landsat 5 TM     607.76             1,260.56
Landsat 7 ETM+   666.09             1,282.71

where M is the radiance multiplier and B is the radiance add (offset). The values of the radiance multiplier and radiance add were obtained from the Landsat 8 TIRS metadata (Table 4) for Band 11. These values were used in Eq. 12 to obtain Eq. 13, which was used to compute the at-sensor radiance for the 2014 imagery acquisition date.

Lλ(L8 TIRS_2014) = M × DN_Band11 + B   (13)
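A corresponding minimal sketch for the TIRS conversion of Eqs. 12–13, using the Band 11 multiplier and offset listed in Table 4 (values that would normally be read from the scene metadata):

def tirs_dn_to_radiance(dn, multiplier=0.0003342, add=0.1):
    """Eq. 12: L = M * Qcal + B (Landsat 8 TIRS Band 11 values from Table 4)."""
    return multiplier * dn + add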

Conversion of Radiance to LST
The obtained radiance values for all Landsat thermal data were converted to land surface temperature (LST) using Eq. 14. Since the obtained radiance values are top-of-atmosphere (at-sensor) radiances, Eq. 14 was modified by adding an emissivity factor (ε) to minimize the influence of atmospheric distortion in the calculation (Eq. 15). Table 4 provides the values of K1 and K2 for Landsat 8 TIRS (Maher and Kubaisy 2014). Table 5 provides the values of K1 and K2 for Landsat 4–5 TM and Landsat 7 ETM+ (Coll et al. 2010).

Tk = K2 / ln(K1 / Lλ + 1)   (14)

Tk = K2 / ln((K1 × ε) / Lλ + 1)   (15)

where
Tk = effective at-satellite temperature in Kelvin
K2 = calibration constant 2 in Kelvin
K1 = calibration constant 1 in W/(m2·sr·µm)
Lλ = spectral radiance at the sensor's aperture
ε = emissivity (typically 0.95)

Fig. 4 Changes in greenness from 1980 to 2014 as observed by the Landsat data

Equation 15 was then modified to form Eqs. 16–19 to calculate LST, in degrees Kelvin, for each Landsat sensor by using the corresponding values of Lλ, K1, and K2. After calculating LST in absolute temperature, the values were converted to degrees Celsius using Eq. 20.

Tk(L4 TM_1989) = 1260.56 / ln((607.76 × 0.95) / Lλ(L4 TM_1989) + 1)   (16)

Tk(L7 ETM_2000) = 1282.71 / ln((666.09 × 0.95) / Lλ(L7 ETM_2000) + 1)   (17)


Fig. 5 Detailed changes in greenness from 1980 to 2014 as observed by the Landsat data (a thana is a local administrative unit, similar to a county)

Tk(L5 TM_2010) = 1260.56 / ln((607.76 × 0.95) / Lλ(L5 TM_2010) + 1)   (18)

Tk(L8 TIRS_2014) = 1201.14 / ln(480.89 / Lλ(L8 TIRS_2014) + 1)   (19)

Tc = Tk − 273.15   (20)
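The radiance-to-temperature step of Eqs. 14–20 can be sketched as follows. The calibration constants are those of Tables 4 and 5, the emissivity default follows the 0.95 used above, and the function is a generic form of Eq. 15 (note that Eq. 19, for TIRS, was applied without the emissivity factor in the text); variable names are illustrative.

import numpy as np

# (K1 in W/(m^2 sr um), K2 in Kelvin) from Tables 4 and 5.
K_CONSTANTS = {
    "Landsat 4 TM": (671.62, 1284.30),
    "Landsat 5 TM": (607.76, 1260.56),
    "Landsat 7 ETM+": (666.09, 1282.71),
    "Landsat 8 TIRS Band 11": (480.89, 1201.14),
}

def radiance_to_lst_celsius(radiance, k1, k2, emissivity=0.95):
    """Eqs. 15 and 20: Tk = K2 / ln(K1 * e / L + 1); Tc = Tk - 273.15."""
    radiance = np.asarray(radiance, dtype=float)
    tk = k2 / np.log(k1 * emissivity / radiance + 1.0)
    return tk - 273.15

# Example (Eq. 18, hypothetical radiance array):
# k1, k2 = K_CONSTANTS["Landsat 5 TM"]
# lst_2010 = radiance_to_lst_celsius(radiance_2010, k1, k2)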

Results and Analysis
The processed NDVI and LST data were subset to the Dhaka metropolitan area for the analysis of changes in greenness and land surface temperature. The time series of Dhaka NDVI data was used to study the changes in greenness since 1980. The time series of Dhaka LST data was used to evaluate the changes in the intensity of the urban heat island (UHI) impact since 1989.

Changes in Greenness
On the basis of the minimum and maximum values of NDVI, the lookup table of the entire time series was scaled from −0.5 to 0.65 to visualize the changes in greenness over time. Figure 4 shows the NDVI time series for Dhaka city from 1980 to 2014.


Fig. 6 Land surface temperature (LST) from 1989 to 2014 as observed by the Landsat data

In Fig. 4, it is clearly seen that the average NDVI values decreased continuously from 1980 to 2014. The most dramatic change occurred between 1989 and 2000. The net change in greenness from 1980 to 2014 is also substantial, as seen in Fig. 5.

Changes in the Urban Heat Island (UHI) Effects
The variation in the intensity of UHI effects due to the changes in greenness in the city of Dhaka was evaluated by determining the changes in the spatiotemporal distribution of land surface temperature (LST) over time. The Landsat-observed LST time series data were used in different ways for this purpose. At first, the LST distribution was visually analyzed by stretching the data lookup table from red to green. The red and green ends represent the maximum and minimum temperatures for each date, and the areas covered by yellow represent approximately the mean temperature for each date. Figure 6 shows the spatiotemporal distribution of Landsat-observed LST in the city of Dhaka from 1989 to 2014. The LST imagery time series in Fig. 6 clearly shows that the areas characterized by high temperature extended substantially from 1989 to 2014, with a significant increase from 1989 to 2000. Since the satellite images used in this study were acquired in different seasons of different years, it was not considered reasonable to determine the absolute changes in LST by detecting the net changes in LST values.


Fig. 7 Variation in LST along A-B in 1989

Fig. 8 Variation in LST along A-B in 2000

The second approach focused on the variation in LST along a specific cross-section profile. A cross-section line A-B was selected in the east-west direction on each LST data set to extract the temperature values along the line (Fig. 6). The extracted LST values along line A-B were plotted and compared with the mean LST value for the corresponding data acquisition dates. Figures 7, 8, 9, and 10 show the variation in LST along A-B in 1989, 2000, 2010, and 2014, respectively. This analysis supports the visual analysis performed earlier and also provides a more quantitative understanding of how the LST values changed over time with reference to the mean values.
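A minimal sketch of this profile analysis, assuming the LST grid is a 2D NumPy array in °C and that the A-B line corresponds to one east-west row of the grid; the row index used here is an arbitrary placeholder for the fixed A-B location in Fig. 6.

import numpy as np

def lst_profile(lst, row):
    """Return the LST values along one east-west row and the scene-mean LST."""
    lst = np.asarray(lst, dtype=float)
    return lst[row, :], float(np.nanmean(lst))

# profile_1989, mean_1989 = lst_profile(lst_1989, row=400)   # hypothetical row index
# deviation = profile_1989 - mean_1989                       # positive values = above-mean LST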


Fig. 9 Variation in LST along A-B in 2010

Fig. 10 Variation in LST along A-B in 2014

The changes in the LST distribution pattern observed along line A-B provide a good quantitative evaluation of the changes in the intensity of UHI effects over time. However, the observation is limited to a particular direction and area. The potential of image classification techniques was therefore evaluated to extend the quantitative analysis. The classification was performed based on the statistics of the satellite-observed LST imagery (Fig. 6), as shown in Table 6 and Figs. 11 and 12.


Table 6 Temperature statistics

Date   Sensor           Min (°C)   Max (°C)   Mean (°C)
1989   Landsat 4 TM     22.09      32.66      25.17
2000   Landsat 7 ETM+   21.61      36.63      26.41
2010   Landsat 5 TM     21.63      31.81      25.52
2014   Landsat 8 TIRS   17.47      25.38      20.49

Fig. 11 LST histograms (number of pixels vs. temperature, °C) with the mean marked, for January 28, 1989, and February 28, 2000

Each LST image was classified into five classes around the mean temperature to map the spatiotemporal distribution of the areas characterized by different levels of above-mean temperature. The classes are as follows:

Fig. 12 LST histograms (number of pixels vs. temperature, °C) with the mean marked, for February 15, 2010, and January 25, 2014

• Class 1: Areas with temperature equal to or less than the mean
• Class 2: Areas with temperature 1 °C above the mean
• Class 3: Areas with temperature 2 °C above the mean
• Class 4: Areas with temperature 3 °C above the mean
• Class 5: Areas with temperature >3 °C above the mean
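A minimal sketch of this classification and of the subsequent per-class area calculation is given below. It interprets Classes 2–4 as successive 1 °C bands above the mean (an assumption), works directly on a NumPy LST grid in °C, and approximates the area by pixel counting at 30-m resolution rather than by the vector/polygon workflow described in the text.

import numpy as np

def classify_lst(lst, pixel_area_m2=30 * 30):
    """Classify LST around the scene mean into the five classes above and
    return the class raster together with the area of each class in km^2."""
    lst = np.asarray(lst, dtype=float)
    delta = lst - np.nanmean(lst)                                       # deviation from mean (deg C)
    classes = np.digitize(delta, [0.0, 1.0, 2.0, 3.0], right=True) + 1  # values 1..5
    classes[np.isnan(delta)] = 0                                        # exclude no-data pixels
    areas_km2 = {c: int(np.sum(classes == c)) * pixel_area_m2 / 1e6 for c in range(1, 6)}
    return classes, areas_km2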

Figures 13, 14, 15, and 16 show the distribution of LST pixels above the mean LST in the city of Dhaka as observed in 1989, 2000, 2010, and 2014, respectively. The classified raster LST data were converted to vector data, and the polygons were simplified. The vector data with simplified polygons were used to calculate the areas covered by each LST regime (class). The area calculations were plotted for the different classes to compare them graphically. The area comparison plots improve the understanding of the changes in the intensity of the UHI effect over time.


Fig. 13 Distribution of above mean temperature on January 28, 1989

Figure 17 shows the comparison of the size of the areas characterized by above-mean temperature (all classes combined) in the city of Dhaka from 1989 to 2014. The total size of the areas where LST remained above the mean increased continuously from 1989 to 2010 but decreased in 2014.
Figure 18 shows the comparison of the size of the areas characterized by temperature 1 °C above the mean in the city of Dhaka from 1989 to 2014. The total size of the areas where LST remained 1 °C above the mean decreased from 1989 to 2000 but has increased continuously since then.
Figure 19 shows the comparison of the size of the areas characterized by temperature 2 °C above the mean in the city of Dhaka from 1989 to 2014. The total size of the areas where LST remained 2 °C above the mean increased from 1989 to 2000 but has decreased since then.
Figure 20 shows the comparison of the size of the areas characterized by temperature 3 °C above the mean in the city of Dhaka from 1989 to 2014. The total size of the areas where LST remained 3 °C above the mean increased significantly from 1989 to 2000 and has remained above the 1989 level since then.


Fig. 14 Distribution of above mean temperature on February 28, 2000

Discussion and Conclusions
The number and size of megacities are increasing, with the majority of the growth occurring in developing countries, especially in Asia and Africa. Understanding the potential impacts of the growth of megacities on the climate in Southeast Asia will provide insight for understanding the relationship between climate change and urban growth in the developing world. The analysis of the results obtained in this research for Dhaka, Bangladesh, shows that:
• The land use and land cover change due to urban growth and development can be mapped and quantified using time series data acquired by the Landsat satellite programs from 1973 to date (Landsat 1–3 MSS, Landsat 4–5 TM, Landsat 7 ETM+, and Landsat 8 OLI and TIRS).
• Landsat imagery-derived NDVI can be used to map and monitor the changes in greenness in growing megacities. It was observed that the average NDVI values in Dhaka decreased continuously from 1980 to 2014, with a significant change between 1989 and 2000.


Fig. 15 Distribution of above mean temperature on February 15, 2010

• The changes in the land surface temperature (LST) can be used to determine the changes in the intensity of the urban heat island (UHI) effect as a result of the growth and development in a megacity. The Landsat satellite-observed thermal data can be used to estimate continuous LST at 80–30 m spatial resolution from 1980 to date.
• It is possible to study the changes in the intensity of UHI effects in megacities, such as Dhaka, using the thermal data acquired by Landsat 4–5 TM, Landsat 7 ETM+, and Landsat 8 TIRS from 1989 to 2014. Visual inspection of the Landsat-derived LST estimates can be used to interpret the changes in the intensity of UHI effects. However, a quantitative assessment of the changes in the spatiotemporal distribution of the LST over time is necessary to quantify the changes in the intensity of UHI effects. Image classification of the LST distribution can provide a reasonable solution in this regard. A five-class image classification scheme based on the mean LST and 1, 2, 3, and >3 °C above the mean LST provided a good understanding of the spatiotemporal variation of the above-mean LST in Dhaka from 1980 to 2014.


Fig. 16 Distribution of above mean temperature on January 25, 2014

• The imaging technology for LST (thermal data) of the Landsat 8 TIRS is different from that of the other Landsat sensors. The Landsat 8 TIRS data calibration approach used by NASA is also different. More research is needed to make the thermal data acquired by Landsat 8 TIRS and the other Landsat sensors (TM and ETM+) comparable and to reduce uncertainty.
• The interpretation of the changes in the greenness in the city of Dhaka was qualitative in nature in this study. It is recommended to use a surface reflectance-based NDVI calculation for quantitative change detection studies.

Future Research Directions
The research presented in this chapter shows the potential of remote sensing data and image processing techniques to improve our current understanding of the impact of the growth of megacities in Southeast Asia on climate change.


Fig. 17 Comparison of the size of the areas characterized by above mean temperature in the city of Dhaka (1989–2014)

Fig. 18 Comparison of the size of the areas characterized by temperature 1 °C above the mean in the city of Dhaka (1989–2014)

This study provides a good platform for future research to contribute to climate change studies following the emerging "bottom-up approach" (Hossain 2013). As part of this approach, initiatives are underway to extend the current research in the following directions:
• Categorize the NDVI and LST data for specific seasons and months so that the seasonal and monthly variations in land use and land cover and in LST are minimized.
• Develop more statistically based methods to determine the changes in the intensity of UHI effects over time by normalizing the seasonal and monthly variations of LST.


Fig. 19 Comparison of the size of the areas characterized by temperature 2 °C above the mean in the city of Dhaka (1989–2014)

Fig. 20 Comparison of the size of the areas characterized by temperature 3 °C above the mean in the city of Dhaka (1989–2014)

• Extend this study to other selected megacities in both developing and developed countries to investigate whether the developed methods and techniques work globally for determining the changes in the intensity of UHI effects due to urban growth.


Acknowledgments
Thanks are due to NASA and USGS for providing all the Landsat data used in this research free of charge. Thanks are also due to the National Center for Computational Hydroscience and Engineering (NCCHE) and the Mississippi Mineral Resources Institute (MMRI) at the University of Mississippi for providing all the logistics and computing facilities for conducting this research.

References
Boone RB, Galvin KA, Lynn SJ (2000) Generalizing El Niño effects upon Maasai livestock using hierarchical clusters of vegetation patterns. Photogramm Eng Remote Sens 66:737–744
Chander G, Markham BL (2003) Revised Landsat-5 TM radiometric calibration procedures, and postcalibration dynamic ranges. IEEE Trans Geosci Remote Sens 41(11):2674–2677
Chander G, Markham BL, Helder DL (2009) Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Remote Sens Environ 113:893–903
Coll C, Galve JM, Sánchez JM et al (2010) Validation of Landsat-7/ETM+ thermal-band calibration and atmospheric correction with ground-based measurements. IEEE Trans Geosci Remote Sens 48(1):547–555
EPA (2015) Heat island effect. http://www.epa.gov/heatisland/. Accessed 20 Apr 2015
Gurjar BR, Nagpure AS, Singh TP et al (2014) Air quality in megacities. The encyclopedia of earth. http://www.eoearth.org/view/article/149934/. Accessed 26 Jan 2015
Hossain A (2013) Flood inundation and crop damage mapping: a method for modeling the impact on rural income and migration in humid deltas. In: Pielke R Sr (ed) Climate vulnerability: understanding and addressing threats to essential resources, vol 5. Elsevier, Academic Press, pp 357–374. http://store.elsevier.com/product.jsp?locale=en_US&isbn=9780123847034
Hossain A, Easson G (2011) Predicting shallow surficial failures in the Mississippi river levee system using airborne hyperspectral imagery. Geomatics Nat Hazards Risk 3(1):55–78
Kriegler FJ, Malila WA, Nalepka RF et al (1969) Preprocessing, transformations and their effects on multispectral recognition. In: Proceedings of the sixth international symposium on remote sensing of environment. University of Michigan, Ann Arbor, pp 97–131
Lawrence MG, Butler TM, Steinkamp J et al (2007) Regional pollution potentials of megacities and other major population centers. Atmos Chem Phys 7:3969–3987
Maher IS, Kubaisy MHA (2014) Automatic surface temperature mapping in ArcGIS using Landsat8 TIRS and ENVI tools, case study: Al Habbaniyah lake. J Environ Earth Sci 4(12):12–17
New Scientist Magazine (2006) How big can cities get? 17 June 2006, p 41
Oke TR (1987) Boundary layer climates. Routledge, New York
Oke TR (1997) Urban climates and global environmental change. In: Thompson RD, Perry A (eds) Applied climatology: principles & practices. Routledge, New York, pp 273–287
Rouse JW, Haas RH, Schell JA et al (1973) Monitoring vegetation systems in the Great Plains with ERTS. In: Third ERTS Symposium, NASA SP-351 I, pp 309–317
Schubel JR, Levi C (2000) The emergence of megacities. Med Glob Surviv 6(2):107–110
United Nations (2014) World urbanization prospects, the 2014 revision. Department of Economic and Social Affairs, United Nations, New York


United Nations (1997) The state of world population 1996: changing places: population, development and the urban future. United Nations, New York
USGS (2014) Using the USGS Landsat 8 product. http://landsat.usgs.gov/Landsat8_Using_Product.php. Accessed 20 Apr 2015
World Resources Institute (1998) World resources 1996–97: a guide to the global environment: the urban environment. Oxford University Press, Oxford


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_69-1 # Springer Science+Business Media New York 2015

The Advanced Recycling Technology for Realizing Urban Mines Contributing to Climate Change Mitigation
Tatsuya Okia* and Toshio Suzukib
a National Institute of Advanced Industrial Science and Technology (AIST), Onogawa Tsukuba, Ibaragi, Japan
b National Institute of Advanced Industrial Science and Technology (AIST), Nagoya, Japan
*Email: [email protected]

Abstract
Climate change mitigation is one of the major challenges in sustaining civilization while making use of renewable energy. Obtaining metal resources from urban mines (waste) to support civilization will contribute not only to the sustainable development of society into the future but also to climate change mitigation. Urban mines are a promising resource, especially for countries poor in natural metal resources, such as Japan. Fortunately, Japan is one of the major consumers of rare metals and is also capable of smelting rare metals on its own. Japan's urban mines will become more practical with its top-class recycling technology. In addition to these technological developments, it is necessary to reform the social system in order to realize productive and economical urban mines that can compete with natural mines. Furthermore, in order to continue the steady development of rare metal recycling, it is necessary to conduct well-planned technology development based on predictions of future material usage. In this chapter, the authors describe the technical subjects involved in realizing the fully circulating use of metal resources, including rare metals, and the efforts currently being undertaken in Japan.

Keywords Strategic urban mining research base; Strategic urban mining; Urban mining; Urban mine; Rare metals; Rare metal X; Minor metals; Metal resources circulation; Small domestic appliances; Waste small domestic appliances; Physical separation; Fine particles; Coarse particles; Degree of liberation; Liberation; Liberated; Liberated particles; Locked; Locked particles; Grinding; Crushing; Sensor based sorting; Separation; Gravity concentration; Shaking table; Spiral concentrator; Wet separation; Dry separation; Mineral processing; Settling velocity; Printed circuit board; Tantalum capacitor; Quartz resonator; Inclined and low intensity magnetic-shape separator; Double tubes pneumatic separator

Introduction
Huge amounts of metal resources are needed to support the civilization of mankind. Not only the developed countries but also the remarkably developing Asian region have been consuming metal resources. In the past, most metal resources have been acquired from natural mines. Although a true depletion of these resources may lie in the far future, the metal content of mines continues to decline gradually, and the amounts of associated heavy metals and radioactive substances tend to increase. The total material requirement (TMR), which indicates the total amount of material used to produce 1 ton of metal, is usually utilized to compare the material consumption for production (Yamasue et al. 2009).

*Email: [email protected] Page 1 of 23


Fig. 1 Definition of the rare metals in Japan

Fig. 2 The estimated UO-TMR compared with NO-TMR (From Ref. Yamasue et al. 2009)

For example, copper ore grade is decreasing year by year worldwide; as a result, the TMR of copper is increasing. This means that the total energy consumption involved in copper production increases. TMR increases especially for the minor metals, which nowadays are very important for supporting advanced industries. Japan is the world's largest minor metal-consuming country (in Japan, 47 species of minor metals are called "rare metals," as shown in Fig. 1), while the production of rare metals tends to be monopolized by certain countries and the world production volume of each rare metal is small. Therefore, access to these metals can easily be restricted by controlling the production. In addition, environmental destruction is becoming a serious problem because regulation of metal production is not very effective in such areas. Figure 2 shows the estimated UO (urban ore)-TMR compared with the NO (natural ore)-TMR (Yamasue et al. 2009). As can be seen, the utilization of urban ore is already effective for minimizing the total energy consumption for almost all metal elements, and thus the utilization of UO is considered to help mitigate climate change in the future. In addition, in the not-so-distant future, some of the metals are suspected to fall into critical shortage, and as a result, the maintenance of the material-based society could be threatened.
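As a simplified, hypothetical illustration of the ore-grade effect behind a rising TMR (the actual TMR of Yamasue et al. (2009) also accounts for overburden, tailings, and hidden material flows, so the numbers below are not TMR values):

def ore_per_ton_metal(grade, recovery=0.9):
    """Tons of ore that must be mined and processed per ton of recovered metal.
    Both the ore grades and the 90 % recovery are illustrative assumptions."""
    return 1.0 / (grade * recovery)

for grade in (0.02, 0.01, 0.005):   # 2 %, 1 %, and 0.5 % ore grade (illustrative)
    print(f"grade {grade:.1%}: about {ore_per_ton_metal(grade):.0f} t of ore per t of metal")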


Deterioration of metal production efficiency and an increase in metal consumption mean an increase in the energy consumed to maintain civilized society, which in turn readily impacts the climate. Thus, improving rare metal production technology so as to suppress the energy consumption of metal production will be a significant step for climate change mitigation as well.

On the other hand, almost all metal products remain somewhere on earth after use. Some of them are soluble in water, but basically they exist in the solid state. According to the law of conservation of mass, most metals ever yielded from natural mines stay somewhere on earth, accumulated close to the areas of human activity. Even in the narrow land of Japan, it is said that the total amount of accumulated metals surpasses the annual metal consumption of the world. Thus, used products that are accumulated in the land have come to be called "urban mines." Most urban mines consist of products that have been routinely used, without harmful substances or radioactive materials, and are considered to be relatively safe resources. Even though the metal quality of urban mines is not as high as that of natural mines, where metals were concentrated over an enormous amount of time, for urban mines it is possible to control the distribution and concentration of the waste products for easy recycling. In the future, it will be possible to deliberately control the formation of urban mines, which will contribute not only to the sustainable development of civilized society but also to the suppression of energy consumption due to metal production and, by extension, to climate change mitigation.

Although Japan is poor in natural resources, it has been supplying high-tech products to the world by importing raw materials produced from natural mines overseas over the years. In recent years, however, due to the rise of resource nationalism, Japan experienced soaring metal prices and threats to the supply itself. Against this background, a strategy to collect resources, including rare metals, from urban mines has been considered, and the strategic program has been carried out in Japan for the first time in the world. The purpose is not recycling to reduce the diffusion and waste of harmful substances but thorough resource recovery. In this chapter, the recent efforts of this strategy in Japan are presented, together with a discussion of the development of urban mines that can realize a sustainable civilized society and the mitigation of climate change.

Recent Development of Urban Mines in Japan
From "Quantity Recycling" to "Quality Recycling"
Japan relies on imports from abroad for most of the natural metal resources that support its manufacturing industry, and in recent years Japan encountered a situation in which the stable supply of these resources, especially the "rare metals," was threatened by sudden export restrictions and steep rises in metal prices. Since the characteristics of each rare metal are different, it is difficult to replace one rare metal with another, unlike energy resources. Therefore, even when the usage of rare metals in a product is negligible by weight, a shortage of supply can halt production. True depletion of metal resources lies considerably far in the future, and a sufficient amount of metals should exist somewhere on earth; however, a stable supply is not guaranteed. In addition, the production efficiency of natural metal resources will gradually decrease toward depletion, which will require enormous energy and cost. This will have a significant impact on the manufacturing industry and the world economy. If society continues to depend strongly on natural metal resources, the world will become unsustainable at some point. On the other hand, the metals taken from natural mines always exist somewhere on earth. Most of them are already buried as wastes; some are used in the social infrastructure. Fossil energy disappears once it is used, and organic resources such as resin and paper cannot be used repeatedly for a long period of time.


However, metals can, in theory, be recycled forever. Even if they lose their product value, they still have industrial value as elements, and the smelting process can completely restore the original raw material. The amount of recyclable metal depends on the living standards and the population of the area. Used products that are accumulated in a certain area (city) are called an "urban mine," in contrast to a natural mine. In an urban mine, resources are acquired by recycling technology rather than by the mining technology used for a natural mine. Currently, good natural mines still exist, and developing urban mines is not economical since recycling technologies are still costly. However, as the extraction of natural mines progresses and reduces their reserves, the reserves of urban mines increase. As described above, with the decline of natural mine production efficiency, the development of urban mines will one day become economically realistic. A pseudo-phenomenon of this kind actually occurred in Japan around 2010, due to import restrictions and soaring overseas metal prices. Therefore, in Japan, the development of urban mines was seriously considered in order to adapt to this critical situation.

In Japan, recycling has been actively carried out since 1990. At the time, recycling was encouraged because of the shortage of waste disposal sites, and the aim of recycling was to reduce the amount of waste, so-called quantitative recycling. The target materials were abundant materials such as resin, glass, iron, and aluminum, and the major goal was to reduce the volume of waste, not to recover the resources for reuse. Until now, bulk metals such as iron and aluminum, as well as precious metals, have been recycled. But this time, the "rare metals," which are not used in large quantities and are not as expensive as precious metals, attracted the most attention. Rare metals are also called "vitamins for industry" in Japan: even if the amount used is negligible, without them it is no longer possible to manufacture a number of high-tech products. A temporary collapse of the balance of supply and demand, a situation that can be anticipated in the future, would completely upset Japan.

Currently, there are few metal mines operating in Japan; raw materials have been imported and consumed by 120 million Japanese people over the decades, and several decades' worth of materials has accumulated in the national territory. This situation still cannot be called an "urban mine" until the resources can be retrieved within a certain economy. While natural mines are formed by the concentration of resources over incredibly long times, urban mines are not naturally formed simply by gathering a large amount of waste products. In other words, "urban mines" are not something to look for but something to be intentionally developed (urban mining). Urban mine development technology, which realizes the reduction of the entropy of metals spread across the land, should therefore be efficient and economical. When urban mine development for retrieving rare metals was started in Japan, landfilled and reused wastes were not targeted, but rather the products owned by each person, the "hoarded goods." Hoarded goods refer to digital home appliances that are no longer in use and sleep in a desk drawer without being discarded. In the early stage of urban mine development in Japan, where the recycling infrastructure associated with the legislation of the 1990s is well established, it was thought that rare metals could easily be recovered once the hoarded goods were collected from the public.
Recovery operations for rare metals from small household appliances in designated areas were conducted in Japan in 2008. In reality, however, it was not possible to recycle the rare metals using the existing recycling facilities. Currently, among the metals contained in waste products, only noble metals (gold, silver, platinum, palladium, etc.) and some base metals (iron, aluminum, copper, etc.) are recycled. Rare metals other than platinum and palladium are rarely recycled from waste products once they are on the market. To realize a recovery system for rare metals from waste products, at least two problems must be overcome. First, used products that are widely spread among consumers need to be collected in a stable manner when they are discarded; this requires the construction of rules and social systems. Second, it is necessary to develop technology that realizes metal extraction from the collected waste products at a purity high enough to be used directly by rare metal smelters and raw material manufacturers.


Fig. 3 Schematic image showing the existence status of rare metal in the waste product

Although the density of rare metals is relatively high in small appliance wastes, a system for collecting these product wastes systematically did not exist until recently. Therefore, the Small Appliances Recycling Act came into effect in Japan in 2013. It made it possible to collect small appliance wastes widely across administrative areas, which was one breakthrough for the first problem. No obligation, however, exists for rare metal recovery. Without overcoming the second, technical problem, the recycling of metals will remain limited to the recovery of precious metals and part of the base metals, even for small appliance wastes. Unlike the recycling of conventional iron and aluminum used in construction materials, the concentrations of the various rare metals in waste products are only about several hundred to several thousand ppm. Thus, a technical leap is required to recycle rare metals from small appliances within the traditional recycling infrastructure, like trying to perform surgery with a pickaxe. At this point, "quality recycling" started to receive more attention than "quantitative recycling," and the need for a technological transformation became widely recognized in Japan around 2010.

Technical Challenges of Quality Recycling: Liberation Physical Sorting Techniques and the Composition of the Waste Products Because the amount of rare metals in waste products is very low compared to construction materials such as iron and aluminum, economical recovery of rare metals is difficult, and it is not practical to use hydrometallurgy processes with chemicals. Pyrometallurgy processes with high temperature, which are effective methods for recovering low concentrated copper and precious metals, are not ideal either because most of rare metals thermally dissolve and disperse into glass slug. Thus, it is of importance to separate copper and precious metals from the waste products by physical sorting before using smelting processes for rare metals. Development of low-cost physical sorting technology will be a key for realizing low concentrating rare metal recycling systems. Each waste product is thought to be consisting of various metal particles (hybrid particles) including rare metal particles. High purity sorting of rare metal strongly relies on the concentration and dispersion of the rare metal particles in the products. Figure 3 is a schematic image showing the existence status of rare metal in the waste product. As can be seen, rare metal X is one of more than ten rare metals used in a smartphone and is considered to be

Page 5 of 23

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_69-1 # Springer Science+Business Media New York 2015

As can be seen, rare metal X is one of more than ten rare metals used in a smartphone and is considered to be concentrated by physical sorting. There is a great possibility of recovery if the concentration of the rare metal X is high enough after sorting. On the other hand, the possibility also depends on the dispersion of the rare metal X in the product. Here, dispersion is defined as the domain size (or distribution) of the rare metal X in the product. If the domain size is large, or the rare metal is concentrated in a particular area of the product, the dispersion of the rare metal X is considered to be small. The rectangular boxes in Fig. 3 show the distribution of rare metal X in the product (a smartphone).

Status A is the best case for physical sorting processes. The rare metal X exists at high concentration, locally, in the product. In this case, it is quite easy to separate the particles with highly concentrated rare metal X from the rest of the particles by crushing or dismantling. This process is called "liberation," or single separation. It is a very important operation in the physical sorting process and will be discussed later. Status A is not only easy for single separation but also for the remaining sorting operations, owing to the high concentration of the rare metal X in the particles. The next best status for physical sorting is Status B, at the upper left in Fig. 3. Even though the total amount of the rare metal X is low, liberation can be expected to be as good as in Status A; the only difficulty arises in the later operations of the process, because of the small amount of rare metal X. Status C, even though the concentration of the rare metal X is high, is more difficult to liberate because of the high dispersion of X in the product. Status D is the worst scenario for physical sorting processes.

In the case of a smartphone, the elements are typically maldistributed in the product, and thus it is not possible to apply physical sorting for single separation (liberation) directly; rather, very fine crushing processes are required to extract rare metal X. The required particle size after crushing depends entirely upon the dispersion of the rare metal X. Usually, ordinary sorting processes can be applied down to a particle size of several microns, and physical sorting cannot be applied if liberation is still not achieved at this particle size. Even when liberation is possible for such small particles, fine grinding requires a large amount of energy. In addition, as discussed later, particles of less than 0.5 mm require separation processes in the wet condition, whereas millimeter-sized particles can be treated by dry separation processes. Wet processes require additional electric power for pumping water and a water treatment unit, which add energy consumption and cost, and the product obtained by a wet process is still not as good as that obtained by dry sorting in the millimeter range. There is a trade-off between the purity and the recovery of X. Thus, the combination of fine grinding and wet separation is the last to be chosen among the physical sorting processes. These processes, however, are cost-effective compared to chemical processes and are effective for collecting the rare metals that are considered difficult to recycle. In the case of electrode and fluorescent materials, which are used as fine powders, no crushing process is required and wet sorting can be applied in an ordinary recycling facility.
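The reasoning behind Statuses A–D can be summarized schematically as follows; the qualitative "high"/"low" inputs and the suggested routes paraphrase the discussion above and are not thresholds defined in the chapter.

def sorting_route(concentration, dispersion):
    """Map the qualitative concentration/dispersion of rare metal X ('high' or 'low')
    to the status of Fig. 3 and the processing route suggested in the text."""
    if concentration == "high" and dispersion == "low":
        return "Status A", "dismantling or coarse crushing, then dry physical sorting"
    if concentration == "low" and dispersion == "low":
        return "Status B", "liberation is easy, but later sorting steps suffer from the small amount of X"
    if concentration == "high" and dispersion == "high":
        return "Status C", "liberation is difficult; finer crushing is required"
    return "Status D", "fine grinding and wet separation, the last resort among physical processes"

print(sorting_route("high", "low"))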
Importance of Liberation
Physical sorting processes for collecting metals from waste products first require the decomposition of "hybrid" components into small "individual element" pieces. Large products may be dismantled by hand, and the pieces are then sent to crushing processes. In many industrial processes, the purpose of crushing is to obtain uniform fine particles from complex particles and thereby improve physical properties such as mobility, processability, and reactivity (Owada 2007). In physical sorting, by contrast, the goal of crushing is to achieve liberation: ideally, each single particle consists of a single element, where the target "element" for recycling may be an atom, an alloy, a metal part, and so on. A particle containing two or more elements is called "locked." Locked particles pose no problem when they contain the target material at high concentration, and crushing is effective for such locked constituents; otherwise, the purity of the target element cannot be improved by physical sorting alone. Within a physical sorting process, crushing is therefore regarded as a pre-sorting step whose purpose is to reach the state of liberation, that is, single separation of the target particles.


Fig. 4 2D matrix model of liberation process

Fig. 5 Schematic image of the relationship between progress of liberation and nonuniformity of crushed pieces

Figure 4 shows the 2D matrix model of the liberation process proposed by A. M. Gaudin (1939). Although it is a classical model and not sufficient to describe real situations, it helps in understanding the concept of liberation. Figure 4a shows the state before crushing, with the target material embedded in the matrix; "a" is the size of the locked domains. Assume that the product is cut uniformly into pieces of size "a," as shown in Fig. 4b, without any influence of the interface between the target material and the matrix. Some particles then reach the liberated state, but in most of the pieces the target material remains locked. A further crushing step, to a size smaller than "a," may bring the target material to the liberated state. In actual processes, such purely random crushing rarely occurs, and liberation can often be achieved more readily than the model suggests; the efficiency nevertheless differs from one crushing method to another, so a careful choice of method is crucial for high-quality liberation. Figure 5 shows a schematic of the relationship between the progress of liberation and the nonuniformity of the crushed pieces. Consider Status A in Fig. 3 and apply a crushing process to liberate the rare metal X. As crushing proceeds, the particles become finer and, eventually, particles of rare metal X are liberated.
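Returning to Gaudin's grid model above, the following Monte Carlo sketch (the chapter gives no implementation; grid and domain sizes are arbitrary illustration values) estimates the fraction of a square target domain of side "a" that ends up fully liberated when the product is cut by a square grid of a chosen pitch at a random offset, ignoring phase boundaries:

# Illustrative Monte Carlo sketch of Gaudin-style grid liberation (2D).
# Domain and cell sizes are arbitrary; this is not the authors' implementation.
import numpy as np

def liberated_area_fraction(domain=20.0, cell=5.0, trials=100_000, seed=0):
    """Expected fraction of a square target domain (side `domain`) that ends up
    in cells lying fully inside the domain, when cut by a square grid of pitch
    `cell` placed at a random offset (interfaces are ignored, as in Gaudin's model)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, cell, size=trials)          # grid offset along x
    v = rng.uniform(0.0, cell, size=trials)          # grid offset along y
    nx = np.floor((domain - u) / cell).clip(min=0)   # whole cells inside, x direction
    ny = np.floor((domain - v) / cell).clip(min=0)   # whole cells inside, y direction
    return float(np.mean(nx * ny) * cell**2 / domain**2)

# Cutting at the domain size "a" leaves almost everything locked;
# cutting finer than "a" raises the liberated fraction.
print(liberated_area_fraction(domain=20.0, cell=20.0))  # ~0.0
print(liberated_area_fraction(domain=20.0, cell=5.0))   # ~0.55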


Fig. 6 Example of selective crushing for liberation

If this happens within a short time, which is the best scenario, the Status A shown in Fig. 5 will be realized, which is the best pretreatment for the physical sorting process. If, on the other hand, a longer crushing time is required to liberate the rare metal X, finer particles tend to be produced, corresponding to Status B in Fig. 5, which is relatively difficult for physical sorting. Status B typically includes a wide particle size distribution, and it is almost impossible to collect particles below several microns by physical sorting. Therefore, even when nonuniformity of the individual particles is achieved (moving toward the right-hand side in Fig. 5), a well-mixed mass of finer particles makes it difficult to separate the target particles. The same applies to Status C in Fig. 5, where all particles remain locked. To summarize, liberation means achieving nonuniformity of the individual particles at the expense of nonuniformity of the target material in the crushed product; the ideal situation is that nonuniformity of both the individual particles and the target material is realized. The solid and broken lines in Fig. 5 correspond to a selective, optimized crushing process and a random crushing process, respectively, and illustrate the idea behind actual crushing operations. Selective crushing is a very important technique for achieving liberation at a coarse particle size. To realize efficient selective crushing, breakage should proceed along the boundary between the rare metal X domain and the matrix. Since the properties of waste products differ (even among cellular phones, the structure and strength depend on the model, manufacturer, and year of production), there is no selective crushing machine that fits all cases. At present, the selective crushing behavior of a particular product is investigated using existing crushing machines; theoretical and systematic approaches are expected in order to realize an ideal selective crushing process.
Grinding Method Aiming at the Promotion of Liberation
Mechanical Crushing
Mechanical crushing is one of the most realistic choices for early introduction into actual recycling plants. Mechanical crushing tends to break products up uniformly, and it is not easy to break them only at the interfaces, as shown in Fig. 6a. It may, on the other hand, be possible to achieve selective crushing as in Fig. 6b or c if the mechanical properties of the target and the matrix differ. Figure 6b can be realized by combining thermal treatment with crushing, which has actually been applied to separating reinforcing steel from concrete wastes (Mitsubishi Material Co., Ltd 2003; Matsumura 2003). Figure 6c can be realized by combining stirring with surface-friction destruction. Few cases of metal recycling by mechanical crushing have been reported, but printed circuit boards are one good example: for recycling copper from printed circuit boards, a swing-type hammer mill was applied, and it was found that the copper could be recovered as spherical particles owing to its ductility (Furuyanaka et al. 1999). The selectivity between the metal and nonmetal parts of the printed circuit board can be further improved by controlling the operating conditions of the machine.


Fig. 7 (a) Schematic image of the active crushing system and (b, c) examples of operating patterns

Other new processes are also under development (Koyanaka et al. 2006; Furuyanaka 2006). One of them is the so-called active crushing method, which controls multiple operating conditions of impact crushing simultaneously during operation. Single separation of the target material, control of the particle size, and ejection of the crushed material are optimized continuously, at the right time, during the crushing operation. Figure 7a shows a schematic of the active crushing system (Furuyanaka 2006; Koyanaka et al. 2006). The system is based on a fast swing-type hammer mill; the inverters controlling the hammer and feeder, an electric air valve, and a servo amplifier are connected to a PC, in which an operating pattern is programmed for automatic operation. The shape of the lining plate is also specially designed. Figure 7b and c shows examples of operating patterns for the impact speed and the period during which the screen is open, which were applied to crushing circuit boards from TVs after pre-crushing in a cutter mill. As can be seen, the hammer speed increases to 60 m/s within (b) 14 s and (c) 3 s, respectively. Using pattern (c), a metal separation efficiency of 50.6 % was obtained, with average particle sizes of 421 µm and 188 µm for the metal and nonmetal fractions, respectively. It was also shown that an efficiency of 59.6 % could be obtained by further optimization (Fig. 7b). Thus, accurate operation of mechanical


crushing can provide realistic liberation of target materials. Little verification has been reported so far, but the application of such techniques to the recycling of portable devices is expected to increase.
Electrical Crushing
To achieve liberation of target materials without excessive crushing, it is ideal to break the material selectively at the boundary between the target material and the matrix. Ordinary mechanical crushing methods tend to produce uniform breakage, and it is difficult to crush only a designated area. Electrical crushing is therefore being considered, since it allows selective breakage along the boundary between the target material and the matrix. There are two main electrical crushing methods: electrical disintegration (ED) and electrohydraulic disintegration (EHD). In the ED method, a high voltage and a large current are applied in a liquid in which the particles (consisting of target material and matrix) are dispersed in a container. The particles are placed close to, or in contact with, one electrode, and the other electrode is placed at the opposite side of the container. A high-voltage pulse of several tens of kilovolts is applied for several tens of microseconds, so that the large current flows preferentially along the boundaries within the particles and breaks them along those boundaries (Fujita et al. 2002). Many studies have been reported, especially for rocks, coals, and concretes, and in recent years for the liberation of used products for recycling purposes. For example, the method was applied to liquid crystal panels from cellular phones and laptop computers, and it was confirmed that the panel separated into its two glass substrates and that indium (ITO) could be collected after further treatment (Shibayama et al. 2002). The EHD method, on the other hand, utilizes a shock wave generated by a large current flowing in the liquid (Fujita et al. 2002); explosives can be used instead of the current. In either case, the shock wave generates tensile stress at the boundary and promotes selective breakage along it. For cellular phones, it was reported that the shock wave propagated along the boundary between metal and resin, and it was confirmed that metal parts were removed from the matrix (Kejun et al. 2001). As explained above, electrical crushing for the liberation of used products is still under development; it has the potential to become an innovative recycling method in the near future, but it may be difficult to introduce into existing facilities because of the use of large currents or explosives.
Alternative Technologies for Hand Dismantling and Picking: Easy Sensing
The most reliable way to break a complex product into pieces is dismantling. Dismantling is usually done by hand, while crushing is done by machines; industrial robots may take over dismantling in the near future. From a technical point of view, dismantling is a liberating operation applied to individual products, whereas crushing is a liberating operation applied to massed products; crushing is therefore much more efficient and cost-effective than dismantling. Nevertheless, hand dismantling is still a major method in recycling because it achieves liberation more easily. Even in Japan (where labor costs are considered high), hand dismantling is applied for recycling motors, which are relatively large pieces, from used appliances. Since there is no universal method for liberation, hand dismantling is still applied in many cases, even in some that are not cost-effective.
Another advantage of hand dismantling is that breakup and separation of the pieces proceed at the same time. Although this makes the method applicable to a wide variety of products, it is limited from an economic point of view, especially in Japan. The development of automated dismantling machines has therefore been pursued, and some processes have been successfully automated, for example, sorting processes using advanced sensing technology. Such technology is very useful for removing impurities from a uniform particle stream, but it cannot be applied to the widely varied pieces obtained from dismantled products. Under these circumstances, the authors have been developing a cost-effective sorting machine based on "easy sensing" technology as an alternative to hand dismantling and picking. Instead of using


Fig. 8 Automatic collection of Nd magnet from HDD by HDD cutting separator (HDD hard disk drive)

expensive, high-performance sensors, this machine uses a combination of cost-effective sensors, close to human sensibility, together with a highly controlled operating procedure based on the nature of the target products. For example, the authors have proposed a two-step crushing-separation method, the "HDD cutting separator (HDD-CS)" shown in Fig. 8, for collecting neodymium magnets containing rare-earth metal from hard disk drives (HDDs). When an HDD is crushed in the normal way, the very strong neodymium magnets stick to the inside of the crushing machine and cause many problems, such as blocking the screen. Even if they are extracted from the machine, they are agglomerated with metal pieces and cannot be liberated, so a demagnetization step is typically applied in such cases. Neodymium magnets have a relatively low Curie temperature and can be demagnetized at around 350 °C. However, it is not cost-effective to use thermal energy only to extract the magnet, which makes up about 2 wt% of an HDD, meaning that roughly 50 times more thermal energy than necessary is spent on demagnetization. The HDD-CS solves this problem by using four magnetic sensors and location sensors that identify the leakage magnetic flux density and the position of the magnets in the HDD without destroying it; the magnet is then punched out with a nonmagnetic cutter. The sensing accuracy is continually improved by optimizing the machine against a database of leakage magnetic flux densities for each HDD model. This small and cost-effective machine realizes automatic separation of 400,000–1,000,000 HDDs per year and concentrates the magnet component tenfold. After demagnetization, impact crushing, and screening, 94–97 % of the magnetic alloy particles are successfully collected (Oki et al. 2011). Another "easy sensing" technology, the sensor-based sorter called "Arena Sorter," has also been developed as an alternative to hand picking. It uses a laser 3D measurement unit and a weight detector to obtain parameters (size, weight, and so on) of waste products, which are recorded in a database; the system is operated by a discrimination algorithm that uses a neural network together with the database (Koyanaka and Kobayashi 2011). In the case of recycling cellular phones, for example, the system achieved 90 % accuracy in the automatic separation of tantalum capacitors from the phones (Koyanaka et al. 2006).
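As a rough illustration of the database-driven discrimination idea behind such "easy sensing" (this is not the Arena Sorter's actual algorithm; the feature values and categories below are invented, and the real system uses a neural network over a much richer database), a nearest-neighbor lookup over size and weight might look as follows:

# Hypothetical sketch of database-driven discrimination by size and weight.
# Feature values and categories are invented for illustration only.
import math

# (length mm, width mm, height mm, weight g) -> category
DATABASE = [
    ((7.3, 4.3, 2.8, 0.35), "tantalum capacitor"),
    ((10.0, 10.0, 5.0, 1.20), "aluminum electrolytic capacitor"),
    ((5.0, 3.2, 1.3, 0.10), "quartz resonator"),
    ((12.0, 12.0, 1.5, 0.60), "IC package"),
]

def classify(features):
    """Return the category of the nearest database entry (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(DATABASE, key=lambda entry: dist(entry[0], features))[1]

print(classify((7.0, 4.5, 2.9, 0.33)))    # -> "tantalum capacitor"
print(classify((11.5, 11.8, 1.6, 0.65)))  # -> "IC package"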


Fig. 9 Category of separation technology and concept of low limit of applicable particle size

Technical Challenges of Quality Recycling: High-Quality Separation
Challenges for the Optimization of Physical Sorting Processes
Even when ideal liberation is achieved, the particles are still mixed, and separation is therefore required. As an example, consider liberated metal particles containing 100 ppm of the target metal: separation then means picking one target particle out of a bucket of 10,000 particles. For particles of centimeter size, hand separation can be applied with high accuracy, but it is ultimately not economical. As described above, one practical approach is a sensor-based sorting system that uses material information obtained from sensors. This can be cost-effective for such separations and is also called individual separation. Pressurized air ejection can be applied to particle separation in the range from several millimeters to 300 mm (Furuyanaka 2010). In addition, a variety of sensing technologies, such as color, imaging, transmission X-ray, and fluorescence X-ray, can be applied and are effective for separating specific particles (Owada et al. 2010). For mixed particles consisting of many kinds of materials, on the other hand, accurate sensing cannot be expected, and separation becomes more difficult as the particle size decreases. In this case, it may be more efficient to handle the particles as an aggregate. This is called "bulk separation" or "mass separation," and it usually exploits differences in particle properties such as density, magnetism, and wettability. Such processes can further be categorized as dry separation or wet separation (usually in water). A dry separation process allows high throughput and easy collection after separation, and a dry process unit is easy to install and cost-effective. A wet separation process exploiting bulk properties can be expected to achieve better separation efficiency than a dry process, but it has the disadvantages of consuming more energy for water circulation, dehydration, and drying; in addition, a surfactant is used in many wet processes, which increases the load on effluent treatment. Each of these two process types therefore has an optimum particle size range for separation, as shown in Fig. 9. We define, for convenience, the low-limit particle sizes shown in Fig. 9; with reasonable reliability, the low-limit particle sizes for dry and wet separation based on bulk properties are about 1 mm and 50 µm, respectively. As described above, if liberation is achieved at the coarse-particle stage, dry separation is applicable and provides economical, highly efficient separation. Once a crushing process is applied, however, it always generates fine particles below 1 mm. In certain products, some rare metals tend to be concentrated in these fine particles, which therefore also need to be separated and collected (Oki 2008a; Oki et al. 2008). Until now, particles below 50 µm


Fig. 10 The model of the simplest separation process

have required wet separation processes that exploit surface properties, such as flotation. A recent study, by contrast, has shown that a dry process using strong centrifugal force can achieve gravity concentration down to about 10 µm particles (Oki 2009). At the same time, complete metal recycling cannot be achieved no matter how far the individual elemental technologies are developed. A great variety of products is manufactured and discarded every year, so a flexible sorting process is needed to cope with such variation and with changes over time. The techniques to build such processes are, however, not yet established, and continued development is needed to realize the most suitable sorting process and to derive the most suitable sorting conditions. Figure 10 shows the simplest model of a separation process, combining a grinder and a sorter. The feed to the sorting process contains a variety of constituents, such as different kinds of printed circuit boards and electric parts. The grinder itself also comes in a variety of models operable under different treatment conditions, and there are many options for separating the crushed particles. Thus, although the model in Fig. 10 has only seven items, it yields ten million possible separation processes if each item provides ten conditions; with 20 conditions per item, 1.28 billion patterns exist. Since the efficiency of liberation is the product of the grinding efficiency and the separation efficiency, both processes need to be well optimized. Even though the simple model in Fig. 10 has only one grinding and one separation step, the number of possible patterns is thus more than 100 million; in an actual plant, a few grinding processes and three to ten separation processes are applied, which gives an astronomical number of combinations. Only a small portion of these patterns can be ideal physical separations for urban-mine resources, and what makes matters more difficult is that product release cycles are short and the amount and kind of rare metal differ from product to product. One optimum separation pattern therefore does not remain effective for long. Because of this, it is very difficult to optimize the separation pattern, and in reality plants are operated under inefficient conditions. Although improving liberation by selecting the optimum grinding and separation processes is important, this importance is difficult to recognize in practice. Figure 11 shows a schematic of a physical separation process that extracts and purifies the rare metal X from a waste product. Consider, for example, cellular phones as the feed. As described in Fig. 3, the rare metal X can


Fig. 11 The relationship between the selection of separation method and the degree of liberation

be located anywhere among the various states in the cellular phone, and this affects how difficult liberation by crushing will be. In the crushing process, as shown in Fig. 5, the type of grinding machine and its operating conditions determine the particle size and the degree of liberation. Even if ideal liberation such as Status A (or Status B) in Fig. 11 is realized, the results can differ depending on the efficiency of the subsequent separation process. Conversely, efficient separation after Status D rarely achieves a higher purity of rare metal X than Status B followed by inefficient separation. The information that can be obtained from a typical recycling plant over the series of processes is threefold: (1) the purity of rare metal X in the feed, (2) the particle size after grinding, and (3) the purity and yield of rare metal X after separation. Important parameters that determine the quality of the entire process, such as the dispersion of rare metal X in the waste product, are not measured, and no handy analysis method for the degree of liberation exists. The key parameters between product feed and collection of the separated particles thus remain in a black box, and it is not possible to trace the source of problems such as poor purity of rare metal X after recycling: it could be due to the quality of liberation or to the separation. In this situation, the concept of liberation and the importance of selective grinding are not well recognized, and in quite a few cases only the downstream separation process is discussed without considering the degree of liberation. In addition, component analysis of the separated products does not provide sufficient feedback for process improvement, which makes it difficult to optimize the physical separation process. The development of simple and easy equipment for measuring liberation, and the optimization of selective grinding and sorting systems based on the liberation data, are indispensable for the success of rare metal recycling.
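A short back-of-envelope sketch, using only the illustrative numbers quoted above, reproduces the pattern counts for the simple model of Fig. 10 and shows how the stage efficiencies multiply (the efficiency values are assumed, for illustration only):

# Back-of-envelope check of the pattern counts and of how stage efficiencies
# multiply; the numbers are the illustrative values quoted in the text.

items = 7                       # items in the simple grinder + sorter model (Fig. 10)
print(10 ** items)              # 10 conditions per item -> 10,000,000 patterns
print(20 ** items)              # 20 conditions per item -> 1,280,000,000 patterns

# Overall liberation/recovery is the product of the stage efficiencies, so a
# weak grinding (liberation) stage caps the whole process.
grinding_efficiency   = 0.70    # assumed value, for illustration only
separation_efficiency = 0.90    # assumed value, for illustration only
print(grinding_efficiency * separation_efficiency)  # 0.63 overall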


Fig. 12 Applicable particle sizes for each separation technology (dry separation, wet separation using bulk properties, and wet separation using surface properties), covering roughly 1 µm to 1 m; the technologies shown include air tables, dry and wet magnetic separation, electrostatic separation, heavy-media separation and HMS cyclones, jigs (diaphragm, air, or plunger), shaking tables, spirals, flowing-film separation, and flotation. Original figure: F. F. Aplan, Principles of Mineral Processing (2003)

Selection of Sorting Technologies and Challenges
During the crushing process, particles of various sizes are generated, and the ideal particle size range for good sorting results differs for each sorting technology. Typically, the particles are first divided into two or three size groups by screening, and an optimum sorting process is applied to each group. Particles below several hundred microns are usually not collected; however, collecting this size range, and developing sorting technology for it, is becoming important, especially for recycling precious metals and rare metals. In order to realize highly efficient, low-cost, low-environmental-impact sorting, it is necessary to broaden the applicable particle size range of each sorting technology, particularly the technologies toward the right-hand side of the diagram in Fig. 9. As an example of such improvement, a columnar pneumatic sorter, a type of dry sorter, has achieved separation of 0.1 mm copper and aluminum particles using model particles (Oki et al. 2007). A wet process can be used for finer particles where a dry process is no longer efficient. Wet gravity concentration, which is based on bulk particle properties, still has the problem that the separation accuracy decreases with particle size because of the particles' low inertia. The typical particle size limit for accurate separation is about 50 µm for conventional wet gravity concentrators such as shaking tables and spiral concentrators. Below that size, wet separation processes that use the surface properties of the particles, such as flotation, can be effective; for the removal of ink from waste paper, for example, flotation works very well. This approach is, however, not ideal for waste products with surface contamination, which lowers the separation efficiency significantly. Therefore, application of wet separation techniques based on bulk properties is desired even for particles below 50 µm. Gravity concentration technologies using strong centrifugal fields have been developed since the 1980s and have shown the possibility of gravity concentration down to 10 µm particles. Figure 12 shows applicable particle sizes for each separation technology, modified from the figure by F. F. Aplan (2003). Detailed information on the conventional sorting technologies that can be


Fig. 13 Category of wet gravity concentration devices and settling acceleration

used for mineral processing and metal recycling is available in the literature (Wills 2006). In this chapter, the authors focus on gravity concentration technology that realizes wet separation of fine particles based on bulk properties. As the particle size decreases below 50 µm, separation by wet gravity concentration becomes difficult. One reason is that the mobility of the particles in water decreases, so separation takes a long time; another is that it becomes difficult to separate the particles by their specific gravity difference as their inertia decreases. Gravity concentration in strong centrifugal fields, on the other hand, improves both the mobility of the particles and the efficiency of separation. Figure 13 shows the categories of wet gravity concentration devices and the gravitational or centrifugal acceleration acting on the separation (Oki 2008b); the values shown are for rather small, laboratory-scale devices. Wet gravity concentration devices can be categorized into three types:
1. Water flow separation, which separates particles using their sedimentation rate and the velocity of the water stream
2. Film flow separation, which separates particles using the resistance between the particles and the water film on a slope and the friction between the particles and the slope
3. Pulsatile flow separation, which separates particles using the upward and downward motion of the water to differentiate the time taken to reach the bottom
Among these methods, separation by gravity settling, especially the shaking table and the jig, has long been used as a typical wet separation method. Hydrocyclones, which use rotational flow, and spiral separators are conventional installations for wet gravity concentration of fine particles. In addition, the compulsive rotational wet gravity concentration method has realized separation of 10 µm particles using the strong centrifugal force generated by mechanical rotation. For the devices shown in Fig. 13, the maximum acceleration is in the range of 30–300 G (1 G = 9.80665 m/s²). Although the low-limit particle size or separation accuracy cannot be defined by the acceleration alone, because of differences in particle motion and in the method of particle collection, it is clear that the velocity and inertia of the particles are increased by the acceleration.
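As a rough illustration of why stronger acceleration helps for fine particles (a Stokes-law sketch, not a model of any specific device in Fig. 13; the densities and viscosity are generic assumed values), the settling velocity of a 10 µm particle scales linearly with the applied acceleration:

# Stokes-law sketch: settling velocity of a small sphere in water under an
# applied acceleration (1 G vs. a strong centrifugal field). Densities and
# viscosity are generic assumed values, not taken from the chapter.

def stokes_velocity(d_m, rho_p, rho_f=1000.0, mu=1.0e-3, accel=9.80665):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m) and
    density rho_p (kg/m^3) in a fluid of density rho_f and viscosity mu (Pa s),
    under acceleration accel (m/s^2), assuming Stokes (creeping) flow."""
    return (rho_p - rho_f) * accel * d_m ** 2 / (18.0 * mu)

d = 10e-6                                          # 10 µm particle
g = 9.80665
print(stokes_velocity(d, 8960.0, accel=g))         # copper at 1 G:   ~4e-4 m/s
print(stokes_velocity(d, 8960.0, accel=300 * g))   # copper at 300 G: ~0.13 m/s
print(stokes_velocity(d, 2700.0, accel=300 * g))   # aluminum at 300 G, for contrast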


So far, compulsive rotational devices have been used mainly overseas; the mechanism of particle separation and the operability are still not fully clarified, and only a few cases have been applied to rare metal recycling. Since the wet process is promising, more applications are expected in the future.
New Sorting Technology for Urban Mine Development: Smart Operation
A physical sorting process usually combines three to ten separation stages, and the number of possible combinations of stages is astronomical. Most attempts are therefore abandoned before the true performance of each device is found. To address this, the authors have examined a system that promptly derives the optimum conditions using a database and computer simulation. Without relying on experienced workers, this "smart operation" system realizes automated operation under optimum conditions. The system has been applied to the recycling of printed circuit boards, and the authors succeeded in developing a sorting process that, for the first time in the world, could collect tantalum capacitors at high purity; it reached practical use upon introduction to a Japanese recycling plant in 2012. Since tantalum is one of the most expensive rare metals and most of it is not recycled, the Japanese Government in 2012 chose tantalum as one of five important metals (tungsten, tantalum, cobalt, neodymium, and dysprosium) to be recycled preferentially. Tantalum is mostly used in tantalum capacitors on printed circuit boards. At first, the recycling process for tantalum from printed circuit boards was developed based on the conventional liberation approach (see Fig. 5). Considering the tantalum atom as the species to be liberated, the authors aimed to improve liberation by fine grinding of the printed circuit board; after fine grinding, a separation based on the physical properties of tantalum oxide concentrated the tantalum severalfold. Tantalum was, however, collected together with precious metals and other heavy metals because of its low weight ratio in printed circuit boards, around 1,000 ppm. As described before, rare metals such as tantalum need to be separated from copper and precious metals before pyrometallurgical treatment, so tantalum recycling could not be accomplished by this method. At first it was thought almost impossible to collect a particular electronic element from a printed circuit board on which various electronic elements are mixed. It was found, however, that electronic elements could be exfoliated from a printed circuit board in their original form by using a certain crushing device. The authors therefore attempted to find the optimum separation pattern by treating the tantalum capacitor itself as the species to be liberated, given that each type of electronic element has its own characteristic sorting properties. The authors classified over 400,000 electronic elements into 320 categories according to size and function and built a database of their physical and sorting properties. Then, considering three kinds of separation methods, namely by size, specific gravity, and magnetic properties, they predicted by numerical computation the sorting results of approximately 2,055 trillion possible patterns, including repeated use of the methods, and narrowed these down to the optima in which the tantalum capacitors could subsequently be concentrated.
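The following is a highly simplified sketch of such a database-driven search over candidate sorting sequences (the element database, thresholds, and scoring below are invented; the actual system evaluated roughly 2,055 trillion patterns against a database of 320 element categories, as described above):

# Hypothetical sketch of a database-driven search over sorting sequences.
# The element database, candidate steps, and scoring are invented for illustration.
from itertools import product

# category -> (size mm, specific gravity, magnetic?, count per board)
DB = {
    "tantalum capacitor": (3.0, 8.0, False, 10),
    "aluminum capacitor": (8.0, 2.7, False, 5),
    "quartz resonator":   (4.0, 2.5, True, 2),
    "steel shield":       (15.0, 7.8, True, 3),
    "IC package":         (10.0, 2.0, False, 8),
}
TARGET = "tantalum capacitor"

def survives(cat, step):
    """True if category `cat` passes the given separation step."""
    size, sg, mag, _ = DB[cat]
    kind, thr = step
    if kind == "size<":  return size < thr
    if kind == "sg>":    return sg > thr
    if kind == "nonmag": return not mag
    raise ValueError(kind)

# candidate steps: a few size cuts, gravity cuts, and a magnetic pull
STEPS = [("size<", t) for t in (5, 9, 12)] + \
        [("sg>", t) for t in (3, 5, 7)] + [("nonmag", None)]

def score(sequence):
    """Purity of the target among the categories surviving the steps
    (recovery is all-or-nothing in this toy model)."""
    kept = [c for c in DB if all(survives(c, s) for s in sequence)]
    if TARGET not in kept:
        return 0.0, kept
    total = sum(DB[c][3] for c in kept)
    return DB[TARGET][3] / total, kept

best = max(product(STEPS, repeat=3), key=lambda seq: score(seq)[0])
print(best, score(best))   # a three-step sequence isolating the tantalum capacitors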
As a result, the authors found a sorting process that realizes over 80 % recovery and over 80 % purity of tantalum capacitors from the mixture of exfoliated electronic elements (Fig. 14; Oki et al. 2010). Although the optimum sorting pattern had been clarified, no device existed that could realize it, so the development of such a device, the "inclined and low-intensity magnetic-shape separator," was conducted as the next step. This small device, used rather as an auxiliary unit, is a hybrid that collects aluminum electrolytic capacitors on an inclined conveyor and quartz resonators in a low-magnetic-field sorter. The device can collect iron and aluminum separately, and the remainder, which includes the tantalum capacitors, is sent to a special pneumatic sorter. This "double-tube pneumatic separator" is the main device of the sorting process and can control the airflow rates in its columns precisely using a single blower. In the first column, elements heavier than the tantalum capacitor are collected by gravity, and in the second column only the tantalum capacitors are collected by gravity. The


Fig. 14 Tantalum capacitor collecting process optimized by the simulation based on the database

flow rate in the first column is set slightly higher than that in the second, based on numerical calculation. To realize highly accurate gravity concentration, the device introduces new operating parameters in both software and hardware. In particular, it can operate automatically, from calibration of the device through to collection of the elements, once the target elements (not only tantalum capacitors but also other elements) are selected on the display; operation is controlled using the electronic element database (Oki et al. 2010). It used to be assumed that the maximum separation efficiency for tantalum capacitors was around 10–30 %; after the device development described above, however, a separation efficiency of 97 % was achieved in a trial run at the recycling plant where the device was installed (Oki et al. 2010, 2011). In this way, by using product information appropriately, it is possible to derive the most suitable sorting conditions quickly and to recalculate them, simply by substituting the new information when product specifications change, without starting over from the beginning. The use of easy sensing and smart operation technologies has only just begun, and the development of recycling technologies for other resources is expected to progress further through such innovations in physical sorting.
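As a rough illustration of how the two column air speeds in a double-tube pneumatic separator might be chosen (a sketch only: terminal velocities are estimated with a simple Newton-drag formula, and the element sizes and effective densities are invented; the actual device is tuned from the electronic element database), one could bracket the target's terminal velocity as follows:

# Hypothetical sketch: choosing two column air speeds from terminal velocities
# (Newton drag regime, Cd ~ 0.44). Sizes and effective densities are invented.
import math

RHO_AIR, CD, G = 1.2, 0.44, 9.81

def terminal_velocity(d_m, rho):
    """Terminal velocity (m/s) of a sphere of diameter d_m and density rho in air."""
    return math.sqrt(4 * G * d_m * (rho - RHO_AIR) / (3 * CD * RHO_AIR))

elements = {                     # name: (equivalent diameter m, effective density kg/m3)
    "steel shield":       (0.006, 7800),
    "tantalum capacitor": (0.003, 5000),
    "aluminum capacitor": (0.008, 1100),
    "plastic connector":  (0.005, 900),
}
vt = {k: terminal_velocity(*v) for k, v in elements.items()}
target = "tantalum capacitor"

heavier = min(v for k, v in vt.items() if v > vt[target])
lighter = max(v for k, v in vt.items() if v < vt[target])
v_col1 = 0.5 * (vt[target] + heavier)   # carries the target, drops heavier items
v_col2 = 0.5 * (vt[target] + lighter)   # drops the target, carries lighter items
print({k: round(v, 1) for k, v in vt.items()}, round(v_col1, 1), round(v_col2, 1))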

"Strategic Urban Mining" that Japan Aims for
Missing Link of Resource Circulation and Resource Circulation Interface Function
In recent years, the development of sensor-based sorting technology for recycling has been active, particularly in Europe, and in most cases the mineral processing technology used at natural mines has been adapted to physical separation in recycling. Mineral processing at a natural mine does not only use magnets for iron and gravity concentration for heavy metals; it also makes the most of the properties of the "minerals," based on knowledge of geology and mineralogy, so that detailed sorting is carried out.


Fig. 15 Missing link and interfacial function of resource circulation

In addition, natural mines are usually developed over several decades, so there is enough time to optimize the mineral processing technology for a specific ore. As a result, a variety of metals, including rare metals, can be obtained economically. When this technology is applied to urban mines, on the other hand, only separation technologies based on elemental characteristics can be used, because far less information is available about the waste products. As a result, apart from precious metals, only metals used as structural materials, such as iron, aluminum, and copper, have been targets of recycling. In recent years in Japan, urban mines have been expected to supply enough rare metals for manufacturing products, but the technology still needs to advance beyond conventional "quantity recycling." The collection of rare metals was difficult under the old urban-mine concept, but a technical breakthrough can be achieved by compiling the characteristics of waste products into a database and using it in the separation process, as shown by the example of the tantalum capacitor. On the other hand, urban mining that ignores the surrounding infrastructure and social system cannot achieve resource recovery as efficiently as a natural mine can supply, even when new technology is introduced into one part of the resource circulation loop. Even if recycling is promoted on both the production and the consumption sides, resources will not circulate unless the interface between the two sides is considered. In order to realize sustained circulation of the strategic metals, it is important to construct a complete chain of systems, from the supply of recycled raw materials to product design, and not only to develop resource recycling technologies such as physical sorting. The authors regard the introduction of innovative sorting technology and eco-design as a mediating technology, which they have named the "resource circulation interface" (Fig. 15). As discussed above, even if physical sorting technology is advanced, it will be difficult to apply it alone to all product forms, given the variety of product designs released up to now. A design for easy recycling (eco-design) can therefore compensate for this technology gap. Effective yet minimal design for easy recycling can be achieved both through suggestions concerning the products themselves


Fig. 16 Summary of Strategic Metal Resource Circulation Technology (Urban Mining) Project

and through design guidance for parts and products that are easy to sort, without spoiling the products' original function and appeal. In this way, the improvement of physical sorting technology is a key to the development of urban mines, including the interfacial function of the global resource circulation of materials.

Aiming for Establishing Strategic Urban Mines
In order to succeed with efficient and deliberate urban mining, it is important to build a social system that introduces product eco-design and physical sorting technology based on databases of artifacts. For this purpose, the authors conducted a project called "Strategic Metal Resource Circulation Technology (Urban Mining)" between 2012 and 2014, aiming at the overall development of urban mining in Japan. In this project, the authors designated as "strategic metals" those metals that are necessary to sustain industrial activity and that potentially carry a supply risk, and they evaluated the potential of urban mining and of efficient collection technologies. The venous industry (recycling industry), shown in the upper part of Fig. 16, mainly addresses short- to mid-term technical subjects aimed at developing urban mines that are scattered and accumulated in a disorderly way. The arterial industry (manufacturing industry), in the lower part of Fig. 16, addresses mid- to long-term technical subjects for realizing a practical urban mining plan through eco-design from the manufacturers' side. As described above, by considering the demand for and supply risk of metal resources, together with a system for collecting strategic metals deliberately and efficiently, the authors named their initiative the "Strategic Urban Mine," in contrast to the conventional, disordered urban mines. In addition, in November 2013 a new research base, the Strategic Urban Mining Research Base (SURE), was established at the National Institute of Advanced Industrial Science and Technology (AIST) to continue research based on the project's concept. SURE comprises 37 researchers from AIST (Fig. 17) and maintains a laboratory for the evaluation of sorting technology (SURE LATEST) at the AIST Tsukuba West site, aimed at improving physical sorting technology. The laboratory has a large open area and four separate rooms that hold about 60 physical sorting devices for grinding, crushing, and separation; twenty of these were originally developed at AIST (Oki 2012,


Fig. 17 Structure of Strategic Urban Mining Research Base

2013a, b, c, 2014a, b, c). Such an open laboratory, serving as a core facility for physical sorting technology, is the first of its kind in Japan and is expected to help accelerate the development of urban mining. In order to introduce the strategic urban mine into society, support from industry is necessary. For this purpose, the SURE consortium was organized in October 2014, together with companies related to metal resource circulation, aiming at an early realization of strategic urban mines by drawing out the needs of industry and society. Currently, the consortium comprises 45 companies and 20 industry groups, public organizations, and institutes. Members of the SURE consortium discuss subjects common to the industry as well as individual companies' subjects, such as eco-design and technologies for using recycled materials, in order to promote the strategic urban mining concept from the manufacturers' point of view. They can also use the SURE LATEST facility to identify potential problems at recycling plants and thereby improve the technology. The SURE consortium is expected to propose a variety of new ideas related to urban mine development.

Future Prospects of Strategic Urban Mines
Obtaining large amounts of metal resources from urban mines to support human civilization will contribute not only to the sustainable development of society into the future but also to climate change mitigation. Although several events in Japan have accelerated the development of urban mining, many issues still need to be solved, both technically and in the social system. In this chapter, the authors have presented the technical subjects involved in realizing the complete circular use of metal resources, including rare metals, and the efforts currently being made in Japan. Japan has already selected five important metals that need to be recycled and is conducting related research at the national level. It will not, however, be possible to realize practical resource circulation with high international competitiveness if the technologies are developed only individually.


Furthermore, even when a recycling technology has been established, its period of validity is not very long because of fast product cycles. The concentration of a rare metal in a product changes year by year, as do its separation and crushing characteristics. In addition, the more important a particular rare metal becomes, the less of it tends to be used in products, as substitution by alternative materials is promoted. It takes a long time to determine an ideal sorting pattern for a product from the enormous number of possible crushing-sorting combinations, and in quite a few cases the target rare metal is no longer used by the time the process is ready. Because of this, hand dismantling and picking are used under inefficient conditions in many recycling processes, and technology development does not always keep up. In order to continue steady development of rare metal recycling, well-planned technology development based on predictions of the future is necessary. From this point of view, two forecasts are important. The first is which rare metals will become more important over the next 5 and 10 years, in other words, which rare metals will need to be recycled from urban mines. The second is from which products those rare metals should be recycled. Currently in Japan, for the first question, five kinds of rare metals have been selected as strategic rare metals, although it is difficult to predict their true demand and price trends. For the second question, at least, those rare metals are already used in existing products, so appropriate products can be selected relatively easily. To advance strategic urban mine development, the authors organized the Strategic Urban Mining Research Base (SURE) at the National Institute of Advanced Industrial Science and Technology (AIST). At this research base, the metals considered important for the next generation, not only rare metals, are designated "strategic metals," and their recycling potential is evaluated. In addition, a database of the physical properties of waste products is being constructed, and, based on this database, automatic sorting technology for products containing strategic metals and pre-smelting treatment technology for preparing recycled raw materials are under development. These efforts will contribute to the economical collection of strategic metals from the current "disordered urban mine" accumulated across the country. Urban mines are one of the most promising resources for Japan, a country poor in natural metal resources. Fortunately, Japan is one of the major consumers of rare metals and is also capable of smelting rare metals on its own, so Japan's urban mines will become more practical with its world-class recycling technology. In addition to these technological developments, it is necessary to reform the social system in order to realize productive and economical urban mines that can compete with natural mines. The number of researchers and scientists will also need to increase to speed up the technological development. Achieving a sustainable civilization, the use of renewable energy, and climate change mitigation are major, interconnected challenges; from the viewpoint of urban mine development, the activities of SURE, including collaboration with private companies and the cultivation of human resources, are expected to contribute to climate change mitigation in the future.

References
Aplan FF (2003) Gravity concentration. In: Principles of mineral processing. SME, Colorado, pp 185–219
Fujita T et al (2002) Liberation as pre-treatment of recycling process by electrical crushing and water explosion. Resour Treat Technol 49:187–196
Furuyanaka S (2006) Active crushing of waste products – selective crushing technology for composite waste materials. Funtai Kogyou 38:57–64
Furuyanaka S (2010) Crushing technology and eco-recycle. NGT, Tokyo, pp 110–115


Furuyanaka S et al (1999) Evaluation of liberation property for impact crushing and gravity concentration of waste printed electric circuit board. Funtai Kogyou 36:479–483
Gaudin AM (1939) Principles of mineral dressing. McGraw-Hill, New York, pp 70–91
Kejun L et al (2001) Extraction of metals from disposed fragmented portable telephones by various leaching solution. Mater Trans 42:2519–2522
Koyanaka S, Ohya H, Endoh S (2006) New grinding technique to simplify the recycling process of scrap electronic devices. Rev Automot Eng 27:353–355
Koyanaka S et al. AIST web page. https://staff.aist.go.jp/s-koyanaka/ARENNA.pdf
Koyanaka S, Kobayashi K (2011) Res Conserv Recycl 55:515–523
Koyanaka S, Endoh S, Ohya H (2006) Effect of impact velocity control on selective grinding of waste printed circuit boards. Adv Powder Technol 17:113–126
Matsumura (2003) Concrete recycle technology. Consult Hokkaido 106:13–19
Mitsubishi Material Co, Ltd (2003) Development of low environmental load type concrete from waste concrete. MITI Report of FY2003, Ministry of Economy, Trade and Industry
Oki T (2008a) Proceeding of 16th environmental resource engineering symposium, pp 24–30
Oki T (2008b) Screening, separation and gravity concentration. Min Mater Process Inst Jpn Tech Semin Book: 31–44
Oki T (2009) Funtai Gijutu 1(5):39–48
Oki T (2011) Proceedings of the conference of metallurgists (COM2011), pp 69–77
Oki T (2012) Physical sorting technology for rare earth recycle. Automob Technol 66(11):74–79
Oki T (2013a) Physical sorting technology for strategic development of urban mine – unused, refractory resources and Japan's resource vision. Synthesiology 6(4):238–245
Oki T (2013b) Physical sorting technology for strategic development of urban mine and future prospective. Kankyo Kanri 49(3):62–65
Oki T (2013c) Collection of electric element from waste printed circuit board based on the concept of strategic urban mining. Ceram Jpn 49(1):30–34
Oki T (2014a) Urban mine development. Denki Hihyou 2014(2):27–28
Oki T (2014b) Technical problems of rare metal recycle from waste portable appliances. Energy Resour 35(4):234–238
Oki T (2014c) Development of gas flow sorting device for the realization of strategic urban mine. Funtai Kogaku Gakkai-shi 51(7):527–531
Oki T et al (2007) Establishment of environmental friendly metal recycle system. AIST Environ Energy Symp Ser 1:20–24
Oki T et al (2008) Proc Spring Symp Min Mater Process Inst Jpn 2:91–92
Oki T et al (2010) IMPC2010, pp 3839–3844
Oki T et al (2011) Development of crushing and sorting device for collecting rare earth magnet from HDD. Kido-rui 58:34–35
Owada S (2007) Crushing/sorting technology. J Min Mater Process Inst Jpn 123:575–581
Owada S et al (2010) J Min Mater Process Inst Jpn: 153–156
Shibayama A et al (2002) Collection of materials from crushed liquid crystal panel using electrical crushing. J Min Mater Process Inst Jpn 118:490–496
Wills BA (2006) Wills' mineral processing technology, 7th edn. Butterworth-Heinemann, Oxford
Yamasue E et al (2009) Novel evaluation method of elemental recyclability from urban mine – concept of urban ore TMR. Mater Trans 50(6):1536–1540


Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_71-2 # Springer Science+Business Media New York 2016

Non-technical Aspects of Household Energy Reductions Patrick Moriartya* and Damon Honneryb a Department of Design, Monash University, Melbourne, VIC, Australia b Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, VIC, Australia

Abstract
Domestic energy forms a significant part of total energy use in OECD countries, accounting for 22 % in the USA in 2011. Together with private travel, domestic energy reductions are one of the few ways that households can directly reduce their greenhouse gas emissions. Although domestic energy costs form a minor part of average household expenditure, the unit costs for domestic electricity and natural gas vary by a factor of 4 and 5, respectively, among OECD countries, and per capita use is strongly influenced by these costs. Other influences on domestic energy use are household income, household size, residence type (apartment/flat vs. detached house), and regional climate. Numerous campaigns have been carried out in various countries to reduce household energy use. A large literature has analyzed both the results of these studies and the general psychology of pro-environmental behavior, yet the findings often seem to conflict with the national statistical data. The authors argue that the rising frequency of extreme weather events (especially heat waves, storms, and floods), together with sea level rises, is likely to be a key factor in getting both the public and policy makers to treat global climate change as a matter of urgency. Costs of domestic energy are likely to rise in the future, possibly because of carbon taxes. But such taxes will need to be supplemented by other policies that not only encourage the use of more efficient energy-consuming appliances but also unambiguously support energy and emission reductions in all sectors.

Keywords
Australia; Barriers to conservation; Carbon taxes; Climate extremes; Conflicting policies; Domestic energy consumption; Ecological citizenship; Electricity use; Energy conservation context; Energy costs; Energy efficiency; Energy performance rating; Environmentally friendly modes; European Union (EU); Extreme weather; Fossil fuel reserves; Fossil fuel depletion; Gross national income (GNI); Household expenditure; Household size; Household income; Income inequality; Information provision; Involuntary environmentalists; Japan; Moral licensing; National statistical data; Natural gas use; Organisation for Economic Co-operation and Development (OECD); Personal carbon trading; Pro-environmental behavior; Refrigerators; Representative Concentration Pathway (RCP); Smart houses; Social context; Social cost of carbon; Social marketing; Social psychology; Space heating; Structure of domestic energy costs; Unintended consequences; United Kingdom; Urban density; Urban heat island; United States

*Email: [email protected] Page 1


Abbreviations
ABS  Australian Bureau of Statistics
EIA  Energy Information Administration (US)
EJ  Exajoule (10^18 J)
EPR  Energy performance rating
EU  European Union
GHG  Greenhouse gas
GJ  Gigajoule (10^9 J)
GNI  Gross national income
Gt  Gigatonne (10^9 tonne)
IEA  International Energy Agency
IPCC  Intergovernmental Panel on Climate Change
IT  Information technology
MWh  Megawatt-hour (10^6 W-hour)
NG  Natural gas
OECD  Organisation for Economic Co-operation and Development
ONS  Office for National Statistics (UK)
PCT  Personal carbon trading
PEB  Pro-environmental behavior
RCP  Representative Concentration Pathway
SBJ  Statistics Bureau Japan
SCC  Social cost of carbon
UHI  Urban heat island
UN  United Nations

Introduction
In 2011, world CO2 emissions from energy and industry totalled 33.74 gigatonnes (Gt) (BP 2014). (A gigatonne = 10^9 tonnes.) For the USA alone in 2011, total emissions were 5.5 Gt of CO2, resulting from total energy use of 102.5 EJ (EJ = exajoule = 10^18 J). Of this total US energy, household energy use accounted for 22.6 EJ or 22.0 %, compared with 18.6 %, 31.4 %, and 28.0 % for the commercial, industry, and transportation sectors, respectively. The US Energy Information Administration (EIA) projects that domestic energy use in the USA will grow only slowly over the period 2012–2040, at an average of only 0.2 % per year (EIA 2014). Along with private transport, cutting energy use at the household level is an important way for individuals to directly reduce their carbon footprint. (In this chapter, the terms "household energy use" and "domestic energy use" are used interchangeably.) Two possible approaches for reducing domestic energy consumption are, first, to encourage the purchase and use of more efficient domestic energy-using devices (see also chapter "▶ Energy Efficiency – Comparison of Different Technologies" in this handbook) and, second, to reduce the use of such devices. The latter could involve having fewer appliances (e.g., dispensing with second refrigerators in the household), running energy devices for fewer hours (e.g., turning off lights), or running them at a lower setting (e.g., lowering thermostat settings in winter). This second approach to energy reductions is more important for household energy use than for either commercial buildings or industry, because in both of those sectors energy costs are likely to be monitored more closely and policies for energy reduction, both technical and nontechnical, more readily implemented for purely

Page 2

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_71-2 # Springer Science+Business Media New York 2016

economic reasons. Nevertheless, considerable scope still remains for both industry and commercial buildings to adopt these practices. There is a further reason for a focus on domestic energy use. In many Organisation for Economic Co-operation and Development (OECD) countries in recent years, total primary energy use per capita, or even total primary energy use, has fallen (BP 2014). In the UK, for instance, total energy use has not risen for four decades, and total CO2 emissions from fossil fuels peaked in 1970 and are now 26.5 % lower than the peak value. The problem is that recent decades have also seen a rise in imports to the OECD of energy- and CO2-intensive manufactured products from Asia. As Davis and Caldeira (2010) show, such embodied CO2 and energy can make a big difference to national emissions and energy statistics and render the interpretation of energy time series data problematic. Because domestic energy statistics only measure energy used by household equipment and not the embodied energy in the equipment, this problem is avoided. This chapter is structured as follows. Section “Factors Affecting Domestic Energy Consumption” looks at patterns of domestic energy use in selected OECD countries. Domestic energy prices, household income, household size, and climate were all found to be important for present domestic energy use. (In this chapter, the terms energy reductions and CO2 reductions are used interchangeably, since, at the household level, energy reductions are – apart from rooftop solar devices and switching to gas from electricity for space and water heating – the only means available to reduce CO2 emissions.) In section “Strategies for Household Energy Reductions,” the numerous studies and field trials on reducing household energy use are reviewed. Researchers have looked at the effect of parameters such as income level, gender, age, and ethnicity on responsiveness to campaigns for energy reductions. The latest studies have concluded that significant energy reductions are possible, but stressed that households face many barriers to reductions, including lack of relevant information. Building on the energy cost data in the previous section, the authors stress the importance of future carbon taxes for motivating energy reductions. In section “Future Directions,” the possibility of conflicts between household energy savings and overall global climate change mitigation (or adaptation) is examined.
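
As a quick cross-check of the sectoral figures quoted in this introduction, the short calculation below reproduces the household share of US energy use. It is a minimal illustrative sketch in Python; the numbers are those cited in the text (BP 2014; EIA 2014), and the variable names are illustrative only.

    # Illustrative cross-check of the sectoral shares quoted above (data as cited in the text).
    total_us_energy_ej = 102.5          # total US energy use, 2011 (EJ)
    household_energy_ej = 22.6          # US household energy use, 2011 (EJ)

    household_share = household_energy_ej / total_us_energy_ej
    print(f"Household share of US energy use: {household_share:.1%}")    # about 22.0 %

    # The four sector shares quoted in the text should sum to roughly 100 %.
    sector_shares = {"household": 22.0, "commercial": 18.6, "industry": 31.4, "transportation": 28.0}
    print(f"Sum of sector shares: {sum(sector_shares.values()):.1f} %")   # 100.0 %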

Factors Affecting Domestic Energy Consumption Table 1 shows domestic energy consumption by end use for the USA for year 2011. Over half the energy use is for space heating and cooling (with the UK having a similar proportion (Steg 2008)), with almost one-fifth for water heating and refrigeration/freezers. Since 1993, the share for space heating has fallen, and the shares for both space cooling and appliances have risen (EIA 2013). Energy for space heating in the USA overall is expected to fall out to the year 2040, while energy for space cooling is expected to continue to increase (EIA 2014). Although space heating presently uses almost six times as much energy as space cooling in the USA overall, this ratio varies with regional climate. Households in colder climates spend a higher share of their total expenditure on fuel because of high winter fuel bills, which more than outweigh their lower need for space cooling in the warmer months. In Australia, for example, in subtropical Brisbane (27°30′ S), the share of household expenditure spent on fuel is 2.1 %, compared with 3.5 % for temperate Hobart (43° S) (Australian Bureau of Statistics (ABS) 2012). In OECD countries, nearly all of this domestic energy is supplied by reticulated electricity and natural gas. Table 2 shows the unit prices of these two domestic energy sources for 2012 for a number of OECD countries, including both the major economies and those with either very high or very low energy costs. The unit costs for domestic electricity and natural gas vary by a factor of 4 and 5, respectively, among OECD countries, with Mexico, one of the lowest-income OECD countries, having the lowest unit costs. The different costs have a large impact on domestic energy use.

Table 1 Domestic energy use by function for the USA, 2011

Delivered energy consumption by end use     Share (%)
Space heating                               43.03
Space cooling                               7.54
Water heating                               15.70
Refrigeration and freezers                  4.07
Cooking                                     3.06
Clothes washers and dryers                  2.48
Lighting                                    5.67
Dishwashers                                 0.88
Televisions and related equipment           2.97
Computers and related equipment             1.14
Other uses                                  13.47
Delivered energy                            100.00

Source: EIA (2014)

Table 2 Unit cost of domestic electricity and gas and per capita GNI for various OECD countries in 2012

Country       Electricity ($/MWh)   Gas ($/MWh) (a)   GNI/capita (b)
Denmark       383.43                123.09            59,870
Germany       338.75                90.32             45,170
Japan         276.76                NA                47,870
Mexico        90.20                 30.36             9,640
Netherlands   238.24                98.7              48,110
Sweden        223.96                156.89            56,120
UK            220.74                73.65             38,500
USA           118.83                35.22             52,340

Sources: International Energy Agency (IEA) (2013b), World Bank (2014)
(a) Gross heating value
(b) Atlas method (US$ 2012)

Comparing the USA and Japan, the much higher costs for domestic energy in Japan (2.3 times the US electricity costs; see Table 2) coincide with per capita domestic energy use about 3.5 times lower than in the USA (EIA 2014; Statistics Bureau Japan (SBJ) 2014). Only a small part of this difference can be explained by the 9.3 % higher gross national income (GNI)/capita reported for the USA (Table 2). Further, the slightly larger average household size in the USA compared with Japan (about 2.5 and 2.4 occupants, respectively) should, if anything, lower per capita domestic energy use. Similarly, the high-income, high-energy-cost European countries (Denmark, Germany, and Sweden in Table 2) have much lower per capita domestic energy use than the USA. Further, the increase in domestic energy prices in the UK was seen as a partial explanation for the decrease in domestic energy use between 2005 and 2011 in England and Wales (Office for National Statistics (ONS) 2013). The burden of domestic energy costs depends not only on unit prices but also on income level. As expected, the share of household expenditure spent on domestic energy declines for the higher-income quintiles (Table 3). However, higher-income households also have more persons per household (ABS 2012; ONS 2013; EIA 2014; SBJ 2014), so it may be that the greater energy efficiency possible with larger households explains some or all of the decrease. The UK statistics also measure domestic energy costs for quintiles ranked on an “equivalised disposable income,” which adjusts for the number of persons per household (with, e.g., two adults counting as one unit, but two adults plus two children counting for 1.4 units) (ONS 2014).

Table 3 Domestic energy expenditure vs. household income quintile, Australia and Japan

Country         Lowest   Second   Third   Fourth   Highest   Average
Australia (a)   4.0 %    3.4 %    2.7 %   2.5 %    2.0 %     2.6 %
Australia (b)   1,147    1,460    1,616   1,929    2,294     1,721
Japan (c, d)    7.0 %    6.3 %    5.8 %   5.2 %    4.5 %     5.5 %
Japan (e)       162.6    178.7    187.8   197.8    224.7     190.5

Sources: ABS (2012), SBJ (2014)
(a) 2009–2010 survey data
(b) $Aust (2009–2010) per year
(c) 2012 survey data
(d) Only households with two or more persons are included
(e) In 1,000 yen (2012) per year

This 2012 UK data shows that, after such adjustment for size, the poorest fifth of households spent 10.9 % of their disposable income on domestic fuel, compared with only 2.8 % for the wealthiest quintile – nearly a fourfold difference. For both quintiles, the share of disposable household income spent on domestic fuel had risen since 2002 (from 8.0 % for the lowest and from 1.7 % for the highest), even though average domestic energy use in the UK had fallen by 17 % over the decade. Nevertheless, in absolute terms, the highest-income quintile spent more on domestic energy (and produced more CO2 emissions) than the lowest-income quintile, and the same was true for gross expenditure on domestic energy in Japan and Australia (see Table 3). Energy use in households also varies with housing type. A 2008 UK study (Druckman and Jackson 2008) found that those living mainly in flats (“city living”) had a much lower share of weekly expenditure on energy than “countryside” residents, mainly living in detached houses. An earlier study in Australia (Moriarty 2002) found that inner-city residents of Melbourne and Sydney, with a high share of residents living in flats, spent a lower share of disposable income on household fuel than outer suburban residents or nonurban residents, both groups mainly living in detached houses. Along the same lines, a Canadian study (Larivière and Lafrance 1999) measured the residential electricity consumption of Québec’s 45 most populous cities and towns and found that per capita electricity consumption rose as the share of single dwellings increased. This result would be expected if electric power is an important form of heating in a cold climate. The size of the residence (in square meters (m²)) is also an important factor, particularly for domestic heating and cooling. It helps explain some of the large difference between US and Japanese energy use. In the USA in 2011, average residence size was 154.5 m², compared with only 94.1 m² for Japan overall and as low as 63.9 m² for Tokyo prefecture (2008 values, the latest available) (EIA 2014; SBJ 2014). The high population density of Japan and the resulting high land prices explain much of this difference. Other important differences are in both the ownership and the average size of appliances. For example, in Japan in 2009, only 27 % of households of two persons or more owned dishwashers; in the USA in 2009, the corresponding figure was 64 %. Also, both the number of refrigerators per 1,000 households and their average capacity were larger in the USA (EIA 2013; SBJ 2014). The behavior of the occupants has also been found to be crucial. A British study (Pilkington et al. 2011) examined space heating demands in “a terrace of six similar, passive solar dwellings with sunspaces.” Space heating demand per occupant was found to vary by a factor of 14. This finding clearly indicates both that behavioral factors are important for domestic energy use and also that considerable potential exists for energy reductions. Further evidence comes from a study of 3,400 German homes: Sunikka-Blank and Galvin (2012) again found that dwellings with the same energy performance rating (EPR) varied widely in space heating energy use. (The EPR, measured in kWh/m² per year, assesses the overall energy efficiency of a building, either using actual or modeled energy use data (Corrado and Mechri 2009).)
But they also found that (energy-inefficient) dwellings with high EPR values consumed much less energy than calculated, while the reverse was true for (energy-efficient) dwellings with low EPR values. Similar results appeared to hold for several other EU countries. They concluded that the potential energy savings from changes to occupant behavior may be far greater than assumed. Section “Strategies for Household Energy Reductions” looks further at this potential.
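
As a concrete illustration of the EPR and of the gap between rated and actual use described above, the sketch below computes a rating from annual space-heating energy and floor area and compares it with metered consumption. The numbers, the function name, and the "prebound" variable are illustrative assumptions, not figures taken from Sunikka-Blank and Galvin (2012).

    # Energy performance rating (EPR) and the gap between rated and actual use (illustrative only).
    def epr(annual_energy_kwh: float, floor_area_m2: float) -> float:
        """Rated (or metered) energy intensity in kWh per m2 per year."""
        return annual_energy_kwh / floor_area_m2

    floor_area = 120.0                # m2, illustrative dwelling
    rated_use = 250.0 * floor_area    # kWh/yr implied by a poor (high) EPR of 250 kWh/m2/yr
    metered_use = 170.0 * floor_area  # kWh/yr actually consumed (illustrative)

    prebound = 1.0 - metered_use / rated_use
    print(f"Rated EPR: {epr(rated_use, floor_area):.0f} kWh/m2/yr, "
          f"metered: {epr(metered_use, floor_area):.0f} kWh/m2/yr, "
          f"gap: {prebound:.0%}")     # occupants use about 32 % less than the rating predicts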

Strategies for Household Energy Reductions Domestic energy reductions rely on far fewer policy options than are available for reducing household private travel energy. In addition to legislation on vehicular fuel efficiency (see also chapter “▶ Reducing Personal Mobility for Climate Change Mitigation”), authorities can also influence the levels of private travel (in vehicle-km) by measures such as street closures in the inner city, speed limit reductions, priority for public transport and nonmotorized modes, limits on the availability of parking spaces, road pricing as in London and Singapore, reduced arterial road construction in urban areas, provision of improved public transport, as well as increased charges for parking and taxes on transport fuels. Authorities can implement these measures because most of the road systems, and often the public transport systems, are publicly owned. Apart from increasing fuel costs, these policy levers are not available for reducing domestic energy use. As with road vehicle efficiency, governments can legislate the use of energy-efficient light globes and establish energy ratings for domestic appliances and minimum insulation standards for new buildings. A further limit on the capacity of such regulations to drive change is the much longer lifetime of housing stock relative to road vehicles. But too much intervention in the domestic sphere would meet strong popular resistance. Because of this, authorities must rely far more on voluntary behavior change (and domestic energy cost increases) for reducing domestic energy use than in other sectors. Nevertheless, domestic energy conservation has at least one important advantage over travel energy conservation. It is very difficult for individual households to reduce car travel if other households do not, particularly in countries like the USA, where car travel accounts for over 90 % of all surface vehicular travel. Even if car travel is reduced as a result of a campaign, households will usually soon relapse into former practices, since car travel is usually faster than other modes. On the other hand, individual households can make domestic energy reductions even if others do not. This section first reviews the extensive social psychology literature on pro-environmental behavior (PEB) (see section “Social Psychology and Pro-environmental Behavior”), particularly the importance of information provision (see section “The Role of Information Provision”), before examining the role of carbon taxes (see section “The Role of Monetary Approaches and Carbon Taxes”) and, finally, the context for energy conservation (see section “The Context for Domestic Energy Conservation”).

Social Psychology and Pro-environmental Behavior A vast literature is now available on the application of social psychology to energy conservation, and pro-environmental behavior in general, together with policy recommendations. According to Dietz (2014), “modest policies” aimed at raising the efficiency of US household energy consumption (presently 22 % of total energy) could reduce overall CO2 emissions in the USA by 7 %. Surveys in OECD countries have consistently found that the public regard protecting the environment and saving energy as important (Steg 2008; Booth 2009). Obviously, it is not how people respond to surveys about PEB (i.e., their stated attitudes toward the environment and energy conservation) that is important but whether or not households do in fact reduce their energy use and whether any such reductions continue in the long term. For as Dietz (2014) has also stressed, few studies with a social psychology approach are
able to study actual environmental behavior, as distinct from stated intentions. A 2010 survey of energy use in Hungarian households has shown how stated intentions and actual behavior can differ. The survey of over 1,000 people found that those who “consciously act in a pro-environmental way” did not necessarily use less residential energy than respondents who did not exhibit PEB (Tabi 2013). Much of the social psychology literature on PEB has concentrated on individual attributes. Jagers et al. (2014) found that 21 % of the respondents in a Swedish survey met the requirements for “ecological citizenship.” They found support for a strong relationship between ecological citizenship and PEB: “Our results suggest that individuals who think along the lines of ecological citizenship are more likely than others to behave in an environmentally friendly way in their daily lives.” Yet PEB was measured by response to items such as “try to save household electricity” rather than by comparing actual household electricity use with that of other householders; hence, pro-environmental attitudes rather than actual pro-environmental behavior were being measured. Gifford and Nilsson (2014) have recently reviewed the various “personal and social factors that influence pro-environmental concern and behavior.” We discuss here some of their findings relevant to methods for reducing household energy use. In general, survey respondents with more knowledge about environmental problems indicated greater overall environmental concern. (The role of information is discussed in more detail in section “The Role of Information Provision.”) Interestingly, older people generally reported higher pro-environmental behavior than younger people, and women more than men. Not only did the authors report that environmentalists “tend to be middle- or upper-middle-class individuals” but also that, at the national level, “environmental concern has a clear positive relation with gross domestic product (GDP) per capita.” Yet, as we have seen in section “Factors Affecting Domestic Energy Consumption,” the higher-income quintiles in OECD countries have higher domestic energy use per household, and, globally, the OECD and other high-income countries have much higher primary energy use and CO2 emissions per capita than low-income countries (IEA 2013b). Residents of low-income households and countries could thus be regarded as “involuntary environmentalists” (see Moriarty and Honnery 2012): their low incomes constrain their energy use and carbon emissions. Further, survey results are usually only given as percentage reductions, ignoring the fact that low-income households already use much less energy than the average. But in the USA, a recent study (Bohr 2014) found that income effects on belief about the reality of climate change were moderated by political beliefs. Briefly, lower-income Republicans regarded climate change as a more serious issue than higher-income Republicans, but the reverse was true for Democrats. Clearly, one needs to be careful in evaluating the transfer of social psychology findings to the national energy policy domain. A related point is the choice of incentives for promoting PEB. One fairly consistent result in the published literature is the apparent superiority of nonmonetary over monetary rewards. 
Steg (2008) has argued that domestic energy conservation is best served by appealing to normative and environmental values, because they provide a more enduring basis for change than ones which maximize personal interests, such as cost savings. If, for example, cost reductions disappear, then so will the conservation behavior, if formed on that basis. Overall, this view can be summed up by arguing that “green” reasons for change are superior to “mean” reasons (de Groot and Steg 2009). Dietz (2014) simply stated that “self-interest is only one of several values that underpin environmental decision making.” Turaga et al. (2010) reached similar conclusions and stated that the empirical evidence suggested that PEB was more likely for people whose “core values” could be described as “social altruistic” and/or “biospheric”. Of course, this still leaves the problem of how to promote PEB in householders whose behavior is more in line with homo economicus. These researchers also warned that government policies might crowd out motivations for altruistic behavior. They thus stress the need to create “carefully
structured institutions.” The question of monetary incentives in the real world is, however, not so simple and is discussed in greater detail in section “The Role of Monetary Approaches and Carbon Taxes”. Steg (2008), looking specifically at domestic energy conservation, reported three barriers to conservation. The first barrier was that many households do not have sufficient knowledge of means to effectively reduce their energy consumption, as discussed in section “The Role of Information Provision.” The second was the low priority households attach to reducing energy use. As discussed in section “Factors Affecting Domestic Energy Consumption,” domestic energy is typically only of the order of 5 % of household expenditure, although the proportion is higher for low-income households. The third barrier was the high costs for some energy-saving strategies, particularly if they involve the purchase of more energy-efficient appliances. She reported that two general strategies can be used to reduce household energy use. First, use psychological insights to change householders’ “knowledge, perceptions, motivation, cognitions, and norms related to energy use and conservation.” Second, alter the context in which energy use decisions are made; this important topic is treated in detail in section “The Context for Domestic Energy Conservation.” “Social marketing” has become popular as a means of changing people’s attitudes and behavior on environmental issues. As defined by Corner and Randall (2011), social marketing “is the systematic application of marketing concepts and techniques to achieve specific behavioural goals relevant to the social good.” Their study is a critique of the application of social marketing techniques in the UK to engage the public more fully on climate change. The study showed that the approach may in some circumstances be effective, particularly for encouraging PEB that needs only minor lifestyle changes, such as recycling household waste. However, given the scope of overall CO2 reductions needed by the UK and other high-emitting countries, social marketing for carbon reductions appeared to be less effective, and some of the approaches tried were even counterproductive. One particular problem was that attempting to tailor messages to individual groups may lead to compromises that negatively impact PEB in the longer term and in other domains. In other words, it is risky to consider the various environmental problems (and even non-environmental problems) in isolation. A complication of this type arises from what social psychologists term “moral licensing.” According to Merritt et al. (2010), moral licensing “occurs when past moral behavior makes people more likely to do potentially immoral things without worrying about feeling or appearing immoral.” Tiefenbeck et al. (2013) carried out a field experiment in 154 households of a 200-apartment complex in Greater Boston in the USA. The study examined the effect of a household water conservation program on electricity consumption. 
“The results show that residents who received weekly feedback on their water consumption lowered their water use (6.0 % on average), but at the same time increased their electricity consumption by 5.6 % compared with control subjects.” They concluded that such moral licensing “can more than offset the benefits of focused energy efficiency campaigns, at least in the short-term.” In some ways, this effect is similar to the well-known concept of “energy rebound,” where improving energy efficiency makes the operation of energy-using devices cheaper, thus leading either to some increased use of such devices or to spending the money saved on other (energy-using) goods and services.
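
The rebound effect mentioned above can be made concrete with a toy calculation. The sketch below is not drawn from any of the studies cited; it simply assumes a constant price elasticity of demand for an energy service (the elasticity value and the 20 % efficiency gain are arbitrary illustrative choices) to show why an efficiency gain usually delivers a smaller energy saving than expected.

    # Toy illustration of energy rebound (not from the cited studies): if a device becomes more
    # efficient, the effective price of its energy service falls, so demand for the service rises.
    def energy_after_efficiency_gain(baseline_energy: float, efficiency_gain: float,
                                     price_elasticity: float) -> float:
        """Energy use after an efficiency gain, with constant-elasticity demand for the service."""
        price_ratio = 1.0 - efficiency_gain                      # effective service price falls
        service_demand_ratio = price_ratio ** price_elasticity   # elasticity is negative, so demand rises
        return baseline_energy * price_ratio * service_demand_ratio

    baseline = 100.0                 # arbitrary units
    new_use = energy_after_efficiency_gain(baseline, efficiency_gain=0.20, price_elasticity=-0.3)
    saving = 1.0 - new_use / baseline
    rebound = 1.0 - saving / 0.20    # share of the expected 20 % saving that is "taken back"
    print(f"Actual saving: {saving:.1%}, rebound: {rebound:.0%}")   # about 14.5 % saving, 28 % rebound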

The Role of Information Provision It seems intuitive that households need accurate information on both energy costs and energy use of specific household equipment. Studies have shown that there is indeed an information gap. Attari et al. (2010) conducted a nationwide US survey on estimated energy savings from such actions as turning off lights or replacing existing lights by more efficient ones. They concluded that “For a sample of 15 activities, participants underestimated energy use and savings by a factor of 2.8 on average, with small overestimates for low-energy activities and large underestimates for high-energy activities.”

Householders may simply assume that the energy consumption of an appliance is related to its size (Steg 2008). Correcting this bias would seem essential for domestic energy decision-making. At least one government agency (the Office for National Statistics in the UK) saw better information as partly responsible for the observed declines in UK household energy use in recent years. The list of possible explanatory factors for household natural gas and electricity reductions included technical efficiency measures such as better cavity wall insulation and “improved efficiency of gas boilers and condensing boilers to supply properties with both hot water and central heating,” as well as continuous rises in the price of domestic gas and electricity after the mid-2000s (ONS 2013). But they also saw the provision of better information to households as important, specifically the “introduction of energy rating scales for properties and household appliances, allowing consumers to make informed decisions about their purchases” and “generally increasing public awareness of energy consumption and environmental issues.” But as the following discussion documents, simply providing more information can have unexpected effects on energy use. The rise of new information technology (IT) has greatly enlarged the scope for providing data on domestic energy use to households. There is now a growing literature on intelligent or smart cities and smart houses. Cook (2012) has described possible future houses equipped with a vast number of sensors (“ambient intelligence”) to automatically adjust temperature and lighting levels, for example. A barrier to the realization of such smart houses is the privacy issue. But even if the privacy issue could be overcome, there are doubts about the extent of energy savings possible with such information provision alone. An Irish study (McCoy and Lyons 2014) reported the results of a controlled trial of 2,500 electricity consumers. Householders were supplied with smart meters which gave them detailed information on usage. They found that electricity use fell as expected, but compared to the control group, these householders invested less in energy-saving equipment. The authors speculated that householders might realize that conservation measures can be an alternative to energy efficiency investments. In other words, energy conservation and energy efficiency measures may not always be complementary, as is often assumed. Delmas et al. (2013) performed a meta-analysis of 156 published “information-based energy conservation experiments” conducted over the period 1975–2012. The studies focused on household electricity savings. The type of information provided in the various experiments included items such as tips on how to save energy and the provision of detailed data on own energy use or that of peers. From these experiments, they found average measured savings in electricity use of 7.4 %. However, the savings found depended greatly on the type of information provided. The authors concluded that “strategies providing individualized audits and consulting are comparatively more effective for conservation behavior than strategies that provide historical, peer comparison energy feedback.” They also reported potential problems with information campaigns, in that feedback on costs and monetary incentives led to relative increases in household electricity use. Another recent study from the USA (Gromet et al.
2013) provided further evidence that simply giving more information will not necessarily encourage PEB in householders. In one study, they compared responses of self-identified political liberals with self-identified political conservatives. They showed that conservatives were less likely to buy more expensive but also more energy-efficient light bulbs if they were labeled as being good for the environment than if they were not so labeled. Responses were reversed for liberals. The researchers believed that “the political polarization surrounding environmental issues” in the USA was the explanation for their unexpected findings. This finding may not therefore be applicable to other OECD countries: in the EU, several conservative governments support deep cuts to CO2 emissions.

The Role of Monetary Approaches and Carbon Taxes One government policy often mentioned as both an important and necessary part of any carbon reduction strategy in all sectors of the economy is carbon taxes (Van Vuuren et al. 2011a). The European Union (EU) already has an emissions trading scheme (ETS), although the prices in 2013 were at historically low levels. Given that every tonne of CO2 emitted, regardless of location, has the same climate effect, a global carbon market would be preferable. At present, though, apart from the multinational regional EU market, existing carbon markets are either national or even subnational (Newell et al. 2014). But reliance on market-based incentives can be criticized because it can lead (and has led) to abuses, particularly in the “reducing emissions from deforestation and forest degradation” (REDD) scheme of the UN Framework Convention on Climate Change (Moriarty and Honnery 2011). Nevertheless, such carbon taxes are sometimes regarded as providing motivation for domestic energy use reductions. Certainly, the cross-country evidence presented in section “Factors Affecting Domestic Energy Consumption” on domestic energy costs suggests that monetary considerations are important for energy use. The IPCC estimated that after 2050, carbon taxes required to meet the Representative Concentration Pathway 2.6 (RCP2.6) target would need to be as high as $250 per tonne CO2 (Van Vuuren et al. 2011b). Here, we provide a rough estimate of the effect such a tax would have on lower-end OECD domestic electricity prices. In recent years, electricity generation in OECD countries overall has led to emissions of 0.434 tonne CO2/MWh (IEA 2013a). At $250 per tonne CO2, this works out as $108.5/MWh. From Table 2, this value would roughly double domestic electricity prices in Mexico and the USA. An important question is whether such carbon taxes would be regressive. Dissou and Siddiqui (2014) have argued that they need not be, if seen in the context of a comprehensive analysis. They argued that “Most studies have assessed the distributional impact of carbon taxes through their effects on commodity prices alone, while ignoring their impact on individual welfare brought about by changes in factor prices.” They found a U-shaped curve for income inequality (as measured by the Gini coefficient) when plotted against the level of carbon tax. Although maximum income equity was found at a tax of about $50, a zero tax had about the same equity effect as one of over $100 per tonne CO2. Nevertheless, a tax rate of $250 per tonne CO2 would substantially increase inequality. However, a carbon tax is not the only way of reducing emissions by monetary means. Starkey (2012) examined the equity effects of “personal carbon trading” (PCT). In this UK proposal, every adult would receive for free an equal carbon quota, with the sum of these quotas amounting to perhaps 40 % of total allowable national emissions. The author compared this proposal with other emission reduction schemes, including a carbon tax, and showed that these other schemes can be designed to be as equitable as any PCT one. Another possible approach is to alter the structure of domestic energy costs, with lower fixed costs on energy bills and higher unit costs for energy use. This change could be revenue neutral overall, but, again, its equity implications would need to be evaluated for each country.
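
The rough carbon tax estimate given above can be reproduced directly from the emission intensity and the Table 2 prices. The sketch below is only an illustration of that arithmetic; the intensity is the OECD average quoted in the text (IEA 2013a), and the prices are those in Table 2.

    # Effect of a $250/tonne CO2 carbon tax on domestic electricity prices (arithmetic as in the text).
    emission_intensity = 0.434   # tonne CO2 per MWh, OECD electricity generation (IEA 2013a)
    carbon_tax = 250.0           # $ per tonne CO2 (post-2050 RCP2.6 estimate quoted in the text)
    surcharge = emission_intensity * carbon_tax
    print(f"Surcharge: ${surcharge:.1f}/MWh")          # $108.5/MWh

    # 2012 domestic electricity prices from Table 2 ($/MWh):
    prices = {"Denmark": 383.43, "Japan": 276.76, "Mexico": 90.20, "USA": 118.83}
    for country, price in prices.items():
        print(f"{country}: {price:.0f} -> {price + surcharge:.0f} $/MWh "
              f"({(price + surcharge) / price:.2f}x)")
    # Mexico and the USA roughly double (2.20x and 1.91x); high-price Denmark rises by about 28 %.
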
Future price rises for fossil fuels are likely inevitable; policies will have to be designed to ensure that lower-income households, who already pay a higher share of household income for domestic energy, are not further disadvantaged. The level of carbon tax necessary will depend on the costs of either replacing fossil fuels by nonfossil alternatives (renewable and nuclear energy) or the costs of various carbon sequestration methods. The latter include biological sequestration in plants (especially forests) and in soils and also mechanical sequestration techniques such as capturing CO2 from the flue stacks of fossil fuel electricity plants, followed by compression, transport, and geological burial (Van Vuuren et al. 2011b; Moriarty and Honnery 2011). For details, see also chapter “▶ Reducing Greenhouse Gas Emissions with CO2 Capture and Geological Storage” in this handbook. A recent study (Marshall 2013) found an extremely broad range in both the unit costs of various carbon sequestration methods – from $10 to $2,000/tonne CO2 sequestered – and in their global potential (in Gt CO2).

Of course, imposing such high carbon taxes would be unwarranted if it could be shown that the costs of climate adaptation (again, measured in $ per tonne of CO2 or equivalent) were much smaller. Some economists have argued that the global economic costs of a 2 °C temperature rise will be small, and an official US estimate was that the “social cost of carbon” (SCC) was $37 per tonne of CO2 emitted. However, these and similar results derived from economic models have been heavily criticized (Revesz et al. 2014). In any case, integrated climate models show that climate costs will rise in a nonlinear fashion as temperature rises beyond the nominal 2 °C “safe limit.” Ackerman and Stanton (2012) have thus argued that the SCC could easily be an order of magnitude higher or, given certain assumptions, even infinite.

The Context for Domestic Energy Conservation What will induce households, as well as policy makers, to take climate mitigation seriously? At present, despite the high profile of the climate change problem over the past two to three decades in both the press and scholarly publications, there has been no real progress in mitigation: in 2013, emissions from fossil fuels were 2.1 % higher than in 2012 (BP 2014). Possible reasons for such climate inaction could include pressure from fossil fuel-based industries and energy-exporting countries, skepticism about the reality of global warming, and belief in the efficacy of future technical fixes such as aerosol geoengineering to painlessly solve the problem (see also chapter “▶ Geoengineering for Climate Stabilization” in this handbook). Interestingly, the Asian OECD countries (Japan and South Korea) both report very high levels of belief in the reality of global warming (Wikipedia 2014) and have insignificant fossil fuel reserves (BP 2014). The Intergovernmental Panel on Climate Change (IPCC) (Stocker et al. 2013), in its latest report (Fifth Assessment Report (AR5)), warned that the world will increasingly face greater extremes in climate, particularly in the form of heat waves and high-intensity precipitation. Over the past century, global mean temperature has risen less than 1.0 °C, yet in temperate climates, daily variation can be 20 °C or more. Such variation makes it difficult for laypersons to feel the urgency that climate scientists feel. But already, recent heat waves in Europe and elsewhere have led to tens of thousands of excess deaths (Moriarty and Honnery 2014). Climate scientists speak of climate forcing (or radiative forcing, in watts per square meter) from GHGs. Analogously, it can be expected that the spread of extreme weather events, in both intensity and frequency, will provide the forcing for both the public and their policy makers in all countries to take decisive action on climate mitigation, although intense arguments will continue regarding the sharing of emission reductions between countries. On the other hand, there is a disconnect between the national costs of climate mitigation and the benefits accruing to the same nation (Moriarty and Honnery 2014). Climate mitigation is an example of a global public good, and such goods tend to be undersupplied by market economies. In contrast, most of the health benefits from reducing local air pollution in a city will accrue to that city. The problem was less serious when the OECD countries both had the highest per capita emissions of CO2 and accounted for most of the global emissions. In 1965, the OECD produced 68.3 % of fossil fuel CO2; in 2012, even a much enlarged OECD accounted for only 40.3 % (BP 2014). The urgent need for countries like China and India, which still have relatively low per capita emissions but large total emissions, to reduce their emission levels will complicate popular acceptance of deep emission reductions in the OECD. Another factor that shapes ordinary citizens’ perception of the need to reduce fossil fuel use, and their acceptance of high domestic fossil fuel prices, is the size of national fossil fuel reserves. One set of reserve estimates, those of BP (2014), is shown in Table 4. Countries like Denmark, Japan, South Korea, and Sweden have negligible amounts of fossil fuels. 
German reserves are almost all low-quality lignite, which produces high CO2 emissions per unit of delivered energy – an embarrassment for a country striving for “green” credibility.

Table 4 Fossil fuel reserve estimates at the end of 2012 for various OECD countries, in EJ

Country        Oil      NG       Coal        All fossil fuels
Australia      22.3     143.2    1584.0      1749.4
Canada         993.2    75.4     141.0       1209.5
Denmark        4.0      1.5      2.6 (a)     8.1
Germany        1.5      2.3      569.4       573.2
Japan          0.3      0.0      9.7         10.0
Mexico         65.1     15.1     28.9        109.1
South Korea    0.0      0.2      1.8         2.0
Sweden         0.0      0.0      0.0         0.0
UK             17.7     7.5      6.4         31.7
USA            199.9    320.3    4825.9      5346.0
World          9531.3   7058.0   17665.0     34254.3

Source: BP (2014)
(a) In Greenland

Even the UK, once the world leader in coal production and, until recently, an important natural gas (NG) and oil producer, finds itself today with only a few years’ reserves of all three fuels at their current production rates. Most fossil fuels used in these countries are thus imported, and so these importing countries are increasingly dependent on the continued goodwill of both oil- and NG-exporting countries. Restrictions on both oil and gas exports have been used for political purposes. In the USA, in contrast, the public is being led to believe that shale gas (and even shale oil) will lead once again to oil and gas independence. The context in which appeals to the public to conserve energy are made in OECD Europe or Asia, compared with fossil fuel-rich North America or Australia, is thus very different.

Future Directions One important consideration for both future climate change mitigation and adaptation is to ensure that actions taken at the local level (such as in an urban area) do not conflict with actions needed at the global level. Similarly, local actions for climate mitigation or adaptation must not conflict with policies needed for ecological sustainability in general. One proposal for climate change mitigation is to paint urban building roofs (and even roads) with reflecting paint, in order to increase their albedo – the share of insolation that is reflected directly back into space (Royal Society 2009). Unlike most other geoengineering proposals, such roof whitening should meet with little international opposition, since the actions involved are clearly on national territory. It would also represent a means for households to directly mitigate climate change without reducing domestic energy use. And unlike most climate mitigation measures, most of the temperature reduction benefits would accrue to the urban area concerned (see also chapter “▶ Geoengineering for Climate Stabilization” in this handbook). However, a recent study (Jacobson and ten Hoeve 2012) modeled both local and global climate effects. The study found that although local temperature reductions would indeed occur, the global effect would be a modest temperature increase. Even if different climate models were to give different results, the study illustrates the importance of such potential conflicts. A further problem is that “cool roofs” will reduce winter as well as summer temperatures, both inside and outside buildings. In temperate climates, domestic heating energy needs will therefore rise. Another study of reflective pavements (Yang et al. 2013) also found a similar unintended consequence: “reflected radiation from high-albedo pavements can increase
the temperature of nearby walls and buildings, increasing the cooling load of the surrounding built environment and increasing the heat discomfort of pedestrians.” In section “Factors Affecting Domestic Energy Consumption,” it was found that residents of apartment blocks, and higher-density urban living in general, have lower household energy consumption. Increasing the residential density of cities might therefore appear as a way to lower energy use and carbon emissions. Further, increased urban density has also been promoted as a means of reducing urban car travel and its associated emissions (Moriarty and Honnery 2013). However, several possible conflicts arise. First, higher-density living might interfere with the ability to use passive solar energy for temperature control and natural lighting (Steemers 2003). Second, it might reduce the potential for individual households to use PV (photovoltaic) roof panels or solar hot water systems or, in low rainfall regions, tanks for rainwater storage. One also needs to consider how urban density affects the urban heat island (UHI) effect. Of course, a reduction in household energy use will reduce urban waste heat, which is one component of the UHI effect. But according to Kleerekoper et al. (2012), more important causes are the “urban canyon” effect, which prevents the escape of radiant heat, and impervious surfaces, which prevent evaporative cooling. Both are likely to be more important in densely built-up urban areas. This chapter has shown that deep reductions in household energy use, and thus CO2 emissions, will require a variety of compatible policies. The national statistical data presented in sections “Factors Affecting Domestic Energy Consumption” and “Strategies for Household Energy Reductions” showed that household domestic energy use is lowest in countries with low fossil fuel reserves; these countries also usually have higher prices for domestic energy use. Public support for decisive action on climate change varies from country to country and, within a given country, even from month to month. However, the rise in frequency of extreme weather events – heat waves, storms, and floods – together with rising sea levels, is likely to increase support for action in all countries. The review of domestic energy conservation campaigns discussed in section “Social Psychology and Pro-environmental Behavior” found only limited permanent measured energy reductions. But it could be that this disappointing result occurs because respondents presently do not really feel that energy security, fossil fuel depletion, and climate change are serious problems that will necessarily involve major lifestyle changes. In future, it is likely that, for both fossil fuel depletion and climate change reasons, the context in which domestic energy decisions are made will change. Past research on domestic energy conservation may then be of little relevance. But new research will also have to take a more comprehensive view of energy savings than in the past, to ensure that conflicts do not arise either between energy efficiency and energy conservation or between energy savings in different sectors.

References
Ackerman F, Stanton EA (2012) Climate risks and carbon prices: revising the social cost of carbon. Available at http://dx.doi.org/10.5018/economics-ejournal.ja.2012-10
Attari SZ, DeKay ML, Davidson CI et al (2010) Public perceptions of energy consumption and savings. Proc Natl Acad Sci 107(37):16054–16059
Australian Bureau of Statistics (ABS) (2012) 2009–10 household expenditure survey: summary of results. ABS, Canberra, Cat No 6530
Bohr J (2014) Public views on the dangers and importance of climate change: predicting climate change beliefs in the United States through income moderated by party identification. Clim Chang. doi:10.1007/s10584-014-1198-9
Booth C (2009) A motivational turn for environmental ethics. Ethics Environ 14(1):53–78
BP (2014) BP statistical review of world energy 2014. BP, London
Cook D (2012) How smart is your home? Science 335:1579–1581
Corner A, Randall A (2011) Selling climate change? The limitations of social marketing as a strategy for climate change public engagement. Glob Environ Chang 21:1005–1014
Corrado V, Mechri HE (2009) Uncertainty and sensitivity analysis for building energy rating. J Build Phys 33(2):125–156
Davis SJ, Caldeira K (2010) Consumption-based accounting of CO2 emissions. Proc Natl Acad Sci 107:5687–5692
de Groot JIM, Steg L (2009) Mean or green: which values can promote stable pro-environmental behavior? Conserv Lett 2:61–66
Delmas MA, Fischlein M, Asensio OI (2013) Information strategies and energy conservation behavior: a meta-analysis of experimental studies from 1975 to 2012. Energy Policy 61:729–739
Dietz T (2014) Understanding environmentally significant consumption. Proc Natl Acad Sci 111(14):5067–5068
Dissou Y, Siddiqui MS (2014) Can carbon taxes be progressive? Energy Econ 42:88–100
Druckman A, Jackson T (2008) Household energy consumption in the UK: a highly geographically and socio-economically disaggregated model. Energy Policy 36:3177–3192
Energy Information Administration (EIA) (2013) 2009 RECS survey data. Accessed on 18 Jun 2014 at http://www.eia.gov/consumption/residential/data/2009/
Energy Information Administration (EIA) (2014) Annual energy outlook 2014. US Department of Energy, Washington, DC
Gifford R, Nilsson A (2014) Personal and social factors that influence pro-environmental concern and behaviour: a review. Int J Psychol. doi:10.1002/ijop.12034
Gromet DM, Kunreuther H, Larrick RP (2013) Political ideology affects energy-efficiency attitudes and choices. Proc Natl Acad Sci 110:9314–9319
International Energy Agency (IEA) (2013a) CO2 emissions from fuel combustion: highlights, 2013 edn. IEA/OECD, Paris
International Energy Agency (IEA) (2013b) Key world energy statistics 2013. IEA/OECD, Paris
Jacobson MZ, ten Hoeve JE (2012) Effects of urban surfaces and white roofs on global and regional climate. J Clim 25:1028–1044
Jagers SC, Martinsson J, Matti S (2014) Ecological citizenship: a driver of pro-environmental behaviour? Environ Polit 23(3):434–453
Kleerekoper L, van Esch M, Salcedo TB (2012) How to make a city climate-proof, addressing the urban heat island effect. Resour Conserv Recycl 64:30–38
Larivière I, Lafrance G (1999) Modelling the electricity consumption of cities: effect of urban density. Energy Econ 21:53–66
Marshall M (2013) Transforming earth. New Sci 220(2938):10–11
McCoy D, Lyons S (2014) Better information on residential energy use may deter investment in efficiency: case study of a smart metering trial. MPRA paper no. 55402. http://mpra.ub.uni-muenchen.de/55402/
Merritt AC, Effron DA, Monin B (2010) Moral self-licensing: when being good frees us to be bad. Soc Personal Psychol Compass 4(5):344–357
Moriarty P (2002) Environmental sustainability of large Australian cities. Urban Policy Res 20(3):233–244
Moriarty P, Honnery D (2011) Rise and fall of the carbon civilisation: resolving global environmental and resource problems. Springer, London
Moriarty P, Honnery D (2012) Chapter 51. Reducing personal mobility for climate change mitigation. In: Chen W-Y, Seiner JM, Suzuki T, Lackner M (eds) Handbook of climate change mitigation. Springer, New York
Moriarty P, Honnery D (2013) Greening passenger transport: a review. J Clean Prod 54:14–22
Moriarty P, Honnery D (2014) Future earth: declining energy use and economic output. Foresight 16(6):1–18
Newell RG, Pizer WA, Raimi D (2014) Carbon market lessons and global policy outlook. Science 343:1316–1317
Office for National Statistics (ONS) (UK) (2013) Household energy consumption in England and Wales, 2005–11. Accessed at http://www.ons.gov.uk/ons/dcp171766_321960.pdf
Office for National Statistics (ONS) (UK) (2014) Expenditure on household fuels 2002–2012. Accessed at http://www.ons.gov.uk/ons/rel/household-income/expenditure-on-household-fuels/2002—2012/sty-energy-expenditure.html
Pilkington B, Roach R, Perkins J (2011) Relative benefits of technology and occupant behaviour in moving towards a more energy efficient, sustainable housing paradigm. Energy Policy 39:4962–4970
Revesz RL, Howard PH, Arrow K et al (2014) Improve economic models of climate change. Nature 508:173–175
Royal Society (2009) Geoengineering the climate: science, governance and uncertainty. Royal Society, London
Starkey R (2012) Personal carbon trading: a critical survey. Part 1: equity. Ecol Econ 73:7–18
Statistics Bureau Japan (SBJ) (2014) Japan statistical yearbook 2014. Statistics Bureau, Tokyo. Available at http://www.stat.go.jp/english/data/nenkan/index.htm
Steemers K (2003) Energy and the city: density, buildings and transport. Energy Build 35:3–14
Steg L (2008) Promoting household energy conservation. Energy Policy 36:4449–4453
Stocker TF, Qin D, Plattner G-K et al (eds) (2013) Climate change 2013: the physical science basis. CUP, Cambridge, UK
Sunikka-Blank M, Galvin R (2012) Introducing the prebound effect: the gap between performance and actual energy consumption. Build Res Inf 40(3):260–273
Tabi A (2013) Does pro-environmental behaviour affect carbon emissions? Energy Policy 63:972–981
Tiefenbeck V, Staake T, Roth K et al (2013) For better or for worse? Empirical evidence of moral licensing in a behavioral energy conservation campaign. Energy Policy 57:160–171
Turaga RMR, Howarth RB, Borsuk ME (2010) Pro-environmental behavior: rational choice meets moral motivation. Ann N Y Acad Sci 1185:211–224
Van Vuuren DP, Edmonds J, Kainuma M et al (2011a) The representative concentration pathways: an overview. Clim Chang 109:5–31
Van Vuuren DP, Stehfest E, den Elzen MGJ et al (2011b) RCP2.6: exploring the possibility to keep global mean temperature increase below 2 °C. Clim Chang 109:95–116
Wikipedia (2014) Climate change opinion by country. Accessed at http://en.wikipedia.org/wiki/Climate_change_opinion_by_country
World Bank (2014) GNI per capita, Atlas method (current US$). Available at http://data.worldbank.org/indicator/NY.GNP.PCAP.CD
Yang J, Wang Z, Kaloush KE (2013) Unintended consequences: a research synthesis examining the use of reflective pavements to mitigate the urban heat island effect. Arizona State University National Center of Excellence for SMART Innovations. Accessed at http://www.asphaltroads.org/assets/_control/content/files/unintended-consequences-1013.pdf

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_72-1 # Springer Science+Business Media New York 2015

Geoengineering for Climate Stabilization Maximilian Lackner* Institute of Chemical Engineering, Vienna University of Technology, Vienna, Austria

Abstract Engineering the climate by means of carbon dioxide removal (CDR), Earth radiation management (ERM), and/or solar radiation management (SRM) approaches has recaptured the attention of scientists, policy makers, and the public. Climate engineering is being assessed as a set of tools to deliberately, and on a large scale, moderate or retard global warming. Several concepts have been proposed, such as injecting aerosol-forming SO2 into the stratosphere, placing huge objects in orbit to partly shade Earth from incoming radiation, or fertilizing the ocean with iron to increase algae growth and create carbon sinks. Such concepts are highly speculative, and irrespective of whether they would work, they bear huge risks, from adversely affecting the complex climate system on a regional or global scale to potentially triggering droughts, famine, or wars. More research is needed to better understand promising concepts and to work them out in depth, so that options are available should they become necessary in the future, when climate change mitigation and adaptation measures do not suffice and fast action becomes imperative. Apart from the technological hurdles, many of which are far beyond today’s engineering capabilities, huge social, moral, and political issues would have to be overcome. The purpose of this chapter is to highlight a few common concepts of CDR, ERM, and SRM for climate engineering to mitigate climate change.

Keywords Climate engineering; Geoengineering; Solar radiation management (SRM); Carbon dioxide removal (CDR); Earth radiation management (ERM); Meteorological reactor; Stratospheric aerosols; Ocean fertilization; Biochar; Dyson dots; Enhanced weathering

Introduction Climate engineering (also dubbed geo-engineering, geoengineering) is defined as “the deliberate large-scale intervention in the Earth’s climate system, in order to moderate global warming” (Shepherd 2009). Another, more positive term found in the literature is “climate remediation” or “climate intervention.” It can be considered a variant of macroengineering (the implementation of extremely large-scale design projects such as the Panama Canal) and similar in type to terraforming (planetary engineering, i.e., altering the environment of an extraterrestrial world). The expression is not to be confused with geological engineering (likewise termed geoengineering or geotechnical engineering, which is concerned with the design and construction of earthworks, including excavations, hydraulic fracturing (fracking), drilling, and underground infrastructure). Climate engineering can be seen as the most desperate, bizarre climate change mitigation measure. Yet, due to slow progress with conventional and incremental measures, it has recaptured widespread attention
among scientists, politicians, and the public. Climate engineering or “hacking the planet” (Kintisch 2010) is hyped as a “quick fix” and the “only solution” on the one hand, and demonized and rejected as a wacky idea, a mere gamble, impossible, or outright dangerous on the other. Some see it as a metaphoric “Faustian bargain” or man’s attempt to “play God.” Finally, one needs to acknowledge that climate engineering concepts mostly “treat the symptoms rather than cure the illness” of climate change. It is not easy to settle on a position toward climate engineering, and according to Heyward and Rayner (2013), some “scientists involved in geoengineering discourse convey mixed messages about the need for technocratic management of the anthropocene at the same time as expressing strong commitments to the importance of public participation in decision making about geoengineering.” The Intergovernmental Panel on Climate Change (IPCC) states that every option has to be considered, yet it expresses a critical attitude toward climate engineering due to the inherent, unknown risks and assesses it in its 2007 report as “largely speculative and unproven and with the risk of unknown side-effects.” It was around the year 2008 (Ming et al. 2014) to 2009 that a critical discourse on geoengineering started to emerge, mainly in American magazines (Biello 2009; Kunzig 2008) and German newspapers (Anshelm and Hansson 2014). Kennedy et al. (2013) write that “No study of coping with climate change is complete without considering geoengineering.” Social science teaches that transformation dynamics evolve from hope-inspired alternative choices rather than fear-driven technical constraints (Stirling 2014). Disappointment with the commitment to, and implementation of, climate change mitigation measures over recent years, together with continued GHG emissions, has left many scientists with a certain despair and an inclination toward the options offered by climate engineering. Climate engineering can be considered a complementary approach to conventional measures: preserving the climate (a quick fix) while CO2 is gradually brought under control by natural and/or artificial processes. In this scenario, climate engineering would “buy time” for mankind and the globe. The major issue, even with reversible climate engineering actions, is that the climate system is very complex. Identifying unintended consequences is not a trivial – if at all possible – task. Such consequences could be most severe and irreversible, like droughts or wars. In this context, it is worthwhile to think about the theory of chaos, which is rooted in the pioneering work of MIT meteorologist and mathematician Edward N. Lorenz (1963). Moreover, a slight drifting of the continents or a minor shifting of ocean currents may bring ice to one land and desert sands to another; see Lorenz (1972).
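
The point about chaos can be made concrete with the Lorenz (1963) system itself. The short sketch below integrates the standard Lorenz equations for two nearly identical starting points and prints how quickly they diverge; the parameter values (sigma = 10, rho = 28, beta = 8/3) are the classic ones, while the step size, run length, and starting points are arbitrary illustrative choices.

    # Sensitive dependence on initial conditions in the Lorenz (1963) system.
    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return (x + dx * dt, y + dy * dt, z + dz * dt)   # simple Euler step, adequate for illustration

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)    # differs by one part in a million
    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 0:
            gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
            print(f"t = {step * 0.01:4.1f}: separation = {gap:.6f}")
    # The tiny initial difference grows by orders of magnitude within a few time units, which is
    # the practical reason why unintended consequences of large interventions are hard to foresee.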

Safe Limits
The concept of Earth as a self-regulating system was developed in the late 1960s by J. E. Lovelock and became popular under the name “Gaia hypothesis” and the “Daisyworld” model (a parable on the biological homeostasis of the global environment: “Daisyworld” contains white and black flowers. When temperatures rise, more white daisies grow, increasing reflection. Sinking temperatures are counteracted by a growth of black daisies, which absorb more sunlight. Hence the balance of white to black daisies controls the temperature and stabilizes it. The simplistic Daisyworld model intuitively describes the coupling between climate and the biosphere). Lovelock’s concept is discussed controversially (Weaver and Dyke 2012; Boston 2008). Nature can certainly buffer anthropogenic impacts to some extent, but not endlessly, and climate change is testimony to this finite buffering capacity. In their seminal paper “A safe operating space for humanity” (compare The Limits to Growth work by the Club of Rome in 1972), Rockström et al. write that “Although Earth has undergone many periods of significant environmental change, the planet’s environment has been unusually stable for the past 10,000 years. This period of stability — known to geologists as the Holocene — has seen human civilizations arise, develop and thrive” (Rockström et al. 2009). They define nine interlinked planetary boundaries, three of which have already been overstepped. For instance, the estimated safe threshold identified for atmospheric CO2 is 350 ppm, or a total increased warming of 1 W/m2 (current warming is approx. 1.9 W/m2 of radiative forcing from 400 ppm of CO2 (Butler and Montzka 2013), not considering the additional radiative forcing by other greenhouse gases such as CH4).
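To relate these concentration and forcing figures, the widely used simplified logarithmic expression for CO2 radiative forcing can be evaluated; a minimal Python sketch, noting that the expression and the preindustrial reference value of 278 ppm are standard assumptions from the climate literature, not values given in this chapter:

import math

def co2_radiative_forcing(c_ppm, c0_ppm=278.0):
    # Simplified logarithmic expression for CO2 radiative forcing in W/m2,
    # measured relative to the preindustrial concentration c0_ppm.
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_radiative_forcing(400.0), 2))  # ~1.95, close to the approx. 1.9 W/m2 quoted above
print(round(co2_radiative_forcing(350.0), 2))  # ~1.23, of the same order as the 1 W/m2 boundary

Both results are consistent in magnitude with the planetary-boundary figures cited above, which is all this rough check is meant to show.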

Climate Engineering Approaches
Climate engineering is still in its infancy, at a theoretical stage, where ideas are being generated, discussed, and elaborated. The Secretariat of the Convention on Biological Diversity concluded that “There is no single geoengineering approach that currently meets all three basic criteria for effectiveness, safety and affordability. Different techniques are at different stages of development, mostly theoretical, and many are of doubtful effectiveness” (Secretariat of the Convention on Biological Diversity 2012). Global climate engineering is untested and mostly untestable (MacMynowski et al. 2011). Its roots go back to 1965, when advisors to US President Lyndon B. Johnson suggested spreading reflective particles over 13 million km2 of ocean in order to reflect an extra 1 % of sunlight away from Earth (Kintisch 2010). This was one of the first high-level acknowledgements of climate change. Interestingly, no suggestions to cut down CO2 emissions were reported to have been made. The president did not follow these early geoengineering suggestions. Even prior to that, in 1955, John von Neumann foresaw “forms of climatic warfare as yet unimagined” in Fortune magazine (von Neumann 1955). In 1974, the Russian researcher Mikhail Budyko suggested that cooling down the planet could be achieved by burning sulfur in the stratosphere, which would create a haze from the resulting aerosols (higher albedo) (Teller et al. 1997). This and other concepts will be touched upon below. Space-based geoengineering concepts build upon Tsiolkovsky’s and Tsander’s 1920s idea of utilizing mirrors for space propulsion (Kennedy et al. 2013). As these examples show, ideas to engineer the climate came up quite early.

Small-scale weather modification can already be achieved today, e.g., by cloud seeding to induce rainfall. The historical Project “Stormfury” (1962–1983) attempted to weaken tropical cyclones with silver iodide (Willoughby et al. 1985). For a brief review of “rainmaking attempts” and “weather warfare,” which are outside the scope of this chapter, see Chossudovsky (2007) and Climate Modification Schemes, American Institute of Physics (AIP) (2011). Weather modification has been restricted by the international community; its use during war, for instance, is prohibited by the 1977 UN Environmental Modification Convention. Another regulation in this respect is the London Convention (1972) and its 1996 Protocol, which are global agreements regulating the dumping of wastes at sea. Article 6 prohibits exports of wastes for dumping in the marine environment, which includes, e.g., CO2 in CCS (carbon capture and storage) schemes (Dixon et al. 2014). Examples where man has modified local climate (impacts) include artificial snow in skiing resorts or irrigation for crop yield amelioration. Previous environmental interventions by man have sometimes brought about unwanted – and unexpected – effects, including in the recent past, e.g., the straightening of riverbeds leading to local floods or the creation of urban heat islands. Joe Romm, founding editor of the blog Climate Progress, has likened “geo-engineering to a dangerous course of chemotherapy and radiation to treat a condition curable through diet and exercise — or, in this case, emissions reduction” (McGrath 2014).
Al Gore, former vice president of the USA, was quoted as calling climate engineering “utterly mad and delusional in the extreme.” He said that searches for an instant solution were born out of desperation, were misguided, and could lead to an even bigger catastrophe (Goldenberg 2014). “The idea that we can put a different form of pollution into the atmosphere to cancel out the effects of global warming pollution is utterly insane” (Goldenberg 2014).


In fact, the idea of “engineering” the Earth’s climate is a shocking one. There is as yet little information available, and “technically feasible” concepts are totally vague on costs, effectiveness, reversibility, risks, and side effects. However, serious scientists have started to investigate options for climate engineering more deeply, since swift remedial action might be needed once the Earth’s climate system reaches a “tipping point” (positive feedback, thermal runaway; e.g., thawing permafrost releases CH4, which further increases temperatures). It seems necessary to study climate engineering in order to be prepared. There is also the threat of unilateral action by another country (Dean 2011), should a local benefit from such action be expected. Tipping point rhetoric is challenged in Heyward and Rayner (2013).

Climate engineering ideas and concepts fall into two broad groups: carbon dioxide removal (CDR) and solar radiation management (SRM). Several researchers distinguish Earth radiation management (ERM) from SRM, where ERM techniques focus on enhancing atmospheric convection (building thermal bridges) and increasing the outgoing (long wavelength) IR heat radiation, whereas the focus of SRM is on the (short wavelength) incoming radiation. The term ERM was introduced by David L. Mitchell (Mitchell et al. 2011), who includes CDR and cirrus cloud reduction in SRM (Mitchell and Finnegan 2009). CDR techniques are remediation, whereas SRM techniques are intervention. CDR techniques are generally not considered that controversial, and they do not seem to introduce global risks, as they work on the local scale. Costs and technical feasibility have been limiting CDR deployment, e.g., of reforestation or CCS. CDR attacks the root cause of climate change; however, its effects bring temperatures down only slowly. SRM targets an increase in the amount of solar energy radiated back into space, effectively dimming the Sun. The necessary albedo enhancement is envisioned for deserts, oceans, mountains, clouds, and also manmade objects like roofs or roads. Prominent concept examples include the deployment of giant orbiting sunshields in space, the emission of huge amounts of SO2 (Crutzen 2006) and particles into the stratosphere to mimic the action of volcanoes, increasing the Earth’s albedo by “painting” deserts white, spraying seawater into the atmosphere to produce and whiten clouds, redirecting ocean streams and changing their salinity (Could a massive dam 2010), or pumping seawater into pole regions to create ice. Such techniques bear the risk of upsetting the Earth’s natural rhythms. SRM approaches act quickly. However, they do not remove the root cause of climate change, mainly CO2 levels in the atmosphere, so other aspects like ocean acidification are not tackled. Raymond Pierrehumbert, professor in Geophysical Sciences at the University of Chicago, said “The term ‘solar radiation management’ is positively Orwellian. It’s a way to increase comfort levels with this crazy idea” (Rotman 2013). According to Shepherd (2009), CDR methods should be regarded as preferable to SRM methods. SRM methods are expected to be cheaper, though. The Royal Society wrote in a 2009 report (Shepherd 2009): “Solar Radiation Management methods could be used to augment conventional mitigation.
However, the large-scale adoption of Solar Radiation Management methods would create an artificial, approximate, and potentially delicate balance between increased greenhouse gas concentrations and reduced solar radiation, which would have to be maintained, potentially for many centuries. It is doubtful that such a balance would really be sustainable for such long periods of time, particularly if emissions of greenhouse gases were allowed to continue or even increase.” Although technological hurdles exist, it is expected that devising working technologies (i.e., installations that cool the atmosphere) is easier than understanding their effects or how governance (Shepherd 2009) should be applied. The focus of this chapter lies on SRM, which directly modifies the Earth’s radiation balance; compare Fig. 1. It also covers CDR, which influences the global carbon cycle (see Fig. 2), and ERM, and it touches upon governance and other related aspects of climate engineering.


Fig. 1 Schematic showing the global average energy budget of the Earth’s atmosphere. Yellow indicates solar radiation; red indicates heat radiation; and green indicates transfer of heat by evaporation/condensation of water vapor and other surface processes. The width of the arrow indicates the magnitude of the flux of radiation and the numbers indicate annual average values. At the top of the atmosphere, the net absorbed solar radiation is balanced by the heat emitted to space (Source: Shepherd 2009)

Fig. 2 Simplified representation of the global carbon cycle. The values inside the boxes are standing stocks (in Pg C); the arrows represent annual fluxes (Pg C/y). The black arrows and numbers show the preindustrial values of standing stocks and fluxes; the red arrows and numbers indicate the changes due to anthropogenic activity (Source: Cole 2013)


Radiation Balance
Energy on Earth mainly comes from the Sun. The solar constant is approx. 1,361 W/m2, which translates into a power of 1.730 × 10¹⁷ W for the entire Earth. The average incoming solar radiation is approx. ¼ of the solar constant (342 W/m2). The radiation balance of the Earth is shown in Fig. 1 in a simplified version. Climate engineering aims at modifying this radiation balance to achieve a lower net heating effect. In climate science, radiative forcing or climate forcing is defined as the difference between the insolation (sunlight) absorbed by the Earth and the energy radiated back to space. Currently, it amounts to 2.916 W/m2, which corresponds to 479 ppm CO2-eq; 1.88 W/m2 thereof is due to CO2 and 0.51 W/m2 due to CH4 (Butler and Montzka 2013).
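The numbers quoted in this paragraph follow from simple geometry: the Earth intercepts sunlight over its cross-sectional disk but spreads it over the full sphere, hence the factor of ¼. A minimal Python sketch, in which the Earth radius is an assumed standard value not taken from the text:

import math

S0 = 1361.0    # solar constant in W/m2 (as quoted above)
R_E = 6.371e6  # mean Earth radius in m (assumed standard value)

intercepted = S0 * math.pi * R_E**2  # sunlight intercepted by the Earth's cross-section
mean_insolation = S0 / 4.0           # averaged over the sphere (4*pi*R^2 vs. pi*R^2)

print(f"{intercepted:.3e} W")          # ~1.736e+17 W, close to the 1.730 x 10^17 W quoted above
print(f"{mean_insolation:.0f} W/m^2")  # ~340 W/m^2; the chapter's 342 W/m^2 corresponds to a slightly larger solar constant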

Global Carbon Cycle

Figure 2 shows the simplified global carbon cycle in Gt of carbon per year (1 Gt = 1 Pg = 10¹⁵ g). One can see that the ocean is the largest sink. The various carbon sinks present opportunities for geoengineering. Subsets of special techniques are biogeoengineering and Arctic geoengineering. In biogeoengineering, plants or other living organisms are used or modified to beneficially influence the climate on Earth, e.g., by creating carbon sinks. An example is iron fertilization of the oceans: iron is a growth-limiting factor, so fertilization would be expected to produce more algae, which take up CO2 like land-based biomass. “Global dimming” is another aspect that could be exploited for climate engineering. Monoterpenes emitted by boreal forests (Rinnan et al. 2011; Aaltonen et al. 2011) were found to contribute to global dimming (cooling); together with the forests’ role as a CO2 sink, this makes tree planting a working biogeoengineering approach. Global dimming, generally, is caused by an increase in particulates such as sulfate aerosols in the atmosphere due to human action. Anthropogenic global dimming has interfered with the hydrological cycle by reducing evaporation and so may have reduced rainfall in some areas. It also creates a cooling effect that may have partially counteracted the effect of greenhouse gases on global warming. With sulfur levels in fuels being further reduced, e.g., for ships, the global warming contribution of combustion emissions will increase in the future. Arctic geoengineering focuses geographically on the Arctic, which plays a key role in maintaining the current climate due to its albedo and stored methane. The Arctic ice is disappearing quickly, though, and concepts have been envisioned to support ice buildup.
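A rough way to connect the concentration figures used elsewhere in this chapter (ppm of CO2) with the carbon stocks of Fig. 2 (Pg C) is the ppm-to-GtC conversion; the atmospheric mass and molar masses below are assumed standard values, not data from this chapter:

M_ATM = 5.15e18  # total mass of the atmosphere in kg (assumed standard value)
M_AIR = 0.02896  # mean molar mass of dry air in kg/mol (assumed)
M_C = 0.01201    # molar mass of carbon in kg/mol

moles_air = M_ATM / M_AIR
gtc_per_ppm = moles_air * 1e-6 * M_C / 1e12  # kg of carbon per ppm of CO2, converted to Gt (1 Gt = 1e12 kg)

print(round(gtc_per_ppm, 2))     # ~2.14 GtC per ppm of CO2
print(round(400 * gtc_per_ppm))  # ~854 GtC in the atmosphere at 400 ppm, the same order as the atmospheric stock in Fig. 2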

Impacts of Climate Engineering
The targeted impact of climate engineering is to bring down global air and surface temperatures. Undesired side effects might also occur, though, particularly in SRM schemes. Several researchers have run computer models to investigate the effect of blocking part of the solar radiation. Shading the Sun would, according to the models, reduce global temperatures, but also lead to profound changes in precipitation patterns, including disrupting the Indian Monsoon (Shepherd 2009). Anthropogenic SO2 in the stratosphere at a level necessary to counteract the radiative forcing of human CO2 and CH4 could cut rainfall in the tropics by 30 % (Ferraro et al. 2014). Also, it would lead to acid rain. There is further concern that SO2 in the stratosphere can harm the ozone layer; see also section “Stratospheric Sulfate Aerosols.” Evidence that such action would in fact result in a net cooling was provided by the eruption of the volcano Mt. Pinatubo in the Philippines in June 1991. It resulted in a decrease of about 0.5 °C in the Earth’s surface temperature, due to the effect of sulfate aerosol-induced albedo enhancement. However, already by the year 1995, the effect had vanished, and the temperature had returned to its former value (Gomes and de Araújo 2011). Note: Another volcanic event with a transient, global impact on the climate was the 1815 eruption of Mount Tambora in Indonesia, which led to a “year without summer” and famine due to reduced crop yields (Stilgoe et al. 2013a).

Sticking with this geoengineering example, potential side effects of SO2 injected into the stratosphere by, e.g., balloons, artillery, or jet planes, are:

• CO2 emissions from the missions
• Litter, e.g., from returning balloon shells
• Noise, e.g., from the artillery
• Depletion of ozone
• Regional droughts, e.g., in Africa and Asia from weaker monsoon activity
• Impact on cloud formation, particularly cirrus clouds, with unpredictable effects
• Acidic rain, leading to further ocean acidification, and other effects on the ecosystem
• Whitening of the sky due to aerosols, more diffuse radiation
• Less yield from solar energy collectors, impacting renewable energy production
• Temperature changes in the stratosphere, influencing atmospheric circulations in the troposphere with unknown effects

The Geoengineering Model Intercomparison Project (GeoMIP), led by Ben Kravitz, assesses the projected impacts of geoengineering with different climate models, focusing on SRM (http://climate.envsci.rutgers.edu/GeoMIP/publications.html). In 2013, 12 climate models simulating quadrupled atmospheric carbon dioxide levels and a corresponding reduction in solar radiation were compared (Kravitz 2013). In Fig. 3, an overview by the Convention on Biological Diversity (http://www.cbd.int/convention/) shows which intended and unintended effects might result from geoengineering. It is expected that both SRM and CDR would affect biodiversity and ecosystems, which in turn have a significant impact on human well-being. As stated above, quantifying the intended consequences, and even more so identifying the unintended consequences, of SRM (and, to a lesser extent, of ERM and CDR) techniques is difficult. On the benefits, risks, and costs of stratospheric geoengineering, see, e.g., Robock et al. (2009).

Legal, Moral, and Social Issues
“Whose hand will be on the planetary thermostat?” (Robock 2014). Action by one nation would impact the climate globally, but who is entitled to enact and control climate engineering? Would the target of climate engineering be to reduce future global warming, i.e., to maintain current temperatures; to limit global warming to, e.g., 2 K; or to bring temperatures back to preindustrial levels? Who would set the target? These questions cannot be answered at this point in time, as outlined in this section.

Legal Issues
Signed by 150 government leaders at the 1992 Rio Earth Summit, the Convention on Biological Diversity is dedicated to promoting sustainable development. Conceived as a practical tool for translating the principles of Agenda 21 (a voluntarily implemented action plan of the United Nations with regard to sustainable development), it states “that no climate-related geo-engineering activities that may affect biodiversity take place, until there is an adequate scientific basis on which to justify such activities and appropriate consideration of the associated risks for the environment and biodiversity and associated social, economic and cultural impacts, with the exception of small-scale scientific research studies” (http://www.cbd.int/convention/).


Fig. 3 Conceptual overview of how greenhouse gas emission reductions and the two main groups of geoengineering techniques may affect the climate system, ocean acidification, biodiversity, ecosystem services, and human well-being (Numbers refer to the chapters in the cited source, from which reproduction with permission was made (Secretariat of the Convention on Biological Diversity 2012))

Thereby, private or public experimentation and adventurism are avoided, yet research is possible. R&D in climate engineering is justified so that man understands his options once an environmental tipping point has been surpassed (contingency planning, to have “something on the shelves” when needed). Research priorities in this respect are worked out in Shepherd (2009).

Moral and Social Issues
While anthropogenic greenhouse gas emissions are an unwanted side effect of human activity, climate engineering constitutes a large-scale, intentional effort to alter the climate. Responsibilities and global political governance are not clear. It is conceivable that different governments have different targets for global temperatures; some areas of the world would show higher crop yields in an elevated temperature scenario, for instance. Actions by one country to alter the climate, motivated by expected local benefits, might therefore result in war. Multilateral commitments and agreements over time periods of several hundred years would be necessary, as this is how long, e.g., SO2 from climate engineering would have to be kept in the stratosphere in a delicate balance with the anthropogenic CO2 emissions it is offsetting; imperative controls over CO2 levels would be required at the same time. The governance of emerging science and innovation is discussed in Stilgoe et al. (2013b), citing the cancellation of the geoengineering project “SPICE” (see below) as an example. For public perception of geoengineering, see Corner et al. (2013) and Sikka (2012). Governance principles concerning climate engineering were also elaborated at the 2010 Asilomar International Conference on Climate Intervention Technologies (http://climate.org/resources/climate-archives/conferences/asilomar/report.html).


Preliminary Climate Engineering Field Experiments
Climate engineering operates on a global scale, and documented field trials to date are very limited. Some concepts can hardly be tested at all. One of the largest experiments, known as LOHAFEX, was an Indo-German ocean fertilization experiment in 2009, in which six tonnes of iron as iron sulfate solution were spread over an area of 300 km2 (Ebersbach et al. 2014) in the South Atlantic. It was expected to trigger an algal bloom, resulting in CO2 uptake, with some of the algae ending up on the ocean bed as a carbon sink. A much disputed, similar experiment was carried out in July 2012 by entrepreneur Russ George, who put approx. 100 t of iron sulfate into the Pacific Ocean several hundred miles west of the islands of Haida Gwaii/Canada. The intention was to increase the production rate of phytoplankton for salmon fishing (Sweeney 2014).

In 2005, a pilot project was carried out in Switzerland to cover a glacier with a reflective foil. On the Gurschen glacier, it was found that the blanket reduced the melting by 80 % (Pacella 2007). More trials on an area of more than 28,000 m2 were done on the Vorab glacier (Pacella 2007).

Painting the Andes: In 2009, the World Bank awarded seed grants to 26 innovative climate adaptation projects, selected from 1,700 proposals (World Bank). Among them was an idea by Peruvian inventor Eduardo Gold to whiten the Chalon Sombrero peak in the Andes (Collyns 2010). This pilot project (see Fig. 4) has received positive media attention.

In the UK SPICE project (Stratospheric Particle Injection for Climate Engineering, 2015), a trial balloon flight was planned; see Fig. 5. The idea was to send a balloon 1 km into the sky and to eject water droplets. These droplets should create clouds, increasing the albedo. The experiment had to be canceled due to opposition from environmental groups (Shukman 2014; Zhang et al. 2015).

Tree planting (reforestation, afforestation) (Zomer et al. 2008; Schirmer and Bull 2014; Trabucco et al. 2008) and peatland restoration (Bonn et al. 2014) activities are being considered in several parts of the world. According to the IPCC, reforestation refers to the establishment of forest on land that had recent tree cover, whereas afforestation refers to land that has been without forest for longer time periods (IPCC 2015).

Cool roof experiments: In cities, the temperature is typically 1–3 °C higher than in the surrounding countryside, due to, e.g., heat-absorbing infrastructure such as dark asphalt parking lots and dark roofs (Oke 1997).

Fig. 4 Whitening the mountain Chalon Sombrero in Peru in a geoengineering pilot project (Source: Collyns 2010)


Fig. 5 Concept of the SPICE experiment (Source: Vidal 2011)

Fig. 6 Image of a 700 × 900 m2 wheat field in Western Australia in which a 66 m diameter evaporation pond was created (Source: Edmonds and Smith 2011)

By increasing the reflectivity, more radiation is sent back into space, and energy costs for air conditioning can be reduced. Pilot projects are, e.g., the “White Roof Project” (http://www.whiteroofproject.org/) and New York’s “NYC CoolRoofs” program (http://www.nyc.gov/html/coolroofs/html/home/home.shtml).

Keeping groundwater levels and salinity low: In Australia, rising levels of salty groundwater pose a problem for farmers. By pumping that groundwater into shallow evaporation ponds, crops are protected, with the positive side benefit of an increased albedo (Edmonds and Smith 2011); see Fig. 6 (note that “geoengineering” is a side effect here). Edmonds and Smith (2011) also describe reflective covers on water bodies to prevent evaporation losses. According to Ming et al. (2014), 40–50 % of the water stored in small farm dams in “hot” countries may be lost to evaporation. Such covers, as a side effect, increase the albedo and thereby contribute to climate change mitigation; compare Fig. 7.


Fig. 7 Reflective evaporation covers on a mine reservoir at Parkes in Australia (Edmonds and Smith 2011)

Proposed Strategies for Climate Engineering
Potential approaches are surface based (e.g., albedo modification of land or ocean), troposphere based (e.g., cloud whitening), stratosphere based (e.g., injection of SO2 or Al2O3), and space based (e.g., gigantic space-based mirrors, lenses, or sunshades). Below, several selected concepts are briefly introduced.

Carbon Dioxide Removal (CDR)

As mentioned above, the first set of concepts can be summarized as CO2 removal (CDR) schemes, visually summarized in Fig. 8. Carbon capture and storage (CCS) and carbon sequestration projects are out of the scope of this chapter; see elsewhere in this handbook and the DOE/NETL CO2 capture and storage roadmap (2010). Other CDR concepts include (Shepherd 2009):

• Use of biomass as a carbon sink.
• Protection of and (re)creation of terrestrial carbon sinks such as grasslands.
• Enhanced weathering to remove CO2 from the atmosphere.
• Direct capture of CO2 from the ambient air (concepts to wash CO2 out of the atmosphere include “artificial trees” and scrubbing towers), known as industrial air scrubbing (IAS) or direct air capture (DAC) (de_Richter et al. 2013). Costs are expected to be prohibitively high (House et al. 2011).
• Enhancement of oceanic uptake of CO2, for example, by fertilization of the oceans with naturally scarce nutrients such as iron or by changing ocean currents.
• Biochar (when biomass is pyrolyzed, char (biochar) remains; it can be mixed with soil to create terra preta, a carbon sink (Hyland and Sarmah 2014)).

There are numerous other concepts, such as removing (dark) vegetation from mountain tops or changing the composition of ship and aircraft exhaust. The interested reader will find a collection of ideas in various internet sources such as Wikipedia. Out of the concepts presented above from Shepherd (2009), two are described briefly as examples.


Fig. 8 Depiction of some popular CDR concepts. See text for details (Source: Climate 2010)

Enhanced Weathering
In enhanced weathering, inorganic matter is used to take up CO2, a process that occurs in nature, but slowly. For instance, if carbonates are formed, CO2 is stored long term. This chemical approach to geoengineering involves land- or ocean-based techniques. Examples of land-based enhanced weathering techniques are the in situ carbonation of silicates such as ultramafic rocks (ultrabasic rocks, which are igneous and metaigneous rocks with a very low silica content and a high magnesium and iron content). Ocean-based techniques involve alkalinity enhancement of the sea, e.g., by grinding, dispersing, and dissolving olivine, limestone, silicates, or calcium hydroxide against ocean acidification and for CO2 sequestration. Enhanced weathering is considered one of the most cost-effective options. CarbFix (2015) is a feasibility project on enhanced weathering in Iceland. For details on mineral carbonation/mineral sequestration, see, e.g., Herzog (2002) and Goldberg et al. (1998).

Bioenergy with Carbon Sequestration (BECS), Biochar, and Wood Burial
BECS is a hybrid approach in which bioenergy crops are grown and used as fuel, and the CO2 emissions are captured and stored (see CCS elsewhere in this handbook). Biochar and BECS could together contribute a carbon sink of 14 GtC/year by 2100 (Edenhofer et al. 2012). The concept of burying wood in anoxic environments (e.g., deep in the soil) is that decomposition would be much slower, providing a long-term carbon sink; compare Fig. 9. According to Zeng (2008), the long-term carbon sequestration potential of wood burial is 10 ± 5 GtC per year, and currently about 65 GtC is available on the world’s forest floors in the form of coarse woody debris suitable for burial. The cost of wood burial is estimated to be lower than the typical cost of power plant CCS. Approx. 100 tC are bound as coarse wood carbon from a typical mid-latitude forest area of 1 km2 in 1 year (Zeng 2008). However, there is the potential for counterproductive emissions of methane from anaerobic decomposition of the buried wood. It is estimated that, by storing carbon in deep sediments, deep ocean sequestration can capture up to 15 % of the current global annual CO2 increase. It was hence suggested to dump crop residues in the deep ocean (Strand and Benford 2009).

Fig. 9 Schematic diagram of forest wood burial and storage (Source: Zeng 2008)
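A back-of-the-envelope Python sketch, using only the roughly 100 tC per km2 and year of coarse wood carbon quoted above, gives a feel for the land demand of wood burial; the 1 GtC/year sink used as a target is a hypothetical illustration, not a figure from Zeng (2008):

TC_PER_KM2_YR = 100.0    # coarse wood carbon bound per km2 of mid-latitude forest per year (quoted above)
TARGET_GTC_PER_YR = 1.0  # hypothetical target sink of 1 GtC per year

area_km2 = TARGET_GTC_PER_YR * 1e9 / TC_PER_KM2_YR  # 1 GtC = 1e9 tC
print(f"{area_km2:.1e} km^2")  # 1.0e+07 km^2 of forest would have to supply debris for burial each year

Roughly 10 million km2, on the order of a quarter of the world’s forest area, would thus have to be managed for a 1 GtC/year sink at this per-area rate, which illustrates why estimates of the achievable potential differ widely.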

Solar Radiation Management (SRM)

The second set of techniques for climate engineering is the SRM category. SRM stands for “solar radiation management” or “sunlight reflection methods”; compare Fig. 10. Four such SRM concepts are explained below.

Cloud Reflectivity Modification
This approach considers altering the reflectivity of clouds in two ways: thinning of cirrus clouds and brightening of (low) marine clouds. High, cold cirrus clouds let sunlight penetrate but capture infrared radiation; hence, thinning or removing cirrus would have a net cooling effect on Earth. By contrast, low, warm clouds (stratocumulus, which cover approx. 1/3 of the ocean’s surface) reflect sunlight efficiently. This “cloud whitening” or “marine cloud brightening” could be achieved with cloud condensation nuclei (CCN) such as fine seawater droplets. The effect is considered to be more pronounced over the sea than over land, as clouds over the landmass have more (natural and anthropogenic) CCN available. Proposed schemes include seawater sprays produced by unmanned ships, ocean foams (Evans et al. 2010) from air bubble bursting, ultrasonic excitation (Barreras et al. 2002), and electrostatic atomization.

Stratospheric Sulfate Aerosols
SO2 is known to cause global dimming, as it leads to aerosol formation, and the aerosols reflect sunlight. The mechanism is that SO2 is oxidized to sulfuric acid, which is hygroscopic, has a low vapor pressure, and hence forms aerosols (Robock 2014). It was suggested to inject sulfur into the stratosphere as SO2, sulfuric acid, or hydrogen sulfide by artillery, aircraft, or balloons (Rasch et al. 2008). According to estimates by the Council on Foreign Relations, “one kilogram of well placed sulfur in the stratosphere would roughly offset the warming effect of several hundred thousand kilograms of carbon dioxide” (Victor et al. 2009). This approach was estimated to be over 100 times cheaper than producing the same temperature change by reducing CO2 emissions (Keith et al. 2010). The injection would have to be maintained, as the sulfate aerosols have a comparatively short atmospheric lifetime. Other particles have also been considered, e.g., Al2O3.
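Taking the Council on Foreign Relations ratio quoted above at face value gives a feel for the sulfur quantities involved; a rough Python sketch in which the assumed annual CO2 emission figure is an illustrative assumption, not a value from this chapter:

KG_CO2_PER_KG_S = 3e5  # "several hundred thousand" kg of CO2 offset per kg of well-placed stratospheric sulfur (quoted estimate)
ANNUAL_CO2_KG = 3.6e13 # roughly 36 Gt CO2 of global annual emissions (assumed, not from the text)

sulfur_kg = ANNUAL_CO2_KG / KG_CO2_PER_KG_S
print(f"{sulfur_kg / 1e9:.2f} Mt of sulfur per year")  # ~0.12 Mt/yr to offset one year's emissions

Offsetting the warming effect of a single year of emissions would thus require on the order of 0.1 Mt of sulfur, the injection would have to be repeated continuously as the aerosols settle out, and offsetting the accumulated forcing of past emissions would require correspondingly larger amounts.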

Fig. 10 Depiction of solar radiation management: (1) space-based reflective mirrors, (2) stratospheric aerosol injection, (3) cloud brightening, (4) painting roofs white, (5) planting more reflective crops, and (6) covering desert surfaces with reflective material (Source: Climate 2010)

Space Lenses, Space Mirrors, and “Dyson Dots”
Space-based concepts aim at transforming the solar constant into a controlled solar variable (Kennedy et al. 2013). They envision large space-based objects, which might be manufactured on the moon, mining local materials, or using material from asteroids. Concepts of giant lenses (Early 1989), dust rings (Bewick et al. 2013), and sunshades (Kosugi 2010) that block part of the Sun’s incoming radiation using the effects of reflection, absorption, and diffraction have been worked out. A convex lens 1,000 km in diameter is considered sufficient, and in a Fresnel embodiment, it would only be a few millimeters thick (Early 1989). Shading the Sun by approx. 55,000 orbiting mirrors of 100 km2 each, made from wire mesh, or by trillions of smaller mirrors (comparable in size to a DVD) has been suggested (Ming et al. 2014); however, such concepts are widely viewed as unrealistic. Current engineering capabilities are far from being able to realize such science-fiction-like concepts, not to speak of the costs, which are estimated at a century’s worth of the gross domestic product of all nations combined (Ming et al. 2014). The “mirrors and smoke in space” concept was refined and coined “Dyson dots” (Kennedy et al. 2013). The concept is to place one or more large lightsail(s) in a radiation-levitated position sunward of the Sun-Earth Lagrange point 1 (L1, SEL1), approx. 1.5 million km from Earth, where the gravitational pull of the Sun and the Earth and the centrifugal force of the co-rotating orbit balance, so that an object stays on the Sun-Earth line. A 700,000 km2 parasol at L1 would reduce the insolation on Earth by at least 0.25 %. A photovoltaic power station on the sunny side of the parasol could “beam” energy to Earth via a maser (microwave laser) on the order of global demand, hence essentially funding the entire project. The “Dyson dot” concept is shown in Fig. 11.
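To put the quoted 0.25 % insolation reduction into the radiative-forcing terms used in section “Radiation Balance,” a rough Python estimate; the planetary albedo is an assumed round value, not a figure from this chapter:

MEAN_INSOLATION = 342.0  # global mean incoming solar radiation in W/m2 (see "Radiation Balance")
ALBEDO = 0.3             # planetary albedo (assumed round value)
REDUCTION = 0.0025       # 0.25 % insolation reduction by the parasol (quoted above)

delta_absorbed = MEAN_INSOLATION * (1.0 - ALBEDO) * REDUCTION
print(f"{delta_absorbed:.2f} W/m^2")  # ~0.60 W/m^2 less absorbed solar radiation

Under these assumptions, such a parasol would counteract only part of the approx. 1.9 W/m2 of CO2 forcing cited earlier.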


Fig. 11 Dyson dot concept with “self-funding” maser energy delivery to the Earth (Reproduced with permission from Kennedy et al. 2013)

The expression “Dyson dot” is based on the concept of a “Dyson sphere,” a hypothetical megastructure imagined by Freeman Dyson in 1960, who speculated in a science article entitled “Search for Artificial Stellar Sources of Infrared Radiation” that advanced extraterrestrial civilizations could have enclosed their star with a megastructure to maximize energy capture. A reduction of about 0.25 % in the Sun’s energy output is estimated for the period from the mid-seventeenth to the early eighteenth century dubbed the “sunspot cycle shutdown time,” “Maunder Minimum,” or “Little Ice Age,” so this order of magnitude is what space geoengineers are aiming at.

Dust Clouds
Clouds of extraterrestrial dust placed in the vicinity of the L1 point are an alternative concept to thin-film reflectors, aiming at significantly reducing the manufacturing effort. The material would be mined from captured asteroids and moved into place by solar collectors or mass drivers (Bewick et al. 2012); see Fig. 12. For details on such a dust concept, see, e.g., Bewick et al. (2012). Dust for sunlight blocking might also be mined on the moon.

Other Greenhouse Gas Remediation Ideas
There are many other geoengineering concepts besides those introduced above, some of which are mentioned here:

CFC Destruction by Lasers
Chlorofluorocarbons (CFCs) are persistent in the atmosphere and have a huge GWP, yet they are accessible via their photochemistry (Stix 1993). Extremely powerful lasers might be used to break up tropospheric CFCs.


Fig. 12 Impression of an L1 positioned dust cloud for space-based geoengineering (Source: Bewick et al. 2012)

Ocean Heat Transport
Ocean heat transport (downwelling of ocean currents) is outlined in Zhou and Flynn (2005). This concept aims at changing oceanic currents to transport heat energy to deeper regions of the ocean. Solar-driven heat pumps might also be used to this end.

Methane Remediation
Since methane is also a GHG of major concern, other geoengineering concepts target reducing CH4 emissions, e.g., by soil oxidation into CO2 (Tate 2015).

ERM and Energy Production
Earth radiation management (ERM) aims at increasing the long wavelength radiation sent into space, which today is being trapped by GHG. ERM can be combined with energy production in so-called meteorological reactors (Ming et al. 2014). The term “meteorological reactor” stands for a climate engineering installation that fulfills two purposes: reduction of radiative forcing and energy production. Possible embodiments are:

• Solar updraft tower
• Solar downdraft energy tower
• Atmospheric vortex engine
• Heat pipes
• Radiative cooling, emissive energy harvesters (EEH)

Figure 13 shows an overview of such ERM schemes. The “chimney effect” is used to create air motion, which can drive a generator. The hot air is moved into higher layers of the atmosphere, where it can radiate off heat energy. Figure 14 depicts two possible designs of emissive energy harvesters (EEH).


Fig. 13 Principal longwave radiation targets of meteorological reactors (Source: Ming et al. 2014)

Fig. 14 Two possible EEH designs. (a) In a thermal EEH, a heat engine operates between the ambient temperature and a radiatively cooled plate. (b) In an infrared rectenna EEH, the whole panel is at ambient temperature, but the circuit’s electrical noise is coupled to the cold radiation field via antennas (Source: Byrnes et al. 2014)

For details on “meteorological reactors” in ERM mode, see Ming et al. (2014) and http://www.solartower.org.uk/meteorological-reactors.php.

Climate Engineering in the Context of Climate Change Mitigation and Adaptation
Figure 15 illustrates the conceptual relationship of SRM and CDR with climate change adaptation and mitigation, in the context of the interdependent human and climatic systems. The Kaya identity (O’Mahony 2013) mentioned in the caption of Fig. 15 goes back to the Japanese scientist Yoichi Kaya and can be expressed mathematically as F = pop × (GDP/pop) × (E/GDP) × (F/E), with F being global anthropogenic CO2 emissions, pop the global population, GDP the world gross domestic product, and E the global primary energy consumption. Carbon emissions F can thus be estimated as the product of population (pop), economic output per capita (GDP/pop), energy intensity (E/GDP), and the carbon intensity of energy (F/E).
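A minimal Python sketch of the Kaya decomposition; the numerical inputs below are hypothetical placeholders for illustration only, not data from this chapter:

def kaya_emissions(pop, gdp_per_capita, energy_intensity, carbon_intensity):
    # Kaya identity: F = pop * (GDP/pop) * (E/GDP) * (F/E)
    return pop * gdp_per_capita * energy_intensity * carbon_intensity

# Hypothetical illustrative inputs (not data from this chapter):
pop = 7.3e9              # persons
gdp_per_capita = 1.0e4   # US$ per person per year
energy_intensity = 7.0   # MJ of primary energy per US$ of GDP
carbon_intensity = 0.07  # kg CO2 emitted per MJ of primary energy

f_kg = kaya_emissions(pop, gdp_per_capita, energy_intensity, carbon_intensity)
print(f"{f_kg / 1e12:.1f} Gt CO2 per year")  # ~35.8 Gt CO2/yr with these placeholder values

Each factor corresponds to one broad lever: population, affluence, the energy efficiency of the economy, and the decarbonization of the energy supply.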


Fig. 15 Illustration of mitigation, adaptation, solar radiation management (SRM), and carbon dioxide removal (CDR) methods in relation to the interconnected human, socioeconomic, and climatic systems and with respect to mitigation and adaptation. The top part of the figure represents the Kaya identity. REDD stands for Reducing Emissions from Deforestation and Forest Degradation (Source: Edenhofer et al. 2012)

Is It Geoengineering or Not?
The term geoengineering expresses, as stated initially, deliberate large-scale intervention in the Earth’s climate system. CDR methods with a local to regional and/or low global impact are hence not real geoengineering approaches. The delineation is not exactly clear-cut; an attempt at it was made by the 2011 IPCC Expert Meeting on Geoengineering; see Fig. 16. As Fig. 16 shows, ocean fertilization and ocean alkalinization are seen as geoengineering-type projects, as can be large-scale afforestation/reforestation.

Discussion
Having presented some geoengineering concepts, their targeted effectiveness and commercial viability have to be discussed. Geoengineering appraisals in their context frames were studied by Bellamy et al. (2012), where “climate emergency,” “insufficient mitigation,” and “climate change impacts” were the most frequently cited framings. The appraisals were found to be mostly expert-analytic, involving calculations/computer modeling, expert reviews and opinions, economic assessments, and MCA (multi-criteria analysis) (Bellamy et al. 2012). This study also investigated the frequency of different geoengineering proposals; see Fig. 17. Stratospheric aerosols and space reflectors were investigated most often, and there was a balance between solar- and carbon-based concepts. A qualitative ranking of storage potentials and local vs. global impact is shown in Fig. 18. As Fig. 18 shows, the concepts with a large estimated global potential are carbon sinks, with the ocean being particularly important. For these, transboundary issues arise.


Fig. 16 Scale and impact are important determinants of whether a particular CDR method and specific application should be considered as geoengineering or not. Note that the specific positioning of the different methods is only illustrative and does not constitute a consensus view of the experts participating in the 2011 IPCC Expert Meeting on Geoengineering that produced this chart (Source: Edenhofer et al. 2012)

Blue carbon is the carbon captured by the world’s oceans and coastal ecosystems (Blue Carbon Initiative 2015). An overall evaluation in terms of affordability and effectiveness, reproduced from Shepherd (2009), is shown in Fig. 19. The color of the bullets in Fig. 19 indicates the level of system safety (red = low; yellow = medium; green = high), whereas the size of the bullets relates to the timeliness of the techniques (large = quick; small = slow). One can see from Fig. 19 that urban surface albedo enhancements like “white roofs” are safe, but lack effectiveness technically and financially.


Fig. 17 Relative abundance of geoengineering concepts in the scientific literature. “Others” were cited no more than once (Source: Bellamy et al. 2012)

Fig. 18 The relative estimated total storage potential for emission reduction and sink creation projects at different scales (Source: Edenhofer et al. 2012)

Afforestation, also a safe technique, is affordable, but has a lower effectiveness potential than stratospheric aerosols, which are more risky, more costly, and take more time. Such comparison charts can help define research priorities. Results from another, similar study are depicted in Figs. 20 and 21. Lenton and Vaughan (2009) conclude that “only stratospheric aerosol injections, albedo enhancement of marine stratocumulus clouds, or sunshades in space have the potential to cool the climate back toward its pre-industrial state. Strong mitigation, combined with global-scale air capture and storage, afforestation, and bio-char production, i.e., enhanced CO2 sinks, might be able to bring CO2 back to its pre-industrial level by 2100, thus removing the need for other geoengineering.”


Fig. 19 Preliminary overall evaluation of the geoengineering techniques (Shepherd 2009)

Fig. 20 Schematic overview of the climate geoengineering proposals considered. Black arrowheads indicate shortwave radiation; white arrowheads indicate enhancement of natural flows of carbon; gray downward arrow indicates engineered flow of carbon; gray upward arrow indicates engineered flow of water; dotted vertical arrows illustrate sources of cloud condensation nuclei; and dashed boxes indicate carbon stores (Source: Lenton and Vaughan 2009)

A third study presented here (Goes et al. 2010) compared four scenarios: BAU (business as usual), CO2 abatement, intermittent geoengineering (deployed for the next 50 years), and continuous geoengineering from the present until 2150; see Figs. 22 and 23. The two geoengineering scenarios deploy stratospheric aerosol injection. CO2 emissions are assumed to increase equally in all scenarios except the abatement one. Two key observations from this study (Goes et al. 2010) are:


Fig. 21 Summary of estimates of the radiative forcing potential of different climate geoengineering options from Lenton and Vaughan (2009). The potential of longwave (CO2 removal) options is given on three different time horizons, assuming a baseline strong mitigation scenario. The rightward pointing arrows, which refer to mirrors in space, stratospheric aerosols, and air capture and storage on the year 3000 timescale, indicate that their potential could be greater than suggested by the diamonds (which in these cases represent a target radiative forcing to be counteracted: 3.71 W/m2 due to 2 × CO2 = 556 ppm for the shortwave options and 1.43 W/m2 due to 363 ppm CO2 in the year 3000 under a strong mitigation scenario) (Source: Lenton and Vaughan 2009)

Fig. 22 Radiative forcing (panel a), global mean atmospheric CO2 (panel b), global mean surface temperature change (panel c), and the rate of global mean surface temperature change (panel d) for BAU (circles), abatement (dashed line), intermittent geoengineering (crosses), and continuous geoengineering (solid line). Note that these results neglect potential economic damages due to aerosol geoengineering forcing. BAU business as usual, GWP gross world product (Source: Goes et al. 2010)

• Radiative forcing in the “intermittent geoengineering” scenario would reach the same levels as in the BAU scenario soon after the geoengineering was stopped.
• Compared to the BAU scenario, a temperature rise of up to 1.5 K per decade, as opposed to less than 0.5 K per decade, would result. Such a strong change might finally be even worse for flora and fauna – and for humans – than a steady increase.


Fig. 23 Economic damage of climate change (panel a), total costs, i.e., CO2 abatement costs and climate change damage costs (panel b), fraction of CO2 abatement (panel c), and per capita consumption (panel d) for BAU (circles), optimal abatement (black dashed line), intermittent geoengineering (crosses), and continuous geoengineering (solid line). Note that these results neglect potential economic damages due to aerosol geoengineering forcing. BAU business as usual, GWP gross world product (Source: Goes et al. 2010)

Figure 23 gives projections of the costs of the four scenarios. As one can deduce from Fig. 23, the damage and total costs of the BAU and intermittent geoengineering scenarios are highest, whereas the continuous geoengineering scenario presents itself as the economically most favorable one. As the authors conclude, aerosol geoengineering as a substitute for CO2 abatement can be an economically ineffective strategy, and failure to sustain the aerosol forcing can lead to huge and abrupt changes to the climate: “Substituting aerosol geoengineering for greenhouse gas emissions abatements constitutes a conscious risk transfer to future generations, in violation of principles of intergenerational justice which demands that present generations should not create benefits for themselves in exchange for burdens on future generations” (Goes et al. 2010).

Conclusions
As this brief, introductory chapter on geoengineering has shown, several concepts have been developed that at first sight look tempting as a way to “quickly fix global warming.” Ideas range from more tree planting to huge constructions in space; they include techniques to substantially alter the albedo of manmade objects, deserts, or mountains, and they consider injecting vast amounts of chemicals into the ocean and/or the stratosphere. At the present time, the consequences of such measures, and even the magnitude of their intended effect, are hard if not impossible to predict, possibly generating huge risks from irreversibly upsetting the complex climate system of our Earth for centuries, altering rainfall patterns, and provoking severe military conflicts, to name but a few possible side effects. Yet, climate engineering offers an option to deal with the impending aggravation of climate change, and once scientists know more about the various options, one or the other of them might in fact become a viable support in global climate change mitigation and adaptation measures to bring the anthropogenic impacts back under control. On the question of geoengineering ethics, Alan Robock concludes that “in light of continuing global warming and dangerous impacts on humanity, indoor geoengineering research is ethical and is needed to provide information to policymakers and society so that we can make informed decisions in the future to deal with climate change. This research needs to be not just on the technical aspects, such as climate change and impacts on agriculture and water resources, but also on historical precedents, governance, and equity issues. Outdoor geoengineering research, however, is not ethical unless subject to governance that protects society from potential environmental dangers… Perhaps, in the future the benefits of geoengineering will outweigh the risks, considering the risks of doing nothing. Only with geoengineering research will we be able to make those judgments” (Robock 2012). So to conclude, one can say that climate engineering is an interesting topic of research, and CDR techniques, which are less risky than SRM techniques, might complement conventional climate change mitigation actions. For approaches with global impact, clear governance rules need to be established and enforced.

Outlook
Research on geoengineering should be expanded, as recommended, e.g., by the UK Royal Society, the American Meteorological Society, the American Geophysical Union, the US Government Accountability Office, and prominent scientists (Robock 2014). Unrealistic and potentially dangerous concepts will be abandoned, and new, innovative ones will emerge, possibly providing new options for climate change mitigation and adaptation.

References
Aaltonen H, Pumpanen J, Pihlatie M, Hakola H, Hellén H, Kulmala L, Vesala T, Bäck J (2011) Boreal pine forest floor biogenic volatile organic compound emissions peak in early summer and autumn. Agr Forest Meteorol 151(6):682–691
Anshelm J, Hansson A (2014) Battling Promethean dreams and Trojan horses: revealing the critical discourses of geoengineering. Energy Res Soc Sci 2:135–144
Barreras F, Amaveda H, Lozano A (2002) Transient high frequency ultrasonic water atomization. Exp Fluids 33:405–413
Bellamy R, Chilvers J, Vaughan NE, Lenton TM (2012) Appraising geoengineering. Tyndall Centre for Climate Change Research, working paper 153, June 2012
Bewick R, Sanchez JP, McInnes CR (2012) The feasibility of using an L1 positioned dust cloud as a method of space-based geoengineering. Adv Space Res 49(7):1212–1228
Bewick R, Lücking C, Colombo C, Sanchez JP, McInnes CR (2013) Heliotropic dust rings for Earth climate engineering. Adv Space Res 51(7):1132–1144
Biello D (2009) World’s craziest geoengineering scheme. http://www.scientificamerican.com/podcast/episode/worlds-craziest-geoengineering-sche-09-09-03/
Blue Carbon Initiative (2015) http://thebluecarboninitiative.org/
Bonn A, Reed MS, Evans CD, Joosten H, Bain C, Farmer J, Emmer I, Couwenberg J, Moxey A, Artz R, Tanneberger F, von Unger M, Smyth M-A, Birnie D (2014) Investing in nature: developing ecosystem service markets for peatland restoration. Ecosyst Serv 9:54–65
Boston PJ (2008) Gaia hypothesis. In: Reference module in earth systems and environmental sciences, from encyclopedia of ecology. Elsevier Science Ltd, Amsterdam, pp 1727–1731
Butler JH, Montzka SA (2013) The NOAA annual greenhouse gas index (AGGI). NOAA/ESRL Global Monitoring Division. http://www.esrl.noaa.gov/gmd/aggi/aggi.html (2015)


Burke J (2010) Could a massive dam between Alaska and Russia save the Arctic? Huffington Post. 27 Nov 2010. Retrieved 10 Mar 2011
Byrnes SJ, Blanchard R, Capasso F (2014) Harvesting renewable energy from Earth’s mid-infrared emissions. Proc Natl Acad Sci U S A 111(11):3927–3932
CarbFix (2015) https://www.or.is/en/projects/carbfix
Chossudovsky M (2007, Dec 7) Weather warfare: beware the US Military’s experiments with climatic warfare. The Ecologist, Global Research. http://www.globalresearch.ca/weather-warfare-beware-theus-military-s-experiments-with-climatic-warfare/7561
Climate Modification Schemes, American Institute of Physics (AIP) (2011) http://www.aip.org/history/climate/RainMake.htm
Cole JJ (2013) Chapter 6 – The carbon cycle: with a brief introduction to global biogeochemistry. In: Fundamentals of ecosystem science. pp 109–135
Collyns D (2010) Can painting a mountain restore a glacier? BBC. http://www.bbc.co.uk/news/10333304
Corner A, Parkhill K, Pidgeon N, Vaughan NE (2013) Messing with nature? Exploring public perceptions of geoengineering in the UK. Glob Environ Change 23(5):938–947
Crutzen PJ (2006) Albedo enhancement by stratospheric sulfur injections: a contribution to resolve a policy dilemma? Clim Change 77:211–220
de_Richter RK, Ming T, Caillol S (2013) Fighting global warming by photocatalytic reduction of CO2 using giant photocatalytic reactors. Renew Sustain Energy Rev 19:82–106
Dean C (2011) Group urges research into aggressive efforts to fight climate change. http://www.nytimes.com/2011/10/04/science/earth/04climate.html?_r=0
Dixon T, Garrett J, Kleverlaan E (2014) GHGT-12, update on the London protocol – developments on transboundary CCS and on geoengineering. Energy Procedia 63:6623–6628
DOE/NETL carbon dioxide capture and storage RD&D roadmap, Dec 2010. http://www.netl.doe.gov/File%20Library/Research/Carbon%20Seq/Reference%20Shelf/CCSRoadmap.pdf
Early JT (1989) Space-based solar shield to offset greenhouse effect. J Br Interplanet Soc 42:567–569. http://www.see.ed.ac.uk/~shs/Climate%20change/Data%20sources/Early%20earth%20shield1989.pdf
Ebersbach F, Assmy P, Martin P, Schulz I, Wolzenburg S, Nöthig E-M (2014) Particle flux characterisation and sedimentation patterns of protist plankton during the iron fertilisation experiment LOHAFEX in the Southern Ocean. Deep Sea Res I Oceanogr Res Pap 89:94–103
Edenhofer O, Pichs-Madruga R, Sokona Y, Field C, Barros V, Stocker TF, Dahe Q, Minx J, Mach K, Plattner G-K, Schlömer S, Hansen G, Mastrandrea M (2012) IPCC expert meeting on geoengineering, meeting report, Lima, 20–22 June 2011, ISBN 978-92-9169-136-4
Edmonds I, Smith G (2011) Surface reflectance and conversion efficiency dependence of technologies for mitigating global warming. Renew Energy 36:1343–1351
Evans J, Stride E, Edirisinghe M, Andrews D, Simons R (2010) Can oceanic foams limit global warming? Climate Res 42(2):155–160. doi:10.3354/cr00885
Ferraro AJ, Highwood EJ, Charlton-Perez AJ (2014) Weakened tropical circulation and reduced precipitation in response to geoengineering. Environ Res Lett 9:014001 (7 pp)
Goes M, Tuana N, Keller K (2010) The economics (or lack thereof) of aerosol geoengineering. In: Climatic change. Springer. doi:10.1007/s10584-010-9961-z, http://www.aoml.noaa.gov/phod/docs/Goes_etal_2011.pdf
Goldberg P, Chen Z-Y, O’Connor W, Walters R, Ziock H (1998) CO2 mineral sequestration studies in US. http://www.netl.doe.gov/publications/proceedings/01/carbon_seq/6c1.pdf
Goldenberg S (2014) Al Gore says use of geo-engineering to head off climate disaster is insane, theguardian.com, Wednesday 15 Jan 2014. http://www.theguardian.com/world/climate-consensus-97per-cent/2014/jan/15/geo-al-gore-engineering-climate-disaster-instant-solutio

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_72-1 # Springer Science+Business Media New York 2015

Gomes MS de P, de Araújo MSM (2011) Artificial cooling of the atmosphere – a discussion on the environmental effects. Renew Sustain Energy Rev 15(1):780–786 Herzog H (2002) Carbon sequestration via mineral carbonation: overview and assessment (PDF). Massachusetts Institute of Technology. Retrieved 5 Mar 2009 Heyward C, Rayner S (2013) A curious asymmetry: social science expertise and geoengineering. Climate geoengineering governance working paper series: 007. http://geoengineering-governance-research. org/perch/resources/workingpaper7heywardrayneracuriousasymmetry.pdf House KZ et al (2011) An economic and energetic analysis of capturing CO2 from ambient air. Proc Natl Acad Sci U S A 108:20428–20433 Hyland C, Sarmah AK (2014) Chapter 25 – Advances and innovations in biochar production and utilization for improving environmental quality. In: Bioenergy research: advances and applications. Elsevier Ltd, Amsterdam, pp 435–446 IPCC (2015) 2.2.3. Afforestation, reforestation, and deforestation. http://www.ipcc.ch/ipccreports/sres/ land_use/index.php?idp=47 Keith DW, Parson E, Morgan MG (2010) Research on global sun block needed now. Nature (Nat Publ Group) 463(7280):426–427 Kennedy RG III, Roy KI, Fields DE (2013) Dyson dots: changing the solar constant to a variable with photovoltaic lightsails. Acta Astronaut 82(2):225–237 Kintisch E (2010) Hack the planet: science’s best hope – or worst nightmare – for averting climate catastrophe. Wiley, Hoboken. ISBN 0-470-52426-X Kosugi T (2010) Role of sunshades in space as a climate control option. Acta Astronaut 67(1–2):241–253 Kravitz B (2013) Geoengineering has its limits. Nature 501:9. doi:10.1038/501009a Kunzig R (2008) Geoengineering: how to cool earth – at a price. Scientific American. http://www. scientificamerican.com/article/geoengineering-how-to-cool-earth/ Lenton TM, Vaughan NE (2009) The radiative forcing potential of different climate, geoengineering options. Atmos Chem Phys 9:5539–5561. www.atmos-chem-phys.net/9/5539/2009/ Lorenz EN (1963) Deterministic non-periodic flow. J Atoms Sci 20:130–141 Lorenz EN (1972) Predictability: does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? American Association for the Advancement of Science, 139th meeting, 29 Dec 1972. http://eaps4.mit. edu/research/Lorenz/Butterfly_1972.pdf MacMynowski DG, Keith DW, Caldeira K, Shin HJ (2011) Can we test geoengineering? Energy Environ Sci 4:5044–5052 McGrath M (2014) Geoengineering plan could have ‘unintended’ side effect. http://www.bbc.com/news/ science-environment-25639343 Ming T, de_Richter R, Liu W, Caillol S (2014) Fighting global warming by climate engineering: is the Earth radiation management and the solar radiation management any option for fighting climate change? Renew Sustain Energy Rev 31:792–834 Mitchell DL, Finnegan W (2009) Modification of cirrus clouds to reduce global warming. Environ Res Lett 4:1–8 Mitchell DL, Mishra S, Lawson RP (2011) Cirrus clouds and climate engineering: new findings on ice nucleation and theoretical basis. In: Planet earth. Intech, Rijeka, pp 257–288 O’Mahony T (2013) Decomposition of Ireland’s carbon emissions from 1990 to 2010: an extended Kaya identity. Energy Policy 59:573–581 Oke TR (1997) Urban climates and global environmental change. In: Thompson RD, Perry A (eds) Applied climatology: principles & practices. Routledge, New York, pp 273–287 Pacella RM (2007) Duct tape methods to save the earth: insulate the glaciers. Pop Sci. http://www.popsci. com/node/3245 Page 26 of 28

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_72-1 # Springer Science+Business Media New York 2015

Rasch PJ, Tilmes S, Turco RP, Robock A, Oman L, Chen C, Stenchikov GL, Garcia RR (2008) An overview of geoengineering of climate using stratospheric sulphate aerosols. Philos Transact A Math Phys Eng Sci 366(1882):4007–4037 Rinnan R, Rinnan Å, Faubert P, Tiiva P, Holopainen JK, Michelsen A (2011) Few long-term effects of simulated climate change on volatile organic compound emissions and leaf chemistry of three subarctic dwarf shrubs. Environ Exp Bot 72(3):377–386 Robock A (2012) Is geoengineering research ethical? Peace Secur 4:226–229 Robock A (2014) Stratospheric aerosol geoengineering. Issues Environ Sci Techol (special issue “Geoeng Clim Syst”) 38:162–185 Robock A, Marquardt A, Kravitz B, Stenchikov G (2009) Benefits, risks, and costs of stratospheric geoengineering. Geophys Res Lett 36, L19703. doi:10.1029/2009GL039209 Rockström J, Steffen W, Noone K, Persson Å, Chapin FS III, Lambin EF, Lenton TM, Scheffer M, Folke C, Schellnhuber HJ, Nykvist B, de Wit CA, Hughes T, van der Leeuw S, Rodhe H, Sörlin S, Snyder PK, Costanza R, Svedin U, Falkenmark M, Karlberg L, Corell RW, Fabry VJ, Hansen J, Walker B, Liverman D, Richardson K, Crutzen P, Foley JA (2009) A safe operating space for humanity. Nature 461:472–475. doi:10.1038/461472a Rotman D (2013) A cheap and easy plan to stop global warming. http://www.technologyreview.com/ featuredstory/511016/a-cheap-and-easy-plan-to-stop-global-warming/ Rusco F, Stephenson J (2010) Climate change, a coordinated strategy could focus federal geoengineering research and inform governance efforts, report to the Chairman, Committee on Science and Technology, House of Representatives, GAO-10-903. United States Government Accountability Office (GAO). http://www.gao.gov/assets/320/310105.pdf Schirmer J, Bull L (2014) Assessing the likelihood of widespread landholder adoption of afforestation and reforestation projects. Glob Environ Chang 24:306–320 Secretariat of the Convention on Biological Diversity (2012) Geoengineering in relation to the convention to biological diversity: technical and regulatory matters. CBD technical series no 66. Montreal, ISBN 92-9225-429-4. http://www.cbd.int/doc/publications/cbd-ts-66-en.pdf Shepherd J (2009) Geoengineering the climate: science governance and uncertainty. The Royal Society, London. https://royalsociety.org/policy/publications/2009/geoengineering-climate/ Shukman D (2014) Geo-engineering: climate fixes ‘could harm billions’. http://www.bbc.com/news/ science-environment-30197085. Accessed 5 Jan 2015 Sikka T (2012) A critical theory of technology applied to the public discussion of geoengineering. Technol Soc 34(2):109–117 SPICE Stratospheric Particle Injection for Climate Engineering (2015) http://www.spice.ac.uk/ Stilgoe J, Watson M, Kuo K (2013a) Public engagement with biotechnologies offers lessons for the governance of geoengineering research and beyond. PLoS Biol 11(11). doi:10.1371/journal. pbio.1001707 Stilgoe J, Owen R, Macnaghten P (2013b) Developing a framework for responsible innovation. Res Policy 42:1568–1580 Stirling A (2014) Transforming power: social science and the politics of energy choices. Energy Res Soc Sci 1:83–95 Stix TH (1993) Removal of chlorofluorocarbons from the troposphere, Plasma Science. IEEE conference record – abstracts, 1993 I.E. international conference on, ISBN 0-7803-1360-7 Strand SE, Benford G (2009) Ocean sequestration of crop residue carbon: recycling fossil fuel carbon back to deep sediments. Environ Sci Technol 43(4):1000–1007. 
doi:10.1021/es8015556 Sweeney JA (2014) Command-and-control: alternative futures of geoengineering in an age of global weirding. Futures 57:1–13 Page 27 of 28

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_72-1 # Springer Science+Business Media New York 2015

Tate KR (2015) Soil methane oxidation and land-use change – from process to mitigation. Soil Biol Biochem 80:260–272 Teller E, Hyde R, Wood L (1997) Global warming and ice ages: prospects for physics-based modulation of global change (PDF). Lawrence Livermore National Laboratory. Retrieved 30 Oct 2018. See pages 10–14 in particular Trabucco A, Zomer RJ, Bossio DA, van Straaten O, Verchot LV (2008) Climate change mitigation through afforestation/reforestation: a global analysis of hydrologic impacts with four case studies. Agr Ecosyst Environ 126(1–2):81–97 Victor DG, Granger Morgan M, Apt J, Steinbruner J, Ricke K (2009) The geoengineering option: a last resort against global warming?. Geoengineering. Council on Foreign Affairs. Retrieved 19 Aug 2009 Vidal J (2011) Giant pipe and balloon to pump water into the sky in climate experiment. The Guardian. http://www.theguardian.com/environment/2011/aug/31/pipe-balloon-water-sky-climate-experiment von Neumann J (1955) Can we survive technology? Fortune, June, pp 106–108, 151–152. Reprinted in Sarnoff D (ed) (1956) The fabulous future: America in 1980. Dutton, New York, pp 33–48 Weaver IS, Dyke JG (2012) The importance of timescales for the emergence of environmental selfregulation. J Theor Biol 313:172–180 Willoughby HE, Jorgensen DP, Black RA, Rosenthal SL (1985) Project STORMFURY, a scientific chronicle, 1962–1983. Bull Am Meteorol Soc 66:505–514 World Bank and Partners Award $4.8 Million to 26 Innovative Ideas to Save the Planet, http://web.worldbank. org/WBSITE/EXTERNAL/COUNTRIES/AFRICAEXT/EXTAFRSUMESSD/EXTFORINAFR/0,, contentMDK:22389504~menuPK:2493506~pagePK:64020865~piPK:149114~theSitePK:2493451,00. html. Accessed 16 Jan 2015 Zeng N (2008) Carbon sequestration via wood burial. Carbon Balance Manage 3:1. doi:10.1186/17500680-3-1, http://www.cbmjournal.com/content/3/1/1 Zhang Z, Moore JC, Huisingh D, Zhao Y (2015) Review of geoengineering approaches to mitigating climate change. J Clean Prod 15:898–907 Zhou S, Flynn PC (2005) Geoengineering downwelling ocean currents: a cost assessment. Clim Change 71(1–2):203–220. doi:10.1007/s10584-005-5933-0 Zomer RJ, Trabucco A, Bossio DA, Verchot LV (2008) Climate change mitigation: a spatial analysis of global land suitability for clean development mechanism afforestation and reforestation. Agr Ecosyst Environ 126(1–2):67–80

Page 28 of 28

Social Efficiency in Energy Conservation

Patrick Moriarty and Damon Honnery

Contents
Introduction
Social Efficiency: Transport
  Passenger Transport
  Freight Transport
Social Efficiency: Buildings
Social Efficiency: Agriculture
Future Directions
References

Abstract

Global energy use, fossil fuel carbon dioxide (CO2) emissions, and atmospheric CO2 levels continue to rise, despite some progress in mitigation efforts. Improving energy efficiency is seen as an important means of reducing emissions, but absolute reductions in global energy use remain elusive because of continued growth in the numbers of important energy-using devices such as transport vehicles, and because of energy rebound. A rise in average surface temperature of no more than 2 °C above preindustrial levels is widely regarded as the limit for avoiding dangerous anthropogenic climate change. Given the magnitude of the CO2 emission reductions necessary for this limit to be met, other approaches are needed for reducing energy use and its resultant emissions. This chapter discusses social efficiency (nontechnical means for reducing energy use) and stresses the social and environmental context in which energy consumption occurs in various sectors. Three important sectors for energy use (transport, buildings, and agriculture) are used to illustrate the potential for social efficiency in energy reductions. We argue that by focusing more clearly on the human needs energy use is meant to satisfy, it is possible to find new, less energy-intensive ways of meeting these needs.

P. Moriarty (*) Department of Design, Monash University, Melbourne, VIC, Australia. e-mail: [email protected]
D. Honnery Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, VIC, Australia. e-mail: [email protected]
# Springer Science+Business Media New York 2015
W.-Y. Chen et al. (eds.), Handbook of Climate Change Mitigation and Adaptation, DOI 10.1007/978-1-4614-6431-0_73-1

Abbreviations

ABS   Australian Bureau of Statistics
CO2-eq   Carbon dioxide equivalent
EIA   Energy Information Administration (US)
EJ   Exajoule (10^18 joule)
GHG   Greenhouse gas
GJ   Gigajoule (10^9 joule)
Gt   Gigatonne (10^9 tonne)
IEA   International Energy Agency
IPCC   Intergovernmental Panel on Climate Change
IT   Information technology
MJ   Megajoule
OECD   Organisation for Economic Co-operation and Development
p-km   Passenger-km
SBJ   Statistics Bureau Japan
t-km   Tonne-km
UK   United Kingdom
v-km   Vehicle-km

Introduction

Most studies on energy efficiency focus on technical efficiency measures such as megawatt-hour output of electricity per megajoule (MJ) of primary energy input for a power station, or vehicle-km per MJ of fuel input for a car. While the potential for technical efficiency improvements in energy-consuming devices is large (Cullen et al. 2011) and efficiency gains are expected to be a major factor in future carbon mitigation scenarios (Van Vuuren et al. 2011a, b), the results to date have been disappointing. Many barriers, both technical and socioeconomic, hinder the implementation of energy efficiency policies. Global energy use and fossil fuel CO2 emissions continue to grow (BP 2015), resulting in a steady climb in atmospheric CO2 levels.
Further, energy efficiency is subject to the well-known rebound effect. Rebound occurs because efficiency improvements (e.g., in light bulbs or passenger cars) lower the cost of operation, either encouraging more use of the energy-using device or allowing the money so saved to be spent on other energy-using goods or services (Druckman et al. 2011; Moriarty and Honnery 2015). A related concept is the demonstration effect of present lifestyles in high-income countries on the
expectations of residents of industrializing countries. If their ownership of energyintensive goods such as private vehicles or air conditioners rises to levels near those prevailing in high-income countries, any efficiency gains will be swamped by rising global use. Technical energy efficiency is important for carbon mitigation, but needs to be supplemented by nontechnical measures for deep carbon reductions. Energy efficiency measures are discussed in detail in the Chapter “▶ Energy Efficiency: Comparison of Different Systems and Technologies”. It is not enough that the energy (or carbon) intensity of economies – primary energy (or carbon) consumed per unit of gross national income – be reduced; absolute levels of fossil fuel energy must also be greatly cut. As used here in the context of energy conservation, the term social efficiency refers to a system-based approach that stresses the social and environmental context in which energy consumption occurs in various sectors. As Haas et al. (2008) have stressed, households and organizations do not use energy for its own sake; they use it to enjoy the energy services it provides. Put simply, social efficiency will refer to nontechnical means for reducing energy use. As discussed, technical efficiency improvements are made by getting more output from a given energy input, such as electric output per unit of fossil fuel input energy; social efficiency improvements, on the other hand, are made by getting more social “value” from each unit of the output of an energy-using device (Moriarty and Honnery 1996, 2014). It is instructive to compare approaches in climate change modeling and energy efficiency studies. Climate scientists have utilized general circulation models for decades and, more recently, coupled carbon-climate models. Such modes have been used to show, for example, that afforestation could exacerbate climate change (Keller et al. 2014). The newly grown forests would decrease the albedo (the fraction of the insolation reflected directly back into space), offsetting the carbon sequestration in the trees. With the exception of the rebound literature, energy efficiency research, in contrast, mostly does not look at effects on other energy sectors. To illustrate both the approaches used and the potential for social energy efficiency, this chapter focuses on three important energy-using sectors as case studies: transport, both passenger and freight; energy use in buildings, both household dwellings and commercial buildings; and agriculture, particularly for food production. These case studies were chosen not only for their importance for global energy use and greenhouse gas (GHG) emissions but also because they illustrate different aspects of social efficiency. By examining social efficiency in these three sectors, we hope to point to new ways at looking at energy and GHG emissions reductions. In particular, we stress the need to examine more closely what energy is used for (Shove and Walker 2014). As energy researcher Benjamin Sovacool (2014) put it: “Academic researchers frequently obsess over technical fixes rather than ways to alter lifestyles and social norms.” The usual, technical approach is to find ways of performing existing tasks more efficiently; instead we ask whether the tasks should be done in the first place and whether the human needs underpinning energy use can be met in a different, less energy-intensive way.
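To make the rebound effect described above concrete, the short sketch below works through a purely illustrative calculation; the baseline energy use, efficiency gain, and rebound fraction are assumed numbers chosen for the example, not values taken from the studies cited in this chapter.

```python
# Illustrative rebound arithmetic (all figures assumed, not from the cited sources).
# A device becomes 25 % more efficient, but cheaper operation induces extra use,
# so only part of the engineering saving shows up as an actual energy reduction.

baseline_energy = 100.0   # MJ per year before the efficiency improvement
efficiency_gain = 0.25    # 25 % less energy per unit of energy service delivered
rebound_fraction = 0.30   # 30 % of the engineering saving is taken back as extra use

engineering_saving = baseline_energy * efficiency_gain
actual_saving = engineering_saving * (1.0 - rebound_fraction)

print(f"Engineering saving:   {engineering_saving:.1f} MJ")   # 25.0 MJ
print(f"Saving after rebound: {actual_saving:.1f} MJ")        # 17.5 MJ
print(f"Energy use falls to {baseline_energy - actual_saving:.1f} MJ, "
      f"not {baseline_energy - engineering_saving:.1f} MJ")
```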

Social Efficiency: Transport

In 2012, the fuel for all modes of vehicular transport, including both passenger and freight, accounted for 27.9 % of global final demand for energy, up from 23.1 % in 1973 (International Energy Agency (IEA) 2014). Transport is heavily reliant on fossil fuels (96.6 % in 2012) and global road vehicle numbers are still rising rapidly. GHG emissions from this sector are thus rising, and it will be very difficult for technical solutions alone to stop further rises, let alone greatly reduce them.

Passenger Transport

The conventional measure of the passenger transport task is passenger-km (p-km). For example, 10 p-km is generated if one passenger travels 10 km or ten passengers each travel for 1 km. Transport efficiency in turn is measured as p-km per MJ of primary fuel. Measures such as miles per gallon and liters per 100 km are still widely used for vehicle efficiency, but suffer from two drawbacks. First, the useful output of passenger travel is p-km rather than vehicle-km (v-km); in other words, the occupancy rate is important. With this measure of output, buses can now be compared with car travel. Second, although liters per 100 km is useful for comparing the efficiency of various petrol or diesel-fueled vehicles, it is not useful for comparing electric-powered vehicles, whether for private or public transport, with vehicles that run on liquid fuels. But p-km per MJ of primary energy enables fair comparison of all passenger transport modes, regardless of energy source or vehicle carrying capacity (Moriarty and Honnery 2012).
However, even though p-km is a more useful measure than v-km, transport is still mainly a derived demand. Although some travel, especially by car, is undertaken for its own sake, in most cases travelers undergo the monetary and time costs required in order to access out-of-home activities such as working, education, or shopping. Access to activities, not mobility, is the real purpose of transport; hence one measure of the social efficiency of passenger transport would be access per primary MJ of transport fuel. Obviously, all else being equal, doubling the occupancy rate of all vehicular transport should double the access achieved per primary MJ. Occupancy rates are covered in more detail in the Chapter "▶ Reducing Personal Mobility for Climate Change Mitigation". Of course, it is much easier to measure p-km than it is to measure access, which has a subjective component. As Halden (2011) has stated: "accessibility is an attribute of people and goods rather than transport modes or service provision." Mobility may be easy to measure, but given that travel is mainly a derived demand, he further argued that it is difficult to say whether more travel is better than less.
How can access be improved? The journey to work trip will be used to illustrate the possibilities. First, we need to define two terms, minimum average work trip length and actual average work trip length. For convenience, we will use the example of cities and assume that all workplaces within the city boundaries are filled by resident workers. The minimum average work trip length for any city will
then be the average distance traveled to work if workplaces are fixed, but workers can change residences such that total travel to work for all city residents is minimized. The actual average work trip length is that calculated from origindestination studies of citywide work trip patterns. Clearly, total work travel will be less if the actual average distance is close to the minimum. Studies have shown that, in real cities, there is a large amount of “excess” work travel. For a range of US and Australian cities, actual work travel was found to be roughly twice the minimum. On the other hand, for a sample of Japanese and South Korean cities, actual work travel was much closer to the minimum (Moriarty and Honnery 2013). The much higher densities of the latter cities compared with US and Australian cities, and the accompanying traffic congestion, are important explanatory factors. However, even if excess work travel is very low, overall work travel could still be reduced in some cities. This happens if there is a mismatch between residences and workplaces – if, for example, workplaces are largely located in one area of a city and residents (and thus workers) in other areas. With the rise of the service sector (and the decline in manufacturing jobs) in Organisation for Economic Co-operation and Development (OECD) cities, this potential mismatch is disappearing. Jobs such as those in the education, health, retail shopping, and other service sectors serve a local area and so are found intermingled with residential areas. With manufacturing, in contrast, the intended market for the products is far wider, often national or even international in scope, and so did not need to be close to residential areas. The result of the rising share of service jobs is improvement in the balance of workplaces and resident workers at the local level (Australian Bureau of Statistics (ABS) 2013; Cervero 1996). For other trip types, access is more difficult to define. For shopping, access is not only a function of distance to the nearest shopping center but also the range of shops available. Authorities can and do improve resident access to services by strategic location of schools, parks, local government offices, and health centers, for example. As discussed in the Chapter “▶ Reducing Personal Mobility for Climate Change Mitigation (Section 7), if we lower the convenience of car travel in an attempt to reduce its external costs (e.g., air and noise pollution, GHG emissions, community disruption, traffic casualties), the balance between private travel and other modes will be fundamentally altered. In particular, nonmotorized modes will now be relatively more convenient, as well as faster and safer. Vehicular trips for different purposes will be combined more often, and preferred destinations for discretionary trips will change. In brief, if we remove the priority accorded to vehicular travel, especially by car, overall vehicular travel will fall. But access levels need not decline with reduced (vehicular) travel, since travel patterns and the intensity of use of various local services can be expected to change over time to maintain access levels. As Gabrielli and von Karman (1950) showed many decades ago, there is a tradeoff between speed and energy efficiency. Slower modes of transport are usually more energy efficient. So an important way of lowering the convenience of car travel is by speed reductions. This will not only reduce transport energy use but also traffic-related air and noise pollution. 
It will also reduce the frequency and severity
of traffic accidents, both for vehicle occupants and for nonmotorized travelers. For the latter group, research has found that at “impact speeds of 32 km/h, only about 5 % of pedestrians are killed and injuries are minor. At 48 km/h, 50 % are killed and many are seriously injured, while at 80 km/h most do not survive the impact” (Moriarty and Honnery 1999, 2008). Similarly, vehicle impacts with other vehicles or roadside objects are reduced in both number and severity. These benefits can be used to justify lower speed levels. Even before the global financial crisis, land passenger travel per capita had started decreasing in a number of OECD countries (Millard-Ball and Schipper 2011). However, air travel continues its strong growth, except for Japan, where it has been falling since the year 2000 (Statistics Bureau Japan (SBJ) 2014). Globally, Airbus projects air travel to grow at an average rate of 4.7 % over the years 2014–2033, with international tourism a key driver (Airbus 2014). Hence, an important approach to reducing fuel use and GHG emissions in air transport is the substitution of more local for international tourist destinations. Why do so many people feel the need for international holidays and distant travel in general? One reason is the large and rising number of people, often from lower-income countries, working in wealthier countries (OECD 2014), who visit their families or friends in their home country. Another possible reason is the stress of modern industrial life and work, which impels people to take their vacations in distant locations to get away from this situation (Chen and Petrick 2013). What these examples do show is that global social and economic conditions form part of the explanation of the present high levels of travel. Finally, we need to explore the ways in which the new information technology (IT) affects travel behavior. Although this question has been explored for nearly four decades, no definitive answer has emerged. Dal Fiore et al. (2014) concluded, as others have, that the new IT, especially mobile technology, has the potential to both increase and decrease levels of passenger travel. Given that per capita vehicular travel levels are falling in many OECD nations, it is possible that either IT is now actively decreasing travel levels or that it is at least enabling people to cope with less travel.
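Before turning to freight, a numerical sketch of the p-km per MJ measure introduced at the start of this subsection may be helpful; the fuel intensities and occupancy rates below are assumed purely for illustration and are not drawn from the sources cited above.

```python
# Passenger transport efficiency in p-km per MJ of primary energy (illustrative figures).

def pkm_per_mj(mj_per_vkm: float, occupancy: float) -> float:
    """Passenger-km delivered per MJ of primary energy for a given mode."""
    return occupancy / mj_per_vkm

car_low  = pkm_per_mj(mj_per_vkm=3.0, occupancy=1.2)    # assumed low car occupancy
car_high = pkm_per_mj(mj_per_vkm=3.0, occupancy=2.4)    # same car, doubled occupancy
bus      = pkm_per_mj(mj_per_vkm=15.0, occupancy=20.0)  # assumed bus loading

print(f"car, 1.2 occupants: {car_low:.2f} p-km/MJ")
print(f"car, 2.4 occupants: {car_high:.2f} p-km/MJ")    # exactly double the first figure
print(f"bus, 20 passengers: {bus:.2f} p-km/MJ")
```

Doubling occupancy doubles p-km (and, all else being equal, access) delivered per MJ without any change in vehicle technology, which is the sense in which social efficiency complements technical efficiency.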

Freight Transport

In a similar manner to passenger transport, technical freight transport efficiency can be measured by tonne-km (t-km) of payload freight per MJ of primary fuel. Even more so than for passenger travel, the efficiency of the various freight modes varies by two to three orders of magnitude (Edenhofer et al. 2014), with air transport being by far the least energy efficient (but the fastest) form of freight transport. Although freight trucks in all weight classes have shown technical efficiency gains in recent years (in terms of t-km/primary MJ), there has also been a trend toward increased use of smaller freight vehicles for delivery. In Australia overall, but especially in urban areas, light commercial vehicles are carrying a rising share of total t-km (ABS 2013). In London, the same trend has been found and is expected
to continue out to the year 2050 (Zanni and Bristow 2010). Such trends undo some of the efficiency gains in truck freight transport. “Just-in-time” logistics is intended to keep parts inventory in manufacturing to a minimum. One consequence is that delivery frequencies have risen, and average loads have decreased. In effect, the private costs of inventory have fallen, but at the expense of increased external costs for highway freight. The list of external costs for road freight vehicles is similar to those already discussed for road passenger travel. The “occupancy rate,” or payload to tare weight ratio, for freight vehicles is just as important as for passenger travel. For specialized transport vehicles, such as oil or liquefied natural gas tankers, return loads are not feasible. For general goods carriers, two-way loadings can often be improved by better logistics planning, resulting in overall energy reductions (Edenhofer et al. 2014). But a prior question should be: is this particular freight transport needed at all? Most tonne-km of freight globally is moved by international ocean vessels. In most OECD countries, large volumes of products are either imported from other countries or from a different region of the same country. Yet similar products are often also made in the importing country or region. Consumer choice may be important; the point that is often overlooked is that the external costs of the necessary freight transport are unpaid, leading to underpricing of the imported goods. However, social changes already underway could move freight in a more sustainable direction. Recently much attention has been given to “food miles” and a general preference for locally produced goods, such as that sold at farmers’ markets. Food miles represent the distance between the point of food production and the point of consumption. Van Passel (2013) has shown that the concept needs to be modified to meet the charge of oversimplification. He argued for its extension to include freight transport externalities and even added that “all relevant economic, social, and ecological aspects should be taken into account.” Many existing food products are endorsed as variously being “fair trade,” “organically grown,” or “dolphin safe” and often have detailed nutritional information on their packets. In the future we could well see information such as the energy costs, kilometers of transport, and transport mode added to labels. Just as many countries worry about energy security, food security could also become more important for consumer preferences. This point is discussed further in the section “Social Efficiency: Agriculture.” As an example of the kind of systems thinking needed about freight, Schewel and Schipper (2012) have examined in detail “retail goods movement” in the USA, which in 2009 accounted for 6.6 % of US energy demand. They start with the point of import or of manufacture, followed by transport of these goods to a central warehouse, then distribution to individual shops, and finally transport, usually by car, of the purchased goods to the consumer’s final destination. They point out the conflict that can arise between freight transport energy costs and the final consumer’s energy costs. The trend toward fewer, larger stores has improved freight energy efficiency by allowing use of higher capacity trucks, but on the other hand, shoppers have had to drive further to the more widely spaced retail outlets. 
This retail case study shows that it is not always possible to separate passenger travel energy from freight transport energy.
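The trade-off identified by Schewel and Schipper can be expressed as a simple system total of truck delivery energy plus shopper car energy per tonne of goods; the sketch below uses invented numbers purely to illustrate the accounting, not results from their study.

```python
# System energy of retail goods movement: freight delivery plus shopping trips (assumed figures).

def retail_mj_per_tonne(truck_mj_per_tkm: float, delivery_km: float,
                        car_mj_per_km: float, shop_trip_km: float,
                        trips_per_tonne: float) -> float:
    freight = truck_mj_per_tkm * delivery_km                    # MJ to deliver one tonne to the store
    shopping = car_mj_per_km * shop_trip_km * trips_per_tonne   # MJ of car travel to buy that tonne
    return freight + shopping

many_small_stores = retail_mj_per_tonne(2.5, 50, 3.0, 4, 40)   # smaller trucks, short car trips
few_large_stores  = retail_mj_per_tonne(1.0, 50, 3.0, 12, 40)  # efficient trucks, longer car trips

print(f"many small stores: {many_small_stores:.0f} MJ per tonne of goods")
print(f"few large stores:  {few_large_stores:.0f} MJ per tonne of goods")
```

With these assumed figures, the freight saving from fewer, larger stores is more than offset by the extra car travel, which is why the two energy streams cannot sensibly be assessed in isolation.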

Social Efficiency: Buildings

According to the Intergovernmental Panel on Climate Change (IPCC), in 2010 "buildings accounted for 32 % of total global final energy use" (Edenhofer et al. 2014). They were also responsible for 19 % of energy-related GHG emissions, or 9.18 Gt CO2-eq., when electricity-related emissions were included. Further, according to the IPCC, this "energy use and related emissions may double or potentially even triple by mid-century." Although building energy use is growing strongly in industrializing countries, in the core OECD countries and the economies in transition total building energy, both direct and indirect, has peaked (Edenhofer et al. 2014). But, as pointed out in the Chapter "▶ Non-technical Aspects of Household Energy Reductions (Section 2)", research has shown that occupant behavior can make a huge difference in domestic energy use and presumably for commercial buildings as well. Thus, it is becoming increasingly acknowledged that the social context in which building energy use occurs is crucial.
Energy use in buildings is different from energy use in passenger transport in that it can and does occur without a human presence. In many cases, this raises few problems: refrigerators, freezers, and electric clocks are best left running continuously. But lighting and space heating/cooling in buildings, as well as computers, televisions, radios, and other appliances, are often left running, whether or not humans are there to enjoy the energy services provided. Even when not running, their standby energy use can be collectively important. Rather than measuring the power consumption of, for example, a television set, a possible measure of social efficiency might be the number of person-hours of actual viewing per hour the set is operating or per MJ of energy the device consumes.
In OECD countries, by far the most important energy use in buildings is for space heating and cooling. Yet temperature controls for both heating and cooling could be set much closer to ambient levels, if clothing more appropriate to outside temperatures were worn indoors. For offices, it would mean that dress codes would need to change to allow more appropriate clothing for the season. One problem with fixed temperature settings is that it weakens the possibility of acclimatization to seasonal temperatures. For example, in hot regions of the world, Auliciems (2009) has reported that inhabitants preferred temperatures of "34 °C or even higher." This seasonal or regional acclimatization can be lost where air-conditioning of buildings is common.
Passive solar energy is in some ways a misnomer, since its use requires a far more active participation by building occupants for space heating or cooling, for example, than does mechanical air-conditioning, which merely requires a thermostat setting. Passive solar can be used for lighting as well as thermal conditioning of buildings. For residences, use of passive solar involves the judicious opening and shutting of windows and blinds for temperature control and lighting and even varying the timing and use of cooking stoves and ovens, depending on whether the heat will add or subtract from thermal comfort. In some situations, it may be worthwhile investigating a change to hours of employment as a means of improving occupant comfort. Passive solar (in the form of wind energy) can also be used for
clothes drying, but in many communities the use of outdoor clothes drying is still banned (Lee 2009), because of the claim that clothes lines are unsightly. The case study by Pilkington et al. (2011) in the UK showed the importance of occupant behavior for making the most of energy savings from passive solar design. The six terrace houses studied were of identical design, with superefficient insulation and “sunspaces.” They found that space heating use per occupant varied by a factor of 14. The main reason for the variation was that the higher energy users kept the internal doors open on winter days. Overall, the approach we advocate here can be contrasted with the “smart buildings” approach. Instead of predetermined settings, even if set by the occupants, we advocate that the building occupants actively respond to changing ambient conditions and adjust openings and shading accordingly. Just as for passenger transport, the occupancy rate of buildings is important in determining energy use. For residences, the occupancy rate has generally fallen in recent decades in most OECD countries, a consequence of both declining family size and rising incomes (e.g., ABS 2012; SBJ 2014; US Census Bureau 2012). The result is that dwelling space (in m2) per occupant has also risen, which tends to raise the energy for heating or cooling and lighting energy per occupant. If the trend toward declining household size could be reversed, not only would domestic energy efficiency rise but the occupancy rate for cars, and thus their energy efficiency, would also rise. Possibilities for increase include young adults staying at home longer and multi-family and group households. Both the latter groups are already common in many OECD countries, forming an increasing share of total households (e.g., ABS 2012). However, they have not been able to stem the overall fall in household sizes.
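The occupancy argument can be put in the same per-person terms used for transport; the dwelling size and space-conditioning intensity below are assumed for illustration only.

```python
# Space-conditioning energy per occupant for one dwelling at different household sizes
# (illustrative figures; heating and cooling demand scales with floor area, not with occupants).

def conditioning_mj_per_person(floor_area_m2: float, mj_per_m2_year: float, occupants: int) -> float:
    return floor_area_m2 * mj_per_m2_year / occupants

floor_area = 160.0   # m2, assumed dwelling size
intensity = 300.0    # MJ per m2 per year for space heating and cooling, assumed

for occupants in (1, 2, 4):
    energy = conditioning_mj_per_person(floor_area, intensity, occupants)
    print(f"{occupants} occupant(s): {energy:,.0f} MJ per person per year")
```

Halving the floor area per person, whether through smaller dwellings or larger households, roughly halves this component of per capita building energy, independently of any improvement to the building shell or equipment.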

Social Efficiency: Agriculture

Modern industrial agriculture is not only heavily reliant on energy, especially petroleum-based fuels, but is also a major producer of the GHGs methane and nitrous oxide, as well as CO2. Over the years 2000–2010, the IPCC estimated that average annual emissions of all GHGs from agriculture were in the range 5.0–5.8 Gt CO2-eq., compared with all-sector global 2010 emissions of 49 Gt CO2-eq. (Edenhofer et al. 2014). For these reasons alone, a change is needed in the way the world produces its food and fiber. But present agricultural methods also produce a host of other serious environmental problems and costs, including air and water pollution, loss of biodiversity, soil erosion, and soil salinity.
The emphasis in industrial agriculture is on cost minimization per unit of output, given that its products must compete in the marketplace. However, these costs are usually viewed in a narrow accounting sense; the external costs (including GHG emissions and the other costs listed above) can often be ignored (Weis 2010), just as they often are for transport. These environmental costs, such as surface water pollution, will incur energy costs for their remediation (Moriarty and Honnery 2011).

Ho and Ulanowicz (2005) investigated the energy return (in terms of food kilojoules) to energy input ratio for three types of agricultural systems: preindustrial, semi-industrial, and full-industrial. They found that the range of energy return ratios for the first two systems was 6.9–11.5 and 2.1–9.7, respectively. For the full-industrial agricultural systems, the ratio was little better than unity. It appears that trying to coax more output per hectare in industrial agriculture is subject to diminishing returns on the largely fossil fuel energy invested (Bos et al. 2013). While it may save land (and for industrial agriculture, land rent, real or imputed, is an important part of production costs), such systems carry higher energy costs, as well as higher general environmental costs. A narrow technical approach to GHG emissions from agriculture can also lead to conflicts for overall climate mitigation. Emphasis on CO2 emissions reduction may lead to nitrous oxide emissions from fertilizer being overlooked. And the projected use of corn stover for conversion to liquid fuels runs the risk of increased soil erosion and loss of soil carbon. Using corn stover (and other agricultural wastes) for bioenergy will enable some fossil fuel carbon to be left in the ground, but soil carbon losses could partly or wholly offset this benefit. Nevertheless, for agriculture, we have to look wider than these biophysical environmental impacts. McMichael (2011), in his article on multi-functionalism in agriculture and the “food sovereignty” movement, stressed that farming is not simply about food production, vital though this is. It is valued also for “its contribution to ecosystem management, landscape protection, rural employment, fostering farming knowledge, rural life, cuisine maintenance, and regional heritage.” The recognition of the importance of multi-functionalism means that agriculture has diverse impacts on overall energy use; agriculture’s energy use implications go far beyond that involved in food and fiber production, even after the energy costs of environmental damages are taken into account. Whether or not these energy costs incurred in other sectors are greater or less for alternative than for industrial agricultural practices is presently unknown; the important point is that they should be considered in the analysis of the total energy costs of agriculture. Farming is a social activity. When discussing passenger transport in section “Passenger Transport,” we emphasized the importance of the question: What is the purpose of transport? Similarly, we need to ask: What is the purpose of agricultural production? What are the products used for? The answer might seem obvious, since food is a basic human necessity, with no substitutes. However, according to the Food and Agriculture Organization (OECD/FAO 2014), in 2014, 34.4 % of the global grain harvest (estimated at 2.461 Gt) was fed to livestock, with a further 6.8 % used for liquid fuel production. The FAO expects the share of these nonfood uses to rise modestly in the future, together reaching 42.3 % of the grain harvest by 2023. Grain provides the bulk of the human diet. Other important agricultural foodstuffs used for feedstock for animals or for fuels include soybeans, oilseeds, and sugarcane. It is true that feedstocks are used to produce meat and dairy produce. However, in many OECD countries, including the USA and Australia, meat and dairy produce consumption is well in excess of the level regarded as producing a healthy diet. Further,

one billion people still suffer dietary deficiencies of various kinds (Conway 2012). If the world shifted to a more vegetarian diet, this deficiency could be remedied. A related issue is food waste: much of the food that is produced and intended for direct human consumption is not eaten but wasted. For industrializing countries, food losses are greatest in the immediate post-harvest part of the food supply chain. In contrast, for OECD countries, the greatest overall food losses resulted at the food retailer, food services industry, and household levels (Parfitt et al. 2010). Atkinson (2014) has even claimed that half the food supplied in high-income countries is thrown away uneaten. As a solution, Parfitt et al. have called for “[C]ultural shifts in the ways consumers value food” and educating the public on the environmental costs of such food waste. Urban agriculture could also be an important means for reducing food production energy use. Teng et al. (2011) have estimated that, worldwide, around 800 million nominally urban residents are involved in food production, many of them full time, especially in low-income cities. But food production is also growing in popularity in OECD cities. Growing food in cities for the household’s own use reduces the relevant “food miles” discussed in section “Freight Transport” to zero, thus cutting freight energy use. According to Ackerman et al. (2014), urban farms can also potentially produce a range of social and economic similar to rural farms: “Urban agriculture not only provides a source of healthful sustenance that might otherwise be lacking, it can also contribute to a household’s income, offset food expenditures, and create jobs.” It also helps the environmental sustainability of the city, and larger urban farms can provide job training for underserved populations. Varying diets to match the growing seasons of locally produced fruit and vegetables can also reduce agricultural transport costs.
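A toy calculation of the food-energy return ratio discussed in this section is given below; the input energies are invented so that the resulting ratios fall within the ranges quoted from Ho and Ulanowicz (2005), and they are not data from that study.

```python
# Energy return ratio: food energy delivered per unit of (largely fossil) energy input.

def energy_return_ratio(food_energy_out_mj: float, energy_in_mj: float) -> float:
    return food_energy_out_mj / energy_in_mj

# Hypothetical systems, each delivering 100,000 MJ of food energy per year (assumed).
systems = {
    "preindustrial":   {"out": 100_000, "in": 12_000},   # ratio ~8, within the quoted 6.9-11.5 range
    "semi-industrial": {"out": 100_000, "in": 25_000},   # ratio ~4, within the quoted 2.1-9.7 range
    "full-industrial": {"out": 100_000, "in": 95_000},   # ratio ~1.1, "little better than unity"
}

for name, flows in systems.items():
    ratio = energy_return_ratio(flows["out"], flows["in"])
    print(f"{name:15s}: {ratio:.1f} MJ of food per MJ of energy input")
```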

Future Directions

If efforts at mitigating climate change in the coming decades are no better than those in recent decades, the world could be heading for a 4 °C rise in average surface temperature above preindustrial values by the century's end. Although some adaptive measures will clearly be needed, since further climate change is unavoidable given climate (and social) inertia, adaptation cannot be expected to cope with such a temperature rise. At a 4 °C rise, the broadly linear response so far observed in various climate subsystems may break down (New et al. 2011), resulting in changes difficult to predict. Adaptation efforts would then be continuously aiming at a moving target. Hence, mitigation is the only long-term solution to climate change, and the later it is postponed, the more drastic will be the changes needed.
There are encouraging signs that the world's leaders are starting to appreciate how serious the climate change problem is. The European Union now binds its member states to reduce their GHG emissions by at least 40 % from 1990 levels by 2030, mainly through improved energy efficiency and more renewable energy. The world's largest carbon emitter, China, has recently pledged to stabilize its GHG emissions by 2030, and the
second largest, the USA, has promised that by 2025, its emissions will be 14–16 % less than those in 1990 (Anon 2014; Malakoff 2014). Although technical solutions will doubtless be the preferred approach to meet these targets, the latest IPCC report on mitigation (Edenhofer et al. 2014) has placed far more emphasis on nontechnical approaches than did the earlier IPCC reports. A key advantage of technical energy efficiency improvements is that they are a much better fit to the existing growth-oriented market economies than the social efficiency approaches discussed here. At present, most of these social efficiency measures are not politically feasible. But, in addition to the GHG reduction targets just discussed, changes are under way which will ensure that, in future, social efficiency is less “unthinkable” than it is today. Rockstro¨m et al. (2009) have discussed nine “planetary limits” which Earth is approaching, including climate change, ocean acidification, and the rate of biodiversity loss. To this list must be added the global depletion of fossil fuels, particularly oil, and of key minerals essential for industrial economies. The official position is still that reserves of fossil fuels are more than enough to sustain rising production for decades to come (BP 2015; Energy Information Administration (EIA) 2014). The IPCC (Stocker et al. 2013) projects that natural gas will be an important component of future energy use, being seen as especially valuable as a transition fuel to a low carbon future, because of its low CO2 emissions per unit of energy compared with coal or oil. In contrast to this optimistic position, recent research (Inman 2014) has cast doubt on the long-term future for shale gas, which was thought to have large reserves, particularly in the USA. A fine-grained analysis of US shale formations has found that the “sweet spots” where production is presently occurring are not typical of the formations as a whole. Overall, US natural gas production could peak within a decade and then fall sharply. If the US example serves as a model for shale gas production globally, the decarbonization of the energy supply could even be reversed. More generally, Schindler (2014) and others (e.g., Hӧӧk and Tang 2013) have stressed that oil, both conventional and unconventional, will peak soon, and natural gas and even coal will peak within a very few decades as well. If the potential for renewable energy to reduce GHG emissions is also less than anticipated, energy reductions will have to be the main approach for GHG reductions. In the future, then, we can expect to see greatly increased attention to system approaches to energy use, as well as more research that combines both technical and social approaches to energy, for the following reasons: • Rising energy costs for producing energy partly offset any gains in device energy efficiency. • Energy rebound significantly offsets any gains in energy efficiency. • As evidenced by the continued rise in atmospheric concentrations of CO2 and GHGs overall, the current emphasis on technical solutions is not working. • Many of the easily made technical energy efficiency measures have already been implemented; those remaining will be progressively more difficult to implement. In contrast, social efficiency potential has barely been tapped.

This chapter has shown that it is usually very difficult to neatly compartmentalize the various energy sectors. Improving freight energy efficiency may negatively impact on passenger transport efficiency, for example. A large range of social practices impact on energy uses, as shown in this chapter. The efficiency of providing thermal comfort in buildings can be greatly improved by changes to the lifestyles of the occupants, including their clothing choices, and by the active use of passive solar energy. Agricultural practices can affect both transport energy costs and the energy costs of ecosystem maintenance. Choosing a diet with less meat and dairy products can greatly improve the overall energy efficiency of national and global food systems. If the world is to produce the carbon mitigation effort needed to avoid disruptive climate change, the question that will increasingly need to be asked is: What is the energy for? What we have termed social efficiency is an attempt to address this vital question.

References Anon (2014) On the road to a climate fix. New Sci 13 Dec: 8–9 Ackerman K, Conard M, Culligan P et al (2014) Sustainable food systems for future cities: the potential of urban agriculture. Econ Soc Rev 45(2):189–206 Airbus (2014) Global market forecast: flying on demand 2014–2033. Accessed on 8 Dec 2014 at http://www.airbus.com/company/market/forecast/ Atkinson A (2014) Urbanisation: a brief episode in history. City 18(6):609–632 Auliciems A (2009) Human adaptation within a paradigm of climatic determinism and change. Chapter 11. In: Ebi KL, Burton I, McGregor G (eds) Biometeorology for adaptation to climate variability and change. Springer Science + Business Media B.V, Dordrecht Australian Bureau of Statistics (ABS) (2012) 2009–10 Household expenditure survey: Summary of results. Cat No 6530 ABS, Canberra. (Also earlier surveys) Australian Bureau of Statistics (ABS) (2013) Survey of motor vehicle use, Australia, 12 months ended 30 June 2012. ABS Cat. No. 92080DO001_1230201206 Bos JFFP, Smit AL, Schro¨der JJ (2013) Is agricultural intensification in The Netherlands running up to its limits? Wagening J Life Sci 66:65–73 BP (2015) BP statistical review of world energy 2015. BP, London Cervero R (1996) Jobs-housing balance revisited: trends and impacts in the San Francisco Bay area. J Am Plan Assoc 62(4):492–511 Chen C-C, Petrick JF (2013) Health and wellness benefits of travel experiences: a literature review. J Travel Res 52(6):709–719 Conway G (2012) One billion hungry: can we feed the world? Cornell University Press, Ithaca Cullen JM, Allwood JM, Borgstein EH (2011) Reducing energy demand: what are the practical limits? Environ Sci Technol 45:1711–1718 Dal Fiore F, Mokhtarian PL, Salomon I et al (2014) “Nomads at last”? A set of perspectives on how mobile technology may affect travel. J Transp Geogr 41:97–106 Druckman A, Chitnis M, Sorrell S et al (2011) Missing carbon reductions? Exploring rebound and backfire effects in UK households. Energy Policy 39:3572–3581 Edenhofer O, Pichs-Madruga R, Sokona Y et al (eds) (2014) Climate change 2014: mitigation of climate change. CUP, Cambridge, UK Energy Information Administration (EIA) (2014) Annual energy outlook 2014. US Department of Energy, Washington, DC Gabrielli G, von Karman T (1950) What price speed? Specific power required for propulsion of vehicles. Mech Eng 72(10):775–781

Haas R, Nakicenovic N, Ajanovic A et al (2008) Towards sustainability of energy systems: a primer on how to apply the concept of energy services to identify necessary trends and policies. Energy Policy 36:4012–4021 Halden D (2011) The use and abuse of accessibility measures in UK passenger transport planning. Res Transp Bus Manag 2:12–19 Ho M-W, Ulanowicz R (2005) Sustainable systems as organisms. Biosystems 82:39–51 Hӧӧk M, Tang X (2013) Depletion of fossil fuels and anthropogenic climate change – a review. Energy Policy 52:797–809 Inman M (2014) The fracking fallacy. Nature 516:28–30 International Energy Agency (IEA) (2014) Key world energy statistics 2014. IEA/OECD, Paris Keller DP, Feng EY, Oschlies A (2014) Potential climate engineering effectiveness and side effects during a high carbon dioxide-emission scenario. Nat Commun 5:3304. doi:10.1038/ ncomms4304 Lee A (2009) The right to dry. New Sci 2732:26–27 Malakoff D (2014) China’s peak carbon pledge raises pointed questions. Science 346:903 McMichael P (2011) Food system sustainability: questions of environmental governance in the new world (dis)order. Glob Environ Change 21:804–812 Millard-Ball A, Schipper L (2011) Are we reaching peak travel? Trends in passenger transport in eight industrialized countries. Transp Rev 31(3):357–378 Moriarty P, Honnery D (1996) Social factors in household energy conservation. In: Proceeding of 13th Clean and Environ Conference, 22–25 Sept, Adelaide, 186–191 Moriarty P, Honnery D (1999) Slower, smaller and lighter urban cars. Proc Inst Mech Engr 213 (D):19–26 Moriarty P, Honnery D (2008) Low mobility: the future for transport. Futures 40(10):865–872 Moriarty P, Honnery D (2011) Is there an optimum level for renewable energy? Energy Policy 39:2748–2753 Moriarty P, Honnery D (2012) Energy efficiency: lessons from transport. Energy Policy 46:1–3 Moriarty P, Honnery D (2013) Greening passenger transport: a review. J Clean Prod 54:14–22 Moriarty P, Honnery D (2014) Reconnecting technological development with human welfare. Futures 55:32–40 Moriarty P, Honnery D (2015) Reliance on technical solutions to environmental problems: caution is needed. Environ Sci Tech 49:5255–5256 New M, Liverman D, Schroeder H et al (2011) Four degrees and beyond: the potential for a global temperature increase of four degrees and its implications. Phil Trans R Soc A 369:6–19 OECD (2014) OECD factbook 2014: economic, environmental and social statistics. OECD, Paris, Available at: http://dx.doi.org/10.1787/factbook-2014-en OECD/Food and Agriculture Organization (FAO) (2014) OECD-FAO agricultural outlook 2014 OECD, Paris. Available at: http://dx.doi.org/10.1787/agr_outlook-2014-en Parfitt J, Barthel M, Macnaughton S (2010) Food waste within food supply chains: quantification and potential for change to 2050. Phil Trans R Soc B 365:3065–3081 Pilkington B, Roach R, Perkins J (2011) Relative benefits of technology and occupant behaviour in moving towards a more energy efficient, sustainable housing paradigm. Energy Policy 39:4962–4970 Rockstro¨m J, Steffen W, Noone K et al (2009) A safe operating space for humanity. Nature 461:472–475 Schewel LB, Schipper LJ (2012) Shop ‘till we drop: a historical and policy analysis of retail goods movement in the United States. Environ Sci Technol 46:9813–9821 Schindler J (2014) Chapter 2: The availability of fossil energy resources. In: Angrick M et al (eds) Factor X: policy, strategies and instruments for a sustainable resource use, vol 29, Eco-Efficiency in Industry and Science. 
Springer, Dordrecht Shove E, Walker G (2014) What is energy for? Social practice and energy demand. Theory Cult Soc 31(5):41–58 Sovacool BK (2014) Energy studies need social science. Nature 511:529–530

Statistics Bureau Japan (SBJ) (2014) Japan statistical yearbook 2014. Statistics Bureau, Tokyo, Available at http://www.stat.go.jp/english/data/nenkan/index.htm. (Also earlier editions) Stocker TF, Qin D, Plattner G-K et al (eds) (2013) Climate change 2013: the physical science basis. CUP, Cambridge, UK Teng, P, Escaler M, Caballero-Anthony M (2011) Urban food security: feeding tomorrow’s cities. Significance June, pp 57–60 US Census Bureau (2012) The 2012 statistical abstract: historical statistics. Available at http:// www.census.gov/compendia/statab/hist_stats.html Van Passel S (2013) Food miles to assess sustainability: a revision. Sust Dev 21:1–17 Van Vuuren DP, Edmonds J, Kainuma M et al (2011a) The representative concentration pathways: an overview. Clim Change 109:5–31 Van Vuuren DP, Stehfest E, den Elzen MGJ et al (2011b) RCP2.6: exploring the possibility to keep global mean temperature increase below 2 C. Clim Chang 109:95–116 Weis T (2010) The accelerating biophysical contradictions of industrial capitalist agriculture. J Agrar Chang 10(3):315–341 Zanni AM, Bristow AL (2010) Emissions of CO2 from road freight transport in London: trends and policies for long run reductions. Energy Policy 38:1774–1786

Handbook of Climate Change Mitigation and Adaptation DOI 10.1007/978-1-4614-6431-0_74-1 # Springer Science+Business Media New York 2015

Measuring Household Vulnerability to Climate Change
Sofie Waage Skjeflo
UMB School of Economics and Business, Norwegian University of Life Sciences, Ås, Norway

Abstract
This chapter summarizes research on the potential impacts of climate change on households, with a particular focus on contributions from different methodological approaches to understanding impacts for households in developing countries. Agriculture has been a central focus of this literature, both because of the sensitivity of the agricultural sector to a changing climate and because of the importance of agriculture for the livelihoods of the poor. The literature review shows that developing countries are largely expected to be disproportionately hurt by projected changes in temperature, precipitation, and extreme events. On the other hand, the actual household-level response to these changes is not well understood, and there are still gaps in the methodological approaches to understanding these issues. The recent literature reveals promising approaches that may complement and improve existing methods as more data become available.

Keywords Climate change; Agriculture; Sub-Saharan Africa; Climate variability; Drought; Flood; Panel data; Computable general equilibrium; Economy-wide; Farm household; Crop simulation models; Adaptation; Vulnerability; Shock; Risk; Smallholder agriculture; Food prices; Inequality; Poverty; Adaptation policy; Market imperfections; Crop productivity

Introduction
Despite remarkable achievements in improving standards of living and reducing the proportion of the world’s population living in poverty over the past century (Easterlin 2000; Chen and Ravallion 2010), securing basic needs remains a challenge for a large share of the global population. In 2010, the estimated share of the world’s population living in extreme poverty, defined as less than $1.25 per day measured in purchasing power parity terms, was about 20 % (World Bank 2014). In sub-Saharan Africa, the estimated share is almost 50 % (World Bank 2014). The majority of the world’s poor live in rural areas and rely on agriculture as their main livelihood (World Bank 2014). In the face of a changing climate, the challenge of improving the livelihoods of the poor may be even greater (IPCC 2014). The physical characteristics of agriculture create a strong link between the climate, agriculture, and poverty (Porter et al. 2014). Understanding the potential impacts of climate change therefore requires knowledge of how the rural poor might be affected, through which channels, and how policies to improve livelihoods interact with these impacts. This chapter aims to give an overview of the literature on climate change impacts in developing countries, with a particular emphasis on agriculture in sub-Saharan Africa. The focus is on the contributions of different methodological approaches to understanding climate change in rural developing countries, as well as on empirical findings. After discussing the direct impacts of changing temperature and precipitation trends, the household-level response to these impacts is discussed. This requires an understanding of household characteristics and of the context in which rural households interact, which is discussed in section “Household Heterogeneity and Market Characteristics.” The final section concludes with some challenges for future research.

Climate Change Impacts on Agriculture
The most recent report from the Intergovernmental Panel on Climate Change (IPCC) concludes that the climate system is warming and that it is very likely that weather extremes have become more frequent and severe due to climate change (IPCC 2013). Projections show that continued greenhouse gas emissions will cause average temperatures to increase further, and there is high confidence that the near-term increase will be larger in tropical and subtropical regions than in midlatitude regions. Projections for average precipitation are less clear. It is likely that precipitation variability will increase, but the projections are uncertain and vary considerably across regions in sub-Saharan Africa (IPCC 2013).

Figure 1 shows the trends and projected trends in temperature and precipitation for Africa. The top left panel of the figure shows the trend in temperature from 1901 to 2012, where white areas indicate insufficient data to conclude on any trend. The top right panel shows the projected difference in annual mean temperature between mid- and late twenty-first century and 1986–2005. The projections for RCP8.5, the “business-as-usual” scenario, show up to 6 °C of warming by the end of the twenty-first century in some areas of Africa. The RCP2.6 scenario, in which aggressive mitigation efforts bring emissions to near zero by the end of the century, also shows warming in Africa, highlighting that adaptation efforts may be needed even if emissions are cut dramatically. The bottom two panels show that observed trends and projected changes in precipitation are much less clear, with missing data preventing firm conclusions about past trends.

Studies of impacts of climate change in developing countries have to a large extent focused on impacts through agriculture, both because of the importance of the agricultural sector in terms of production and employment and because of the sensitivity of this sector to climate change (Arndt et al. 2012). Early studies of quantitative impacts of climate change on agriculture relied on crop simulation models to simulate the impact of changing temperature, precipitation, and atmospheric CO2 concentration on crop growth (Kurukulasuriya and Rosenthal 2003). These models capture the effects of genetic factors; climate variables such as solar radiation, maximum and minimum temperatures, and precipitation; and soil characteristics and farm management practices on yields (Parry et al. 1999). The models can also take into account the fertilization effect of increased CO2 concentration in the atmosphere, as explained by Darwin and Kennedy (2000), and different adaptation options can be simulated by exogenously changing planting dates, fertilization, irrigation, and so forth. Since these models require detailed input and are constructed for separate crops, applications of crop models for country- or region-level studies of Africa are scarce (Hertel and Rosch 2010; Thurlow et al. 2012). Thurlow et al. (2012) therefore use a less detailed hydro-crop model in their study of impacts of climate variability on Zambian agriculture. Based on climatic and agronomic statistics from the past three decades, their hydro-crop model predicts 14–77 % maize yield losses in the most drought-prone agroecological zone during a severe drought event and up to 48 % yield losses during more moderate drought events.
An application at a more aggregated scale is provided by Jones and Thornton (2003), who use global circulation model (GCM) output to generate weather scenarios for surfaces in Africa and Latin America


Fig. 1 Observed and projected changes in annual average temperature and precipitation. (Top left) Map of observed annual average temperature change from 1901 to 2012, derived from a linear trend [WGI AR5 Figures SPM.1 and 2.21]. (Bottom left) Map of observed annual precipitation change from 1951 to 2010, derived from a linear trend [WGI AR5 Figures SPM.2 and 2.29]. For observed temperature and precipitation, trends have been calculated where sufficient data permit a robust estimate (i.e., only for grid boxes with greater than 70 % complete records and more than 20 % data availability in the first and last 10 % of the time period). Other areas are white. Solid colors indicate areas where trends are significant at the 10 % level. Diagonal lines indicate areas where trends are not significant. (Top and bottom right) CMIP5 multi-model mean projections of annual average temperature changes and average percent changes in annual mean precipitation for 2046–2065 and 2081–2100 under RCP2.6 and RCP8.5, relative to 1986–2005. Solid colors indicate areas with very strong agreement, where the multi-model mean change is greater than twice the baseline variability (natural internal variability in 20-year means) and >90 % of models agree on sign of change. Colors with white dots indicate areas with strong agreement, where >66 % of models show change greater than the baseline variability and >66 % of models agree on sign of change. Gray indicates areas with divergent changes, where >66 % of models show change greater than the baseline variability but <66 % agree on sign of change

Fig. 2 Seasonal wave height roses at La Isla Bonita in the Island of La Palma (Iglesias and Carballo 2010b) (Hm0, significant wave height)

which wave energy is a source of concern (a source of loading, in technical terms) rather than a benefit. For this reason, the outcome of previous work, albeit informative, is often insufficient for the purposes of assessing the wave resource, and an ad hoc characterization of the wave resource is necessary. The areas with realistic potential for the development of wave energy present average power values above 20 kW/m and tend to be located in the mid and high latitudes owing to the global atmospheric circulation. Furthermore, the seasonal variability of the wave resource is typically lower in the Southern Hemisphere than in the Northern. On this basis, many coastal areas of South America, Africa, and Australia would be particularly attractive for wave energy exploitation – with the downside of their distance to the energy consumption centers. The characterization of the wave resource has been undertaken recently in a number of areas with potential for the development of wave energy (Fig. 3) (Bernhoff et al. 2006; Defne et al. 2009; Folley and Whittaker 2009; Gonçalves et al. 2014; Iglesias and Carballo 2009, 2010a, c, 2011; Lenee-Bluhm et al. 2011; Pontes et al. 1998; Rusu and Guedes Soares 2012; Stopa et al. 2011; Thorpe 2001; Vicinanza et al. 2013). However, much work remains to be done in the small-scale characterization of the nearshore variability of the wave resource in the areas of interest. This characterization is typically carried out through numerical models, ideally calibrated and validated with wave data from wave buoys or other sources (Iglesias and Carballo 2009, 2010c; Iglesias et al. 2009). The following section summarizes the main principles of the characterization of the wave resource. For additional details the interested reader is referred to Carballo and Iglesias (2012).

Fig. 3 Wave patterns: significant wave height and direction (a) and wave power (b) on the north coast of Spain for the following deepwater wave conditions on 13/04/2009 at 03:00 UTC: significant wave height, 1.5 m; energy period, 10.0 s; mean wave direction, 315° (Iglesias and Carballo 2010c)

Mathematical Aspects of Ocean Waves
Ocean waves consist of a superposition of a very large number of individual sinusoidal (harmonic) waves, each with its own amplitude, frequency, and direction. This superposition is expressed mathematically as a Fourier series, which can represent any sea state (Holthuijsen 2007). The Fourier series, a time domain concept, has its counterpart in the frequency domain in the wave energy density spectrum, usually referred to for brevity as the wave spectrum, which quantifies the distribution of wave energy over the different frequencies. Often, the directional information is contained in the spectrum, which is then a directional spectrum. On the basis of the directional wave energy spectrum, it is possible to compute the wave parameters of interest, some of which are presented below for convenience; further details on irregular wave theory may be found in Holthuijsen (2007). If the directional wave energy density is denoted by S(f, θ), with f the wave frequency, the spectral moments may be defined as

$$m_n = \int_0^{2\pi} \int_0^{\infty} f^{\,n}\, S(f, \theta)\, df\, d\theta, \qquad n = 0, 1, 2, \ldots \quad (1)$$

The significant wave height is then given by

$$H_s = 4\, m_0^{1/2} \quad (2)$$

and the peak wave period may be computed as the inverse of the frequency at the spectral peak (f_p),

$$T_p = f_p^{-1} \quad (3)$$

Wave power, or wave energy flux, is given by

$$P = \rho g \int_0^{2\pi} \int_0^{\infty} c_g(f, h)\, S(f, \theta)\, df\, d\theta \quad (4)$$

where ρ is the seawater density, g is the gravitational acceleration, and c_g is the group celerity, i.e., the velocity at which wave energy propagates, which is a function of the wave frequency and the water depth (Eq. 19). Seawater density depends on salinity and temperature, which vary in time and space; an average value was taken for this work, ρ = 1025 kg/m³. Equation 4 yields the wave power per unit width of wave front; if a certain wave energy converter (WEC) captures the energy of a width b of wave front, the corresponding power is

$$P_{WEC} = P\, b \quad (5)$$

Naturally the actual power output will depend on the converter efficiency. The mean wave direction may be obtained from the directional wave energy spectrum through

$$\theta_m = \arctan \left( \frac{\int_0^{2\pi} \int_0^{\infty} S(f, \theta)\, \sin\theta\, df\, d\theta}{\int_0^{2\pi} \int_0^{\infty} S(f, \theta)\, \cos\theta\, df\, d\theta} \right) \quad (6)$$

In many cases, the detailed shape of the spectrum is unknown, and only some of the characteristic wave parameters are given. In this case, the wave power, also known as the wave energy flux, can be computed from the following approximation:

$$J = \frac{\rho g^2\, T_e\, H_s^2}{64 \pi} \quad (7)$$

where ρ is the water density, g is the acceleration due to gravity, H_s (m) is the significant wave height, and T_e (s) is the wave energy period, defined from the spectral moments as

$$T_e = \frac{m_{-1}}{m_0} \quad (8)$$
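To make the use of these formulas concrete, the short sketch below (an illustrative example added here, not material from the original chapter) evaluates Eqs. 1, 2, 7, and 8 for a discretized omnidirectional spectrum S(f). The Pierson-Moskowitz-like spectral shape, the peak frequency, and the frequency grid are assumptions chosen only for illustration; in practice a measured or modeled spectrum would be used.

```python
# Minimal sketch (illustrative): spectral moments, significant wave height,
# energy period, and deep-water wave power from a discretized spectrum S(f).
# The spectral shape below is a made-up, Pierson-Moskowitz-like placeholder.
import numpy as np

rho, g = 1025.0, 9.81                 # seawater density (kg/m^3), gravity (m/s^2)
f = np.linspace(0.03, 0.5, 2000)      # frequency grid (Hz)
df = f[1] - f[0]
fp = 0.1                              # assumed peak frequency (Hz)
S = 5e-4 * f**-5 * np.exp(-1.25 * (fp / f)**4)   # spectral density S(f) (m^2 s)

def moment(n):
    # n-th spectral moment, Eq. 1 integrated over direction: m_n = sum f^n S(f) df
    return np.sum(f**n * S) * df

m0 = moment(0)
Hs = 4.0 * np.sqrt(m0)                # significant wave height, Eq. 2
Te = moment(-1) / m0                  # wave energy period, Eq. 8
J = rho * g**2 * Te * Hs**2 / (64.0 * np.pi)   # deep-water wave power, Eq. 7 (W/m)

print(f"Hs = {Hs:.2f} m, Te = {Te:.1f} s, J = {J / 1000.0:.1f} kW/m")
```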

Wave Resource Characterization
The characterization of the wave resource is usually carried out by means of numerical wave models, which are introduced in section “Wave Models.” The main points for their practical application to characterize the wave resource are explained in section “Methodology.”

Wave Models
In characterizing the wave resource, it is necessary to assess not only the average values of wave power or the total annual resource but also the variability of the resource in terms of sea states or, in other words, the characteristics of the waves (significant wave height, peak period, mean direction, etc., or, even more accurately, the directional wave spectrum) behind the resource. This characterization is customarily carried out with numerical models of coastal wave propagation (e.g., SWAN, Simulating WAves Nearshore). There are many types of models, each type solving a specific equation that applies to certain conditions. A review of existing wave models is outside the scope of this chapter, and the interested reader is referred to Folley et al. (2012). A brief summary is presented in the following.

Coastal wave models can be classified into phase-resolving and phase-averaged (spectral) wave models. In turn, each of these categories comprises many models, solving different equations or variants of equations.

Many phase-resolving models are based on Berkhoff’s mild-slope equation (Berkhoff 1974), in its fully fledged (elliptical) version or in its parabolic and hyperbolic incarnations. Alternatively, phase-resolving models may be based on Boussinesq’s equation, e.g., Johnson (1997), which is adept at modeling nonlinear energy transfer in shallow water and has recently been extended to intermediate water depths. As for phase-averaged models, usually referred to as spectral models, they typically solve the spectral wave action balance equation (Hasselmann 1971; Holthuijsen et al. 1989; Longuet-Higgins and Stewart 1961) and are often the preferred choice in characterizing the wave resource, for their ability to model wave generation and propagation over large domains efficiently. Spectral models solve the spectral wave action balance equation without a priori assumptions on the shape of the wave spectrum. The wave field is described by the two-dimensional wave action density spectrum, N(ω, θ), where ω is the angular wave frequency and θ is the wave direction. The wave action density spectrum is used in lieu of the energy density spectrum, for action density is conserved in the presence of currents whereas energy density is not; in any case, the wave energy spectrum may be computed from the wave action spectrum. The spectral wave action balance equation reads

$$\frac{\partial N}{\partial t} + \frac{\partial (c_x N)}{\partial x} + \frac{\partial (c_y N)}{\partial y} + \frac{\partial (c_\omega N)}{\partial \omega} + \frac{\partial (c_\theta N)}{\partial \theta} = \frac{F}{\omega} \quad (9)$$

The first term on the left-hand side represents the local rate of change of wave action density in time; the second and third terms stand for the propagation of wave action over geographical space, with propagation velocities c_x and c_y in the x and y directions, respectively; the fourth term quantifies the shifting of the relative frequency due to variations in depths and currents, with propagation velocity c_ω in the ω direction; finally, the fifth term represents the effects of refraction induced either by depth variations or by currents, with propagation velocity c_θ in the θ direction. The expressions of the above propagation velocities are derived from linear wave theory. As for the right-hand side of Eq. 9, F is the source term representing the effects of generation, dissipation, and nonlinear wave-wave interactions. Wave power is then computed as

$$J_x = \int_0^{2\pi} \int_0^{\infty} \rho g\, c_x\, E(\sigma, \theta)\, d\sigma\, d\theta \quad (10)$$

$$J_y = \int_0^{2\pi} \int_0^{\infty} \rho g\, c_y\, E(\sigma, \theta)\, d\sigma\, d\theta \quad (11)$$

where E(σ, θ) is the directional spectral density, which specifies how the energy is distributed over frequencies (σ) and directions (θ). The wave power magnitude is then calculated as

$$J = \left( J_x^2 + J_y^2 \right)^{1/2} \quad (12)$$
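As a rough numerical illustration of Eqs. 10, 11, and 12 (again a sketch added for clarity rather than material from the chapter), the directional integrals can be approximated by sums over a discretized spectrum. The spectral shape, the cosine-squared spreading about an assumed mean direction, and the use of the deep-water group celerity c_g = g/(2σ) are all illustrative assumptions.

```python
# Rough sketch (illustrative only): evaluating Eqs. 10-12 by discretizing a
# directional spectrum E(sigma, theta). The spectrum, the spreading function,
# and the deep-water group celerity are placeholder assumptions.
import numpy as np

rho, g = 1025.0, 9.81
sigma = np.linspace(0.2, 3.0, 200)                         # angular frequencies (rad/s)
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)  # directions (rad)
dsig, dth = sigma[1] - sigma[0], theta[1] - theta[0]

theta0 = np.deg2rad(315.0)                                 # assumed mean direction
freq_shape = 1e-2 * sigma**-5 * np.exp(-1.25 * (0.6 / sigma)**4)
spread = np.maximum(np.cos(0.5 * (theta - theta0)), 0.0)**2
E = freq_shape[:, None] * spread[None, :]                  # E(sigma, theta)

cg = g / (2.0 * sigma)                                     # deep-water group celerity (m/s)
cx = cg[:, None] * np.cos(theta)[None, :]                  # propagation velocity components
cy = cg[:, None] * np.sin(theta)[None, :]

Jx = rho * g * np.sum(cx * E) * dsig * dth                 # Eq. 10
Jy = rho * g * np.sum(cy * E) * dsig * dth                 # Eq. 11
J = np.hypot(Jx, Jy)                                       # Eq. 12, magnitude (W/m)
theta_m = np.arctan2(Jy, Jx)                               # flux-weighted mean direction

print(f"J = {J / 1000.0:.2f} kW/m, mean direction = {np.rad2deg(theta_m) % 360.0:.0f} deg")
```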

Methodology
In practice, the detailed characterization of the resource for the purposes of wave energy exploitation is of interest in nearshore areas, for it is in these areas that wave farms can be deployed. Indeed, the water depths and lengths of submarine connections (cables) quickly place a practical limit on the distances to the coastline at which it is economically feasible to build a wave farm. Thus, it is the nearshore resource that is of practical interest.

First, the offshore wave resource must be properly characterized as a prerequisite, for offshore wave conditions are required to prescribe the boundary conditions. The characterization of the offshore wave resource can be carried out using: (i) deepwater wave buoy data, where available; (ii) numerical models that account for wave generation and propagation in deep water (e.g., WAVEWATCH III); (iii) remote sensing (satellite, HF-RADAR, etc.); (iv) hindcast datasets; or a combination of the above.

The primary factors that produce the spatial distribution of the wave resource in the nearshore are the deepwater wave resource and the bathymetry. The typical source of bathymetric data is nautical charts. In some areas, however, it may be advisable to complement existing charts with ad hoc surveys, in particular of the area of interest. Therefore, the second consideration in characterizing the nearshore resource is that the bathymetry file must have a good spatial resolution in the area of interest, ideally O(10¹ m), for the model to be able to calculate wave propagation accurately. Nevertheless, this fine spatial resolution is not necessary throughout the computational domain, and medium and even coarse resolutions – which are normally available from off-the-shelf nautical charts – can be acceptable from deep water to the area of interest, depending on the level of accuracy required (Fig. 4).

Third, the geometry of the computational grid should be chosen based on the geometry of the coastline and the area of interest. Cartesian, curvilinear, or combined grids may be used (Fig. 5) (Carballo and Iglesias 2012; Iglesias and Carballo 2009). It is important to bear in mind that numerical disturbances often arise at the boundaries, so these must be set far enough from the study area for the model results not to be polluted.

Fourth, the computational grid must have sufficient resolution, not least in the study area, which usually encompasses the lee of the wave farm up to the shoreline and the wave farm itself. The optimum resolution may be established through a sensitivity analysis, in which the grid is progressively refined until no significant differences can be detected in the results.


Fig. 4 Perspective view of the bathymetry used for the characterization of the nearshore wave resource (Iglesias and Carballo 2009)


Fig. 5 Curvilinear grid for the characterization of the nearshore wave resource (Iglesias and Carballo 2009)

A fine grid in the area of interest is often nested into a coarser grid with a view to limiting the computational cost of the model. Alternatively, a variable-size mesh can be adopted, with increasing resolution toward the area of interest.
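The sensitivity analysis described above can be automated along the following lines. This is a hypothetical sketch: run_wave_model is a stand-in for an actual model run (for instance, a SWAN simulation post-processed to the mean significant wave height over the area of interest), and the synthetic stub used here merely mimics a result that converges as the grid is refined.

```python
# Hypothetical sketch of the grid-sensitivity analysis described above.
# `run_wave_model` stands in for a real wave-model run; the stub below only
# mimics an output (mean Hs over the area of interest) that converges as the
# grid resolution is refined.
def run_wave_model(grid_resolution_m: float) -> float:
    return 2.0 + 0.3 * grid_resolution_m / (grid_resolution_m + 50.0)  # stub (m)

def grid_sensitivity(resolutions_m, rel_tol=0.02):
    """Refine from coarse to fine; stop when the relative change in the
    target output (mean Hs) drops below rel_tol."""
    previous = None
    for res in sorted(resolutions_m, reverse=True):       # coarse -> fine
        mean_hs = run_wave_model(grid_resolution_m=res)
        if previous is not None and abs(mean_hs - previous) / previous < rel_tol:
            return res, mean_hs                           # converged at this resolution
        previous = mean_hs
    return res, mean_hs                                   # finest grid tried

resolution, hs = grid_sensitivity([200.0, 100.0, 50.0, 25.0, 10.0])
print(f"Adopted resolution: {resolution:.0f} m (mean Hs = {hs:.2f} m)")
```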


Last, but not least, the numerical model ought to be calibrated and validated. The model should ideally be driven with offshore data from wave buoys and calibrated and validated by comparing numerical results with observations from coastal wave buoys located within the computational domain. Time series of relevant wave parameters can be used to compare the model results with the coastal wave buoy data, and statistical indicators (e.g., R², RMSE, NMSE) can be applied to quantify the goodness of fit. It is well known that the goodness of fit can decrease significantly under storm conditions. Furthermore, wave buoys often fail in heavy storms; consequently, storm values are typically underrepresented in the dataset, which may lead to biased (and not conservative) assessments of the goodness of fit.
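For illustration, the statistical indicators mentioned above can be computed as in the following sketch. The modeled and observed Hs series are placeholder values, and the normalization used for the NMSE is only one of several definitions found in the literature.

```python
# Illustrative sketch: goodness-of-fit indicators for modeled vs. observed Hs.
# The two series below are placeholder values, not real buoy or model data.
import numpy as np

hs_buoy = np.array([1.2, 1.5, 2.1, 2.8, 3.5, 2.2, 1.7, 1.1])    # observed Hs (m)
hs_model = np.array([1.3, 1.4, 2.0, 2.6, 3.9, 2.4, 1.6, 1.2])   # modeled Hs (m)

rmse = np.sqrt(np.mean((hs_model - hs_buoy) ** 2))

# Coefficient of determination (1 - residual variance / observed variance)
ss_res = np.sum((hs_buoy - hs_model) ** 2)
ss_tot = np.sum((hs_buoy - hs_buoy.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# One common definition of the normalized mean square error
nmse = np.mean((hs_model - hs_buoy) ** 2) / (hs_model.mean() * hs_buoy.mean())

print(f"RMSE = {rmse:.2f} m, R2 = {r2:.2f}, NMSE = {nmse:.3f}")
```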

Wave Energy Conversion
Wave energy conversion is a relatively young technology, and intensive research efforts have been devoted over recent years to the development of WECs (Carballo and Iglesias 2012). These technologies were reviewed by a number of authors (Clément et al. 2002; Drew et al. 2009; Falcão 2010; Falnes 2007; Iglesias et al. 2010; McCormick 1981; Thorpe 1999). At present no single technology can be deemed to be the definitive one. A number of crucial aspects – from the energy performance to survivability under storm conditions – require further investigation. In addition to the existing technologies, new patents continue to appear. This interest in the sector is driven by the fact that the global wave resource is vast and more than sufficient to set wave energy on a par with other renewables such as hydropower or wind energy.

Historical Perspective
Although research and development of wave energy conversion did not gain momentum until 1973, with the world economy under the effects of the first oil crisis, the potential of wind waves for energy generation has long been acknowledged. Indeed, the first attempt to harness the wave resource dates back to the eighteenth century: the Girard brothers’ patent registered in France in 1799. There is little information on the device in question, but it is known that in the late eighteenth century it was tentatively applied in England to elevate seawater, the energy thus stored as potential energy being subsequently used. This was followed by the tests carried out in Algeria by the French engineer M. Fursenot in the first decades of the nineteenth century with a device that captured wave oscillation and transformed it into energy by means of cams and gears. Historically, buoyant systems oscillating on the free surface feature prominently in the development of wave energy conversion, as in the case of the mechanism invented by P. Wright and patented in March of 1898 under the name “Wave Motor.” In the following year, a new development came out, the Ocean Grove;


based on an intake plate which, united to the shafts of a series of pumps, raised water to a group of elevated tanks, this WEC was designed so as to facilitate the subsequent use of the potential energy of the stored water. A device with a similar concept was in operation at the Oceanographic Museum of Monaco for 10 years, pumping seawater to the aquarium using wave energy. This WEC was ultimately destroyed by wave action. The French scientist Montgolfier developed a “compliant flap” and installed it in a pilot plant in the Black Sea in 1917. With a nominal power of 10 kW, it was based on the dynamic pressures exerted by the movement of water particles. Its operating principle was very simple: a flexible sheet was placed perpendicular to the direction of wave propagation, thereby absorbing wave energy. However, major efforts to convert wave power into energy did not begin in earnest until the oil crisis of 1973. A great many concepts for wave energy conversion were put forward, many of which were patented; however, only a few progressed to the testing stages, and even fewer managed to actually produce energy. These first-generation converters laid the foundations for the current technologies. It was also in the context of the oil crisis in the 1970s that studies aimed at a better understanding of wave mechanics were undertaken, often in connection with oil platforms. In the same decade research was carried out on cavity resonance devices, including the work by Yoshio Masuda at the Japan Marine Science and Technology Center and by R. M. Ricafranca at the RMR Research and Engineering Services in the Philippines. Working separately, they created the first two commercial WECs, belonging to the category that would later be called OWCs, or Oscillating Water Columns. The operating principle of OWCs is the oscillation of a water column inside a chamber connected with the ocean through an opening below the surface. The oscillating water column acts like a piston for the air in the upper part of the chamber, alternately compressing and decompressing it. The upper part of the chamber is connected with the exterior through a conduit in which the power take-off (PTO) system is installed – a bidirectional air turbine connected to a generator. When the water column rises, the air is compressed and expelled from the chamber, thereby driving the PTO. In the next half cycle, when the water column falls, the air pressure in the chamber decreases and air is absorbed from outside, also driving the PTO. A British project carried out by the National Engineering Laboratory (Glasgow) capitalized on the knowledge acquired in the preceding years to improve on Masuda’s WEC. However, this work did not progress beyond the prototype phase. Instead, Masuda was able to develop his device, a floating OWC named “Kaimei,” into operation. Mounted on a barge and with a rated power of 1.3 MW, Kaimei was deployed off the coast of Japan. In those years no less than £13 M were allocated for development and research in the field of marine energies by the British government. Stephen Salter, an engineer with the University of Edinburgh, presented the so-called Salter Duck in the 1980s. Rocking under wave action, its elements pumped a hydraulic fluid which in turn drove a generator.


The project eventually stalled due to high estimated operating costs. A recent reexamination of this WEC produced cost figures ten times lower than those initially estimated. Also in the UK (in Southampton), Christopher Cockerell worked on the development of a WEC based on the relative movement of plates connected by hinged joints. Waves cause a relative movement between the plates, which pumps a high-pressure hydraulic fluid that drives a turbine connected to a generator. In 1974 the company Wave Power Limited was founded in order to commercialize the work and patents held by Cockerell’s research group. The first of these devices was huge, measuring 50 m wide and 100 m long. Experiments with unidirectional waves and plates of different lengths were undertaken to determine the optimum configuration; when the prototypes were subjected to real operating conditions, they proved to be rather inefficient. In Oxford, Robert Russell, director of the Hydraulics Research Station at Wallingford, designed a device that would operate in shallow waters. The system, called the HRS Rectifier, was an anchored structure that breached the surface of the water, with sluice gates closing off two tanks. Although these WECs did not pass the experimental phase or, when they did, were not deemed efficient enough to warrant further development, they did form the knowledge basis upon which many of today’s devices are built.

Classification of WECs
Wave energy converters can be classified according to different criteria: (i) installation site, (ii) dimensions relative to the wave length, and (iii) principle of operation. The latter is arguably the most commonly used and will be developed in more detail.

Classification According to the Installation Site
Three types of WECs can be distinguished according to this criterion:
(i) Onshore WECs, located entirely on land.
(ii) Onshore-offshore WECs, which capture wave energy in the nearshore and transform it into electricity in an onshore facility.
(iii) Offshore WECs, which are deployed in the sea. This group may be further divided according to whether the WECs are floating or resting on the seabed.

Classification According to the Dimensions Relative to Wave Length
This criterion distinguishes between two types of WECs: point absorbers and line absorbers. The dimensions of point absorbers are at least one order of magnitude smaller than the wave length, whereas the predominant dimension of line absorbers is of the same order of magnitude as the wave length. Line absorbers can be orientated transversally or longitudinally to the incoming wave direction.


Classification According to the Principle of Operation
There are three categories within this classification:
(i) Overtopping devices, in which waves overtop a barrier and the overtopped water is stored in a reservoir and subsequently used to drive a turbine.
(ii) Wave-activated bodies, which capture wave energy through the heaving motion of a floater.
(iii) Oscillating Water Columns (OWCs), which – as indicated above – use a water column as a piston to create an air flow that in turn drives a turbine-generator group.
Based on the principle of energy capture, WECs can thus be classified into OWCs, oscillating bodies, and overtopping devices. A short description of the three types is provided below.

WEC Technologies
Having put forward the different criteria by which WEC technologies can be classified, in this section the state of the art is reviewed through notable WECs. The first criterion of classification (installation site) is followed to systematize the review.

Onshore WECs As explained above, this group encompasses WECs that have both the energy capture and electricity generation systems onshore. Also known as first-generation devices, their technology is the most mature within the field. Indeed, some WECs in this category have already been in operation for a number of years. They are characterized by their relatively simple and inexpensive maintenance, a result of both their good accessibility as onshore devices and the fact that they are less exposed to the harsh marine environment than the other categories. Another substantial advantage is the absence of a submarine cable – unlike offshore devices. As a disadvantage, the wave resource that onshore WECs can exploit is smaller due to bottom friction and depth-induced wave breaking. A further disadvantage is the occupation of land and the corresponding environmental impact on the coastline, which can be more or less significant depending on the type of shoreline and the actual design of the device. Within the onshore category, the oscillating water column (OWC) is the most advanced technology. The water column is housed in a semi-submerged concrete or (less commonly) steel chamber connected to the sea by an underwater opening. The lower part of the chamber is flooded with seawater, and its upper part contains air. The oscillation of the water inside the chamber (the “water column”) induced by the waves outside causes the alternate compression and decompression of the air above, which is used to drive a bidirectional air turbine coupled to an electrical generator.


Fig. 6 Mutriku breakwater-mounted OWC (top) and schematic of the system (bottom) (Courtesy of EVE (2014))

There are currently several OWCs in operation (Falcão 2000; Torre-Enciso et al. 2009), e.g., Mutriku (Fig. 6 – Spain) or Pico (Açores, Portugal). Civil works are the largest component of the cost of an OWC plant. In order to minimize this cost, OWCs have been installed on breakwaters, using the breakwater caissons to accommodate the chamber and its associated mechanical and electrical parts. Moreover, installation on a breakwater reduces not only the construction cost of the OWC itself, but also that of its road access and electricity connection. Maintenance costs are also reduced due to the easy access. Examples of OWCs installed on breakwaters are Sakata (Japan) and Mutriku (Spain), the latter


with 16 chambers, each connected to a Wells turbine with a nominal power of 0.75 MW. Another project, finally not carried out, was envisaged at the recently built jetty at the Foz do Douro (Porto, Portugal). At present there are plans to install the so-called U-OWC (Arena et al. 2013), an OWC with a U-shaped chamber, in the caisson breakwater of Civitavecchia (Italy). Another group within this category consists of WECs whose principle of operation is wave overtopping, e.g., the Tapchan (Mehlum 1986) and the Seawave SlotCone Generator (Vicinanza and Frigaard 2008; Vicinanza et al. 2008). The former consists of a funneled-shape channel concentrating waves and directing their run-up toward an elevated reservoir. The difference in elevation between the water in this reservoir and the sea surface is used to drive the PTO. The volume of the reservoir can be large enough to store energy as potential energy, so that it is only transformed into electricity when the demand arises. On the minus side, its disadvantages are related to the installation site, first, the very extension of the littoral area that it occupies; second, the environmental impact; third, the need for a relatively deep nearshore area for waves to reach the shoreline (and the entrance to the channel) without breaking and thus losing their energy; and, finally, the requirement of a microtidal range at the installation site (under 1 m), or else the efficiency would be considerably reduced. Tapchan was installed in Toftestallen, Øygarden (near Bergen, Norway), in 1985 and subsequently decommissioned. In the Seawave Slot-Cone Generator, wave run-up over a sloping ramp (in principle on a rubble-mound breakwater) is captured through horizontal slots at three different levels, each connected with a reservoir. The outlets of the three reservoirs are connected to a multistage turbine, itself coupled with either an electrical generator or a hydrogen generation system. The capture system of WaveStar (Denmark) consists of a number of hemispheric floaters, each connected to a hydraulic pump. The floaters are accommodated on a platform supported by two steel piles and oriented so that they are aligned perpendicular to the prevailing wave direction. Although the WaveStar platform is not strictly speaking on the shoreline, it was connected when installed to the shoreline by a catwalk; on these grounds, WaveStar is classified here as an onshore WEC. A 1:10 model installed at sea was recently tested (Kramer et al. 2011; Marquis et al. 2010).

Onshore-Offshore WECs Onshore-offshore WECs can be regarded as a variant of onshore WECs in that the electricity generation occurs onshore; the difference with purely onshore WECs is that the capture system is in the sea. This concept attempts to combine the advantages of onshore and offshore systems. The greater resource and smaller environmental impact of offshore WECs are achieved (to a certain extent), and the easier access and cheaper maintenance of onshore WECs are also attained (in part). Oyster (Cameron et al. 2010) is one of the better known examples within this category. Its structure, with floating elements, is hinged at a base on the seabed and made to oscillate by waves. The oscillation around its vertical (rest) position drives


Fig. 7 Examples of onshore-offshore converters: the CETO wave energy converter (Courtesy of Carnegie Wave Energy)

a seawater piston that pumps water through a high-pressure flow line to an onshore hydroelectric power conversion plant. In order to minimize the cost of the line and bring friction losses down, the point absorber should not be far from the onshore part of the installation, which places a limit on the water depth at which the wave energy capture system can be installed – a downside shared by all the WECs in this category. Moreover, the nearshore installation of the capture system is not without a non-negligible environmental impact, including, not least, a visual impact. The CETO III point absorber (Fig. 7), an evolution of the CETO I, consists of a number of submerged spheres that move back and forth with the waves (Mann 2011). These spheres are connected to pumps which drive high-pressure water to the onshore PTO, consisting of a Pelton turbine and an electrical generator. Alternatively, the pumped water could be used to obtain freshwater using inverse osmosis. Seadog is a point absorber device consisting of a floatation element that moves up and down inside a chamber with the passage of waves. This motion is used to propel a pump which pressurizes seawater; the high-pressure water is driven to the onshore installation, where it is used to power a turbine-generator module or a desalination plant. Several Seadog devices can be connected in parallel or in series. A water reservoir is part of the onshore facility, so the energy can be stored as potential energy and the generation of electricity can be accommodated to the demand – much as in the Tapchan. Waveberg is a line absorber device consisting of a series of interconnected floaters. The joints between the floaters are activated with the passing waves, and their motion is used to propel piston-type pumps that drive high-pressure water to


the shoreline part of the system, consisting of a turbine-generator group. The recommended water depth for the capture system is of the order of 50 m. Finally, the wave-powered diaphragm pump is a point absorber system that exploits the oscillating movement generated by waves. At present it is an idea that has not yet been put into practice. The base of this device is a box-shaped concrete structure, full of rocks, designed to maintain the WEC stable on the ocean floor. The upper part of this base is an octagonal structure of steel housing 16 cylindrical columns. Within each one of these is a pump, separated from the other 15 through the use of used tires with cylindrical separators. All of these pumps are, in turn, housed in a large cylinder suspended in a raft measuring 32 m in diameter, which provides the buoyancy necessary to lift it when wave crests pass. To transport the water from the capture system to the onshore generation system, piping is used to which the 16 pumps are connected using nonreturn valves. Once the fluid has been transferred onshore, it can be made to descend through a conventional hydraulic turbine connected to an electrical generator, or it may be stored in a reservoir for use when required. As in the case of Waveberg, the wavepowered diaphragm pump is designed to work at depths of approximately 50 m.

Offshore WECs Offshore WECs are possibly the most prolific branch of wave energy conversion technology and can be classified into floating and fixed devices. Floating Devices The following WEC designs, Wave Dragon and WaveCat (Fig. 8), are based on wave overtopping (Fernandez et al. 2012; Iglesias et al. 2008; Kofoed et al. 2006; Tedd and Kofoed 2009). Overtopped water is collected in one or several reservoirs above the sea level. The water in these reservoirs is driven back to sea through one or several Kaplan turbines, much as in a conventional hydropower scheme. In the case of Wave Dragon, overtopping occurs at a ramp perpendicular to the direction of the incoming waves. In order to capture a wave front length larger than the ramp itself, two deflectors protrude from the ramp sides; they focus the waves toward the ramp, thereby enhancing wave height. Overtopping water is collected in a reservoir above the sea level; the difference in elevation between the water in the reservoir and the outside (sea) water is used to propel an ultra-low head Kaplan turbine. Freeboard and draft are varied according to the wave conditions. Wave Dragon is one of the heaviest WECs, with a structure around 30,000 t. This requires a substantial mooring system. The WaveCat differs from the Wave Dragon in its structure and in the way overtopping occurs. Like a catamaran – from which it takes its name – WaveCat consists of two hulls. Unlike a catamaran, however, the hulls are convergent rather than parallel, so that from above they form a wedge. With a single-point mooring system (e.g., CALM, catenary anchor leg mooring), WaveCat swings so that it is always orientated in the direction of the incoming waves, which propagate into the wedge. As a wave crest advances between the two hulls, the wave height is enhanced by the tapering channel until, eventually, the inner hull sides are


Fig. 8 Overtopping wave energy converters: Wave Dragon (top) and WaveCat (bottom) (Courtesy of Wave Dragon AS (2005) and the COAST Research Group at Plymouth University and Fernandez et al. (2012))

overtopped. Unlike Wave Dragon, in which the overtopping wave crest impinges normally on a ramp, in the case of WaveCat the overtopping crest impinges obliquely on the hull side. Overtopping water is collected in three reservoirs on each hull, at different levels – all above the mean sea level. The difference in elevation with respect to the exterior (sea) level is used to drive a turbine for each reservoir as the water is let out back to sea. Freeboard and draft, as well as the wedge angle, can be varied according to the sea state. The advantages are threefold. First, overtopping occurs along the hull sides, so the WEC motions do not significantly affect the overtopping volumes but merely shift the point along the hull where overtopping starts. Second, the oblique overtopping signifies that the wave loads on the structure are considerably lower


Fig. 9 Examples of oscillating body converters: Pelamis II (left) and PowerBuoy (right) (Courtesy of Pelamis Wave Power (2014) and Ocean Power Technologies Inc (2014))

than in the case of normally incident waves. Last, but not least, the wedge between the hulls can be closed so WaveCat becomes a conventional (monohull) ship – a useful survivability strategy. Maintenance costs are expected to be low, similar to those of a ship, and the fact that it can be towed to a dry dock in its closed configuration (monohull) and repaired using the same installations is an added advantage. Pelamis (Fig. 9) belongs to another category of offshore WECs (wave-activated bodies). It is a semi-submerged device composed of four cylindrical sections held together by Cardan joints. As waves pass, the cylinders are displaced up and down by the buoyancy force, thereby activating the joints. These are equipped with a hydraulic system that takes advantage of the joint motions to pressurize an oleo-hydraulic fluid. This pressure is used to propel a turbine-generator module. Pelamis is conceived for deployment at intermediate water depths, at distances from the coastline between 5 and 10 km. For complex repair operations, the Pelamis must be towed to a nearby port (Henderson 2006). Another device that oscillates perpendicularly to the wave front is the Anaconda (Chaplin et al. 2007), which consists of a submerged flexible tube made of plastic materials and filled with seawater at low pressure; the tentative dimensions of the system are 5 m (diameter) and 150 m (length). It is moored at a single point so that it swings when the wave direction changes; thus, it is always head to sea. As a wave passes along the tube, a pressure bulge is excited and then moves down the tube in front of the crest, continuously capturing energy from the wave. At the stern is a turbine, which is driven by the flow resulting from the periodic pressure bulges. The turbine is coupled to an electrical generator. Among its advantages is its nil visual impact (for it is submerged). The next group of devices within the offshore category is composed of floatation elements oscillating on the sea surface. Among these is WET EnGEN, a point absorber consisting of a floating element that moves up and down along a steel mast with the passage of wave crests. The mast is inclined at an angle of 45° relative to


the quiescent sea surface and swivels freely around its base with the purpose of being aligned to the wave direction. As the float moves up and down with the sea surface, it pulls on a cable housed within the mast, which conveys the energy to a rotational generator installed within the foundation. The Aegir Dynamo WEC is another point absorber with an internal cylinder and an external, floating ring. The cylinder is anchored by means of cables or chains to deadmen laying on the seabed so that it remains practically stationary. The external, floating ring moves vertically with the sea surface. The cylinder houses a rotational generator that is driven continuously by the alternating motion of the external ring through a transmission system that transforms the ring’s vertical motion into rotational motion. Another subgroup within the offshore category is floating platform WECs, which have in common two main characteristics: they float and all their essential systems are above sea level – the mooring excepted. A representative of this subgroup is FO3, which takes advantage of the motion of the sea surface under waves by means of buoys mounted in rows and connected to the platform. The buoys drive pumps that pressurize an oleo hydraulic fluid, which in turn drives an electrical generator. Finally, another line of development in floating offshore WECs is floating OWC devices (Lo´pez and Iglesias 2014; Lo´pez et al. 2014), e.g., Oceanlinx MK1 and Sperboy. The principle of operation of the OWC itself is the same as for onshore OWCs. The Oceanlinx MK1 has a large rectangular chamber housed in a substantial structure. The turbine is designed so as to increase the energy efficiency of the device taking into account the prevailing wave frequency. Soft start systems powered by the electricity network are also included for the turbine to gain velocity faster. The Sperboy is a vertical cylinder with hollow walls for flotation; within the cylinder is the chamber, connected to the sea through the cylinder bottom, which is open. While the lower part of the chamber is inundated, its upper part is filled with air, as in all OWCs. On the top of the device are four horizontal ducts through which air leaves and enters the chamber, driven by the alternating ascent and descent of the water surface within the chamber. The alternating air flow propels four Wells turbines (one in each duct), which in turn drive electrical generators. The MAWEC converter, developed by Leancon, has a number of features in common with OWCs but also some differences – among which, the fact that the flow driving the turbine is recirculated. The device in plain view has a V shape. Each arm of the “V” has the transverse section of an inverted channel. The upper section is connected to two rows of vertical pipes, each with 30 pipes. The principle of operation is based on the fact that the length of the device is greater than the average wave length. When a wave reaches the end of the channel, it creates high- and low-pressure zones situated in front and behind the wave crests, respectively. The high pressure in front of the wave crest drives air flow through the corresponding tubes to a turbine, which drives a generator. The circuit is closed when the air expelled by the turbine is absorbed by the tubes under low pressure behind the


wave crest. Valves are fitted to the ends of the low-pressure tubes to ensure that the flow occurs always in the same direction, i.e., the tubes are closed when they are in front of the wave crest. This conception allows for conventional turbines to be used rather than the far more expensive Wells turbines typical of OWCs. Submerged Devices Submerged offshore WECs are less common than floating ones, the reason being that underwater components add a new level of complexity. Designs are mostly at an early stage of development, and some of the most developed eventually failed. The Archimedes Wave Swing WEC (Fig. 10) is composed of two cylinders measuring 9.5 m in diameter (de Sousa Prado et al. 2006; Vale´rio et al. 2007). One of the cylinders is anchored to the seabed by means of a concrete structure. The other is located outside the first, at a higher elevation and full of air. When the crest of the wave passes over, the airbag is compressed, pressing the float downward. Conversely, when the trough of the wave passes over, the air expands, and the cylinder rises again. Magnets located in the upper cylinder generate electricity when they move relative to the coil inside the lower cylinder. This device cannot be seen on the surface of the sea, for it is located at a depth of 6 m – which reduces its visual impact. It does not have any hydraulic fluids, so there will be no leaks leading to contamination. On the other hand, the maintenance of this device is relatively complex, as all the equipment is underwater. However, this is advantageous in terms of survivability under storms. The device must be refloated in order to perform maintenance work. The manufacture of this WEC is of intermediate difficulty, given that it is a very large structure which requires significant resources in order to transport it to its anchoring site. The Stellenbosch Wave Energy Converter is one of the first ideas for wave energy conversion, which did not materialize. The device’s collector is installed on the seabed, forming a “V,” and the power take-off is located at the tip of the V. It can be classed in the group of point absorbers. A bag of air is placed in a cavity at the top of the structure. As the crest of the wave passes over, the trapped air is compressed, thereby opening the high-pressure valves and allowing the air to flow toward a series of high-pressure pipes. When the trough of the wave passes, the pressure inside the bag drops and the low-pressure valves open. This yields a continual, one-way flow of air. In order to exploit this flow, on the converter is placed a one-way turbine between the low- and high-pressure collection tubes. The environmental impact of this device would be classified as intermediate, given that it is a large apparatus occupying a large amount of space on the seabed. Nevertheless, it has no moving parts and does not interfere with marine life. The wave rotor device represents a class of WECs that exploit the movement of the water as waves pass to drive a series of turbines. It employs two types of turbines: Darrieus and Wells. The rotors are used to generate electricity from upward and downward currents. The device is based on the same principle as wind turbines, which may also be installed on top of the WEC, taking advantage of the structure necessary for the wave energy converter. These systems can be quite efficient, as they feature no intermediate phases of energy transfer. The environmental impact of this


Fig. 10 Archimedes wave swing (Picture courtesy of AWS Ocean Energy (2015))

type of device is intermediate, due to its visual impact and the existence of moving parts. Maintenance is relatively simple, but the fact that most of their components are underwater does complicate servicing. This type of device is constructed of familiar materials that are widely known but subjected to particular stresses. This converter is currently in a computational model and partial-scale design phase. Finally, the BioWAVE device moves in the same way as a seaweed when a wave passes over it. This swaying motion is converted into electricity through the use of a specially developed generator called the O-DriveTM, which is found at the point of articulation on the bottom of the device. If a storm approaches, the device protects itself by lying down on the seabed. Its environmental impact is thought to be minimal, as it has no high-speed parts which could harm marine life, and the anchoring system will also have only a slight impact. The maintenance difficulty of this device will be intermediate, as this type of apparatus has all its important mechanisms underwater. The manufacture of this converter is of intermediate complexity. Although the mast is easy to manufacture, the production of the energy absorption point is fraught with more difficulties.

Wave Farms for Coastal Protection
Introduction
A very significant proportion of the world’s population (44 %) lives within 150 km of the coast, and eight of the ten largest cities are located on the coast (Oceans 2011), transforming the littoral into an extended hub for socioeconomic activity. This


creates a substantial pressure on the coastal environment, with demands of use arising from fishing and aquaculture to residential, transport, industrial, or recreational purposes. This pressure is exerted on the single most sensitive, dynamic environment on the planet – the littoral. The resulting complex picture is further compounded by climate change, which can affect the coast chiefly through two main effects: sea-level rise and increased storminess. In view of the large population of coastal areas throughout the world, it follows that climate change is likely to have one of its main impacts – in terms of socioeconomic consequences – through its effects on the coast. For example, in the case of the UK (Fig. 1), a significant portion of the coastline (17 %) is threatened by erosion; in the case of England and Wales, erosion rates exceed 10 cm per year along approximately 28 % of the coastline (~1040 km) (EUROSION 2004). These figures are set to increase as a result of sea-level rise and increased storminess due to climate change, e.g., Pugh (2004), Chini et al. (2010), and Wadey et al. (2014) (Fig. 11). It is hard to overstate the economic and environmental consequences of coastal erosion and flooding: loss or damage to property and infrastructure, disruption to the transport chain, losses through decreased revenues in the tourist and recreational sectors, etc. – hence, the importance of coastal defense. Almost half of the coastline (44 %) of England and Wales is protected by structures or artificial beaches (Huthnance and Mieszkowska 2010), and substantial investment (over £3.2 billion from April 2010 to March 2015) is undertaken by the Department for Environment, Food & Rural Affairs (DEFRA) in flood and coastal erosion risk management (DEFRA 2015). The conventional approach to defending the coast against flooding and erosion involves coastal structures – this is the so-called “hard engineering” approach. The downsides of this approach are well known, not least its visual impact (armored coastlines) and, in the context of transition coasts, the inability of structures to adapt

Fig. 11 Coastal infrastructure and property at risk


Fig. 12 Consequences of increased storminess in Soulac-sur-Mer (France) after winter 2013/2014

to sea-level rise. Indeed, there have recently been many cases of coastal structures failing to cope with the increased pressures of climate change (Castelle et al. 2015; Kendon and McCarthy 2015; Senechal et al. 2015; Sibley et al. 2015; Slingo et al. 2014; Spencer et al. 2015). A case in point is the failure of the Dawlish seawall (Devon, England) in the winter of 2013/2014 – the stormiest period of the past 60 years, compounded by the highest sea level of the past 100 years (Haigh et al. 2015) – which led to the destruction, under massive overtopping, of a critical section of railway and, consequently, the disruption of the rail link between SW England and the rest of the country for 3 months (Fig. 1). Other examples include the failure of the La Coruna seawall (Spain), the collapse of the Aberystwyth seawall (Wales), and the dramatic erosion affecting many beaches throughout Europe, from Spain (e.g., Aviles, Barreiros) to France (e.g., Truc Vert, Biscarrosse) to the UK (e.g., Chesil, Perranporth, Isle of Man), which in some cases resulted in the demolition of buildings after erosion undermined their foundations, as in Soulac-sur-Mer (France – Fig. 12). These examples of failures of coastal structures – due to either structural collapse or excessive overtopping – expose the dramatic consequences of the inadequacy of many of the existing structures in the current transition scenario. The conventional approach to solving this problem entails upgrading the existing structures or building new ones, in both cases at a large cost. The downsides of this approach are related to the difficulty of fixed coastal structures in dealing with the transition conditions, notably sea-level rise, and more generally to the environmental, and particularly visual, impact of these structures on the littoral. The advent of marine renewable energy, particularly wave energy, posits wave farms as an alternative form of coastal defense that presents numerous advantages, not least in the current transition environment. Wave farms, or arrays of wave energy converters (WECs), extract energy from the waves. Recent research has proven that


nearshore wave farms lead not only to milder wave climates in their lee (Carballo and Iglesias 2013; Iglesias and Carballo 2014; Millar et al. 2007; Palha et al. 2010; Reeve et al. 2011; Rusu and Guedes Soares 2013; Vidal et al. 2007) but also, importantly, to reduced beach erosion under storm and post-storm conditions (Abanades et al. 2014a, b, 2015a, b; Mendoza et al. 2014; Zanuttigh and Angelelli 2013), which amounts to effective coastal protection – much as that provided by a conventional coastal structure. However, nearshore wave farms present three main advantages relative to coastal structures. First, by providing renewable, carbon-free energy, wave farms contribute to decarbonizing the energy supply and thereby combating the man-made causes of climate change. Second, the environmental impact of wave farms on the littoral – the single most sensitive environment on the planet – is considerably lower than that of coastal structures. Last, but not least, wave farms consisting of floating WECs (e.g., WaveCat, Wave Dragon, DEXA) adapt naturally to sea-level rise and therefore can cope well with the main impact of climate change on the littoral. Thus, rather than resorting to the conventional approach (more structures) to fix obsolete, underperforming structures, a new alternative that warrants consideration is to deploy wave farms whose main purpose is to generate carbon-free energy and which, in synergy with it, defend the coastline against erosion and flooding. Incidentally, their application to coastal defense would enhance their economic viability through the savings achieved in conventional defense schemes. This alternative to conventional coastal defense schemes (based on structures such as groynes, detached breakwaters, etc.) is in fact a new paradigm for mitigating climate change and confronting two global challenges: the environmental repercussions of the current energy model and the risks to properties and infrastructure posed by coastal erosion. As explained, these two challenges are connected, for climate change is set to exacerbate coastal erosion through sea-level rise and increased storminess. The effects of wave farms on coastal processes and, in particular, their effectiveness for coastal protection are investigated through a case study: Perranporth, a beach in Cornwall (SW England) that has experienced significant erosion in recent years, not least during the harsh winter of 2013/2014. Indeed, the Shoreline Management Plan has identified the area as subject to significant erosion risk, and a number of options are being assessed to confront this challenge (CISCAG 2011). The case study is analyzed through a suite of state-of-the-art process-based models, combined for the first time for this purpose – a third-generation spectral wave model and a coastal processes model. The evolution of the beach with and without the nearshore wave farm is studied in different scenarios, involving wave farms deployed at different locations and distances from the shoreline, plus the baseline scenario (without the farm). The response of the coastal system is analyzed at different time scales, from the short (days) to the medium (months) term, with the long-term study as future work. A set of ad hoc indicators is defined to quantify the effects of wave energy absorption by the farm on coastal processes and, in consequence, the degree of protection afforded by the wave farm in the different scenarios.

Fig. 13 Location of Perranporth Beach and the Wave Hub in SW England (left; water depths in m) and aerial photo of the beach (right; courtesy of Coastal Channel Observatory)

Case Study: Perranporth Beach (UK)

The effectiveness of wave farms in mitigating dune erosion on the beach was analyzed by means of a case study: Perranporth Beach (Fig. 13). The selection of this case study is motivated by two reasons: (i) the erosion experienced by the beach over recent years, particularly under the cluster of heavy storms (Fig. 14) of February 2014 (CISCAG 2011), and (ii) the interest of the area for wave energy development, as shown, e.g., by the nearby Wave Hub – a grid-connected offshore facility for WEC testing (Gonzalez-Santamaria et al. 2013; Reeve et al. 2011). For this work wave buoy data were used alongside hindcast (numerical modeling) data. Half-hourly data were obtained from the directional wave buoy off Perranporth, in approximately 10 m of water, operated by the Coastal Channel Observatory. The analysis of these data reflects the exposure of the area to heavy swells generated by the long Atlantic fetch, as well as to locally generated wind seas. The average significant wave height (Hs), peak period (Tp), and peak direction (θp) in the period covered by the wave buoy data (2006–2012) were 1.79 m, 10.36 s, and 280°, respectively. Hindcast data were obtained from WaveWatch III, a third-generation offshore wave model that is run on global and regional (nested) grids, the latter with a resolution of 0.5° (Tolman 2002). These values were prescribed at the outer boundaries of the wave model.


Fig. 14 Damage at Perranporth Beach after the storms of winter 2013/14. Courtesy of West Briton

In addition to these large-scale hindcast wave data, three-hourly wind data obtained from the Global Forecast System (GFS) weather model were used to drive the wave model. The mean wind speed at a height of 10 m above the sea surface was u10 = 9.5 m s−1 during the study period; the strongest winds, from the NW, had u10 values over 20 m s−1. SW England is subjected to a semidiurnal tidal regime and a large tidal range (macrotidal), with a mean spring value of 6.3 m at Perranporth. The tide was accounted for in the modeling, with constituents obtained from the TPXO 7.2 global database (Egbert et al. 1994). A 4 km long beach with a nearly flat intertidal area (tan β = 0.015–0.025), Perranporth (Austin et al. 2010; Masselink et al. 2005) has medium sand (D50 = 0.27–0.29 mm). The bathymetry (Fig. 15), kindly provided by the Coastal Channel Observatory, showed elevation values between 20 and 25 m relative to the local chart datum (LCD). A submarine bar is present between the 5 and 10 m water depth contours and has a bearing on the behavior of the beach; under energetic waves and increased offshore sediment transport, it grows at the expense of the intertidal beach face. Overall, profile changes at Perranporth affect primarily the lower intertidal and subtidal active regions (Scott et al. 2011). In addition to the submarine bar, Perranporth is characterized by a well-developed dune system.

Fig. 15 Computer-modeled bathymetry at Perranporth Beach, including Profiles P1, P2, and P3 (water depth in m)

Suite of Process-Based Numerical Models

Wave Propagation Model

Wave propagation is calculated with a third-generation numerical model, SWAN (Simulating WAves Nearshore), described in section "Wave models." This model has been successfully applied in a number of works (Abanades et al. 2014b; Carballo and Iglesias 2013; Iglesias and Carballo 2014; Millar et al. 2007; Palha et al. 2010; Smith et al. 2012) to model wave farm effects on nearshore wave conditions. Two computational grids (Fig. 16) were used to obtain high-resolution results in the area of interest without compromising computational efficiency: (i) a large-scale grid covering approx. 100 × 50 km with a resolution of 400 × 200 m and (ii) a small-scale (nested) grid focused on Perranporth Beach, covering an area of approx. 15 × 15 km with a resolution of 20 × 20 m. Thanks to the fine resolution of the nested grid, the individual WECs in the farm could be demarcated and their individual wakes modeled with accuracy – a prerequisite to establishing the wave farm effects on the beach profile (Carballo and Iglesias 2013).


Fig. 16 Boundaries of the three computational grids used by the wave propagation and coastal processes models (SWAN and XBeach, respectively): offshore grid (SWAN, resolution 400 × 200 m), nearshore grid (SWAN, resolution 16 × 12 m), and coastal grid (XBeach, resolution 6 × 12 m) (water depths in m)

The offshore and nearshore bathymetric information – obtained from the UK data center Digimap and the Coastal Channel Observatory, respectively – was interpolated onto these grids (Fig. 17). Based on the review of WEC technologies (section "WEC Technologies"), WaveCat, a floating overtopping WEC for offshore deployment, was selected. The wave farm considered consisted of 11 WaveCat WECs arranged in two rows (Fig. 17) – the same layout as in Carballo and Iglesias (2013), with a distance between devices of 90 m (equal to the distance between the twin bows of a single WaveCat WEC). Wave–WEC interaction was characterized on the basis of ad hoc laboratory tests (Fig. 18) reported by Fernandez et al. (2012).

Coastal Processes Model

XBeach is a two-dimensional model for wave propagation, long waves and mean flow, sediment transport, and morphological changes of the nearshore area, beaches, dunes, and back barrier during storms. It solves concurrently the time-dependent short-wave action balance, the roller energy equations, the nonlinear shallow-water equations of mass and momentum, sediment transport formulations, and bed updating on the scale of wave groups. A full description of XBeach can be found in Roelvink et al. (2006) or Roelvink et al. (2009). The input conditions for XBeach, the coastal processes model, were obtained from the output of the SWAN wave propagation model. Sediment transport is modeled with the following depth-averaged advection-diffusion equation (Galappatti and Vreugdenhil 1985):


Fig. 17 The array of WECs, or wave farm, off Perranporth (water depths in m)

$$\frac{\partial (hC)}{\partial t} + \frac{\partial (hCu^{E})}{\partial x} + \frac{\partial}{\partial x}\left[D_{s}\,h\,\frac{\partial C}{\partial x}\right] + \frac{\partial (hCv^{E})}{\partial y} + \frac{\partial}{\partial y}\left[D_{s}\,h\,\frac{\partial C}{\partial y}\right] = \frac{hC_{eq} - hC}{T_{s}} \qquad (14)$$

where C represents the depth-averaged sediment concentration, which varies on the wave-group time scale, Ds is the sediment diffusion coefficient, uE and vE are the Eulerian flow velocities, Ts is the sediment concentration adaptation time scale, which depends on the local water depth and the sediment fall velocity, and Ceq is the equilibrium concentration, which constitutes the source term in the sediment transport equation. The sediment transport formula defined by Van Thiel de Vries (2009) was used to determine the sediment equilibrium concentration. For this case study, the model was applied in one and two horizontal dimensions, 1DH (x, z) and 2DH (x, y, z), respectively. In the 1DH case the evolution of two profiles (Profiles P1 and P2 in Fig. 15) was investigated. In the 2DH case the entire Perranporth Beach was studied, with a computational grid that extended 1250 m across shore and 3600 m alongshore with a resolution of 6.25 and 18 m, respectively. In both cases the model was driven by spectral parameters obtained from the nearshore wave propagation model – the root-mean-square wave height, Hrms; the mean absolute wave period, Tm01; the mean wave direction, θm; and the directional spreading coefficient, s – which were used to construct time series of wave amplitudes, including wave groups, of relevance to beach behavior under erosive conditions (Baldock et al. 2011).

Fig. 18 Physical model tests of WaveCat (Carballo and Iglesias 2013)
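To illustrate the structure of the sediment balance in Eq. 14, the following is a minimal one-dimensional (1DH) finite-difference sketch in Python of an advection–diffusion–relaxation balance of the same form. It is not the XBeach implementation; all fields and coefficient values (h, uE, Ceq, Ds, Ts) are placeholders, periodic boundaries are used for brevity, and the diffusion term is written with the usual smoothing sign convention.

import numpy as np

def step_sediment_1d(C, h, uE, Ceq, Ds, Ts, dx, dt):
    """Advance the depth-averaged sediment concentration C by one explicit time step."""
    hC = h * C
    flux = hC * uE                                   # advective flux h*C*uE
    dflux = (flux - np.roll(flux, 1)) / dx           # first-order upwind (uE > 0 assumed)
    grad = (np.roll(C, -1) - C) / dx                 # forward difference of C
    dif_flux = Ds * h * grad                         # diffusive flux Ds*h*dC/dx
    diff = (dif_flux - np.roll(dif_flux, 1)) / dx    # divergence of diffusive flux
    source = (h * Ceq - hC) / Ts                     # relaxation toward equilibrium concentration
    return (hC + dt * (-dflux + diff + source)) / h

# Illustrative call with uniform placeholder values
x = np.linspace(0.0, 1000.0, 201)
dx, dt = x[1] - x[0], 0.5
h = np.full_like(x, 5.0)            # water depth (m)
uE = np.full_like(x, 0.3)           # Eulerian velocity (m/s)
C = np.zeros_like(x)                # initial depth-averaged concentration
Ceq = np.full_like(x, 1e-3)         # equilibrium concentration
C = step_sediment_1d(C, h, uE, Ceq, Ds=1.0, Ts=20.0, dx=dx, dt=dt)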

Impact Factors

A set of recently developed ad hoc impact indicators was used to analyze the impacts of the wave farm on the beach morphodynamics. Three impact indicators developed by Abanades et al. (2014a) were applied to the results of the coastal processes model (XBeach): (i) bed level impact (BLI), (ii) beach face eroded area (FEA), and (iii) nondimensional erosion reduction (NER). The bed level impact (BLI), with SI units of m, represents the change in bed level caused by the wave farm, calculated as

$$\mathrm{BLI}(x, y) = \zeta_{f}(x, y) - \zeta_{b}(x, y) \qquad (16)$$

where ζf(x, y) and ζb(x, y) are the seabed level in the presence and absence of the farm, respectively, at a generic point of the beach. The y-axis is aligned with the general coastline orientation, and the x-axis is positive away from the sea. BLI > 0 or BLI < 0 signify that the wave farm leads to a higher or lower seabed level relative to the baseline (no farm) scenario, respectively. The beach face eroded area (FEA), with SI units of m², is a profile function that quantifies the storm-induced erosion in the beach face. Unlike the preceding parameter, which compares the farm and baseline (no farm) scenarios, the FEA index is defined separately for both scenarios, baseline (FEAb) and wave farm (FEAf):

$$\mathrm{FEA}_{b}(y) = \int_{x_{1}}^{x_{max}} \left[\zeta_{0}(x, y) - \zeta_{b}(x, y)\right] \mathrm{d}x, \qquad (17)$$

$$\mathrm{FEA}_{f}(y) = \int_{x_{1}}^{x_{max}} \left[\zeta_{0}(x, y) - \zeta_{f}(x, y)\right] \mathrm{d}x, \qquad (18)$$

where ζ0(x, y) is the initial bed level at the point of coordinates (x, y), and x1 and xmax are the values of the x-coordinate at the seaward end of the beach face and the landward end of the profile, respectively. A second profile function is the nondimensional erosion reduction (NER), given by

$$\mathrm{NER}(y) = 1 - (x_{max} - x_{1})^{-1} \int_{x_{1}}^{x_{max}} \left[\zeta_{0}(x, y) - \zeta_{f}(x, y)\right] \left[\zeta_{0}(x, y) - \zeta_{b}(x, y)\right]^{-1} \mathrm{d}x, \qquad (19)$$

which quantifies the change in the eroded area of a generic profile (y) caused by the wave farm as a fraction of the total eroded area of the same profile. NER > 0 and NER < 0 signify a reduction or an increase in the eroded area, respectively.
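As a minimal sketch of how these indicators (Eqs. 16, 17, 18, and 19) can be evaluated numerically for a single profile, the following Python fragment integrates the bed levels with the trapezoidal rule. The bed-level arrays, the limits x1 and xmax, and the synthetic profiles in the example are placeholders; in the study they come from the XBeach output.

import numpy as np

def impact_indicators(x, zeta0, zeta_b, zeta_f, x1, xmax):
    """Return BLI(x), FEA_b, FEA_f and NER for one beach profile."""
    bli = zeta_f - zeta_b                                    # Eq. 16, pointwise bed level impact (m)
    mask = (x >= x1) & (x <= xmax)                           # beach face section
    fea_b = np.trapz(zeta0[mask] - zeta_b[mask], x[mask])    # Eq. 17 (m^2)
    fea_f = np.trapz(zeta0[mask] - zeta_f[mask], x[mask])    # Eq. 18 (m^2)
    # Eq. 19: assumes the no-farm scenario erodes everywhere on the face (denominator > 0)
    ratio = (zeta0[mask] - zeta_f[mask]) / (zeta0[mask] - zeta_b[mask])
    ner = 1.0 - np.trapz(ratio, x[mask]) / (xmax - x1)
    return bli, fea_b, fea_f, ner

# Illustrative call with synthetic beach-face profiles
x = np.linspace(1150.0, 1300.0, 151)
zeta0 = 0.05 * (x - 1150.0)                                     # initial beach face
zeta_b = zeta0 - 0.8 * np.exp(-((x - 1240.0) / 20.0) ** 2)      # eroded, no farm
zeta_f = zeta0 - 0.5 * np.exp(-((x - 1240.0) / 20.0) ** 2)      # eroded, with farm
bli, fea_b, fea_f, ner = impact_indicators(x, zeta0, zeta_b, zeta_f, x1=1180.0, xmax=1290.0)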

Medium-Term Impacts

The wave propagation model was validated against the wave buoy data from November 2007 to October 2008. Excellent agreement between model results and wave buoy observations was achieved (Fig. 19), with a coefficient of correlation (R) of 0.97 and a root-mean-square error of 0.38 m.
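For reference, these two validation statistics can be computed as in the short sketch below, assuming the modeled and observed significant wave heights are available as aligned arrays; the names hs_swan and hs_buoy and the sample values are illustrative only.

import numpy as np

def validation_stats(hs_swan, hs_buoy):
    r = np.corrcoef(hs_swan, hs_buoy)[0, 1]            # coefficient of correlation R
    rmse = np.sqrt(np.mean((hs_swan - hs_buoy) ** 2))  # root-mean-square error (m)
    return r, rmse

hs_buoy = np.array([1.2, 2.5, 3.8, 5.1, 2.0])   # illustrative values only
hs_swan = np.array([1.1, 2.7, 3.6, 5.4, 2.1])
r, rmse = validation_stats(hs_swan, hs_buoy)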

Fig. 19 Time series of modeled (Hs,SWAN) vs. observed (Hs,buoy) significant wave height


Fig. 20 Significant wave height in the no farm or baseline scenario (Hs, left) and in the farm scenario (Hsf, right) at the storm peak on 10 Mar 2008, 18:00 UTC (deepwater wave conditions: Hs0 = 10.01 m, Tp = 15.12 s, θp = 296.38°). Profiles P1 and P2 are delineated for reference

Upon validation, the numerical model was applied to compare the wave patterns with and without the wave farm, and to establish the wave conditions that were to be the input to the coastal processes model. The effects of the wave farm on the nearshore wave patterns are clear, for instance, in Fig. 20, corresponding to a storm peak. The significant wave height goes down by over 30 % in the direct wakes of the WECs – a reduction that is less marked on the beach itself, as a result of wave energy diffracted from the sides of the farm into its lee. This reduction of wave height is particularly apparent in the northern section of the beach owing to the deepwater wave direction (approx. WNW). As explained, wave power on the beach was reduced by the presence of the farm. To quantify the effects of this reduction, the profiles P1 and P2 were followed from November 2007 to April 2008. The coastal processes model was forced with the spectra generated by the wave propagation model run with and without the wave farm. In the presence of the wave farm (Fig. 21), the evolution of the profiles from the initial conditions to 3 months into the simulation is characterized by erosion concentrated on the beach face – the section exposed to wave uprush – with the eroded sediment depositing on lower sections. In Fig. 22 the comparison between the farm and no farm scenarios is presented for profile P2. The energy extracted by the farm leads to a substantial reduction (approx. 3 m) in the erosion of the dune, which displaces the landward extreme of the eroded area more than 10 m toward the sea. This displacement can be of particular relevance in cases such as Soulac-sur-Mer (Fig. 12), where some buildings were at risk due to storm-induced erosion at the toe of the foundations. Had a


Fig. 21 Bed level at Profiles P1 and P2: initial (1 Nov 2007, 0000 UTC) and after 3 months, in the presence of the wave farm (22 Jan 2008, 15:47 UTC)

Fig. 22 Beach face level at Profile P2: initial (1 Nov 2007, 0000 UTC) and after 3 months with and without the wave farm (22 Jan 2008, 15:47 UTC)


Fig. 23 BLI values along Profiles P1 and P2 at different points in time: 1 month (M1), 3 months (M3), and 6 months (M6) into the study period

Table 1 FEA and NER factors for Profiles P1 and P2 at different points in time: 1 month (M1), 3 months (M3), and 6 months (M6) after the beginning of the study period

Profiles      M1: FEAb (m2)  FEAf (m2)  NER (%)    M3: FEAb (m2)  FEAf (m2)  NER (%)    M6: FEAb (m2)  FEAf (m2)  NER (%)
Profile P1    20.53          14.11      31.27      16.30          10.42      36.07      23.85          18.66      21.76
Profile P2    15.69          12.91      17.72      21.31          16.85      20.93      25.53          21.42      16.10

wave farm been deployed off that section of the coast, the demolition of many a building would have been avoided. The impact of the wave farm on the beach profile was analyzed through the parameters defined in section "Suite of Process-Based Numerical Models." The BLI parameter along Profiles P1 and P2 (Fig. 23) was analyzed at three different points in time: 1 month (M1), 3 months (M3), and 6 months (M6) into the study period. A significant reduction of the erosion on the beach face and the submarine bar (x ~ 600 m) is apparent. Given that the bar is an essential element of the beach response to storms, its reinforcement thanks to the wave farm clearly increases the resilience of the system. This effect strengthens over time, as BLI values increase markedly over the submarine bar.


The BLI values for both profiles were similarly non-negligible on the beach face, indicating that the wave farm reduces the erosion in this section of the profile as well. To quantify these effects, the FEA and NER indicators were computed (Table 1). Comparing the two profiles, the effectiveness of the wave farm is more apparent in the northern section of the beach (Profile P1). The comparison between the FEAb and FEAf values and the nondimensional values of NER reflects the effectiveness of the wave farm in reducing storm-induced erosion and in maintaining this reduction over the medium term (M6 values, after 6 months). The relatively smaller reduction in proportional terms (NER) after 6 months is due to the fact that the later part of this period (March and April) is not particularly stormy – with fewer storms and less erosion, the reduction in storm-induced erosion is quantitatively smaller. Nevertheless, NER values after 6 months are still significant. The results in Table 1 showed a significant reduction of the erosion along Profiles P1 and P2, which may indicate some degree of coastal protection owing to the presence of the wave farm nearshore. However, these results must be corroborated by means of a 3D study of the beach – and this is the main aim of the following section.

Short-Term Impacts

Having studied the medium-term effects of the wave farm on the beach profile, this section focuses on the response of the system in the short term during the storm, from 5 December 2007 0000 UTC to 10 December 2007 0600 UTC. The mean values of significant wave height, peak period, and peak direction were Hs = 4.2 m, Tp = 12.1 s, and θp = 295°, respectively. Based on the results of the wave propagation model, the coastal processes model was applied to determine how the modification of the nearshore wave conditions affected the coastal processes and, consequently, the beach morphology. As in the previous section, the impact was quantified by means of the indicators defined in section "Suite of Process-Based Numerical Models." The reduction of the erosion was particularly apparent on the dune, with BLI values of over 4 m (Fig. 24) – the outcome of wave energy extraction by the wave farm. Similarly, the submarine bar experienced reduced erosion, particularly in water depths between 5 and 10 m and in the middle area of the beach, where the BLI parameter reached 0.5 m. The material that was eroded from the dune deposited primarily on lower sections of the profile, between the bar and the dune, leading to negative values of BLI, of the order of –0.5 m. The beach face eroded area (FEA) values confirmed the effect of the wave farm in reducing erosion (Fig. 25). The most drastic erosion occurred in the southernmost end of the beach (y ~ 0 m) and in its northern section, the reason being that the former is not backed by the dune system and the latter is exposed to larger waves. As regards the mid-south area of the beach, the NER factor presented a large variability in the section 500 m < y

473 K). The sorbent treated with ammonia at 673 K showed an adsorption capacity of 1.73 mmol/g. This improvement was ascribed to the introduction of nitrogen-containing groups into the carbon structure. Pevida and his group (Pevida et al. 2008) concluded that the CO2 adsorption capacity is not directly related to the total nitrogen content of sorbents but rather to specific nitrogen functionalities that are responsible for increasing the CO2–adsorbent affinity. Alesi et al. (2010) studied the CO2 adsorption and regeneration conditions of tertiary amidine derivatives supported on AC from 302 to 323 K. It was found that CO2 adsorption on the amidine-modified AC only occurred in the presence of moisture. Adsorption of water vapor on the hydrophilic AC support limits the CO2 capture capacity. Maroto-Valer et al. (2005; Tang et al. 2004; Zhang et al. 2004; Zhong et al. 2004) found that anthracite coal with 2 h of activation at 1163 K achieved a CO2 adsorption capacity of 1.49 mmol/g (Maroto-Valer et al. 2005). The feasibility of a high-surface-area sorbent from low-cost anthracites was also investigated (Zhong et al. 2004; Maroto-Valer et al. 2005). The adsorption capacity of a polyethylenimine (PEI)-impregnated deashed anthracite sorbent was 2.13 mmol/g at 348 K (Zhong et al. 2004). In another study, they observed a decrease in the adsorption capacity of activated anthracites impregnated with PEI with increasing adsorption temperature (Maroto-Valer et al. 2005).

Ammonia-treated AC Ammonia-treated char

2.79 1.98

0.831 0.28

1190 653

1617 954

0.48

SWNT AC Ammonia-treated AC

2

540

1300

Anthracite-based AC

0.6–0.8

Temperature T (K) 298

298 298

308 298 309.15

303

298 298

1.5–2.2

BET surface area (m2/g)

AC AC

Pore volume (cm3/g)

298 298

Pore size (nm)

AC AC

Sorbent AC

Table 5 CO2 adsorption capacity of carbonaceous solid sorbents

1 1

1 0.1 1

1

1 1

1 0.2

Pressure p (atm) 1

1.91 2.20

2.07 0.57 1.73

1.49

3.23 2.61

2.45 0.75

Capacity (mmol/g) 2.07

(continued)

References (Kikkinides et al. 1993) (Chue et al. 1995) (Do and Wang 1998) (Na et al. 2001) (Siriwardane et al. 2001) (Maroto-Valer et al. 2005) (Cinke et al. 2003) (Lu et al. 2008) (Przepiorski et al. 2004) (Pevida et al. 2008) (Plaza et al. 2009)


APTES-grafted CNTs APTS-grafted CNTs TEPA-impregnated CNTs TEPA-impregnated CNTs

Sorbent Amine-enriched fly ash PEI-impregnated fly ash APTS-modified CNT PEI-functionalized SWCNTs Graphene CMS

Table 5 (continued)

0.91 0.056

0.108

25.47

0.603

0.70

8.9 19.04

Pore volume (cm3/g)

Pore size (nm)

17

198 8.9

1725

BET surface area (m2/g)

0.15/0.5 1 1 1

293 300 195 303

343

0.1

0.15 0.1 0.02

1

348

293 298 313

Pressure p (atm) 0.1

Temperature T (K) 298

1.32 0.93 3.56 3.87(humid) 3.09

0.8 2.43

0.98/2.59 2.1

1.02

Capacity (mmol/g) 2.05

(Liu et al. 2014a)

(Ghosh et al. 2008) (Burchell et al. 1997) (Hsu et al. 2010) (Lu et al. 2008) (Ye et al. 2012)

(Arenillas et al. 2005) (Su et al. 2009) (Dillon et al. 2008)

References (Gray et al. 2004)


Fig. 2 Isotherm for adsorption of CO2 on activated carbon at 298, 323, and 373 K (Do and Wang 1998)

Carbon-enriched fly ash concentrates treated with various amines were developed by Gray et al. (2004), Maroto-Valer et al. (2008; Zhang et al. 2004), and Arenillas et al. (2005). A typical comparison of the CO2 adsorption capacities of activated fly ash carbon and its alkanolamine-modified counterparts at various temperatures was reported by Maroto-Valer and others (Maroto-Valer et al. 2008). It was found that activation by steam before impregnation could successfully increase the pore volume and surface area, consequently resulting in an increase of the CO2 capture capacity (Zhang et al. 2004; Maroto-Valer et al. 2008). Impregnation with PEI could significantly enhance the adsorption capacity of this class of sorbents, up to 2.13 mmol/g at 348 K, which is much higher than that without impregnation (0.22 mmol/g at 348 K) (Zhang et al. 2004). Arenillas et al. (2005) achieved a CO2 adsorption capacity of 1.02 mmol/g at 348 K using activated fly ash-derived sorbents impregnated with PEI (Zhang et al. 2004; Arenillas et al. 2005). Fly ash impregnated with PEI and its blend with poly(ethylene glycol) (PEG) was also investigated. The addition of PEG to the PEI-loaded sorbents improves the CO2 adsorption capacity and kinetics, which could be attributed to the bicarbonate formation reaction in the presence of PEG (which attracts more water). CNTs can be suitable candidates for CO2 capture given an appropriate pore size and optimized conditions (Huang et al. 2007; Razavi et al. 2011). Considerable experimental research and theoretical modeling efforts are being devoted to investigating the adsorption of CO2 on CNTs. Cinke et al. (2003) reported CO2 adsorption on purified single-walled carbon nanotubes (SWCNTs) in the temperature range from 273 to 473 K (see Fig. 3). The CO2 adsorption capacity of SWCNTs was twice that of AC. Lu et al. (2008) reported CO2 capture by CNTs and their modification. After functionalization, CNTs showed a significant enhancement in


Fig. 3 Comparison of CO2 adsorption capacities of high-pressure CO conversion (HiPco) single-walled nanotubes (SWNTs) and activated carbon (AC) at 308 K (Cinke et al. 2003)

CO2 adsorption capacity. CNTs modified by APTES were also tested for their CO2 adsorption potential at various temperatures by Su et al. (2009). The CO2 adsorption capacities of CNTs and CNTs (APTS) increased with water content and decreased with temperature, indicating the exothermic nature of the adsorption process. The CO2 adsorption capacity of CNT (APTS) was 2.59 mmol/g at 293 K. The potential application of CNTs as the support for amine-impregnated sorbents has been studied by Fifield et al. (2004). In order to increase the affinity of the carbon structure, pyrene methyl picolinimide (PMP) was introduced as an anchor. Dillon et al. (2008) synthesized and characterized PEI-functionalized SWNTs. A maximum adsorption capacity of 2.1 mmol/g was reported for PEI (25000)-SWNT at 300 K. Hsu et al. (2010) proposed that a combination of thermal and vacuum desorption of CNT (APTES) at 393 K could reduce the regeneration time. The adsorption capacities and physicochemical properties were preserved after 20 cycles of adsorption/regeneration. Industrial-grade multi-walled carbon nanotubes (IG-MWCNTs) impregnated with tetraethylenepentamine (TEPA) were systematically investigated for CO2 capture by Liu et al. (2014a, b). TEPA-impregnated IG-MWCNTs were shown to have a high CO2 adsorption capacity, comparable to that of TEPA-impregnated P-MWCNTs (Ye et al. 2012). The adsorption capacity of IG-MWCNT-based adsorbents was in the range of 2.145–3.088 mmol/g, depending on the adsorption temperature. The isosteric heat of adsorption of CO2 decreased with increasing CO2 loading (Fig. 4). The high heat of adsorption in the lower loading region was due to the reaction of CO2 with the active sites of TEPA. CO2 adsorption on IG-MWCNTs-50 was thus partly physical and partly chemical. The adsorption/desorption kinetics of CO2 on TEPA-impregnated


Fig. 4 Isosteric heats of adsorption of CO2 for IG-MWCNTs-50 in a binary mixture of CO2 and N2 (Liu et al. 2014a)

IG-MWCNTs was investigated to obtain insight into the underlying mechanisms in the fixed bed. Avrami's fractional-order kinetic model provided the best fit for the adsorption behavior of CO2. In order to find the optimal regeneration method, three desorption methods were evaluated for the regeneration of the solid sorbents. The activation energy Ea of CO2 adsorption/desorption was calculated to evaluate the performance of the adsorbent. The effect of gas contaminants on the CO2 adsorption behavior of the adsorbents was also studied: H2O and NO had a minimal impact on the CO2 adsorption capacity, while the effect of SO2 on CO2 adsorption was influenced by the adsorption temperature and the SO2 concentration. Skoulidas et al. (2006) carried out simulations to analyze the adsorption and transport diffusion of CO2 and N2 in SWCNTs at room temperature. They reported that transport diffusivities for CO2 in nanotubes with diameters ranging from 1 to 5 nm are approximately independent of pressure; the observed diffusion mechanism is not Knudsen-like. Based on Monte Carlo simulations, Huang et al. (2007) showed that CO2 adsorption in the range of 4–9 mmol/g is an increasing function of the diameter of the CNTs. Additionally, CNTs demonstrated a higher selectivity toward CO2 than other sorbents, such as ACs, zeolite 13X, and MOFs. Razavi et al. (2011) also concluded that CNTs exhibit a higher selectivity of CO2 over N2, compared to other carbon-based materials, for the separation of CO2/N2 mixtures. In summary, the impregnation of carbonaceous materials with amines is found to be effective in enhancing the CO2 adsorption capacity. However, the impregnation also causes a significant decrease in surface area and pore volume. Although the exact mechanism of these changes is still not well understood, it is believed that the size and molecular structure of the amines play an important role.
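As a brief illustration of the Avrami fractional-order kinetic model mentioned above – commonly written q(t) = qe[1 − exp(−(kA·t)^nA)] – the following Python sketch fits it to an uptake curve with SciPy. The data are synthetic placeholders, not the measurements of Liu et al. (2014a), and the parameter names are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def avrami(t, qe, kA, nA):
    """Avrami fractional-order uptake model."""
    return qe * (1.0 - np.exp(-(kA * t) ** nA))

t = np.linspace(0.0, 60.0, 61)                                  # time (min)
q_obs = avrami(t, 2.8, 0.15, 1.3)                               # synthetic "measured" uptake (mmol/g)
q_obs += np.random.default_rng(0).normal(0.0, 0.02, t.size)     # add measurement noise

popt, pcov = curve_fit(avrami, t, q_obs, p0=[3.0, 0.1, 1.0])
qe_fit, kA_fit, nA_fit = popt    # fitted capacity, rate constant, and Avrami exponent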


Polymer-Based Sorbents

Polymeric amine sorbents have been used for years to capture CO2 in closed environments, such as aircraft, submarines, and space shuttles, under concentrations of CO2

A comparative study of adsorbents covering a wide range of Si/Al ratios (up to >1000) was conducted by Harlick and Tezel (2004) for the capture of CO2 from flue gas. The adsorption capacities of the adsorbents increased in the following order (in the pressure range of 0–2 atm): 13X (Si/Al = 2:2) > NaY (Si/Al = 5:1) > H-ZSM-5-30 (Si/Al = 30) > HiSiv 3000 > HY-5 (Si/Al = 5) (Fig. 5). This might be on account of a low Si/Al ratio, with cations (sodium) in the structure that show strong interactions with CO2. Zukal et al. (2010) investigated CO2 adsorption on six high-silica zeolites (SiO2/Al2O3 > 60): TNU-9, IM-5, SSZ-74, ferrierite, ZSM-5, and ZSM-11. TNU-9 and IM-5 were found to have the highest CO2 adsorption capacity, attaining 2.61 and 2.42 mmol/g at a pressure of 100 kPa, respectively.

Sorbent Zeolite 13X NaY H-ZSM-5-30 HiSiv 3000 HY-5 Ferrierite ZSM-5 ZSM-11 TNU-9 IM-5 SSZ-74 NaY Cs-treated NaY Na-ZSM-5 13X MEA-modified 13X 13X MEA-modified 13X TEPA-modified Y-type zeolite TEPA-impregnated beta zeolite 811 483

0.118 0.163 0.131 0.165 0.134 0.123 0.46 0.21

0.34 0.059 0.34 0.059

1.54 1.58

11 11 11 11

680

615.5 9.15 615.5 9.15

BET surface area (m2/g)

Pore volume (cm3/g)

Pore size (nm)

Table 7 CO2 adsorption capacity of zeolites

0.15 0.15

393 333

0.1

1 0.15

303 303

303

1

0.1

Pressure p (atm) 1

473

293

Temperature T (K) 295

Capacity (mmol/g) 4.61 4.06 1.9 1.44 1.13 2.03 2.30 2.17 2.61 2.42 1.92 0.46 0.80 0.75 1.25 0.78 0.18 0.63 2.54 4.27 (water) 2.08

(Fisher et al. 2009)

(Katoh et al. 2000) (Jadhav et al. 2007) (Jadhav et al. 2007) (Su et al. 2010)

(Diaz et al. 2008)

(Zukal et al. 2010)

References (Harlick and Tezel 2004)


Fig. 5 Comparison of CO2 adsorption isotherms for fresh zeolites (13X, NaY, HZSM-5-30, HiSiv 3000, and HY-5) at 295 K (Harlick and Tezel 2004). The filled symbols were obtained by regenerating the fresh adsorbent at 200 °C for 12 h followed by the adsorption study. The open symbols were obtained as a repeat of the 200 °C regeneration for 12 h followed by adsorption without changing the adsorbent sample

The modification of zeolites via the introduction of electropositive and polyvalent cations has been investigated as a route to enhance the CO2 adsorption capacity. Khelifa et al. (2004) concluded that NaX (Si/Al = 1.21) zeolite exchanged with Ni2+ and Cr3+ showed a decrease in CO2 adsorption capacity, compared to that of the parent NaX zeolite, due to a weak CO2–sorbent interaction. Diaz et al. (2008) tested NaX and NaY, along with their Cs-exchanged counterparts (Cs being the most electropositive metal of the periodic table), for the adsorption of CO2. The Cs-treated zeolites performed better and were more active for adsorption at higher temperatures (373 K). Zhang et al. (2008a) prepared chabazite (CHA) zeolites (Si/Al < 2.5) and exchanged them with alkali cations (e.g., Li, Na, and K) and alkaline-earth cations (e.g., Mg, Ca, Ba) to evaluate their potential for CO2 capture from flue gas by VSA below 393 K. From the adsorption isotherms, it was found that the NaX zeolite shows superior performance at relatively low temperatures, while NaCHA and CaCHA hold comparative advantages for high-temperature (>273 K) CO2 separation. According to the study of ion-exchanged ZSM-5 zeolites by Katoh et al. (2000), the selectivity of M-ZSM-5 (M = Li, Na, K, Rb, and Cs) might be attributed to the fact that almost all CO2 molecules adsorb strongly on the cation sites, while N2 interacts with the wall of the H-ZSM-5. There is a detrimental effect of water vapor on CO2 adsorption for zeolites, due to its preferential adsorption from the gas mixture (Brandani and Ruthven 2004; Li et al. 2008c). Trace amounts of water vapor can significantly decrease the CO2 adsorption capacity, because water is competitively adsorbed on the zeolite surface and blocks the access for CO2 (Brandani and Ruthven 2004). A study of combined CO2 and water vapor adsorption on zeolite 13X likewise demonstrated that the adsorption of CO2 is considerably inhibited by H2O (Li et al. 2008c).


Zeolites with large surface areas and pore volumes present a potential option for CO2 adsorption. However, the CO2 adsorption capacity of zeolites decreases significantly as the temperature increases, and it is also very low in the presence of moisture. Therefore, several groups (Table 7) have synthesized aminated zeolites as alternative sorbents. Zeolite 13X was modified with MEA by Jadhav et al. (2007) by an impregnation method. Compared with the unmodified zeolite, a higher capacity at 293 K was obtained with an MEA loading of 50 wt%, despite the reduced pore volume and lower surface area resulting from impregnation. The chemical interaction between CO2 and the amine probably played a significant role in the adsorption of CO2 at 293 K. Similarly, Su et al. (2010) dispersed TEPA into a commercially available Y-type zeolite (Si/Al = 60). They obtained a CO2 adsorption capacity of 4.27 mmol/g at 333 K in the presence of 15 % CO2 and 7 % water vapor in the gas stream. In another study, Fisher et al. (2009) employed β-zeolite as a solid support for TEPA impregnation and compared it with TEPA-impregnated silica and alumina. The TEPA-modified β-zeolite exhibited a CO2 adsorption capacity of up to 2.08 mmol/g at 303 K under a 10 % CO2/90 % argon flow, outperforming the TEPA/SiO2 and TEPA/Al2O3 sorbents. TEPA/β-zeolite maintained its CO2 capture capacity for more than 10 adsorption/regeneration cycles. Their study suggests that the higher capacity of TEPA/β-zeolite can be related to the zeolite's high surface area.

Silica-Based Sorbents

Impregnated Silica-Supported Sorbents

The first amine-impregnated silica used to capture CO2 was reported by Song and others (Xu et al. 2002), who used wet impregnation of hydrothermally synthesized MCM-41 with PEI to create an adsorbent described as a "molecular basket" (Xu et al. 2002, 2003, 2005a, b). In a further study, Xu et al. (2003) reported the highest CO2 adsorption capacity of 3.02 mmol/g with MCM-41-PEI at a PEI loading of 75 wt% under a pure CO2 atmosphere and at 348 K. As expected, increased PEI loadings led to higher adsorption of CO2. Compared with the chemical adsorption at higher PEI loadings, the physical adsorption on the unmodified pore wall of MCM-41 (and the capillary condensation in the mesopores) is negligible. Additionally, a synergetic effect of MCM-41 and PEI for CO2 adsorption was hypothesized (Xu et al. 2003): when the mesopores were loaded with 50 wt% PEI, the highest synergetic adsorption gain was obtained. MCM-41 impregnated with PEI exhibited an increase in adsorption capacity with increasing temperature, in contrast to ACs and zeolites. Xu et al. (2003) assumed that a low adsorption rate caused by kinetic limitations results in the low adsorption capacity at low temperature; the overall process is therefore kinetically controlled. Song and his group also studied the performance and stability of a MCM-41-PEI sorbent used to capture CO2 from simulated flue gas, flue gas from a natural gas-fired boiler, and simulated humid flue gas using a packed-bed


Fig. 6 Schematic representation of the synthesis of PME (Heydari-Gorji and Sayari 2011)

adsorption column (Xu et al. 2005a, b). The adsorbent showed a separation selectivity of 180 for CO2/O2 and >1000 for CO2/N2. The adsorbent was stable at 348 K after ten adsorption/desorption cycles, while it was not stable when the operation temperature was >373 K. The observation that NOx is adsorbed simultaneously with CO2 indicates the need for preremoval of NOx from flue gas (Xu et al. 2005a, b). In addition, the CO2 adsorption capacity was enhanced when the moisture concentration was lower than that of the CO2, which could be due to the formation of bicarbonate ion (reaction 2) during the chemical interaction between PEI and CO2 in the presence of moisture. To enhance the CO2 adsorption capacity of MCM-41-PEI, PEI supported on pore-expanded (PE) MCM-41 was studied by Heydari-Gorji et al. (2011). The adsorption capacity reached as high as 4.68 mmol/g at 348 K for a 55 wt% PEI loading, due to the well-dispersed PEI inside the PE-MCM-41 pores (Heydari-Gorji and Sayari 2011) (Fig. 6). As an alternative strategy to further improve the efficiency of CO2 adsorption, Yue et al. (2008a) impregnated TEPA into as-prepared MCM-41 that had been synthesized with the ionic surfactant cetyltrimethylammonium bromide (CTAB). The as-prepared MCM-41 impregnated with 50 wt% TEPA exhibited a CO2 adsorption capacity of 4.16 mmol/g in 5 % CO2, probably owing to the better amine distribution achieved using ionic surfactant templates (Yue et al. 2008a). The type, amount, and distribution of the surfactant in the pores of MCM-41 all have significant influences on the adsorption process. However, their studies showed that the adsorbent required only 1.5 min to reach the adsorption halftime but 140 min to reach close to the equilibrium adsorption capacity. An SBA-15-supported sorbent loaded with 50 wt% PEI was developed by Ma et al. (2009). A CO2 adsorption capacity of 3.18 mmol/g was obtained at 348 K under a CO2 partial pressure of 15 kPa. This capacity was 50 % higher than that of their previously reported MCM-41-PEI sorbent, probably due to the larger pore diameter and pore volume of SBA-15. This allows the PEI-modified


sample prepared from SBA-15 to have a higher surface area for the same PEI loading. The significant role of the distribution of the amine groups impregnated in the porous materials has been confirmed by Zhu and his group (Yue et al. 2006). As-prepared mesoporous SBA-15 occluded with an organic template (Pluronic P123) was used to impregnate TEPA. The CO2 adsorption capacity of the modified SBA-15 with a TEPA loading of 50 wt% was higher than that of the calcined SBA-15. The presence of the template enhanced the accessibility of CO2 to TEPA because of a better dispersion and distribution of the amines. Furthermore, Yue et al. (2008b) also synthesized as-prepared SBA-15-supported sorbents by dispersing amine blends of TEPA and DEA. The hydroxyl group in DEA was found to significantly improve the CO2 adsorption. The hydroxyl group facilitates the formation of the carbamate zwitterion; therefore, equilibrium CO2 loadings can reach the carbamate stoichiometric limit of 1/2 mol CO2 (mol amine)−1. Ahn and coworkers (Son et al. 2008) synthesized a series of PEI-loaded (50 wt%) ordered mesoporous silica supports, namely, MCM-41, MCM-48, SBA-15, SBA-16, and KIT-6, to evaluate their CO2 adsorption performance. All impregnated sorbents showed substantially higher CO2 sorption capacities and stability, as well as faster adsorption kinetics, than pure PEI. The CO2 adsorption capacities were in the following order: KIT-6 (dp = 6.5) > SBA-15 (dp = 5.5) ≈ SBA-16 (dp = 4.1) > MCM-48 (dp = 3.1) > MCM-41 (dp = 2.1), where dp is the average pore diameter (nm). The adsorption performance was proposed to be influenced by the pore diameter and pore arrangement of the mesoporous silica materials; bulky PEI is assumed to be introduced into the pores more easily as the pore size of the support increases. Goeppert et al. (2010) studied nanostructured fumed silica impregnated with various organoamines, namely, PEI, MEA, DEA, TEPA, and PEHA, as well as 2-amino-2-methyl-1,3-propanediol (AMPD), 2-(2-aminoethylamino)ethanol (AEAE), etc. Simple amines such as MEA, DEA, and AEAE are not suitable for impregnation, due to amine leaching problems at higher temperature. As shown in Table 8, amine-impregnated silica sorbents can effectively adsorb CO2 with relatively high working capacities. Modification of the pore size of the silica support can further enhance the adsorption capacity. The adsorption capacity of amine-impregnated silica sorbents is not sensitive to the presence of moisture (in many cases, moisture helps to obtain a higher capacity). However, the durability and regeneration kinetics of the amine-impregnated solid sorbents have not been tested adequately, and their desorption kinetics are still slow. In addition, considerable loss of amines is a major drawback for impregnated amine-functionalized sorbents.
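For reference, reactions 1 and 2 invoked in this section (carbamate and bicarbonate formation) are not reproduced in this excerpt; the generic textbook stoichiometry for a primary amine R–NH2 – given here as an assumption, not as a reproduction of the chapter's own equations – is commonly written as

$$\mathrm{CO_2 + 2\,RNH_2 \;\rightarrow\; RNHCOO^- + RNH_3^+} \qquad \text{(carbamate: at most 0.5 mol CO}_2\text{ per mol N)}$$

$$\mathrm{CO_2 + RNH_2 + H_2O \;\rightarrow\; RNH_3^+ + HCO_3^-} \qquad \text{(bicarbonate, in the presence of moisture: up to 1 mol CO}_2\text{ per mol N)}$$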

Grafted Silica-Supported Sorbents

The synthesis and characterization of amine-grafted mesoporous silica sorbents (Class 2 category) for CO2 capture have been reported by many groups. In this case, the amine, mainly an aminosilane, is covalently tethered to the silica support (Choi et al. 2009). Three methods have been used for grafting amines onto a silica support: post-synthesis grafting, direct synthesis by co-condensation, and anionic template synthesis with the help of the interaction between the cation head in

PE-MCM-41 DEA(77 wt%)-impregnated PE-MCM-41 MCM-41 PEI(50 wt%)-impregnated MCM-41 MCM-48 PEI(50 wt%)-impregnated MCM-48 SBA-15 PEI(50 wt%)-impregnated SBA-15 SBA-16 PEI(50 wt%)-impregnated SBA-16 KIT-6 PEI(50 wt%)-impregnated KIT-6 PEI(65 wt%)-impregnated monolith TEPA(50 wt%)-impregnated MCM-41 MCM-41 PEI(50 wt%)-impregnated MCM-41 SBA-15 PEI(50 wt%)-impregnated SBA-15 TEPA(50 wt%)-impregnated SBA-15

Sorbent MCM-41 PEI(50 wt%)-impregnated MCM-41 PEI(50 wt%)-impregnated MCM-41

1.15 0.03 1.31 0.2 0.02

1229 11 950 80 7

16

0.03

2.7 – 6.6 6.1 –

1042 4 1162 26 753 13 736 23 895 86

0.85 0.01 1.17 0.10 0.94 0.04 0.75 0.02 1.22 0.18

2.8 – 3.1 – 5.5 – 4.1 – 6.0 5.3

917

BET surface area (m2/g) 1480 4.2

2.03

Pore volume (cm3/g) 1.0 0.011

2–3

Pore size (nm) 2.75 0.4

Table 8 CO2 adsorption capacity of silica solid sorbents

0.15 0.15 0.15 0.05

348 348 348

0.05 0.05

348 348 348

1

0.05

Pressure p (atm) 1 0.1 0.13

348

298

Temperature T (K) 348 348 348

0.14 2.02 0.11 3.18 3.23

3.75 4.54

2.52 2.89 2.93 2.70 3.07

2.93

Capacity (mmol/g) 0.195 2.05 3.08 (humid)

(Yue et al. 2006)

(Ma et al. 2009)

(Ma et al. 2009)

(Chen et al. 2009) (Yue et al. 2008a)

(Franchi et al. 2005) (Son et al. 2008)

(Xu et al. 2005a)

References (Xu et al. 2002)


18.6

DAEAPTS-grafted HMS

2.21

10

9.6

APTES-grafted SBA-15 DAEAPTS-grafted PE-MCM-41

DAEAPTS-grafted PE-MCM-41

1.05

0.54 0.40 0.29

APTES-grafted SBA-15 AEAPS-grafted SBA-15 DAEAPTS-grafted SBA-15

0.46

0.67

19.0

429

950

374 250 183

926

1125

6.16

125

0.704 0.016

3.9

0.01

3.6



TEPA(30 wt%) + DEA(20 wt%)impregnated SBA-15 PEI(40 wt%)-impregnated mesoporous silica TEPA(83 wt%)-impregnated MC400/10 APTES-grafted silica gel APTS-grafted HMS

343

298 298

333

0.05

0.04 0.05

0.15

0.9

1 0.9

323 293 293

0.1

1

0.05

348

348

348

2.28

0.66, 0.65 (humid) 1.36, 1.51 (humid) 1.58, 1.80 (humid) 0.4 2.65

1.34

0.89 1.59

5.57

2.4

3.61

(continued)

(Gray et al. 2005) (Harlick and Sayari 2007) (Serna-Guerrero et al. 2010a)

(Leal et al. 1995) (Knowles et al. 2005) (Knowles et al. 2006) (Hiyoshi et al. 2005)

(Goeppert et al. 2010) (Qi et al. 2011)

(Yue et al. 2008b)


APTES-grafted SBA-12 APTES-grafted MCM-41 APTES-grafted SBA-15

AEAPTS-grafted SBA-16

Sorbent Aziridine polymer-grafted SBA-15

Table 8 (continued)

37 9 52 38 55 28

Pore size (nm)

0.54

Pore volume (cm3/g)

1347 310 1506 239 687 134

715

BET surface area (m2/g)

298

300

Temperature T (K) 348 298

0.1

1

Pressure p (atm) 0.1

1.04 0.57 1.53

Capacity (mmol/g) 1.98 (humid) 3.11 (humid) 1.4

(Knofel et al. 2007) (Zelenak et al. 2008)

References (Hicks et al. 2008)



Fig. 7 Modified hexagonal mesoporous silica (HMS) materials (Chaffee et al. 2002)

the aminosilane and anionic surfactants (Chew et al. 2010). The mesoporous nature of the silica permits high diffusivity of the organic amine into the pore structure and, following functionalization, easy diffusion of CO2 into and out of the pores. A wide variety of aminosilanes (see Table 8) have been grafted onto the surface of porous silica to investigate the impact of amine type and loading on the CO2 adsorption capacity. Leal et al. (1995) first investigated the chemisorption of CO2 onto the APTES-grafted surface of silica gel. However, the CO2 adsorption capacity of this sorbent was far below the requirement for industrial application. Afterward, a series of aminopropyl-grafted hexagonal mesoporous silica (HMS) compounds was prepared and characterized by Chaffee's group to enhance CO2 adsorption. The grafted HMS materials, as shown in Fig. 7, were developed by Delaney et al. (Chaffee et al. 2002) using 3-aminopropyl-trimethoxysilane (APTS), aminoethyl-aminopropyl-trimethoxysilane (AEAPTS), N-[3-(trimethoxysilyl)propyl]diethylenetriamine (DAEAPTS), ethylhydroxyl-aminopropyl-trimethoxysilane (EHAPTS), and diethylhydroxyl-aminopropyl-trimethoxysilane (DEHAPTS) (see Table 8). The modified silica supports showed high surface areas with varied concentrations of surface-bound amine and hydroxyl functional groups. The modified HMS sorbents were also shown to reversibly adsorb significantly more CO2 than the modified silica gel reported by Leal et al. (1995). The ratio of CO2 molecules adsorbed per available N atom was 0.5 for HMS-APTS, HMS-AEAPTS, and HMS-DAEAPTS, which is consistent with the carbamate formation mechanism, as presented by reaction 1. For HMS-DEHAPTS, the ratio was 1. Because tertiary amines cannot form stable carbamates, it was proposed that the hydroxyl groups may serve to stabilize carbamate-type zwitterions. Based on a systematic investigation of CO2 adsorption on different mesoporous silica substrates and their amine-functionalized hybrid products, Knowles et al. (2005, 2006; Chaffee 2005) also pointed out that the extent of surface functionalization depends on the substrate morphology (e.g., available surface area, pore geometry, and pore volume), the diffusion of reagents to the surface, and the silanol concentration on the substrate surface. The higher nitrogen


content of the tether resulted in a higher CO2 adsorption capacity. The hybrid materials exhibited the highest CO2 capacity of 1.66 mmol/g at 293 K in a dry 90 % CO2/10 % argon mixture, with good adsorption kinetics, reaching equilibrium within 4 min. Hiyoshi et al. (2004, 2005) revealed the potential of aminosilane-modified mesoporous silica for the separation of CO2 from gas streams in the presence of moisture. In their subsequent research (Hiyoshi et al. 2005), DAEAPTS-SBA-15 showed enhanced CO2 adsorption capacity after SBA-15 was treated with boiling water for 2 h, followed by the grafting of aminosilanes. The CO2 adsorption capacity reached 1.58 and 1.80 mmol/g in the absence and presence of moisture, respectively. The efficiencies of the aminosilanes at identical amine surface density were in the following order: APTES > AEAPTS > DAEAPTS. Gray and coworkers (Gray et al. 2005; Chang et al. 2003; Khatri et al. 2005, 2006) also prepared a series of amine-grafted SBA-15 sorbents for CO2 adsorption. Enhanced CO2 adsorption capacity was observed in the presence of H2O because of the formation of carbonate and bicarbonate (Chang et al. 2003), as confirmed by Khatri et al. (2006). Khatri et al. (2006) and Zheng et al. (2004) studied the thermal stability of several grafted SBA-15 sorbents and found them to be stable up to 523 K. Furthermore, SO2 adsorption on APTES-SBA-15 led to a sharp decrease of the CO2 adsorption capacity, indicating the necessity of SO2 removal before amine-based CO2 adsorption (Khatri et al. 2006). Sayari and coworkers (Harlick and Sayari 2006, 2007; Sayari et al. 2005; Serna-Guerrero et al. 2008, 2010a, b, c; Belmabkhout et al. 2010; Belmabkhout and Sayari 2010; Sayari and Belmabkhout 2010) developed pore-expanded MCM-41 mesoporous silica (PE-MCM-41) grafted with amines. The DAEAPTS-grafted PE-MCM-41 support with an aminosilane loading of 5.98 mmol (N)/g showed an adsorption capacity of 2.05 mmol/g at 298 K and 1.0 atm for a dry 5 % CO2 in N2 feed mixture (Harlick and Sayari 2006). The amine surface density of the sorbent had a strong impact on the adsorption capacity. However, the existence of moisture did not significantly improve the performance of the amine-grafted PE-MCM-41 sorbents. Subsequently, Harlick and Sayari (2007) found that, compared with the dry grafting procedure, wet grafting via the co-addition of water at 358 K increased the total amine content, resulting in a 90 % overall improvement. The sorbent exhibited good stability over 100 cycles, with an average working adsorption capacity of 2.28 mmol/g for pure CO2 through regeneration under a vacuum at 343 K (Serna-Guerrero et al. 2010a), while the temperature swing regeneration process was suitable only at >393 K (Serna-Guerrero et al. 2010b). In addition to thermal stability, it also showed extremely high selectivity for CO2 over N2 and O2 (Serna-Guerrero et al. 2010b, c; Belmabkhout et al. 2010; Belmabkhout and Sayari 2010). It was also confirmed by Belmabkhout and Sayari (2010) that SO2 has an adverse effect on CO2 capture (Khatri et al. 2006). In addition, this group noted that their sorbent underwent over 700 cycles without any loss of capacity when adsorption and regeneration were carried out using a humid gas with 7.5 % relative humidity at 343 K. Furthermore, experimental data of CO2 uptake as a function of time at temperatures between 298 and 373 K were fit to a series of kinetic models, namely, Lagergren's pseudo-first- and pseudo-second-order and


Fig. 8 Hyperbranched amino silica (Drese et al. 2009)

Avrami’s kinetic models. The adsorption kinetics of CO2 on amine-functionalized PE-MCM-41 was successfully described using Avrami’s kinetic model with a reaction kinetic order of 1.4, which has been associated with the occurrence of multiple adsorption pathways (Serna-Guerrero and Sayari 2010). Jones and his coworkers developed a covalently tethered hyperbranched aminosilica (HAS) sorbent (Fig. 8) with high amine content capable of capturing CO2 reversibly from flue gas. They also compared it with other covalently supported solid sorbents (Hicks et al. 2008; Drese et al. 2009). HAS was synthesized via a one-step surface polymerization reaction of aziridine monomer inside SBA-15 pores (Drese et al. 2009). The HAS sorbent had an amine loading of 7.0 mmol N/g and CO2 adsorption capacity of 3.08 mmol/g when tested in a packed-bed reactor under a flow of 10 % CO2/90 % argon saturated with water at 298 K. It was stable over 12 cycles with regeneration temperature at 403 K. In another study, Drese et al. (2009) proposed modification of the HAS synthesis conditions, such as the aziridine-to-silica ratio and the solvent to further tune the sorbent’s composition, adsorbent capacity, and kinetics. They found that higher amine loadings contributed to a better adsorption capacity. The comparison of three APTES-grafted mesoporous silica materials, namely, MCM-41 (dp = 3.3), SBA-12 (dp = 3.8), and SBA-15 (dp = 7.1), was made by Zelenak et al. (2008). The sorbent capacity was consistent with the order of pore size and amine surface density, similar to that observed in the amine-impregnated mesoporous silica sorbents. Kim et al. (2008) developed and tested a series of amine-functionalized mesoporous silica sorbents via anionic surfactant-mediated synthesis method for CO2 adsorption at room temperature. As expected, higher amine loading on the mesoporous structure was the governing factor to achieve high CO2 adsorption.


Table 8 lists the CO2 adsorption capacities of various amine-grafted adsorbents. Although the functionalization of mesoporous silicas with amine functional groups significantly enhances the CO2 adsorption capacity of the silica substrate, the reported equilibrium CO2 adsorption capacities are not as high as those reported for amine-impregnated mesoporous silicas. Moreover, the low thermal stability of mesoporous silicas in the presence of water vapor at elevated temperatures remains one of the major concerns.

Metal–Organic Frameworks

MOFs are network solids composed of metal ion or metal cluster vertices and organic linkers. The ability to freely incorporate and vary organic linkers in MOFs translates into abundant options for controlling pore shape, pore size, and the chemical potential of the adsorbing surfaces and, consequently, their capacity, selectivity, and kinetics. MOFs have two important features: (i) their syntheses can be modular, and (ii) the solids are crystalline. Because the coordinate covalent bonding is weaker than the bonding in metal oxides, the pores do not necessarily remain intact once the solvent is removed. Indeed, these compounds can be classified into three generations on this basis: first generation, those that collapse irreversibly and are not porous; second generation, those that retain their structures and show reversible gas sorption isotherms; and third generation, those that behave more like a sponge and change structure reversibly with guest sorption (Kitagawa et al. 2004). Many MOFs show large CO2 adsorption capacities at pressures at and above 1 bar, owing to their high surface areas. However, the adsorption capacities at lower CO2 pressures are often not directly reported, and these have been carefully estimated for CO2 (0.15 bar) and N2 (0.75 bar) and are listed in Table 9. Table 9 also lists the calculated selectivity values for CO2 over N2 at 298 K for selected MOFs, using the molar ratio of the CO2 uptake at 0.15 bar to the N2 uptake at 0.75 bar. The direct measurement of multicomponent isotherms, which has not been performed for CO2/N2 mixtures, is necessary in order to evaluate the accuracy of selectivity factors predicted from single-component isotherms and ideal adsorbed solution theory (IAST).
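As a note on how the selectivity column of Table 9 is obtained, the values follow the simple uptake-ratio definition stated above; the numerical example below is purely illustrative and does not correspond to any particular entry in the table.

\[
S_{\mathrm{CO_2/N_2}} = \frac{n_{\mathrm{CO_2}}(0.15\ \mathrm{bar})}{n_{\mathrm{N_2}}(0.75\ \mathrm{bar})},
\qquad \text{e.g.,}\quad \frac{3.0\ \mathrm{mmol/g}}{0.10\ \mathrm{mmol/g}} = 30 .
\]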

Surface Functionalization of MOFs

It is essential to tune the affinity of the framework functionalities toward CO2 in order to optimize the adsorptive properties. The various kinds of functionalities used to enhance CO2 capture performance are discussed in the following sections, including amines, strongly polarizing organic functionalities, and exposed metal cation sites.
• Pores Functionalized by Nitrogen Bases. MOFs functionalized with basic nitrogen-containing organic groups have been widely investigated for CO2 adsorption. The dispersion and electrostatic forces due to the interaction between

Al(OH)(2-amino-BDC)

Cu3(BTC)2 H3[(Cu4Cl)3(BTTri)8(mmen)12] Zn2(ox)(atz)2 Pd(μ-F-pymo-N1,N3)2 Cu3(TATB)2 Co2(adenine)2(CO2CH3)2 Fe3[(Fe4Cl)3(BTT)8(MeOH)4]2 Al(OH)(bpydc)• 0.97Cu(BF4)2 Zn(nbIm)(nIm)

Zn2(dobdc)

NH2-MIL-53 (Al), USO-1-AlA

ZIF-78

CuTATB-60 bio-MOF-11 Fe-BTT

Ni-MOF-74, CPO-27-Ni Co-MOF-74, CPO-27-Co Zn-MOF-74, CPO-27-Zn HKUST-1 mmen-Cu-BTTri

Ni2(dobdc)

Co2(dobdc)

Common names Mg-MOF-74, Mg-CPO-27

Material chemical formula Mg2(dobdc)

Table 9 CO2 and N2 uptake in selected MOFs

0.70

2.64 2.15 1.89 1.48 1.32 1.23 1.20 0.91 0.75

1.73

3.23

CO2 uptake at 0.15 bar (mmol/g) 4.68 4.30 3.80 3.30 3.84

0.29 0.1 0.34 0.12 0.13

0.15 0.07

N2 uptake at 0.75 bar (mmol/g) 0.65 0.5 0.39 0.31 0.76

24 65 18 39 30

101 165

Selectivity 44 52.3 58.8 61.1 30

298

293 298 293 293 298 298 298 298 298

296

298

Temperature (K) 303 313 323 333 298

(continued)

(Aprea et al. 2010) (McDonald et al. 2011) (Vaidhyanathan et al. 2009) (Navarro et al. 2007) (Kim et al. 2011) (An et al. 2010) (Sumida et al. 2010) (Bloch et al. 2010) (Phan et al. 2010; Banerjee et al. 2009) (Arstad et al. 2008b)

(Caskey et al. 2008)

(Dietzel et al. 2009; Yazaydin et al. 2009b) (Yazaydin et al. 2009b)

References (Mason et al. 2011)


Co4(OH)2(doborDC)3

Zn4O(BTB)2 Zn4O(BDC)(BTB)4/3 Zn4O(BDC)3

Zn2(bmbdc)2(4,4’-bpy) Ni2(BDC)2(DABCO) V(IV)O(BDC) Al(OH)(bpydc) Zn20(cbIm)39(OH) Zn4O(NO2-BDC)1.19((C3H5O)2BDC)1.07-((C7H7O)2-BDC)0.74 Zn2(BTetB)(py-CF3)2 Zn(MeIM)2 Zn4O(BDC-NH2)3

MOF-177 UMCM-1 MOF-5, IRMOF-1

ZIF-8 IRMOF-3

USO-2-Ni MIL-47 MOF-253 ZIF-100 MTV-MOF-5EHI

MIL-53(Al), USO-1-A

UMCM-150

Cu3(BPT)2

Zn2(BTetB) Al(OH)(BDC)

USO-2-Ni-A UMC-150(N)2

Common names Cu-BTTri

Material chemical formula H3[(Cu4Cl)3(BTTri)8] Zn2(bpdc)2(bpee) Ni2(2-amino-BDC)2(DABCO) Cu3(BPT(N2))2

Table 9 (continued)

0.11

0.14 0.11 0.11

0.20 0.14 0.14

0.32 0.27 0.25 0.23 0.23 0.23

0.41 0.39

0.41

CO2 uptake at 0.15 bar (mmol/g) 0.66 0.48 0.48 0.43

0.03

0.14

0.02

0.13 0.05

0.11

N2 uptake at 0.75 bar (mmol/g) 0.06 0.01

18

4

50

9 22

19

Selectivity 19 44

298

298 298 298

298 298 298

298 298 298 298 298 298

298 298

298

Temperature (K) 298 298 298 298

(Bae et al. 2010)

(Bae et al. 2009) (Yazaydin et al. 2009b) (Millward and Yaghi 2005; Yazaydin et al. 2009b) (Mason et al. 2011) (Yazaydin et al. 2009b) (Yazaydin et al. 2009b)

(Henke and Fischer 2011) (Arstad et al. 2008b) (Yazaydin et al. 2009b) (Bloch et al. 2010) (Wang et al. 2008b) (Deng et al. 2010)

References (Demessence et al. 2009) (Demessence et al. 2009) (Wu et al. 2010) (Yazaydin et al. 2009b; Park et al. 2011) (Yazaydin et al. 2009b; Park et al. 2011) (Bae et al. 2009) (Arstad et al. 2008b)


the quadrupole moment of CO2 and localized dipoles generated by heteroatom incorporation are typically responsible for the enhanced CO2 adsorption performance. In some cases, acid–base-type interactions of the nitrogen lone pair with CO2 have also been observed. The degree to which nitrogen incorporation improves CO2 adsorption depends significantly on the properties of the functional group. Three major classes of nitrogen-functionalized MOFs have been synthesized: heterocycle (e.g., pyridine) derivatives (Stylianou et al. 2011; An and Rosi 2010; An et al. 2010), aromatic amine (e.g., aniline) derivatives (Millward and Yaghi 2005; Zhao et al. 2009a; Arstad et al. 2008a; Stavitski et al. 2011; Couck et al. 2009), and alkylamine (e.g., ethylenediamine)-bearing frameworks (McDonald et al. 2011; Demessence et al. 2009; Hwang et al. 2008).
• Other Strongly Polarizing Organic Functional Groups. In addition to the nitrogen-based functionalities, organic linkers with heteroatom functional groups (other than amines) have also been examined for their CO2 adsorption behavior (Phan et al. 2010; Banerjee et al. 2008, 2009; Deng et al. 2010). These functional groups include hydroxy, nitro, cyano, thio, and halide groups, and the degree to which the CO2 adsorption capacity is improved in these cases depends primarily on the extent of ligand functionalization and the polarizing strength of the functional group. Generally, strongly polarizing groups influence CO2 adsorption favorably.
• Exposed Metal Cation Sites. The generation of structure types bearing exposed metal cation sites on the pore surface is another approach that has been used to enhance the affinity and selectivity of MOFs toward CO2 (Mason et al. 2011; Chui et al. 1999; Bordiga et al. 2007; Vishnyakov et al. 2003; Caskey et al. 2008; Bloch et al. 2010). Cu3(BTC)2 (HKUST-1) is one of the most studied materials featuring such binding sites (Chui et al. 1999). It shows a cubic, twisted boracite topology constructed from dinuclear Cu2+ paddlewheel units and triangular 1,3,5-benzenetricarboxylate linkers. The as-synthesized form of the framework has bound solvent molecules on the axial coordination sites of each Cu2+ metal center, which can subsequently be removed in vacuo at elevated temperatures to create open binding sites for guest molecules. The open metal cation sites serve as charge-dense binding sites for CO2, which is adsorbed more strongly at these sites owing to its relatively large quadrupole moment and polarizability.

Application of MOFs in Harsh Environments

Understanding the effects of contaminants (water vapor, SO2, NOx, etc.) is critical for properly evaluating MOFs in a realistic CO2 capture process. Here, we discuss a number of studies that have examined the performance of MOFs under more realistic conditions.
• Stability to Water Vapor. Although partial dehydration of the effluent may be possible, it is costly and most likely not feasible on a large scale to dry the gas completely prior to adsorbing CO2 (Granite and Pennline 2002; Lee and Sircar 2008). In evaluating MOFs for applications in CO2 capture processes, it is important to consider not only the stability of the framework to water vapor but also the effect of water vapor on the adsorption of CO2.


Regarding water stability, the metal–ligand bond is typically the weakest point of a MOF, and hydrolysis can cause the displacement of bound ligands and collapse of the framework structure (Low et al. 2009). This was first observed in MOF-5, which is water sensitive and begins to lose crystallinity when exposed to small amounts of water vapor. It was found that the basic zinc acetate clusters characteristic of most zinc carboxylate MOFs, such as MOF-177 and the IRMOF series, are most susceptible to hydrolysis (Cychosz and Matzger 2010). The trinuclear chromium clusters found in many of the MIL series of frameworks are the most stable, while the copper paddlewheel carboxylate clusters found in HKUST-1 exhibit intermediate stability. One way to increase the water stability of MOFs is to use azolate-based linkers rather than the typical carboxylate linkers (Demessence et al. 2009). The azolate linkers can bind metals with a geometry similar to that of carboxylate ligands, but their greater basicity typically results in stronger M–N bonds and greater thermal and chemical stability. The relative M–N bond strengths can be predicted from the pKa values associated with the deprotonation of the free ligand. Stability therefore typically decreases with decreasing pKa: pyrazole (pKa = 14.4) linkers exhibit the greatest stability, imidazole (pKa = 10.0) and triazole (pKa = 9.3) are intermediate, and tetrazole (pKa = 4.6) linkers are the most labile (Fig. 9). An alternative strategy for increasing the metal–ligand bond strength in MOFs is the use of tri- or tetravalent metal cations (Low et al. 2009). Generally, frameworks containing Cr3+, Al3+, Fe3+, and Zr4+ cations exhibit a high degree of stability in water. Specifically, MIL-53 (M(OH)(BDC), M = Cr3+, Fe3+, Al3+) is a flexible framework that expands or contracts based on the absence or presence of water (Serre et al. 2002; Whitfield et al. 2005; Loiseau et al. 2004). The overall framework scaffold remains intact upon repeated exposure to water, owing to the reversibility of the structural transition. MIL-100 and MIL-101 are rigid trivalent frameworks built from trinuclear metallic clusters that have shown high stability in both boiling water and steam. Likewise, the zirconium(IV)-based UiO-66, which contains extremely robust Zr6O4(OH)4(CO2)12 cluster units (Fig. 10), exhibits high stability in water (Cavka et al. 2008).
There have been several studies of MOF stability in liquid water, but few regarding different levels of humidity. Early work on the effect of water on CO2 capture in MOFs focused on HKUST-1, for which the effect of water coordination on the CO2 adsorption performance was tested (Yazaydin et al. 2009a). Nearly 5 mmol/g of CO2 was adsorbed by the dehydrated form, compared with less than 1 mmol/g at 1 bar for the fully hydrated form. This is in agreement with a related study, which found that HKUST-1 shows a decrease in CO2 uptake to about 75 % of its original value and a concurrent loss of some crystallinity after exposure to 30 % relative humidity (Liu et al. 2010). In a similar study, CO2 adsorption isotherms were measured at different water loadings for HKUST-1 and Ni2(dobdc) (Liu et al. 2010). Both MOFs retained some adsorption capacity for CO2 at low water loadings but exhibited essentially no capacity above 70 % relative humidity. Significantly, water adsorption caused a much faster decrease in CO2 adsorption for zeolites 5A and NaX than for either MOF.


Fig. 9 The general trend of increasing pKa for ligands built from carboxylic acids, tetrazoles, triazoles, and pyrazoles. The metal–ligand bond is expected to be stronger as the pKa increases (Sumida et al. 2012)

Fig. 10 A portion of the crystal structure of the high-stability metal–organic framework UiO-66 (Cavka et al. 2008). Yellow, gray, and red spheres represent Zr, C, and O atoms, respectively. H atoms are omitted for clarity

To evaluate their CO2/N2 separation performance under humid conditions, the effects of water vapor on the M2(dobdc) (M = Zn, Ni, Co, and Mg) series of MOFs were studied (Fig. 11) (Kizzie et al. 2011). Although Mg2(dobdc) has the highest reported adsorption capacity for CO2 at low pressures, it performed the worst of the series, with a recovery of only 16 % of its initial capacity after regeneration. Ni2(dobdc) and Co2(dobdc) performed far better, with recoveries of 61 % and 85 % of their initial CO2 capacity, respectively. This is consistent with a similar study that found Ni2(dobdc) could maintain its CO2 capacity after steam conditioning and long-term storage, whereas Mg2(dobdc) suffers a significant loss in capacity (Liu et al. 2011).

Fig. 11 Comparison of the flow-through CO2 capacities, as determined from breakthrough experiments using a 5:1 N2/CO2 mixture, for pristine M2(dobdc) and regenerated M2(dobdc) after exposure to 70 % RH (Kizzie et al. 2011)

Some flexible MOFs exhibit promising CO2 adsorption properties in the presence of water vapor (Cheng et al. 2009). In one report, water induced structural changes in MIL-53 that promoted a higher selectivity for CO2 over CH4 (Llewellyn et al. 2006). The breakthrough CO2 adsorption of the flexible framework NH2-MIL-53(Al) in the presence of 5 % water vapor was also investigated (Stavitski et al. 2011). Interestingly, CO2 is selectively retained by the framework even in the presence of water. While the abovementioned experiments are crucial for the initial assessment of MOFs for CO2 capture, multicomponent adsorption isotherms are of high priority for evaluating and understanding the performance under conditions likely to be encountered in an actual capture system (Keskin et al. 2010).
• Other Minor Components. The exact amount of each species present in an actual gas varies with the specific configuration of a given plant. In particular, to investigate the influence of flue gas contaminants, Liu et al. (2013) focused on MIL-101(Cr). MIL-101(Cr) maintained a high level of CO2 adsorption performance in trace-gas-contaminated environments as well as after multiple cycles of adsorption and mild-condition regeneration. The addition of H2O, SO2, and NO to a 10 vol.% CO2/N2 feed flow was found to have only a minor effect on adsorption capacity. Under feed flow conditions of 10 vol.% CO2, 100 ppm SO2, 100 ppm NO, and 10 % RH, MIL-101(Cr) preserved greater than 95 % of its adsorption capacity after 5 cycles of adsorption/desorption.


Regenerable Alkali-Metal Carbonate-Based Sorbents

Due to the low operating temperature ( AC > vermiculite > silica gel. To identify the best sorbent support system, some researchers (Lee et al. 2006a; Lee and Kim 2007) have prepared several K2CO3-based sorbents by impregnating K2CO3 on various supports, such as AC, TiO2, Al2O3, MgO, SiO2, and zeolites. The CO2 adsorption capacities of K2CO3/AC, K2CO3/TiO2, K2CO3/MgO, and K2CO3/Al2O3 (with an active phase loading of 30 wt%) were 2.0, 1.9, 2.7, and 1.9 mmol/g, respectively. However, the CO2 adsorption capacities of K2CO3/Al2O3 and K2CO3/MgO decreased after regeneration at 473 K, because of the formation of KAl(CO3)2(OH)2, K2Mg(CO3)2, and K2Mg(CO3)2·4H2O phases during carbonation, which were not completely converted back to the original K2CO3 phase. In the case of the K2CO3/AC and K2CO3/TiO2 sorbent systems, by contrast, regeneration was not a problem in the temperature range of 403–423 K. "KZrI30" (a 30 wt% K2CO3/ZrO2 sorbent system) was developed in 2009. The CO2 adsorption capacity of this sorbent was 96 % of the theoretical value in the presence of 1 % CO2 and 9 % H2O at 323 K, and it remained almost unchanged in multicycle operation (Lee et al. 2009). It has been reported that enhanced CO2 capture capacity can be obtained by converting the entire K2CO3·1.5H2O phase to the KHCO3 phase if the sorbents are fully activated with excess water (Lee et al. 2006b). Lee et al. (2011) reported a new regenerable K2CO3 sorbent on a modified Al2O3 support for CO2 adsorption below 473 K. The CO2 adsorption capacity of the 48 wt% K2CO3-loaded sorbent was 2.9 mmol/g and did not decrease over five cycles. Zhao et al. (2009b, c) found that K2CO3 with hexagonal crystals exhibits superior carbonation kinetics compared with monoclinic K2CO3, because of the crystal structure similarities between hexagonal K2CO3 and KHCO3. Table 10 summarizes the literature data on alkali carbonate sorbents for CO2 capture.
In summary, the high theoretical CO2 capture capacities of Na2CO3 (9.43 mmol/g) and K2CO3 (7.23 mmol/g), together with the favorable carbonation/regeneration temperatures between 333 K and 473 K, suggest that they are potentially excellent adsorbents for CO2 capture (a brief arithmetic check of these capacities is sketched at the end of this discussion). Moreover, they have the additional advantage of being relatively inexpensive. However, to be commercially viable, the long-term stability and persistent performance of these sorbents under the real flue gas conditions of post-combustion applications have yet to be established.
The above discussion clearly indicates that several chemisorbents, such as amine-functionalized sorbents (both impregnated and grafted), have shown promise to meet the desired working capacity target under simulated flue gas conditions. Mostly, chemisorbents are found to have higher CO2 selectivity. Amine-functionalized polymer-based sorbents showed very high adsorption capacity, but

AC, TiO2, Al2O3, MgO, ZrO2, CaO, SiO2, and zeolites

“Sorb KX35” (proprietary recipe)

“Sorb A” (proprietary recipe)

AC, silica gel, activated Al2O3 Modified Al2O3 support KAl(CO3) (OH)2

K2CO3

K2CO3

K2CO3

K2CO3

K2CO3

Ceramic supported sorbents

Support AC, activated coke, and silica Ceramic supported sorbents Ceramic supported sorbents

Na2CO3

Na2CO3

Na2CO3

Active phase K2CO3

35 wt%

Ads.: 333 Reg.: 473 (in N2) Ads.: 343–363 Reg.: 403 (in N2)

Simulated flue gas: (dry basis)12 % CO2 and 88 % N2; 7–30 % moisture Slipstream coal-fired flue gas: 7–9 % CO2 (dry basis), 10–19 % H2O 15 % CO2, 15 % H2O, and N2 balance 1 % CO2, 9 % H2O, and N2 balance

30 wt%

Ads.: 333–373 Reg.: 403–673 (Moisture up to 9 % and balance N2) Ads.: 333–373 Reg.: 393–493 (in N2) Ads.: 343–363 Reg.: g150 (in N2)

~28–48 wt%

~25 wt%

35 wt%

Simulated flue gas: 14.4 % CO2, 5.4 % O2, 10 % H2O, and 70.2 % N2 1 % CO2, 0–11 % H2O, and N2 balance

Gas composition Simulated flue gas and actual flue gas in slipstream Simulated flue gas and actual flue gas 10 % CO2, 12.2 % H2O, and 77.8 % N2

20–50 wt%

35 wt%

wt% active phase ~35 wt% in AC 10–40 wt%

Ads.: 323–343 Reg.: 393 (in N2)

Temperature of operation (K) Ads.: 373 Reg.: 423 Ads.: 333–343 Reg.: 393–413 Ads.: 323–343 Reg.: >408 (in N2)

Table 10 Alkali carbonate sorbents for CO2 capture

~2.9 (~48 wt% K2CO3 loading)

(Zhao et al. 2009d) (Lee et al. 2011)

(Park et al. 2009)

CO2 >85 % (capacity not available) ~0.34–1.7

(Yi et al. 2007)

(Lee et al. 2006a, b, 2009; Lee and Kim 2007)

(Lee et al. 2008b)

References (Hayashi et al. 1998) (Samanta et al. 2012) (Seo et al. 2007)

~2.1 (~96 % sorbent efficiency)

~2.6 (~80 % efficiency with a 35 % active phase) ~2.3 (>80 % sorbent efficiency with a 30 % active phase) ~1.1–2.7

Capacity (mmol/g) ~2.1 (Ads. efficiency ~80 %) ~0.5–3.2



there is not adequate information regarding other selection criteria. Impregnated mesoporous silica sorbents showed improved capacity in the presence of water vapor, whereas grafted silica showed good thermal stability at high temperature (90 % CO2 purity during tests with 200 standard L min−1 of flue gas. In addition, the column operated for approximately 7000 adsorption/regeneration cycles with no signs of adsorbent degradation and no loss in process or adsorbent performance. The project partners are now looking at scaling up to pilot-scale testing. Because these projects are at an early stage of development, little information on the pilot plants is currently available; the availability of data on these projects in the future will represent a crucial step toward the deployment of adsorption processes at commercial scale.
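Returning to the theoretical capacities of Na2CO3 (9.43 mmol/g) and K2CO3 (7.23 mmol/g) quoted earlier, these values follow directly from the standard 1:1 carbonation stoichiometry and the molar masses of the carbonates; the short check below is included only to make that arithmetic explicit.

\[
\mathrm{M_2CO_3 + CO_2 + H_2O \longrightarrow 2\,MHCO_3}\qquad (\mathrm{M = Na,\ K})
\]
\[
q_{\mathrm{theo}} = \frac{1000}{M_{\mathrm{carbonate}}}:\qquad
\frac{1000}{105.99\ \mathrm{g/mol}} \approx 9.43\ \mathrm{mmol\ CO_2\ per\ g\ Na_2CO_3},\qquad
\frac{1000}{138.21\ \mathrm{g/mol}} \approx 7.23\ \mathrm{mmol\ CO_2\ per\ g\ K_2CO_3}.
\]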


Future Directions

Compared to conventional liquid amine processes, solid sorbents for CO2 capture are advantageous in terms of lower regeneration energy, corrosion prevention, and cost reduction. However, as discussed previously, solid sorbents also have limitations and challenges to be addressed before they can be applied commercially. The following are three recommendations for further research:
• Synthesis and modification of promising solid sorbents to enhance the adsorption performance, including working capacity, selectivity, and multicycle durability
• Comparison of the most promising solid sorbents, based on a techno-economic assessment of the system including thermal integration
• Performance studies of promising solid sorbents under actual gas conditions using various bed configurations, such as fixed bed, fluidized bed, moving bed, and circulating bed

References Alesi WR, Gray M, Kitchin JR (2010) CO2 adsorption on supported molecular amidine systems on activated carbon. Chemsuschem 3(8):948–956 An J, Rosi NL (2010) Tuning MOF CO2 adsorption properties via cation exchange. J Am Chem Soc 132(16):5578–5579 An J, Geib SJ, Rosi NL (2010) High and selective CO2 uptake in a cobalt adeninate metal-organic framework exhibiting pyrimidine- and amino-decorated pores. J Am Chem Soc 132(1):38–39 Aprea P, Caputo D, Gargiulo N, Iucolano F, Pepe F (2010) Modeling carbon dioxide adsorption on microporous substrates: comparison between Cu-BTC metal-organic framework and 13X zeolitic molecular sieve. J Chem Eng Data 55(9):3655–3661 Arenillas A, Smith KM, Drage TC, Snape CE (2005) CO2 capture using some fly ash-derived carbon materials. Fuel 84(17):2204–2210 Arstad B, Fjellva˚g H, Kongshaug KO, Swang O, Blom R (2008a) Amine functionalised metal organic frameworks (MOFs) as adsorbents for carbon dioxide. Adsorption 14(6):755–762 Arstad B, Fjellvag H, Kongshaug KO, Swang O, Blom R (2008b) Amine functionalised metal organic frameworks (MOFs) as adsorbents for carbon dioxide. Adsorption J Int Adsorption Soc 14(6):755–762 Avrami M (1939) Kinetics of phase change I – general theory. J Chem Phys 7(12):1103–1112 Avrami M (1941) Granulation, phase change, and microstructure – kinetics of phase change. III. J Chem Phys 9(2):177–184 Bae YS, Farha OK, Hupp JT, Snurr RQ (2009) Enhancement of CO2/N-2 selectivity in a metalorganic framework by cavity modification. J Mater Chem 19(15):2131–2134 Bae YS, Spokoyny AM, Farha OK, Snurr RQ, Hupp JT, Mirkin CA (2010) Separation of gas mixtures using Co(II) carborane-based porous coordination polymers. Chem Commun 46(20):3478–3480 Banerjee R, Phan A, Wang B, Knobler C, Furukawa H, O’Keeffe M, Yaghi OM (2008) Highthroughput synthesis of zeolitic imidazolate frameworks and application to CO2 capture. Science 319(5865):939–943 Banerjee R, Furukawa H, Britt D, Knobler C, O’Keeffe M, Yaghi OM (2009) Control of pore size and functionality in isoreticular zeolitic imidazolate frameworks and their carbon dioxide selective capture properties. J Am Chem Soc 131(11):3875–3877


Belmabkhout Y, Sayari A (2010) Isothermal versus non-isothermal adsorption-desorption cycling of triamine-grafted pore-expanded MCM-41 mesoporous silica for CO2 capture from flue gas. Energy Fuel 24:5273–5280 Belmabkhout Y, Serna-Guerrero R, Sayari A (2010) Adsorption of CO2-containing gas mixtures over amine-bearing pore-expanded MCM-41 silica: application for gas purification. Ind Eng Chem Res 49(1):359–365 Benedict JB, Coppens P (2009) Kinetics of the single-crystal to single-crystal two-photon photodimerization of alpha-trans-cinnamic acid to alpha-truxillic acid. J Phys Chem A 113(13):3116–3120 Berger AH, Bhown AS (2011) Comparing physisorption and chemisorption solid sorbents for use separating CO2 from flue gas using temperature swing adsorption. 10th international conference on greenhouse gas control technologies, vol 4, pp 562–567 Berlier K, Frere M (1996) Adsorption of CO2 on activated carbon: simultaneous determination of integral heat and isotherm of adsorption. J Chem Eng Data 41(5):1144–1148 Bloch ED, Britt D, Lee C, Doonan CJ, Uribe-Romo FJ, Furukawa H, Long JR, Yaghi OM (2010) Metal insertion in a microporous metal-organic framework lined with 2,20 -bipyridine. J Am Chem Soc 132(41):14382–14384 Bordiga S, Regli L, Bonino F, Groppo E, Lamberti C, Xiao B, Wheatley PS, Morris RE, Zecchina A (2007) Adsorption properties of HKUST-1 toward hydrogen and other small molecules monitored by IR. Phys Chem Chem Phys 9(21):2676–2685 Brandani F, Ruthven DM (2004) The effect of water on the adsorption of CO2 and C3H8 on type X zeolites. Ind Eng Chem Res 43(26):8339–8344 Burchell TD, Judkins RR, Rogers MR, Williams AM (1997) A novel process and material for the separation of carbon dioxide and hydrogen sulfide gas mixtures. Carbon 35(9):1279–1294 Caplow M (1968) Kinetics of carbamate formation and breakdown. J Am Chem Soc 90(24):6795–6803 Caskey SR, Wong-Foy AG, Matzger AJ (2008) Dramatic tuning of carbon dioxide uptake via metal substitution in a coordination polymer with cylindrical pores. J Am Chem Soc 130(33):10870–10871 Cavka JH, Jakobsen S, Olsbye U, Guillou N, Lamberti C, Bordiga S, Lillerud KP (2008) A new zirconium inorganic building brick forming metal organic frameworks with exceptional stability. J Am Chem Soc 130(42):13850–13851 Cestari AR, Vieira EFS, Vieira GS, Almeida LE (2006) The removal of anionic dyes from aqueous solutions in the presence of anionic surfactant using aminopropylsilica – a kinetic study. J Hazard Mater 138(1):133–141 Chaffee AL (2005) Molecular modeling of HMS hybrid materials for CO2 adsorption. Fuel Process Technol 86(14–15):1473–1486 Chaffee AL, Delaney SW, Knowles GP (2002) Hybrid mesoporous materials for carbon dioxide separation. Abstr Pap Am Chem Soc 223:U572–U573 Chang ACC, Chuang SSC, Gray M, Soong Y (2003) In-situ infrared study of CO2 adsorption on SBA-15 grafted with gamma-(aminopropyl)triethoxysilane. Energy Fuel 17(2):468–473 Chen C, Yang ST, Ahn WS, Ryoo R (2009) Amine-impregnated silica monolith with a hierarchical pore structure: enhancement of CO2 capture capacity. Chem Commun 24:3627–3629 Cheng Y, Kondo A, Noguchi H, Kajiro H, Urita K, Ohba T, Kaneko K, Kanoh H (2009) Reversible structural change of Cu-MOF on exposure to water and its CO2 adsorptivity. Langmuir 25(8):4510–4513 Chew TL, Ahmad AL, Bhatia S (2010) Ordered mesoporous silica (OMS) as an adsorbent and membrane for separation of carbon dioxide (CO2). 
Adv Colloid Interface Sci 153(1–2):43–57 Choi S, Drese JH, Jones CW (2009) Adsorbent materials for carbon dioxide capture from large anthropogenic point sources. Chemsuschem 2(9):796–854 Chue KT, Kim JN, Yoo YJ, Cho SH, Yang RT (1995) Comparison of activated carbon and zeolite 13x for Co2 recovery from flue-gas by pressure swing adsorption. Ind Eng Chem Res 34(2):591–598


Chui SSY, Lo SMF, Charmant JPH, Orpen AG, Williams ID (1999) A chemically functionalizable nanoporous material [Cu-3(TMA)(2)(H2O)(3)](n). Science 283(5405):1148–1150 Cinke M, Li J, Bauschlicher CW, Ricca A, Meyyappan M (2003) CO2 adsorption in single-walled carbon nanotubes. Chem Phys Lett 376(5–6):761–766 Coriani S, Halkier A, Rizzo A, Ruud K (2000) On the molecular electric quadrupole moment and the electric-field-gradient-induced birefringence of CO2 and CS2. Chem Phys Lett 326(3–4):269–276 Couck S, Denayer JFM, Baron GV, Remy T, Gascon J, Kapteijn F (2009) An amine-functionalized MIL-53 metal-organic framework with large separation power for CO2 and CH4. J Am Chem Soc 131(18):6326–6327 Crini G, Peindy HN, Gimbert F, Robert C (2007) Removal of CI basic green 4 (Malachite Green) from aqueous solutions by adsorption using cyclodextrin-based adsorbent: kinetic and equilibrium studies. Sep Purif Technol 53(1):97–110 Cychosz KA, Matzger AJ (2010) Water stability of microporous coordination polymers and the adsorption of pharmaceuticals from water. Langmuir 26(22):17198–17202 de Menezes EW, Lima EC, Royer B, de Souza FE, dos Santos BD, Gregorio JR, Costa TMH, Gushikem Y, Benvenutti EV (2012) Ionic silica based hybrid material containing the pyridinium group used as an adsorbent for textile dye. J Colloid Interface Sci 378:10–20 Demessence A, D’Alessandro DM, Foo ML, Long JR (2009) Strong CO2 binding in a waterstable, triazolate-bridged metal-organic framework functionalized with ethylenediamine. J Am Chem Soc 131(25):8784–8786 Deng H, Doonan CJ, Furukawa H, Ferreira RB, Towne J, Knobler CB, Wang B, Yaghi OM (2010) Multiple functional groups of varying ratios in metal-organic frameworks. Science 327 (5967):846–850 Diaz E, Munoz E, Vega A, Ordonez S (2008) Enhancement of the CO2 retention capacity of Y zeolites by Na and Cs treatments: effect of adsorption temperature and water treatment. Ind Eng Chem Res 47(2):412–418 Dietzel PDC, Besikiotis V, Blom R (2009) Application of metal-organic frameworks with coordinatively unsaturated metal sites in storage and separation of methane and carbon dioxide. J Mater Chem 19(39):7362–7370 Dillon EP, Crouse CA, Barron AR (2008) Synthesis, characterization, and carbon dioxide adsorption of covalently attached polyethyleneimine-functionalized single-wall carbon nanotubes. ACS Nano 2(1):156–164 Do DD, Wang K (1998) A new model for the description of adsorption kinetics in heterogeneous activated carbon. Carbon 36(10):1539–1554 Donaldson TL, Nguyen YN (1980) Carbon-dioxide reaction-kinetics and transport in aqueous amine membranes. Ind Eng Chem Fundam 19(3):260–266 Drese JH, Choi S, Lively RP, Koros WJ, Fauth DJ, Gray ML, Jones CW (2009) Synthesisstructure–property relationships for hyperbranched aminosilica CO2 adsorbents. Adv Funct Mater 19(23):3821–3832 Fifield LS, Fryxell GE, Addleman RS, Aardahl CL (2004) Carbon dioxide capture using aminebased molecular anchors on multi wall carbon nanotubes. Abstr Pap Am Chem Soc 227:U1089 Filburn T, Helble JJ, Weiss RA (2005) Development of supported ethanolamines and modified ethanolamines for CO2 capture. Ind Eng Chem Res 44(5):1542–1546 Fisher JC, Tanthana J, Chuang SSC (2009) Oxide-supported tetraethylenepentamine for CO2 capture. Environ Progr Sustain Energy 28(4):589–598 Franchi RS, Harlick PJE, Sayari A (2005) Applications of pore-expanded mesoporous silica. 2. Development of a high-capacity, water-tolerant adsorbent for CO2. 
Ind Eng Chem Res 44(21):8007–8013 Ghosh A, Subrahmanyam KS, Krishna KS, Datta S, Govindaraj A, Pati SK, Rao CNR (2008) Uptake of H-2 and CO2 by graphene. J Phys Chem C 112(40):15704–15707 Goeppert A, Meth S, Prakash GKS, Olah GA (2010) Nanostructured silica as a support for regenerable high-capacity organoamine-based CO2 sorbents. Energy Environ Sci 3(12):1949–1960


Granite EJ, Pennline HW (2002) Photochemical removal of mercury from flue gas. Abstr Pap Am Chem Soc 223:U523–U524 Gray ML, Soong Y, Champagne KJ, Baltrus J, Stevens RW, Toochinda P, Chuang SSC (2004) CO2 capture by amine-enriched fly ash carbon sorbents. Sep Purif Technol 35(1):31–36 Gray ML, Soong Y, Champagne KJ, Pennline H, Baltrus JP, Stevens RW, Khatri R, Chuang SSC, Filburn T (2005) Improved immobilized carbon dioxide capture sorbents. Fuel Process Technol 86(14–15):1449–1455 Gray ML, Champagne KJ, Fauth D, Baltrus JP, Pennline H (2008) Performance of immobilized tertiary amine solid sorbents for the capture of carbon dioxide. Int J Greenhouse Gas Control 2(1):3–8 Gray ML, Hoffman JS, Hreha DC, Fauth DJ, Hedges SW, Champagne KJ, Pennline HW (2009) Parametric study of solid amine sorbents for the capture of carbon dioxide. Energy Fuel 23:4840–4844 Harlick PJE, Sayari A (2006) Applications of pore-expanded mesoporous silicas. 3. Triamine silane grafting for enhanced CO(2) adsorption. Ind Eng Chem Res 45(9):3248–3255 Harlick PJE, Sayari A (2007) Applications of pore-expanded mesoporous silica. 5. Triamine grafted material with exceptional CO(2) dynamic and equilibrium adsorption performance. Ind Eng Chem Res 46(2):446–458 Harlick PJE, Tezel FH (2004) An experimental adsorbent screening study for CO2 removal from N-2. Microporous Mesoporous Mater 76(1–3):71–79 Hayashi H, Taniuchi J, Furuyashiki N, Sugiyama S, Hirano S, Shigemoto N, Nonaka T (1998) Efficient recovery of carbon dioxide from flue gases of coal-fired power plants by cyclic fixedbed operations over K2CO3-on-carbon. Ind Eng Chem Res 37(1):185–191 Henke S, Fischer RA (2011) Gated channels in a honeycomb-like zinc-dicarboxylate-bipyridine framework with flexible alkyl ether side chains. J Am Chem Soc 133(7):2064–2067 Herm ZR, Swisher JA, Smit B, Krishna R, Long JR (2011) Metal-organic frameworks as adsorbents for hydrogen purification and precombustion carbon dioxide capture. J Am Chem Soc 133(15):5664–5667 Heydari-Gorji A, Sayari A (2011) CO2 capture on polyethylenimine-impregnated hydrophobic mesoporous silica: experimental and kinetic modeling. Chem Eng J 173(1):72–79 Heydari-Gorji A, Belmabkhout Y, Sayari A (2011) Polyethylenimine-impregnated mesoporous silica: effect of amine loading and surface alkyl chains on CO2 adsorption. Langmuir 27(20):12411–12416 Hicks JC, Drese JH, Fauth DJ, Gray ML, Qi GG, Jones CW (2008) Designing adsorbents for CO (2) capture from flue gas-hyperbranched aminosilicas capable, of capturing CO(2) reversibly. J Am Chem Soc 130(10):2902–2903 Hiyoshi N, Yogo K, Yashima T (2004) Adsorption of carbon dioxide on amine modified SBA-15 in the presence of water vapor. Chem Lett 33(5):510–511 Hiyoshi N, Yogo K, Yashima T (2005) Adsorption characteristics of carbon dioxide on organically functionalized SBA-15. Microporous Mesoporous Mater 84(1–3):357–365 Ho YS, McKay G (1999) Pseudo-second order model for sorption processes. Process Biochem 34(5):451–465 Ho MT, Allinson GW, Wiley DE (2008a) Reducing the cost of CO2 capture from flue gases using pressure swing adsorption. Ind Eng Chem Res 47(14):4883–4890 Ho MT, Allinson GW, Wiley DE (2008b) Reducing the cost of CO2 capture from flue gases using membrane technology. Ind Eng Chem Res 47(5):1562–1568 House KZ, Harvey CF, Aziz MJ, Schrag DP (2009) The energy penalty of post-combustion CO2 capture & storage and its implications for retrofitting the US installed base. 
Energy Environ Sci 2(2):193–205 Hsu SC, Lu CS, Su FS, Zeng WT, Chen WF (2010) Thermodynamics and regeneration studies of CO2 adsorption on multiwalled carbon nanotubes. Chem Eng Sci 65(4):1354–1361 Huang LL, Zhang LZ, Shao Q, Lu LH, Lu XH, Jiang SY, Shen WF (2007) Simulations of binary mixture adsorption of carbon dioxide and methane in carbon nanotubes: temperature, pressure, and pore size effects. J Phys Chem C 111(32):11912–11920


Hwang YK, Hong DY, Chang JS, Jhung SH, Seo YK, Kim J, Vimont A, Daturi M, Serre C, Ferey G (2008) Amine grafting on coordinatively unsaturated metal centers of MOFs: consequences for catalysis and metal encapsulation. Angew Chem Int Ed 47(22):4144–4148 Inui T, Okugawa Y, Yasuda M (1988) Relationship between properties of various zeolites and their CO2-adsorption behaviors in pressure swing adsorption operation. Ind Eng Chem Res 27(7):1103–1109 Ishibashi M, Ota H, Akutsu N, Umeda S, Tajika M, Izumi J, Yasutake A, Kabata T, Kageyama Y (1996) Technology for removing carbon dioxide from power plant flue gas by the physical adsorption method. Energy Convers Manage 37(6–8):929–933 Jadhav PD, Chatti RV, Biniwale RB, Labhsetwar NK, Devotta S, Rayalu SS (2007) Monoethanol amine modified zeolite 13X for CO2 adsorption at different temperatures. Energy Fuel 21(6):3555–3559 Jones CW (2011) CO2 capture from dilute gases as a component of modern global carbon management. Annu Rev Chem Biomol Eng 2(2):31–52 Katoh M, Yoshikawa T, Tomonari T, Katayama K, Tomida T (2000) Adsorption characteristics of ion-exchanged ZSM-5 zeolites for CO2/N-2 mixtures. J Colloid Interface Sci 226(1):145–150 Keskin S, van Heest TM, Sholl DS (2010) Can metal-organic framework materials play a useful role in large-scale carbon dioxide separations? Chemsuschem 3(8):879–891 Khatri RA, Chuang SSC, Soong Y, Gray M (2005) Carbon dioxide capture by diamine-grafted SBA-15: a combined Fourier transform infrared and mass spectrometry study. Ind Eng Chem Res 44(10):3702–3708 Khatri RA, Chuang SSC, Soong Y, Gray M (2006) Thermal and chemical stability of regenerable solid amine sorbent for CO2 capture. Energy Fuel 20(4):1514–1520 Khelifa A, Benchehida L, Derriche Z (2004) Adsorption of carbon dioxide by X zeolites exchanged with Ni2+ and Cr3+: isotherms and isosteric heat. J Colloid Interface Sci 278(1):9–17 Kikkinides ES, Yang RT, Cho SH (1993) Concentration and recovery of CO2 from flue-gas by pressure swing adsorption. Ind Eng Chem Res 32(11):2714–2720 Kim SN, Son WJ, Choi JS, Ahn WS (2008) CO2 adsorption using amine-functionalized mesoporous silica prepared via anionic surfactant-mediated synthesis. Microporous Mesoporous Mater 115(3):497–503 Kim J, Yang ST, Choi SB, Sim J, Kim J, Ahn WS (2011) Control of catenation in CuTATB-n metal-organic frameworks by sonochemical synthesis and its effect on CO2 adsorption. J Mater Chem 21(9):3070–3076 Kitagawa S, Kitaura R, Noro S (2004) Functional porous coordination polymers. Angew Chem Int Ed 43(18):2334–2375 Kizzie AC, Wong-Foy AG, Matzger AJ (2011) Effect of humidity on the performance of microporous coordination polymers as adsorbents for CO2 capture. Langmuir 27(10):6368–6373 Knofel C, Descarpentries J, Benzaouia A, Zelenak V, Mornet S, Llewellyn PL, Hornebecq V (2007) Functionalised micro-/mesoporous silica for the adsorption of carbon dioxide. Microporous Mesoporous Mater 99(1–2):79–85 Knowles GP, Graham JV, Delaney SW, Chaffee AL (2005) Aminopropyl-functionalized mesoporous silicas as CO2 adsorbents. Fuel Process Technol 86(14–15):1435–1448 Knowles GP, Delaney SW, Chaffee AL (2006) Diethylenetriamine[propyl(silyl)]-functionalized (DT) mesoporous silicas as CO2 adsorbents. Ind Eng Chem Res 45(8):2626–2633 Krishnamurthy S, Rao VR, Guntuka S, Sharratt P, Haghpanah R, Rajendran A, Amanullah M, Karimi IA, Farooq S (2014) CO2 capture from dry flue gas by vacuum swing adsorption: a pilot plant study. 
AICHE J 60(5):1830–1842 Leal O, Bolivar C, Ovalles C, Garcia JJ, Espidel Y (1995) Reversible adsorption of carbon dioxide on amine surface-bonded silica gel. Inorg Chim Acta 240(1–2):183–189 Lee SC, Kim JC (2007) Dry potassium-based sorbents for CO2 capture. Catal Surv Asia 11(4):171–185


Lee KB, Sircar S (2008) Removal and recovery of compressed CO2 from flue gas by a novel thermal swing chemisorption process. AICHE J 54(9):2293–2302 Lee SC, Choi BY, Lee TJ, Ryu CK, Soo YS, Kim JC (2006a) CO2 absorption and regeneration of alkali metal-based solid sorbents. Catal Today 111(3–4):385–390 Lee SC, Choi BY, Ryu CK, Ahn YS, Lee TJ, Kim JC (2006b) The effect of water on the activation and the CO2 capture capacities of alkali metal-based sorbents. Korean J Chem Eng 23(3):374–379 Lee S, Filburn TP, Gray M, Park JW, Song HJ (2008a) Screening test of solid amine sorbents for CO2 capture. Ind Eng Chem Res 47(19):7419–7423 Lee JB, Ryu CK, Baek JI, Lee JH, Eom TH, Kim SH (2008b) Sodium-based dry regenerable sorbent for carbon dioxide capture from power plant flue gas. Ind Eng Chem Res 47(13):4465–4472 Lee SC, Chae HJ, Lee SJ, Park YH, Ryu CK, Yi CK, Kim JC (2009) Novel regenerable potassiumbased dry sorbents for CO2 capture at low temperatures. J Mol Catal B Enzym 56(2–3):179–184 Lee SC, Kwon YM, Ryu CY, Chae HJ, Ragupathy D, Jung SY, Lee JB, Ryu CK, Kim JC (2011) Development of new alumina-modified sorbents for CO2 sorption and regeneration at temperatures below 200 degrees C. Fuel 90(4):1465–1470 Li PY, Zhang SJ, Chen SX, Zhang QK, Pan JJ, Ge BQ (2008a) Preparation and adsorption properties of polyethylenimine containing fibrous adsorbent for carbon dioxide capture. J Appl Polym Sci 108(6):3851–3858 Li PY, Ge BQ, Zhang SJ, Chen SX, Zhang QK, Zhao YN (2008b) CO2 capture by polyethylenimine-modified fibrous adsorbent. Langmuir 24(13):6567–6574 Li G, Xiao P, Webley P, Zhang J, Singh R, Marshall M (2008c) Capture of CO2 from high humidity flue gas by vacuum swing adsorption with zeolite 13X. Adsorption J Int Adsorption Soc 14(2–3):415–422 Li JR, Kuppler RJ, Zhou HC (2009) Selective gas adsorption and separation in metal-organic frameworks. Chem Soc Rev 38(5):1477–1504 Li W, Choi S, Drese JH, Hornbostel M, Krishnan G, Eisenberger PM, Jones CW (2010) Steamstripping for regeneration of supported amine-based CO2 adsorbents. Chemsuschem 3(8):899–903 Liu J, Wang Y, Benin AI, Jakubczak P, Willis RR, LeVan MD (2010) CO2/H2O adsorption equilibrium and rates on metal-organic frameworks: HKUST-1 and Ni/DOBDC. Langmuir 26(17):14301–14307 Liu J, Benin AI, Furtado AMB, Jakubczak P, Willis RR, LeVan MD (2011) Stability effects on CO2 adsorption for the DOBDC series of metal-organic frameworks. Langmuir 27(18):11451–11456 Liu Z, Wang L, Kong XM, Li P, Yu JG, Rodrigues AE (2012) Onsite CO2 capture from flue gas by an adsorption process in a coal-fired power plant. Ind Eng Chem Res 51(21):7355–7363 Liu Q, Ning LQ, Zheng SD, Tao MN, Shi Y, He Y (2013) Adsorption of carbon dioxide by MIL-101(Cr): regeneration conditions and influence of flue gas contaminants. Sci Rep 3:2916 Liu Q, Shi Y, Zheng SD, Ning LQ, Ye Q, Tao MN, He Y (2014a) Amine-functionalized low-cost industrial grade multi-walled carbon nanotubes for the capture of carbon dioxide. J Energy Chem 23(1):111–118 Liu Q, Shi JJ, Zheng SD, Tao MN, He Y, Shi Y (2014b) Kinetics studies of CO2 adsorption/ desorption on amine-functionalized multiwalled carbon nanotubes. Ind Eng Chem Res 53(29):11677–11683 Llewellyn PL, Bourrelly S, Serre C, Filinchuk Y, Ferey G (2006) How hydration drastically improves adsorption selectivity for CO2 over CH4 in the flexible chromium terephthalate MIL-53. 
Angew Chem Int Ed 45(46):7751–7754 Loiseau T, Serre C, Huguenard C, Fink G, Taulelle F, Henry M, Bataille T, Ferey G (2004) A rationale for the large breathing of the porous aluminum terephthalate (MIL-53) upon hydration. Chem Eur J 10(6):1373–1382


Lopes ECN, dos Anjos FSC, Vieira EFS, Cestari AR (2003) An alternative Avrami equation to evaluate kinetic parameters of the interaction of Hg(11) with thin chitosan membranes. J Colloid Interface Sci 263(2):542–547 Low JJ, Benin AI, Jakubczak P, Abrahamian JF, Faheem SA, Willis RR (2009) Virtual high throughput screening confirmed experimentally: porous coordination polymer hydration. J Am Chem Soc 131(43):15834–15842 Lu CY, Bai HL, Wu BL, Su FS, Fen-Hwang J (2008) Comparative study of CO2 capture by carbon nanotubes, activated carbons, and zeolites. Energy Fuel 22(5):3050–3056 Ma XL, Wang XX, Song CS (2009) “Molecular Basket” sorbents for separation of CO2 and H2S from various gas streams. J Am Chem Soc 131(16):5777–5783 Maroto-Valer MM, Tang Z, Zhang Y (2005) CO2 capture by activated and impregnated anthracites. Fuel Process Technol 86(14–15):1487–1502 Maroto-Valer MM, Lu Z, Zhang Y, Tang Z (2008) Sorbents for CO2 capture from high carbon fly ashes. Waste Manag 28(11):2320–2328 Mason JA, Sumida K, Herm ZR, Krishna R, Long JR (2011) Evaluating metal-organic frameworks for post-combustion carbon dioxide capture via temperature swing adsorption. Energy Environ Sci 4(8):3030–3040 McDonald TM, D’Alessandro DM, Krishna R, Long JR (2011) Enhanced carbon dioxide capture upon incorporation of N, N0 -dimethylethylenediamine in the metal-organic framework CuBTTri. Chem Sci 2(10):2022–2028 Merel J, Clausse M, Meunier F (2008) Experimental investigation on CO2 post-combustion capture by indirect thermal swing adsorption using 13X and 5A zeolites. Ind Eng Chem Res 47(1):209–215 Millward AR, Yaghi OM (2005) Metal-organic frameworks with exceptionally high capacity for storage of carbon dioxide at room temperature. J Am Chem Soc 127(51):17998–17999 Monazam ER, Shadle LJ, Miller DC, Pennline HW, Fauth DJ, Hoffman JS, Gray ML (2013) Equilibrium and kinetics analysis of carbon dioxide capture using immobilized amine on a mesoporous silica. AICHE J 59(3):923–935 Mulgundmath V, Tezel FH (2010) Optimisation of carbon dioxide recovery from flue gas in a TPSA system. Adsorption J Int Adsorption Soc 16(6):587–598 Na BK, Koo KK, Eum HM, Lee H, Song HK (2001) CO(2) recovery from flue gas by PSA process using activated carbon. Korean J Chem Eng 18(2):220–227 Navarro JAR, Barea E, Salas JM, Masciocchi N, Galli S, Sironi A, Ania CO, Parra JB (2007) Borderline microporous-ultramicroporous palladium(II) coordination polymer networks. Effect of pore functionalisation on gas adsorption properties. J Mater Chem 17(19):1939–1946 Nestle NFEI, Kimmich R (1996) NMR imaging of heavy metal absorption in alginate, immobilized cells, and kombu algal biosorbents. Biotechnol Bioeng 51(5):538–543 Okunev AG, Sharonov VE, Aristov YI, Parmon VN (2000) Sorption of carbon dioxide from wet gases by K2CO3-in-porous matrix: influence of the matrix nature. React Kinet Catal Lett 71(2):355–362 Okunev AG, Sharonov VE, Gubar AV, Danilova IG, Paukshtis EA, Moroz EM, Kriger TA, Malakhov VV, Aristov YI (2003) Sorption of carbon dioxide by the composite sorbent of potassium carbonate in porous matrix. Russ Chem Bull 52(2):359–363 Park YC, Jo SH, Ryu CK, Yi CK (2009) Long-term operation of carbon dioxide capture system from a real coal-fired flue gas using dry regenerable potassium-based sorbents. Greenhouse Gas Control Technol 1(1):1235–1239 Park TH, Cychosz KA, Wong-Foy AG, Dailly A, Matzger AJ (2011) Gas and liquid phase adsorption in isostructural Cu-3[biaryltricarboxylate](2) microporous coordination polymers. 
Chem Commun 47(5):1452–1454 Pevida C, Plaza MG, Arias B, Fermoso J, Rubiera F, Pis JJ (2008) Surface modification of activated carbons for CO(2) capture. Appl Surf Sci 254(22):7165–7172


Phan A, Doonan CJ, Uribe-Romo FJ, Knobler CB, O’Keeffe M, Yaghi OM (2010) Synthesis, structure, and carbon dioxide capture properties of zeolitic imidazolate frameworks. Acc Chem Res 43(1):58–67 Plaza MG, Pevida C, Arias B, Fermoso J, Rubiera F, Pis JJ (2009) A comparison of two methods for producing CO(2) capture adsorbents. Greenhouse Gas Control Technol 1(1):1107–1113 Przepiorski J, Skrodzewicz M, Morawski AW (2004) High temperature ammonia treatment of activated carbon for enhancement of CO2 adsorption. Appl Surf Sci 225(1–4):235–242 Qader A, Hooper B, Innocenzi T, Stevens G, Kentish S, Scholes C, Mumford K, Smith K, Webley PA, Zhang J (2011) Novel post-combustion capture technologies on a lignite fired power plant – results of the CO2CRC/H3 capture project. 10th international conference on greenhouse gas control technologies, vol 4, pp 1668–1675 Qi GG, Wang YB, Estevez L, Duan XN, Anako N, Park AHA, Li W, Jones CW, Giannelis EP (2011) High efficiency nanocomposite sorbents for CO2 capture based on amine-functionalized mesoporous capsules. Energy Environ Sci 4(2):444–452 Razavi SS, Hashemianzadeh SM, Karimi H (2011) Modeling the adsorptive selectivity of carbon nanotubes for effective separation of CO2/N-2 mixtures. J Mol Model 17(5):1163–1172 Rochelle GT (2009) Amine scrubbing for CO2 capture. Science 325(5948):1652–1654 Rubin ES, Chen C, Rao AB (2007) Cost and performance of fossil fuel power plants with CO2 capture and storage. Energy Policy 35(9):4444–4454 Samanta A, Zhao A, Shimizu GKH, Sarkar P, Gupta R (2012) Post-combustion CO2 capture using solid sorbents: a review. Ind Eng Chem Res 51(4):1438–1463 Satyapal S, Filburn T, Trela J, Strange J (2001) Performance and properties of a solid amine sorbent for carbon dioxide removal in space life support applications. Energy Fuel 15(2):250–255 Sayari A, Belmabkhout Y (2010) Stabilization of amine-containing CO2 adsorbents: dramatic effect of water vapor. J Am Chem Soc 132(18):6312–6314 Sayari A, Hamoudi S, Yang Y (2005) Applications of pore-expanded mesoporous silica. 1. Removal of heavy metal cations and organic pollutants from wastewater. Chem Mater 17(1):212–216 Sayari A, Belmabkhout Y, Serna-Guerrero R (2011) Flue gas treatment via CO2 adsorption. Chem Eng J 171(3):760–774 Seo Y, Jo SH, Ryu CK, Yi CK (2007) Effects of water vapor pretreatment time and reaction temperature on CO2 capture characteristics of a sodium-based solid sorbent in a bubbling fluidized-bed reactor. Chemosphere 69(5):712–718 Serna-Guerrero R, Sayari A (2010) Modeling adsorption of CO2 on amine-functionalized mesoporous silica. 2: kinetics and breakthrough curves. Chem Eng J 161(1–2):182–190 Serna-Guerrero R, Da’na E, Sayari A (2008) New insights into the interactions of CO(2) with amine-functionalized silica. Ind Eng Chem Res 47(23):9406–9412 Serna-Guerrero R, Belmabkhout Y, Sayari A (2010a) Influence of regeneration conditions on the cyclic performance of amine- grafted mesoporous silica for CO2 capture: An experimental and statistical study. Chem Eng Sci 65(14):4166–4172 Serna-Guerrero R, Belmabkhout Y, Sayari A (2010b) Further investigations of CO2 capture using triamine-grafted pore-expanded mesoporous silica. Chem Eng J 158(3):513–519 Serna-Guerrero R, Belmabkhout Y, Sayari A (2010c) Triamine-grafted pore-expanded mesoporous silica for CO2 capture: effect of moisture and adsorbent regeneration strategies. 
Adsorption J Int Adsorption Soc 16(6):567–575 Serre C, Millange F, Thouvenot C, Nogues M, Marsolier G, Louer D, Ferey G (2002) Very large breathing effect in the first nanoporous chromium(III)-based solids: MIL-53 or Cr-III(OH) center dot{O2C-C6H4-CO2}center dot{HO2C-C6H4-CO2H}(x)center dot H2Oy. J Am Chem Soc 124(45):13519–13526 Sharonov VE, Okunev AG, Aristov YI (2004) Kinetics of carbon dioxide sorption by the composite material K2CO3 in Al2O3. React Kinet Catal Lett 82(2):363–369


Siriwardane RV, Shen MS, Fisher EP, Poston JA (2001) Adsorption of CO2 on molecular sieves and activated carbon. Energy Fuel 15(2):279–284 Siriwardane RV, Shen MS, Fisher EP (2003) Adsorption of CO(2), N(2), and O(2) on natural zeolites. Energy Fuel 17(3):571–576 Skoulidas AI, Sholl DS, Johnson JK (2006) Adsorption and diffusion of carbon dioxide and nitrogen through single-walled carbon nanotube membranes. J Chem Phys 124, 054708 Son WJ, Choi JS, Ahn WS (2008) Adsorptive removal of carbon dioxide using polyethyleneimineloaded mesoporous silica materials. Microporous Mesoporous Mater 113(1–3):31–40 Stavitski E, Pidko EA, Couck S, Remy T, Hensen EJM, Weckhuysen BM, Denayer J, Gascon J, Kapteijn F (2011) Complexity behind CO2 Capture on NH2-MIL-53(Al). Langmuir 27(7):3970–3976 Stylianou KC, Warren JE, Chong SY, Rabone J, Bacsa J, Bradshaw D, Rosseinsky MJ (2011) CO2 selectivity of a 1D microporous adenine-based metal-organic framework synthesised in water. Chem Commun 47(12):3389–3391 Su FS, Lu CS, Cnen WF, Bai HL, Hwang JF (2009) Capture of CO2 from flue gas via multiwalled carbon nanotubes. Sci Total Environ 407(8):3017–3023 Su FS, Lu CY, Kuo SC, Zeng WT (2010) Adsorption of CO2 on amine-functionalized Y-type zeolites. Energy Fuel 24:1441–1448 Sumida K, Horike S, Kaye SS, Herm ZR, Queen WL, Brown CM, Grandjean F, Long GJ, Dailly A, Long JR (2010) Hydrogen storage and carbon dioxide capture in an iron-based sodalite-type metal-organic framework (Fe-BTT) discovered via high-throughput methods. Chem Sci 1(2):184–191 Sumida K, Rogow DL, Mason JA, McDonald TM, Bloch ED, Herm ZR, Bae TH, Long JR (2012) Carbon dioxide capture in metal-organic frameworks. Chem Rev 112(2):724–781 Tang Z, Maroto-Valer MM, Zhang YZ (2004) CO2 capture using anthracite based sorbents. Abstr Pap Am Chem Soc 227:U1089 Vaidhyanathan R, Iremonger SS, Dawson KW, Shimizu GKH (2009) An amine-functionalized metal organic framework for preferential CO2 adsorption at low pressures. Chem Commun 35:5230–5232 Vishnyakov A, Ravikovitch PI, Neimark AV, Bulow M (2003) Nanopore structure and sorption properties of Cu-BTC metal-organic framework. Abstr Pap Am Chem Soc 226:U683 Wang YX, Zhou YP, Liu CM, Zhou L (2008a) Comparative studies of CO2 and CH4 sorption on activated carbon in presence of water. Colloids Surf Physicochem Eng Aspects 322 (1–3):14–18 Wang B, Cote AP, Furukawa H, O’Keeffe M, Yaghi OM (2008b) Colossal cages in zeolitic imidazolate frameworks as selective carbon dioxide reservoirs. Nature 453(7192):207–211 Wang L, Yang Y, Shen WL, Kong XM, Li P, Yu JG, Rodrigues AE (2013a) Experimental evaluation of adsorption technology for CO2 capture from flue gas in an existing coal-fired power plant. Chem Eng Sci 101:615–619 Wang L, Yang Y, Shen WL, Kong XM, Li P, Yu JG, Rodrigues AE (2013b) CO2 capture from flue gas in an existing coal-fired power plant by two successive pilot-scale VPSA units. Ind Eng Chem Res 52(23):7947–7955 Whitfield TR, Wang XQ, Liu LM, Jacobson AJ (2005) Metal-organic frameworks based on iron oxide octahedral chains connected by benzenedicarboxylate dianions. Solid State Sci 7(9):1096–1103 Wu HH, Reali RS, Smith DA, Trachtenberg MC, Li J (2010) Highly selective CO2 capture by a flexible microporous metal-organic framework (MMOF) material. Chem Eur J 16(47):13951–13954 Xu XC, Song CS, Andresen JM, Miller BG, Scaroni AW (2002) Novel polyethylenimine-modified mesoporous molecular sieve of MCM-41 type as high-capacity adsorbent for CO2 capture. 
Energy Fuel 16(6):1463–1469 Xu XC, Song CS, Andresen JM, Miller BG, Scaroni AW (2003) Preparation and characterization of novel CO2 “molecular basket” adsorbents based on polymer-modified mesoporous molecular sieve MCM-41. Microporous Mesoporous Mater 62(1–2):29–45


Xu XC, Song CS, Miller BG, Scaroni AW (2005a) Influence of moisture on CO2 separation from gas mixture by a nanoporous adsorbent based on polyethylenimine-modified molecular sieve MCM-41. Ind Eng Chem Res 44(21):8113–8119 Xu XC, Song CS, Miller BG, Scaroni AW (2005b) Adsorption separation of carbon dioxide from flue gas of natural gas-fired boiler by a novel nanoporous “molecular basket” adsorbent. Fuel Process Technol 86(14–15):1457–1472 Yang Y, Li H, Chen S, Zhao Y, Li Q (2010) Preparation and characterization of a solid amine adsorbent for capturing CO2 by grafting allylamine onto PAN fiber. Langmuir 26(17):13897–13902 Yazaydin AO, Benin AI, Faheem SA, Jakubczak P, Low JJ, Willis RR, Snurr RQ (2009a) Enhanced CO2 adsorption in metal-organic frameworks via occupation of open-metal sites by coordinated water molecules. Chem Mater 21(8):1425–1430 Yazaydin AO, Snurr RQ, Park TH, Koh K, Liu J, LeVan MD, Benin AI, Jakubczak P, Lanuza M, Galloway DB, Low JJ, Willis RR (2009b) Screening of metal-organic frameworks for carbon dioxide capture from flue gas using a combined experimental and modeling approach. J Am Chem Soc 131(51):18198–18199 Ye Q, Jiang JQ, Wang CX, Liu YM, Pan H, Shi Y (2012) Adsorption of low-concentration carbon dioxide on amine-modified carbon nanotubes at ambient temperature. Energy Fuel 26 (4):2497–2504 Yi CK, Jo SH, Seo Y, Lee JB, Ryu CK (2007) Continuous operation of the potassium-based dry sorbent CO2 capture process with two fluidized-bed reactors. Int J Greenhouse Gas Control 1 (1):31–36 Yong Z, Mata V, Rodrigues AE (2002) Adsorption of carbon dioxide at high temperature – a review. Sep Purif Technol 26(2–3):195–205 Yue MB, Chun Y, Cao Y, Dong X, Zhu JH (2006) CO2 capture by As-prepared SBA-15 with an occluded organic template. Adv Funct Mater 16(13):1717–1722 Yue MB, Sun LB, Cao Y, Wang Y, Wang ZJ, Zhu JH (2008a) Efficient CO2 capturer derived from as-synthesized MCM-41 modified with amine. Chem Eur J 14(11):3442–3451 Yue MB, Sun LB, Cao Y, Wang ZJ, Wang Y, Yu Q, Zhu JH (2008b) Promoting the CO2 adsorption in the amine-containing SBA-15 by hydroxyl group. Microporous Mesoporous Mater 114(1–3):74–81 Zelenak V, Badanicova M, Halamova D, Cejka J, Zukal A, Murafa N, Goerigk G (2008) Aminemodified ordered mesoporous silica: effect of pore size on carbon dioxide capture. Chem Eng J 144(2):336–342 Zhang YZ, Maroto-Valer MM, Zhong Z (2004) Microporous activated carbons produced from unburned carbon in fly ash and their application for CO2 capture. Abstr Pap Am Chem Soc 227: U1090 Zhang J, Singh R, Webley PA (2008a) Alkali and alkaline-earth cation exchanged chabazite zeolites for adsorption based CO2 capture. Microporous Mesoporous Mater 111(1–3):478–487 Zhang J, Webley PA, Xiao P (2008b) Effect of process parameters on power requirements of vacuum swing adsorption technology for CO2 capture from flue gas. Energy Convers Manage 49(2):346–356 Zhao ZX, Li Z, Lin YS (2009a) Adsorption and diffusion of carbon dioxide on metal-organic framework (MOF-5). Ind Eng Chem Res 48(22):10015–10020 Zhao CW, Chen XP, Zhao CS, Liu YK (2009b) Carbonation and hydration characteristics of dry potassium-based sorbents for CO(2) capture. Energy Fuel 23:1766–1769 Zhao CW, Chen XP, Zhao CS (2009c) Effect of crystal structure on CO2 capture characteristics of dry potassium-based sorbents. Chemosphere 75(10):1401–1404 Zhao CW, Chen XP, Zhao CS (2009d) CO2 absorption using dry potassium-based sorbents with different supports. 
Energy Fuel 23:4683–4687 Zhao A, Samanta A, Sarkar P, Gupta R (2013) Carbon dioxide adsorption on amine-impregnated mesoporous SBA-15 sorbents: experimental and kinetics study. Ind Eng Chem Res 52(19):6480–6491

56

Y. Shi et al.

Zheng F, Tran DN, Busche B, Fryxell GE, Addleman RS, Zemanian TS, Aardahl CL (2004) Ethylenediamine-modified SBA-15 as regenerable CO2 sorbents. Abstr Pap Am Chem Soc 227:U1086–U1087 Zhong T, Zhang YZ, Maroto-Valer MM (2004) Study of CO2 adsorption capacities of modified activated anthracites. Abstr Pap Am Chem Soc 227:U1090 Zukal A, Mayerova J, Kubu M (2010) Adsorption of carbon dioxide on high-silica zeolites with different framework topology. Top Catal 53(19–20):1361–1366

CO2 Capture by Membrane

Teruhiko Kai and Shuhong Duan

Contents
Introduction 2
CO2-Separation Membrane 2
Principle of Membrane Gas Separation 2
An Overview in the Development of CO2 Membrane Separation Material 11
Membrane Module Design and Manufacturing for CO2 Membrane Separation 20
Demonstration (Field Test) 23
Summary and Future Prospects 23
References 24

Abstract

Among the various CO2-capture technologies, membrane separation is considered one of the most promising because of its energy efficiency and operational simplicity. Much research and development has been directed at three separations: (1) CO2/N2 (CO2 separation from flue gas), (2) CO2/CH4 (CO2 separation from natural gas), and (3) CO2/H2 (CO2 separation in integrated gasification combined cycle (IGCC) processes). In this section, recent research and development on various types of membranes (polymeric, inorganic, ionic liquid, and facilitated transport membranes) for these applications is reviewed, together with the future prospects of membrane separation technologies.

T. Kai (*) • S. Duan Research Institute of Innovative Technology for the Earth (RITE), Kizugawa-shi, Kyoto, Japan e-mail: [email protected]; [email protected] # Springer Science+Business Media New York 2015 W.-Y. Chen et al. (eds.), Handbook of Climate Change Mitigation and Adaptation, DOI 10.1007/978-1-4614-6431-0_84-1


Introduction

Carbon dioxide (CO2) capture and storage (CCS) is generally considered an option for climate change mitigation. There are three principal pathways for capturing CO2 from large emission sources: (1) CO2/N2 (CO2 separation from flue gas), (2) CO2/CH4 (CO2 separation from natural gas), and (3) CO2/H2 (CO2 separation in integrated gasification combined cycle (IGCC) processes). For practical application of CCS technology, cost-effective methods for CO2 capture are required, and many studies have focused on the development of effective CO2-capture and CO2-separation technologies. Among them, membrane separation is one of the most promising because of its energy efficiency and operational simplicity.

In the case of CO2 separation from flue gas, more than half of the cost of membrane separation goes toward powering the vacuum pump that evacuates the permeate side of the membrane. In addition, the costs of the membrane modules and piping are high, because the pressure ratio between the feed and permeate sides is low and a large membrane area is needed. Therefore, high CO2 permeability is more important than high selectivity for reducing the cost of the membrane modules. In the case of CO2 separation in IGCC processes, on the other hand, a significant reduction in the CO2-capture cost is expected from membrane technology, because a vacuum pump is not needed for high-pressure gas separations. In this case, both CO2 permeability and CO2/H2 selectivity are important for separating CO2 effectively.

A schematic diagram of the IGCC process with membrane separation is shown in Fig. 1. Coal is gasified into synthesis gas, which is then converted into H2 and CO2 via the water–gas shift reaction. Here, the gas composition is roughly 60 % H2 and 40 % CO2 at a pressure of 2–4 MPa. Membrane separation is therefore expected to reduce the cost of CO2 capture from IGCC. However, it is very difficult to separate CO2 from H2, which has a smaller molecular size, so it is very important to develop CO2-selective membranes with high CO2/H2 selectivity. In this section, research and development on CO2-selective membranes using various types of materials is reviewed.
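The driving-force contrast between the two applications described above can be made concrete with a short calculation. The Python sketch below compares the CO2 partial pressures on the feed and permeate sides for a flue-gas case and an IGCC case; the flue-gas composition (~13 % CO2 near atmospheric pressure) and the permeate pressures are assumed illustrative values, not figures from this chapter, while the IGCC condition (roughly 40 % CO2 at 2–4 MPa) follows the text above.

```python
# Rough driving-force comparison for CO2 membrane capture (illustrative only).
def co2_partial_pressures(y_co2_feed, p_feed, p_permeate, y_co2_permeate=1.0):
    """Return feed-side and permeate-side CO2 partial pressures in MPa."""
    p_co2_feed = y_co2_feed * p_feed
    p_co2_perm = y_co2_permeate * p_permeate
    return p_co2_feed, p_co2_perm

# Post-combustion flue gas: dilute CO2 near atmospheric pressure, so a vacuum
# on the permeate side is needed to keep any driving force at all (assumed values).
flue = co2_partial_pressures(y_co2_feed=0.13, p_feed=0.1, p_permeate=0.01)

# IGCC shifted syngas: ~40 % CO2 at 2-4 MPa (take 3 MPa), permeate near 0.1 MPa.
igcc = co2_partial_pressures(y_co2_feed=0.40, p_feed=3.0, p_permeate=0.1)

for name, (pf, pp) in [("flue gas", flue), ("IGCC", igcc)]:
    print(f"{name}: feed pCO2 = {pf:.3f} MPa, permeate pCO2 = {pp:.3f} MPa, "
          f"driving force = {pf - pp:.3f} MPa")
```

Even with a vacuum on the permeate side, the flue-gas driving force remains only a few kPa, which is why high permeance and low-cost modules matter more than very high selectivity in that case.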

CO2-Separation Membrane

Principle of Membrane Gas Separation

The early membrane separations were osmosis, described by Nollet in 1748; electroosmosis, described by Reuss in 1803; and dialysis, described by Graham in 1861. These observations laid the scientific foundation of membrane separation (Mulder 1996). Research on membrane gas separation using O2, N2, CO2, CH4, SO2, etc. started around 170 years ago. However, the practical application of membrane gas separation started relatively recently. In 1979, the Monsanto Company in the United States developed membrane modules for O2/N2 separation


Fig. 1 Schematic diagram of IGCC with CO2 capture

(Nakagama 1989). Membranes were known to have the potential to separate gas mixtures long before 1960, but the technology to fabricate high-performance membranes and modules economically was lacking. The development of high-flux asymmetric membranes and large-surface-area membrane modules for reverse osmosis applications occurred in the late 1960s and early 1970s. The innovative concept of high-flux asymmetric membranes was reported and realized by Loeb and Sourirajan in 1961, initially for reverse osmosis and later adapted to gas separation, as shown in Fig. 2 (So et al. 1973). An acetone solution of 20 % (w/v) cellulose acetate was cast on a glass plate and dried for about 2 min to form the dense surface layer and then immersed in water. Phase separation of water and acetone resulted in pore formation in the interior of the membrane. In this way, an asymmetric membrane with a porous layer and a dense skin layer was prepared, and a high flux was obtained. Milestones in the development of membrane gas separation are shown in Fig. 3 (adapted from Baker 2002; Li et al. 2006; Ismail and David 2001). The first plant with polysulfone hollow fiber membranes for gas separation is considered to be the Permea PRISM® membrane plant built in 1980 for H2/N2 separation. The first plant for CO2/CH4 separation with cellulose triacetate membranes was produced by Separex in 1982. The first commercial vapor separation plants were installed by MTR, GKSS, and Nitto Denko in 1988. The largest membrane plant for natural gas processing (CO2/CH4 separation) was installed in Pakistan in 1994 with spiral wound modules, a clear example of the easy scale-up of membrane technology. LTA zeolite membranes were commercialized by MES for dehydration in 1997. Membrane materials have progressed from conventional polymers to nanoporous materials (zeolite, carbon, silica, MOF, TR polymer, etc.). With the development of industry, the separation of carbon dioxide from gas mixtures has become very important. CO2 gas separation can be used in many industrial


Fig. 2 Diagram of high-flux asymmetric membranes prepared by Loeb and Sourirajan (So et al. 1973). (a) Diagram of membrane preparation. (b) Loeb and Sourirajan anisotropic phase separation membrane

fields, such as natural gas and landfill gas recovery, enhanced oil recovery (EOR), upgrading of methane generated by the decomposition of biological wastes (CO2/CH4 separation), and integrated gasification combined cycle (IGCC) processes (CO2/H2 separation). CO2 membrane separation will also play an important role in CCS. A membrane can be considered a permselective barrier or interface between two phases, as shown in Fig. 4a. Phase 1 is usually the feed (upstream) phase, while phase 2 is the permeate (downstream) phase. The membrane has the ability to transport one component from

Fig. 3 Milestones in the development of membrane gas separation (Baker 2002; Li et al. 2006; Ismail and David 2001): (1) Graham's law of diffusion (ca. 1850); (2) van Amerongen and Barrer make the first systematic permeability measurements; (3) Loeb and Sourirajan make the first anisotropic membrane (1961); (4) spiral-wound and hollow-fiber modules developed for reverse osmosis; (5) Permea PRISM® membranes introduced (1980); (6) Generon produces the first N2/air separation system (1982); (7) dried CA membranes for CO2/CH4 natural gas separations (Separex, Cynara, GMS); (8) advanced membrane materials for O2/N2, H2/N2, and H2/CH4 separation launched by Ube, Medal, Generon (1987); (9) first commercial vapor separation plants installed by MTR, GKSS, Nitto Denko (1988); (10) Medal polyimide hollow-fiber membrane for CO2/CH4 separation installed (1994); (11) first propylene/N2 separation plants installed (1996); (12) LTA zeolite membranes commercialized by MES for dehydration (1997); (13) from polyimide to nanoporous membranes (zeolite, carbon, silica, MOF, TR polymer, etc.)

the feed mixture more readily than any other component or components because of differences in physical or chemical properties between the membrane and the permeating components. Transport through the membrane takes place as a result of a driving force acting on the components in the feed. In many cases, the permeation rate through the membrane is proportional to the driving force. The two phases divided by a membrane differ for the various membrane separation processes, as depicted in Fig. 4b. The driving force can be a gradient in pressure, concentration, or temperature. Membrane separation processes can be classified according to their driving force, as in Table 1. Most membranes used for gas separation have been nonporous polymer membranes, such as cellulose acetate (Yan 1996), silicone rubber, polysulfone (Hao Jihao and Wang Shichang 1998; Ismail and Shilton 1998; Borisov et al. 1997), and polyimide (Li and Teo 1998; Thundyil et al. 1999; Staudt-Bickel and Koros 1999). Recently, microporous inorganic membranes, such as zeolite membranes (Wang et al. 1998; Poshusta et al. 1999; Aoki et al. 1998), nanoporous carbon membranes (Hernandez-Huesca et al. 1999), and ceramic membranes (Paranjape et al. 1998), have also been developed. Mechanisms of membrane gas separation have been proposed depending on the properties of both the permeant and the membrane, as shown in Fig. 5. Different mechanisms may be involved in the transport of gases across a porous membrane, including Poiseuille flow, Knudsen diffusion, and the molecular sieve effect, as shown in Fig. 5(1). When the membrane has pores much larger than the dimensions of the gas molecules, Poiseuille flow takes place. Knudsen diffusion is the predominant transport mechanism in small pores at low pressures and high


Fig. 4 Schematic drawings of membrane separation. (a) Two phases separated by a membrane with a driving force such as ΔP, ΔC, ΔT, or ΔE. (b) Schematic representation of phases divided by a membrane. L liquid, G gas

Table 1 Driving force for various membrane separation processes. L liquid, G gas

Driving force             Phase 1   Phase 2   Membrane process
1. Pressure               L         L         Reverse osmosis, nanofiltration, ultrafiltration, microfiltration
2. Partial pressure       G         G         Gas separation
                          G         G         Vapor permeation
                          L         G         Pervaporation
3. Concentration          L         L         Dialysis
                          L         L         Membrane extraction
4. Electrical potential   L         L         Electrodialysis
                          L         L         Membrane electrodialysis

temperatures. When the membrane has pore sizes close to the dimensions of the gas molecules, molecular sieving becomes effective. In some cases, the affinity between gas molecules (e.g., CO2) and the membrane material can play an important role in achieving high separation performance.


Fig. 5 Mechanism of membrane gas separation. (1) Mechanisms for gas flow through a microporous membrane. (2) Mechanisms for gas flow through a nonporous polymeric membrane

The solution–diffusion model is used to describe the transport mechanism for the permeation of gases through nonporous polymeric membranes, as shown in Fig. 5(2). The solution–diffusion model describes the transport of gases through a membrane as a three-step process: (a) preferential sorption of the gas into the membrane at the feed side, (b) diffusion through the membrane due to an applied concentration gradient (e.g., of partial pressure), and (c) desorption of the gas from the permeate side of the membrane.

Gas Transport Through Porous Membrane

To date, much work has been done on modeling gas transport through membranes, including both porous and nonporous membranes. Models of gas permeation through a porous membrane begin with a comparison of the mean free path of the gas molecules and the mean membrane pore size. If the mean free path of the gas molecules is very small relative to the pore


diameter, gas transport takes place by viscous or Poiseuille flow, and no separation is achieved. The volume flux through these pores may be described by the Hagen–Poiseuille equation (Mulder 1996):

$J = \frac{\varepsilon r^{2} \Delta P}{8 \eta \tau l}$  (1)

where J is the volume flux through the pores, ε is the porosity, r is the pore radius, ΔP is the pressure difference across a membrane of thickness l, η is the viscosity, and τ is the pore tortuosity. If the mean free path of the gas molecules is much greater than the pore diameter, gas transport takes place by Knudsen flow, and separation is achieved. Mass transfer may be expressed by the following equation (Mulder 1996):

$J = \frac{\pi n r^{2} D_{K} \Delta P}{R T \tau l}$  (2)

where n is the number of pores and r is the pore radius. D_K, the Knudsen diffusion coefficient, is given by

$D_{K} = 0.66\, r \sqrt{\frac{8 R T}{\pi M_{W}}}$  (3)

T and M_W are the temperature and molecular weight, respectively. Equations 2 and 3 show that the flux is proportional to the driving force, i.e., the pressure difference (ΔP) across the membrane, and inversely proportional to the square root of the molecular weight of the gas, so that the separation between two gases scales with the ratio of the square roots of their molecular weights. If the pore size of the membrane used in the separation is close to the mean free path of the gas molecules, the transport of gases and vapors falls in the transition between Knudsen and Poiseuille flow. Schofield et al. (1990) developed a simple and effective semiempirical transport model for gases and vapors in the transition region between Knudsen and Poiseuille flow. The flux is expressed as follows:

$J = a P^{b} \Delta P$  (4)

where a is a membrane permeation constant and b is 0 for Knudsen diffusion and 1 for Poiseuille flow.
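As a rough numerical illustration of Eqs. 1, 2, and 3, the Python sketch below (with arbitrary example values for the pore radius, porosity, and temperature, not data from this chapter) evaluates the Knudsen diffusion coefficient and the ideal Knudsen selectivity, which is simply the ratio of the square roots of the molecular weights. It also shows why Knudsen flow by itself cannot make a membrane CO2-selective over H2.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def knudsen_diffusivity(r_pore, T, M):
    """Knudsen diffusion coefficient, Eq. 3: D_K = 0.66 r sqrt(8RT / (pi M)).
    r_pore in m, T in K, M in kg/mol; returns m^2/s."""
    return 0.66 * r_pore * math.sqrt(8.0 * R * T / (math.pi * M))

def poiseuille_flux(porosity, r_pore, dP, viscosity, tortuosity, thickness):
    """Viscous (Hagen-Poiseuille) volume flux, Eq. 1."""
    return porosity * r_pore**2 * dP / (8.0 * viscosity * tortuosity * thickness)

# Ideal Knudsen selectivity between two gases is sqrt(M_B / M_A); for CO2
# (0.044 kg/mol) over H2 (0.002 kg/mol) it is below 1, i.e., Knudsen flow
# favors H2 and cannot give a CO2-selective separation.
M_CO2, M_H2, M_N2 = 0.044, 0.002, 0.028
print("Knudsen selectivity CO2/H2:", math.sqrt(M_H2 / M_CO2))   # ~0.21
print("Knudsen selectivity CO2/N2:", math.sqrt(M_N2 / M_CO2))   # ~0.80

# Illustrative Knudsen diffusivity of CO2 in a 2 nm pore at 308 K
print("D_K(CO2):", knudsen_diffusivity(2e-9, 308.0, M_CO2), "m^2/s")

# Illustrative viscous flux through 100 nm pores for a 1 bar pressure difference
# (porosity 0.3, tortuosity 2, thickness 100 um, gas viscosity ~1.8e-5 Pa s)
print("Poiseuille flux:", poiseuille_flux(0.3, 1e-7, 1e5, 1.8e-5, 2.0, 1e-4), "m^3/(m^2 s)")
```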

Gas Transport Through Nonporous Polymeric Membrane

The transport of gases through a dense, nonporous membrane is expressed in terms of the solution–diffusion model, as described above (Fig. 5(2)). The relationship between permeability, solubility, and diffusivity is expressed as follows:

Permeability (P) = Solubility (S) × Diffusivity (D)  (5)


The ability of a membrane to separate two molecules, A and B, is expressed as the ratio of their permeabilities (the selectivity, α):

$\alpha = P_{A} / P_{B}$  (6)

For a binary gas mixture, the selectivity can also be determined from the molar concentrations of the two gases in the feed and permeate:

$\alpha = \frac{y (1 - x)}{x (1 - y)}$  (7)

where y is the permeate concentration of the fast-permeating gas and x is its feed concentration. The simplest way to describe the transport of gases and vapors through a nonporous membrane is by Fick's first law (Mulder 1996):

$J = -D \frac{dc}{dx}$  (8)

The flux J of a component through a plane perpendicular to the direction of diffusion is proportional to the concentration gradient dc/dx. The proportionality constant D is called the diffusion coefficient. If it is assumed that the diffusion coefficient is constant, the change in concentration as a function of distance and time is given by Fick's second law (Mulder 1996):

$\frac{\partial c}{\partial t} = D \frac{\partial^{2} c}{\partial x^{2}}$  (9)
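A minimal numerical companion to Eqs. 5, 6, and 7 is given below; the feed and permeate compositions used at the end are invented example numbers, not data from this chapter.

```python
def permeability(solubility, diffusivity):
    """Solution-diffusion permeability, Eq. 5: P = S * D (units set by the inputs)."""
    return solubility * diffusivity

def ideal_selectivity(P_A, P_B):
    """Ideal (pure-gas) selectivity, Eq. 6."""
    return P_A / P_B

def mixed_gas_selectivity(x, y):
    """Separation factor from feed (x) and permeate (y) mole fractions
    of the faster-permeating gas, Eq. 7."""
    return (y * (1.0 - x)) / (x * (1.0 - y))

# Illustrative numbers only: a membrane that enriches CO2 from 40 % in the
# feed to 90 % in the permeate has a separation factor of 13.5.
print(mixed_gas_selectivity(x=0.40, y=0.90))
```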

Gas Transport Through Facilitated Transport Membrane

Facilitated transport membranes, a type of liquid membrane, were developed for gas separations requiring high selectivity, especially at low gas partial pressures. Facilitated transport membranes selectively permeate specific gases (e.g., CO2) by means of a reversible reaction between the gas and the membrane. Other gases, such as H2, N2, and CH4, do not react with the membrane and can only permeate by a solution–diffusion mechanism. The model of gas transport through a carrier membrane is described as follows. First, component A molecules form a complex AC with the carrier, and AC diffuses through the membrane. Second, dissolved gas A diffuses across the membrane by normal Fickian diffusion (shown in Fig. 6). The total flux of component A is then the sum of the two contributions, i.e.,

$J_{A} = \frac{D_{A}}{l} \left( C_{A,o} - C_{A,l} \right) + \frac{D_{AC}}{l} \left( C_{AC,o} - C_{AC,l} \right)$  (10)

The first term on the right-hand side of Eq. 10 represents permeant diffusion according to Fick’s law, where DA is the diffusion coefficient of the uncomplexed


Fig. 6 The mechanism of a facilitated transport membrane. (a) The scheme of a facilitated transport membrane. (b) The scheme of a facilitated transport membrane for CO2

component inside the liquid membrane, while CA,o is the concentration of component A just inside the liquid membrane at the feed side, which is equal to the solubility of A in the liquid when thermodynamic equilibrium holds at the interface. The second term represents carrier-mediated diffusion, with the flux being proportional to the driving force, which in this case is the concentration difference of the complex across the liquid membrane. DAC is the diffusion coefficient of the complex, and CAC,o is the concentration of the complex at the feed side. The following limiting cases can be observed:


1. CA,o ≫ CAC,o: when the concentration of the complex AC is much lower than the concentration of A, the first term, i.e., Fickian diffusion, is rate determining.
2. CA,o ≪ CAC,o: when the concentration of the complex AC is much greater than the concentration of A, the second term, i.e., diffusion of the complex, is rate determining.
3. CA,o ≈ CAC,o: when the concentration of the complex AC is close to the concentration of A, both Fickian diffusion and diffusion of the complex are rate determining.

Facilitated transport membranes for CO2 separation were originally prepared by impregnating the pores of microporous support membranes or polymer matrices with carrier solutions, such as amines and alkali metal carbonates, which have a chemical affinity to CO2. Figure 6b shows the conceptual diagram of CO2-facilitated transport membranes (Matsuyama et al. 1996). As shown in Fig. 6b, a membrane incorporating a CO2 carrier can react selectively and reversibly with CO2. The CO2 permeation rate is facilitated because the carrier–CO2 reaction product can also carry CO2 through the membrane, in addition to the CO2 transported by the physical solution–diffusion mechanism. On the other hand, other gases, such as N2, CH4, and H2, are transported through the membrane only by the solution–diffusion mechanism. As a result, the CO2 selectivity of facilitated transport membranes can be extremely high at low CO2 partial pressures. If an amine is used as the CO2 carrier, the reaction of the carrier with CO2 is expressed as follows:

2CO2 + 2RNH2 + H2O ⇌ RNHCOOH + RNH3+ + HCO3−  (11)

The weakly basic amino group initiates the reaction. However, considering that the amino groups are not consumed during the reversible reactions, they act as catalysts for the reversible CO2 hydration reaction; the overall reaction can therefore be written as Eq. 12:

H2O + CO2 ⇌ H+ + HCO3−  (in the presence of –NH2 groups)  (12)

It is suggested that high CO2 selectivity and permeability can be obtained through the reversible reaction above.
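The two-term flux expression of Eq. 10 and its limiting cases can be explored with the short sketch below. All diffusivities, concentrations, and the membrane thickness are illustrative assumptions, chosen only to show that when the complexed form dominates (case 2) the carrier-mediated term carries most of the flux, which is the regime in which facilitation at low CO2 partial pressure is most effective.

```python
def facilitated_flux(D_A, D_AC, l, C_A_feed, C_A_perm, C_AC_feed, C_AC_perm):
    """Total flux of component A through a facilitated transport membrane, Eq. 10:
    Fickian term for dissolved A plus carrier-mediated term for the complex AC."""
    fickian = (D_A / l) * (C_A_feed - C_A_perm)
    carrier = (D_AC / l) * (C_AC_feed - C_AC_perm)
    return fickian + carrier, fickian, carrier

# Illustrative values only (not data from this chapter): at low CO2 partial
# pressure the complexed form dominates (C_AC >> C_A), so the carrier term
# carries most of the flux.
total, fick, carr = facilitated_flux(
    D_A=1e-9, D_AC=5e-10, l=50e-6,          # m^2/s, m^2/s, m
    C_A_feed=2.0, C_A_perm=0.1,             # mol/m^3 (dissolved CO2)
    C_AC_feed=50.0, C_AC_perm=5.0)          # mol/m^3 (carrier-CO2 complex)
print(f"total {total:.3e}, Fickian {fick:.3e}, carrier-mediated {carr:.3e} mol/(m^2 s)")
```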

An Overview in the Development of CO2 Membrane Separation Material

Polymeric Membranes

Many studies have reported CO2-selective polymer membranes for the separation of CO2/CH4 and CO2/N2 gas mixtures. On the other hand, there are comparatively few polymeric membranes that can be utilized for the selective recovery of CO2

Fig. 7 Illustration showing the cross-linked polyimide (propane-diol monoesterified) membrane formation (Omole et al. 2010): the un-cross-linked polymer is formed into a dense film or an asymmetric hollow fiber and then cross-linked under heat (Δ) and vacuum, with release of a byproduct

over H2. Polymeric membranes made from glassy polymers such as cellulose acetate and polyimide have found practical use in selective CO2 separation from CO2/CH4 gas mixtures. However, the CO2/CH4 separation performance decreases greatly under high CO2 partial pressures due to CO2-induced plasticization. Koros et al. reported that cross-linked polyimide membranes exhibited enhanced resistance to CO2 plasticization, as shown in Figs. 7 and 8 (Omole et al. 2010).


Fig. 8 Effect of cross-linking temperature on CO2/CH4 separation factor (Omole et al. 2010); separation factor plotted against feed pressure (psia) for un-cross-linked and 150 °C, 200 °C, and 250 °C cross-linked membranes

The polyimide consisted of the following three monomer units in the ratio 5:3:2: (1) 4,4′-(hexafluoroisopropylidene)diphthalic anhydride (6FDA), (2) 2,4,6-trimethyl-1,3-diaminobenzene (DAM), and (3) 3,5-diaminobenzoic acid (DABA). The DABA groups were used as sites for cross-linking at 150, 200, and 250 °C for 2 h under vacuum, as shown in Fig. 7. The effect of cross-linking temperature on the CO2/CH4 separation factor using a mixed-gas feed with 50 % CO2 is shown in Fig. 8. The 200 and 250 °C cross-linked fibers showed higher selectivities than the un-cross-linked and 150 °C cross-linked counterparts. Because lower-cost conventional ovens favor lower temperatures for scale-up, lower cross-linking temperatures are preferred commercially; considering these aspects, a cross-linking temperature of 200 °C would be useful to pursue. Robeson reported that there is a trade-off relationship between the separation factor and the gas permeability for polymeric membranes. This upper-bound relationship for CO2/CH4 is shown in Fig. 9 (Robeson 2008). In recent years, new membrane materials have been studied to obtain both high permeability and high selectivity. Among them, thermally rearranged (TR) polymers are a novel class of polymer materials in which the size of the interchain spaces is controlled by heat treatment. The outstanding performance of TR polymer membranes results largely from the unique formation of angstrom-sized cavities during thermal molecular rearrangement (Park et al. 2007). In TR polymers, the free-volume structure and distribution are well suited for gas transport (cavities with a size, distribution, and shape favoring CO2 transport), in contrast with conventional polymers. For a comparative investigation, the mixed-gas separation of CO2/CH4 by TR polymer and carbon molecular sieve

Fig. 9 Upper-bound correlation for CO2/CH4 separation (Robeson 2008); CO2/CH4 selectivity versus CO2 permeability (Barrer), showing the prior and present upper bounds and the region occupied by TR polymers

membranes derived from a polyimide of intrinsic microporosity was reported. The high CO2/CH4 separation of TR polymer membranes was maintained under high pressures because of their high free volume and enhanced resistance to plasticization (Swaidan et al. 2013). Chung et al. studied the thickness, durability, plasticization, and CO2 permeability of membranes made from a TR polymer based on the ortho-functional polyimide of 2,2′-bis(3,4-dicarboxyphenyl)hexafluoropropane dianhydride (6FDA) and 3,3′-dihydroxy-4,4′-diaminobiphenyl (HAB). Long-term exposure of the TR films to CO2 showed that the CO2 permeability of thick TR films (15–20 μm) did not decline significantly at 32 atm over 500 h (Wang et al. 2014). Lee et al. reported on the physical properties, cavity size, and transport behavior of TR-PBO membranes prepared from a precursor hydroxypolyimide (Calle et al. 2013). Freeman et al. investigated TR poly(benzoxazole)/polyimide-blended membranes for CO2/CH4 separation and showed that blending hydroxypolyimides with non-TR polyimides is a feasible strategy for producing films with improved mechanical strength that retain the high gas separation performance of the TR polymer alone (Scholes et al. 2014a, b). It has also been reported that microporous organic polymer (MOP) membranes with a high affinity to CO2 display excellent CO2 separation, similar to TR polymers (Du et al. 2011; Xu and Hedin 2014). In addition, directions for new membrane material design have been investigated to obtain both high CO2 selectivity and high permeability, and more


research is being carried out actively to introduce molecular sieving ability on the angstrom scale (Gin and Noble 2011; Hudiono et al. 2011). Poly(ethylene glycol) (PEG) has a high physical affinity toward CO2 and was expected to be a viable CO2-separation membrane material. However, pure PEG exhibits very low CO2 permeability, owing to its crystallization. Freeman et al. developed cross-linked PEG membranes in order to prevent this crystallization. The cross-linked PEG membranes exhibited favorable interactions with CO2, which enhanced the solubility of CO2 over that of H2, and showed a CO2/H2 selectivity of about 10 at 35 °C and 25 at 20 °C (Lin et al. 2006). Wessling et al. developed a PEG block copolymer membrane and obtained a CO2/H2 selectivity of 10 at 35 °C (Husken et al. 2010). Peinemann et al. also developed a PEG block copolymer with CO2 affinity and obtained a CO2/H2 selectivity of 10.8 at 30 °C (Car et al. 2008). As stated in the introduction, CO2 separation from flue gas using membranes is performed at a low pressure ratio between the feed and permeate sides, and improving the CO2 permeability is important for lowering the system cost and the membrane area. It is also important to improve the separation process. Merkel et al. proposed a new system that obtains a CO2 partial pressure difference between the feed and permeate sides by using air as a sweep gas to reduce the energy cost; in addition, a membrane module with high CO2 permeability (Polaris™ membrane) was developed (Merkel et al. 2010). Huang et al. investigated the pressure ratio between the feed and permeate sides and its impact on membrane gas separation processes; they reported that the optimum membrane process may not correspond to the highest selectivity because of the limited pressure ratio (Huang et al. 2014). Hägg et al. similarly reported that optimizing the operating conditions is important for membrane gas separation processes, by investigating the influence of operating parameters such as temperature, pressure, and stage cut using carbon membranes (He and Hagg 2011).
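The pressure-ratio limitation discussed by Merkel, Huang, and Hägg can be illustrated with a simplified single-stage calculation. The sketch below solves the local flux-ratio relation for a perfectly mixed membrane stage at negligible stage cut; this is a textbook approximation, not the process models used in the cited studies, and the feed composition and pressure ratio are assumed example values.

```python
def permeate_fraction(x, alpha, phi):
    """CO2 mole fraction in the permeate of a single, perfectly mixed membrane
    stage (negligible stage cut), from the flux ratio
        y/(1-y) = alpha * (x*phi - y) / ((1-x)*phi - (1-y)),
    where x is the feed CO2 fraction, alpha the CO2 selectivity, and
    phi = p_feed / p_permeate the pressure ratio. Solved by bisection."""
    def residual(y):
        return y * ((1.0 - x) * phi - (1.0 - y)) - alpha * (1.0 - y) * (x * phi - y)
    lo, hi = 1e-12, min(1.0 - 1e-12, x * phi)  # y cannot exceed x*phi (driving force)
    assert residual(lo) < 0.0 < residual(hi), "root not bracketed for these inputs"
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative flue-gas-like case: 13 % CO2 feed, pressure ratio of 5.
for alpha in (50, 500):
    print(alpha, round(permeate_fraction(0.13, alpha, 5.0), 3))
```

With a pressure ratio of 5 and 13 % CO2 in the feed, raising the selectivity from 50 to 500 lifts the permeate CO2 fraction only from about 0.55 to 0.64, because the attainable purity is capped near x·phi = 0.65; this is the sense in which the optimum membrane is not necessarily the most selective one.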

Inorganic Membranes

As inorganic membranes, zeolite membranes and carbon membranes, among others, have been reported for CO2 separation. Inorganic membranes have appropriately sized pores that can act as molecular sieves to separate gas molecules by their effective size. In addition, inorganic membranes with strong CO2 affinities show high CO2 selectivity over N2 and CH4. Noble et al. reported that a zeolite SAPO-34 membrane showed high CO2/CH4 separation performance (Zhang et al. 2010). Zhou et al. reported the preparation of silica MFI membranes with a thickness of 0.5 μm on an alumina support membrane; the membrane showed a separation selectivity of 109 for CO2/H2 mixtures and a CO2 permeance of 51 × 10−7 mol m−2 s−1 Pa−1 at 35 °C (Zhou et al. 2014). Sub-nanoporous carbon membranes are prepared through thermolysis and carbonization of precursor polymers by heat treatment at several hundred degrees Celsius or more. Polyimide, polyacrylonitrile, cellulose, phenolic resin, etc. are used as precursors. Carbon membranes prepared from a precursor polyimide based on 6FDA-mPDA/DABA (3:2) by thermolysis at 550 °C showed a CO2 permeability as high as


14,750 Barrer with a CO2/CH4 selectivity of approximately 52. Even carbon membranes pyrolyzed at 800 °C still showed a high CO2 permeability of 2,610 Barrer with a high CO2/CH4 selectivity of approximately 118 (Qiu et al. 2014). Inorganic membranes show high gas separation performance together with high stability and durability at high temperature. On the other hand, they have the disadvantage of high membrane cost compared with polymeric membranes. To combine the benefits of both polymeric and inorganic materials, mixed-matrix membranes (MMMs), a type of organic/inorganic composite membrane, have also been studied. Many studies have reported gas separation membranes using zeolitic imidazolate frameworks (ZIFs), a type of metal-organic framework (MOF), as the inorganic nanoparticles. Bae et al. reported that MMMs prepared by incorporating a MOF (ZIF-90) into a polymeric matrix showed high CO2/CH4 separation performance (Bae et al. 2010). MMMs prepared by incorporating ZIF-108 nanoparticles into a polysulfone (PSf) matrix showed a CO2/N2 selectivity of 227 (Ban et al. 2014). Hägg et al. developed ZIF-8/PEBAX-2533 MMMs for CO2 capture; MMMs of PEBAX-2533/ZIF-8 with 25 % ZIF-8 loading showed a CO2 permeability of 1,129 Barrer with a CO2/N2 selectivity of 31 (Nafisi and Hagg 2014). Because CO2 separation by inorganic membranes is mainly achieved via molecular sieving, high selectivity is obtained for CO2/CH4 and CO2/N2 separation but generally not for CO2/H2 separation.

Ionic Liquid Membranes

Ionic liquid (IL) membranes have received increasing interest and have been studied in recent years because of their low vapor pressures and stability at high temperatures. Polymerized IL membranes were prepared for the separation of CO2/N2, CO2/CH4, etc. by Noble et al. (Bara et al. 2007). Amino-containing ILs were investigated for the separation of CO2/H2 by Myers et al. (2008), and a CO2/H2 selectivity of 15 was obtained at 85 °C. Matsuyama et al. reported amino-containing IL membranes for the separation of CO2/CH4, and the membrane showed constant separation ability for 260 days (αCO2/CH4 = ca. 60) (Hanioka et al. 2008). Nagai et al. reported impregnating an IL and ZSM-5 into a polyimide (PI) matrix to improve the stability of IL composite membranes; the resulting membrane exhibited a CO2/CH4 selectivity of 31 with a CO2 permeability of 4,509 Barrer (Shindo et al. 2014). Wessling et al. polymerized an IL monomer to improve the pressure durability of IL membranes; the resulting membrane showed a CO2/CH4 selectivity of 22 with a CO2 permeability of 18 Barrer at 40 °C for a CO2 (50 vol%)/CH4 (50 vol%) mixed feed gas at 40 atm total pressure (Simons et al. 2010).

Facilitated Transport Membranes

Facilitated transport membranes for CO2 separation were originally prepared by impregnating the pores of microporous support membranes or polymer matrices with carrier solutions, such as amines and alkali metal carbonates, which have a chemical affinity to CO2. Figure 10 shows the conceptual diagram of CO2-facilitated transport membranes (Zou and Ho 2006). As shown in Fig. 10, a membrane incorporating a CO2 carrier can react selectively and reversibly with CO2. The CO2 permeation rate can be facilitated because the carrier–CO2 reaction product can transport


Fig. 10 The conceptual diagram of CO2-facilitated transport membrane (Zou and Ho 2006)

through the membrane, in addition to CO2 transport by the physical solution–diffusion mechanism. On the other hand, other gases, such as N2, H2, and CO, are transported through the membrane only by the solution–diffusion mechanism. As a result, the CO2 selectivity of facilitated transport membranes can be extremely high at low CO2 partial pressures. Facilitated transport membranes for CO2 separation have been studied since the 1960s. Ward and Robb immobilized an aqueous bicarbonate–carbonate solution in a porous support and obtained a CO2/O2 separation factor of 1,500 (Ward and Robb 1967). Immobilized liquid membranes impregnated with carbonate and bicarbonate solutions were studied by Jung and Ihm (1984), Bhave and Sirkar (1986), and Yamaguchi et al. (1996). Apart from carbonate and bicarbonate ions as reactive carriers, amines are other chemicals that can facilitate CO2 transport. An aqueous solution of diethanolamine (DEA) was used to facilitate CO2 transport (Guha et al. 1990; Matsuyama et al. 1996). The transport of acid gases through an ion-exchange membrane was facilitated with a diamine carrier (Quinn et al. 1997). A fixed-carrier membrane for CO2-facilitated transport was prepared by plasma grafting of 2-(N,N-dimethyl)aminoethyl methacrylate (Matsuyama et al. 1996; Neplembroek et al. 1992). Membranes based on the polyelectrolyte poly(vinylbenzyl trimethylammonium fluoride) exhibited high permselectivity for CO2/CH4 (Quinn et al. 1997; Kemperman et al. 1997). Although immobilized liquid membranes have quite high permselectivity, they suffer from instability. Several methods have been proposed to improve membrane stability under pressurized or vacuum conditions; for example, a polymer gel was used to retain the CO2 carriers in the membrane (Neplembroek et al. 1992; Kemperman et al. 1997; Matsumiya et al. 2004, 2005), or the surface of the support membrane was treated chemically and the liquid membrane layer was formed on the pretreated support (Ito et al. 1997). Ho et al. developed facilitated transport membranes by blending amines with poly(vinyl alcohol) (PVA) (Zou and Ho 2006). These membranes showed a CO2/H2 selectivity of 300 at 110 °C and 100 at 150 °C, as shown in Fig. 11. Matsuyama et al. reported facilitated transport membranes prepared by the immobilization of


Fig. 11 CO2/H2-separation properties of facilitated transport membranes prepared by blending amines with poly(vinyl alcohol) (PVA) (Zou and Ho 2006)

2,3-diaminopropionic acid and cesium carbonate in a PVA/poly(acrylic acid) copolymer matrix, and the resulting membrane showed a CO2/H2 selectivity of 432 at 160 °C, as shown in Fig. 12 (Yegani et al. 2007). Hägg et al. developed a CO2/N2-separation membrane by blending PVA and poly(vinyl amine) (PVAm). A composite membrane with a selective layer thickness of 0.3 μm was prepared by casting a solution of PVA/PVAm on a polysulfone (PSf) support membrane (Sandru et al. 2010). A model of the CO2-facilitated transport mechanism is shown in Fig. 13. In this model, the bicarbonate ion is considered the CO2 carrier and plays an important role in CO2 permeation.

Fig. 12 CO2/N2-separation properties of facilitated transport membranes prepared by the immobilization of 2,3-diaminopropionic acid and cesium carbonate in a PVA/poly(acrylic acid) copolymer matrix (Yegani et al. 2007); panels (a)–(d) plot the CO2 flux, CO2 permeance, N2 permeance, and CO2/N2 selectivity against feed gas pressure (kPa) at 125–160 °C for 50 wt% DAPA

Myers et al. reported that an amino-containing IL membrane showed CO2-facilitated transport for dry CO2/H2 mixed-gas separation (Myers et al. 2008). Matsuyama et al. developed amino acid IL-based facilitated transport membranes for CO2 separation. A tetrabutylphosphonium proline-based amino acid IL membrane showed


Fig. 13 CO2-facilitated transport membrane mechanism by bicarbonate (Sandru et al. 2010); in the top skin layer of PVAm, CO2 reacts reversibly with amine carriers fixed on the polymer backbone and is transported as bicarbonate (HCO3−), whereas N2 crosses the skin layer and the porous support only by diffusion

an excellent CO2 permeability of 14,000 Barrer with a CO2/N2 selectivity of 100 at 373 K under dry conditions and 10 kPa CO2 partial pressure (Kasahara et al. 2012). The CO2 partial pressure and temperature significantly influenced the CO2 permeability and CO2/N2 selectivity (Kasahara et al. 2014a, b). Svec et al. developed a polymer hybrid CO2-facilitated transport membrane by photopolymerization based on polyaniline and 2-hydroxyethyl methacrylate; the resulting hybrid membranes showed a CO2 permeability of 3,460 Barrer with a CO2/CH4 selectivity of 540 under 8.3 kPa CO2 partial pressure (Blinova and Svec 2012). Sirkar et al. reported excellent CO2/N2 selectivity using a viscous and nonvolatile poly(amidoamine) (PAMAM) dendrimer as an immobilized liquid membrane under isobaric and water-vapor-saturated test conditions (Kovvali et al. 2000). In integrated coal gasification combined cycles with CO2 capture and storage (IGCC-CCS), CO2-separation membranes will play an important role in reducing CO2-capture costs. In Japan, PAMAM dendrimer/polymer hybrid membranes have been developed for CO2 separation from flue gas (CO2/N2) (Duan et al. 2006; Kai et al. 2008) and from the IGCC process (CO2/H2) (Taniguchi et al. 2008; Duan et al. 2012; RITE Today annual report 2015).

Membrane Module Design and Manufacturing for CO2 Membrane Separation

Industrial membrane plants for gas separation often require hundreds to thousands of square meters of membrane to perform the separation. It is very important to provide a large surface area to deal with the large quantities of flue gas or fuel gas.


Fig. 14 Modular constructions employed for gas separation processes (Sanders et al. 2013)

Hence, the design and manufacture of membrane modules are very important. There are several ways to package membranes economically and efficiently to provide a high surface area per module for gas separation; these packages are called membrane modules. Examples of membrane modules are shown in Fig. 14 (Sanders et al. 2013).

1. Hollow Fiber Membrane Modules
A typical hollow fiber bundle contains on the order of 10^5 hollow fibers, which are tightly packed (packing fractions on the order of 50 % are common) with both ends embedded in a thermosetting epoxy polymer (Coker et al. 1998). The hollow fiber bundle is then housed in a polymeric or metal pressure vessel, depending on the pressure that the system is expected to encounter during operation. Feed gas can be introduced on the bore side or the shell side of a hollow fiber module, depending on the application.


Fig. 15 Hollow fiber membrane module. (a) Countercurrent flow. (b) Cocurrent flow

Fig. 16 Cross-sectional construction of a spiral wound membrane module for gas separation

Two basic configurations of hollow fiber membrane modules are shown in Fig. 15: (a) countercurrent flow and (b) cocurrent flow. The countercurrent module has a shell-side feed design with a loop of fibers contained in a pressure vessel. The system is pressurized from the shell side, and the permeate passes through the fiber wall and exits through the open fiber ends. This design makes it easy to obtain very large membrane areas with thick-walled, small-diameter fibers that withstand the pressures used in gas separation. In the cocurrent module, the fibers are open at both ends, and the feed fluid is circulated through the bore of the fibers. A larger fiber diameter is needed to minimize the pressure drop (Δp) in gas separation.

CO2 Geological Storage

When the pressure difference between the non-wetting and wetting phases exceeds Pc, penetration of the non-wetting phase occurs. In fact, there are pores (i.e., pore spaces with relatively large diameters forming between particles) and pore throats (i.e., the narrowest points of the tubes connecting pores) of various sizes within the target porous medium. Therefore, even if Pnw − Pw > Pc for a certain pore or pore throat diameter, penetration of CO2 (breakthrough) does not occur immediately, and the non-wetting phase is blocked by the next throat of the flow path with a smaller diameter. The first breakthrough occurs when a continuous flow path for the non-wetting phase is formed between the two ends of the porous medium. The flow path of the non-wetting phase at this point occupies an insignificant fraction of the total pore space (i.e., the water saturation Sw is high). A further increase in the difference between Pnw and Pw enables penetration through pore throats with smaller diameters, which results in the expansion of the flow path volume of the non-wetting phase. In contrast, a reduction of the difference between Pnw and Pw after breakthrough of the non-wetting phase leads to re-imbibition of the wetting phase,

Fig. 1 Relationship of each Pc term characterizing the sealing performance (Pcen, Pcdis, Pcth, and Pcres) with the Pc–Sw curve; capillary pressure Pc is plotted against water saturation Sw (%), with the drainage and imbibition branches, the residual gas saturation, and the irreducible water saturation indicated

starting with the smallest pores and proceeding successively to larger pores (Hildenbrand et al. 2002). As a result, the connected flow path becomes blocked, and the permeability k of the non-wetting phase decreases. Ultimately, the flow path through the largest pore throats is shut off, and penetration of the non-wetting phase stops. The drainage and imbibition of the wetting phase within the porous medium produce a Pc–Sw curve (Fig. 1). Generally, the Pc in the imbibition process is smaller than that in the drainage process at equal Sw. Therefore, the Pc–Sw curves of the two processes show large hysteresis.
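Drainage branches of the kind sketched in Fig. 1 are often summarized with parametric models. The snippet below implements the widely used Brooks–Corey form as an illustration; it is not the representation used by the authors, and the entry pressure, pore-size index, and irreducible water saturation are arbitrary example values.

```python
def brooks_corey_pc(Sw, Pe, lam, Swr):
    """Brooks-Corey drainage capillary pressure curve:
    Pc = Pe * Se**(-1/lam), with effective saturation
    Se = (Sw - Swr) / (1 - Swr). Pe is the entry pressure (Pa),
    lam the pore-size distribution index, Swr the irreducible
    water saturation."""
    Se = (Sw - Swr) / (1.0 - Swr)
    if Se <= 0.0:
        raise ValueError("Sw at or below irreducible water saturation")
    return Pe * Se ** (-1.0 / lam)

# Illustrative parameters only (not fitted to any sample in this chapter):
# entry pressure 150 kPa, lambda = 2, irreducible water saturation 0.2.
for Sw in (1.0, 0.9, 0.6, 0.3):
    print(f"Sw = {Sw:.1f}: Pc = {brooks_corey_pc(Sw, 150e3, 2.0, 0.2)/1e3:.0f} kPa")
```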

Definition of Pc

On the Pc–Sw curve, the Pc value changes as the non-wetting phase penetrates the porous medium. Depending on the stage of this process, various terms are defined to characterize the sealing performance. These terms include the entry pressure Pcen, the displacement pressure Pcdis, the threshold pressure Pcth, and the residual capillary pressure Pcres. Figure 1 presents their general definitions and relationships with the Pc–Sw curve. The Pcen is the pressure at which the non-wetting phase first enters the pores exposed at the surface of the target porous medium. It is an index of the diameter of the pores or the maximum pore throat exposed at the surface; in this case, however, the throat does not need to be connected to the opposite end of the porous medium. The Pcen is easy to identify experimentally, but the structures of the pores and pore throats exposed at the surface depend on sample size and heterogeneity. Thus, the physical meaning of the measured Pcen is not necessarily clear (Pittman 1992). Based on breakthrough experiments with mercury in sandstone, limestone, and silty shale rocks, Schowalter (1979) defined the Pc at a mercury saturation of 10 % as the Pcdis. Although these samples had a wide distribution of pore throat diameters,


the non-wetting phase saturation necessary to build connected flow paths was limited to a narrow range of 4.5–17 %. Thus, it is concluded that breakthrough of the non-wetting phase occurs in many rocks when Sw is approximately 90 %. The Pcth is a particularly ambiguous measure of sealing performance, and its definition has varied in past research. Generally, it refers to the difference between Pnw and Pw at the two ends of the porous medium when a connected flow path for the non-wetting phase forms in the porous medium. Experimentally, it is measured as the differential pressure when the non-wetting phase first penetrates the porous sample. The magnitude of Pcth is defined by the Pc at the maximum pore throat diameter. Katz and Thompson (1986, 1987) considered the pressure at which mercury forms a connected pathway within the sample to be equivalent to the inflection point at which the Pc–Sw curve becomes convex upward and defined this pressure as Pcth. In connection with this, Thomeer (1960) and Swanson (1981) showed by numerical analysis and experiments, respectively, that a pore system with good connectivity forms within rocks at the vertex of the hyperbola (i.e., the point where the slope of the tangent is 45°) obtained when the Pc–Sw curve is plotted on a double logarithmic diagram. Along with the Pc of these drainage processes, the Pc of the imbibition process is also defined. If the flow path of the non-wetting phase is completely blocked during imbibition of the wetting phase, a residual pressure difference is generated across the porous medium; this is defined as the Pcres (Hildenbrand et al. 2002).

Measurement Methods for Pc

Commonly, Pc is measured by mercury intrusion or gas sorption (mainly using nitrogen), and the results are then converted to the Pc of the target system. The former method targets throat diameters from several nm to several hundred μm, while the latter targets throat diameters of approximately 0.1–100 nm. In either method, the principle of the analysis is the same. For example, conversion to the CO2–water system from the mercury intrusion method is conducted as follows (Purcell 1949):

$(P_c)_{cw} = \frac{\sigma_{cw} \cos\theta_{cw}}{\sigma_{ma} \cos\theta_{ma}}\,(P_c)_{ma}$  (2)

where the subscript cw denotes the CO2–water system and ma the mercury–air system. Specifically, the relationship between the volume of injected mercury and the pressure is measured, and (Pc)ma is obtained from the shape of the Pc–Sw curve. Here, σma (480 dynes/cm) and θma (140°) are both known; thus, by using σcw and θcw under the target conditions as input parameters, (Pc)cw is determined. There are several methods that use the actual target fluids to determine Pc. The most common is the step-by-step approach, in which the pressure of the non-wetting phase is slowly increased in steps and the Pc is obtained based on the flow rate


changes of the injected non-wetting phase and the effluent wetting phase. The step-by-step approach corresponds to measurement under static conditions in conformity with the definition of the seal; thus, it is considered the optimum method. However, to eliminate the dynamic effect, injection of the non-wetting phase at an extremely slow flow rate is required. As a result, it takes from a few days up to several weeks or more to measure the breakthrough of the non-wetting phase (e.g., Liu et al. 1998). In contrast with these drainage processes, the Pcres is measured for imbibition processes (Hildenbrand et al. 2002). In this method, under the condition of constant volume for the whole system, the non-wetting phase is injected instantaneously at a pressure equal to or higher than the predicted Pcth. Following this, the upstream pressure in the sample decreases, while the downstream pressure increases. By measuring the pressure changes of the non-wetting phase at both ends of the sample, Pcres is obtained from the difference in the final residual pressures. However, the Pcres measured in experiments can become lower when re-imbibition of the wetting phase is prevented, because the flow path volume differs between the drainage and imbibition processes. In fact, Hildenbrand et al. (2002) confirmed that in samples with k > 100 nD, Pcres becomes smaller than the Pcth of the drainage process. To replace the above static or quasi-static measurement methods, Egermann et al. (2006) proposed a dynamic measurement method, in which the non-wetting phase is injected by applying a constant overall pressure drop, above Pcth, across the sample. This method is based on the following mechanism. Before the non-wetting phase reaches the sample surface, the drainage rate of the wetting phase depends on the pressure drop over the whole sample. However, once the non-wetting phase starts to penetrate the sample, the generation of Pcth at its front decreases the drainage rate of the wetting phase. From experiments with a brine–nitrogen system, they showed that for fine-grained sandstone, carbonate, and chalk rocks with k of approximately 1 μD, the dynamic measurement method has an accuracy similar to the step-by-step approach, and measurements can be conducted in the same time as the Pcres method (Egermann et al. 2006). However, it is noteworthy that the Pcth measured by this method is influenced by the flow. In other words, under dynamic conditions, the interface between the wetting and non-wetting phases takes a shape defined by the receding contact angle against the wetting phase; thus, for rocks with poor wettability (i.e., θ of the wetting phase > 0°), there is a possibility that θ differs from the equilibrium contact angle corresponding to static conditions. In such a case, Pcth may be overestimated (Egermann et al. 2006).
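Eq. 2 can be applied directly once the interfacial properties of the target fluid pair are chosen. The sketch below converts a mercury-intrusion threshold pressure to the CO2–water system using the mercury–air values quoted above (480 dynes/cm and 140°); the supercritical CO2–brine interfacial tension and contact angle are assumed illustrative values, not measurements from this chapter.

```python
import math

def convert_pc(pc_ma, sigma_cw, theta_cw_deg, sigma_ma=480.0, theta_ma_deg=140.0):
    """Convert a mercury-air capillary pressure to the CO2-water system, Eq. 2:
    (Pc)_cw = (sigma_cw * cos(theta_cw)) / (sigma_ma * cos(theta_ma)) * (Pc)_ma.
    Interfacial tensions must share the same units (here dynes/cm = mN/m);
    the magnitudes of the cosine terms are used, since only their ratio matters."""
    ratio = ((sigma_cw * abs(math.cos(math.radians(theta_cw_deg))))
             / (sigma_ma * abs(math.cos(math.radians(theta_ma_deg)))))
    return ratio * pc_ma

# Example values only: a mercury threshold pressure of 10 MPa and an assumed
# supercritical CO2-brine interfacial tension of ~28 mN/m with theta ~ 0
# (water-wet) give a CO2-water threshold pressure of roughly 0.76 MPa.
print(convert_pc(pc_ma=10.0, sigma_cw=28.0, theta_cw_deg=0.0), "MPa")
```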

Evaluation of Sealing Performance Under CGS Conditions

Previous Works

The sealing performance of systems for CGS is evaluated with two types of methods: conversion from the Pc measured with mercury (e.g., Dewhurst et al. 2002; Bachu and Bennion 2008) and direct measurement using CO2 (Hildenbrand et al. 2004; Li et al. 2005; Plug and Bruining 2007; Wollenweber et al. 2010). As shown clearly in


Eq. 1, Pc depends on the type of fluid; thus, the latter method is more desirable in order to reduce uncertainties. However, the number of such studies is currently limited. As one example, Li et al. (2005) measured the Pcth of N2, CH4, and supercritical CO2 in anhydrite (CaSO4), a simulated caprock sample, using the step-by-step approach. It was shown that the Pcth of CO2 is lower than those of CH4 and N2, reflecting the fact that Pcth is proportional to the interfacial tension σ between gas and brine. In contrast, after injecting CO2 at a constant flow rate into unconsolidated sand samples at various temperatures and pressures, Plug and Bruining (2007) kept the CO2 pressure constant to cause imbibition of water at a constant flow rate. They showed that an increase of CO2 pressure, which causes a decrease of σ, decreases the Pc in both the drainage and imbibition processes. Hildenbrand et al. (2004) measured the Pcres of N2, CH4, and CO2 in argillaceous rocks from changes in the differential pressure during imbibition. In addition to deriving a relational expression between k and Pcres, they also analyzed the pore size distribution based on Pc. In a similar manner, Wollenweber et al. (2010) conducted experiments with limestone and marl as targets. On repeated injection of CO2, a decrease in Pcres was observed, which was attributed to the dissolution and re-precipitation of carbonates.

Modeling of the Correlation Between Pcth and k

Generally, rocks in nature can differ greatly in their Pcth, because penetration depends on the flow path structure of the individual sample. Therefore, from the perspective of stable CO2 containment, it is necessary to ascertain the range of variation of the caprock's Pcth by measuring numerous samples instead of a single one. Moreover, the sealing performance over a widespread area must be evaluated, especially for large-scale CGS. However, the core samples that can realistically be obtained from a storage site are limited in number, and samples are not necessarily available over the whole site. Because of these problems, the authors have proposed a method to represent the variation of Pcth using artificial samples whose internal structure is graded from simple to more complicated (Sorai et al. 2014a). This methodology enables prediction of the range of variation of rocks' Pcth without repeated measurements on numerous rock samples. As a first step, the authors prepared sintered compacts of uniform spherical silica particles with diameters of 0.1–10 μm. The manufactured sintered compacts were used for measurements of k and Pcth. Specifically, Pcth was measured in a supercritical CO2–water system under conditions corresponding to 1,000 m depth (10 MPa and 40 °C); CO2 under these conditions is in the supercritical phase. The CO2 pressure at the bottom of the sample was increased above 10 MPa in steps of 10 kPa, while the water pressure at the upper side of the sample was maintained at 10 MPa. The authors determined the Pcth of a sample from the differential pressure at the instant when CO2 breakthrough was observed through an observation window in the sample cell. Figure 2 shows a breakthrough image of supercritical CO2 from the upper surface of a sample, taking the 0.1 μm particle sample as an example (Sorai et al. 2014a). Initially, the CO2 was not able to penetrate the sample because


Fig. 2 Breakthrough image of supercritical CO2 from an upper surface of a sample (Modified after Sorai et al. 2014a): (a) immediately before breakthrough; (b) breakthrough started from a specific point; and (c) finally the entire surface was covered with numerous breakthrough points

of a capillary effect (Fig. 2a), but a further increase of the differential pressure caused CO2 seepage from the upper surface. The breakthrough started from a specific point even though the particles were packed homogeneously in the interior of the sample. The CO2 flow passing through this point increased with increasing differential pressure (Fig. 2b). An additional rise of the differential pressure gradually increased the number of breakthrough points. Finally, the entire surface was covered with numerous breakthrough points (Fig. 2c). Figure 3 presents the correlation between Pcth and k for the sintered compacts. The closest-packing structure of uniform spherical particles is defined theoretically as a line on a double logarithmic plot. Here, it is noteworthy that the sintered compacts scatter around the closest-packing line because of their random packing (Sorai et al. 2014a). This variation is enhanced at lower k. Moreover, the results suggest that samples with inhomogeneous particle packing lie above the line, whereas samples in which the particles are packed homogeneously overall but which locally include inhomogeneous structures such as cracks lie below the line. Figure 3 also shows experimentally obtained results for various sedimentary rocks. The Pcth of the rock samples, which are shifted downward further from the closest-packing line, appears to be lower than that of the sintered compacts. This is due to the diverse sizes, shapes, and mineral compositions of the particles contained in the rocks. In other words, a rock's Pcth can vary greatly due to such factors.


Fig. 3 Correlation between Pcth and k of sintered compacts and various sedimentary rocks. The straight line represents calculated values for a case in which the closest packing of spherical particles is assumed
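The theoretical closest-packing line in Fig. 3 can be rationalized with textbook relations: a Kozeny–Carman estimate of k and a Young–Laplace estimate of Pcth based on the throat between three touching spheres. The sketch below uses this reasoning with assumed CO2–water interfacial properties; it is offered only as an order-of-magnitude illustration and is not necessarily the exact formulation used by Sorai et al. (2014a).

```python
import math

def closest_packing_line(d_particle, sigma=0.028, theta_deg=0.0,
                         porosity=0.26, kozeny_const=180.0):
    """Rough (Pcth, k) estimate for a packing of uniform spheres of diameter
    d_particle (m): permeability from the Kozeny-Carman relation,
        k = porosity**3 * d**2 / (kozeny_const * (1 - porosity)**2),
    and threshold pressure from the Young-Laplace equation applied to the
    throat between three touching spheres, whose inscribed radius is
        r_t = (2/sqrt(3) - 1) * d/2  (~0.155 * particle radius).
    sigma (N/m) and theta are assumed CO2-water interfacial tension and
    contact angle; all parameter values here are illustrative assumptions."""
    k = porosity**3 * d_particle**2 / (kozeny_const * (1.0 - porosity)**2)
    r_throat = (2.0 / math.sqrt(3.0) - 1.0) * d_particle / 2.0
    pcth = 2.0 * sigma * math.cos(math.radians(theta_deg)) / r_throat
    return pcth, k

for d in (0.1e-6, 1e-6, 10e-6):  # particle diameters spanning the samples
    pcth, k = closest_packing_line(d)
    print(f"d = {d*1e6:5.1f} um: Pcth ~ {pcth/1e6:5.2f} MPa, "
          f"k ~ {k:.2e} m^2 (~{k/9.87e-16:.2e} mD)")
```

Consistent with the discussion above, real rocks tend to fall below such an idealized line because of their diverse particle sizes, shapes, and mineral compositions.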

To what extent does such variation of Pcth affect predictions of the spread of CO2 plumes after injection? To address this problem, the migration of a CO2 plume was numerically simulated for a case in which a 100-m-thick alternating sandstone and mudstone sequence was set as the site and one million tons of CO2 was injected annually into a sandstone stratum at 950–1,050 m depth for 50 years (Sorai et al. 2014b). The simulation compared the experimentally obtained upper (1.2 MPa) and lower (150 kPa) limits for the Pcth of the mudstone stratum, assuming the vertical permeability k of the two strata to be 10 mD (millidarcy) and 0.1 mD, respectively. The result revealed a significant impact on the spread of the CO2 plume, particularly during the period following the completion of injection. When Pcth was increased, almost all of the CO2 remained within the injection stratum over a longer period of time (Fig. 4a). However, when Pcth was decreased, CO2 moved upward through the mudstone stratum due to its buoyancy, even after the injection had been completed, and reached the sandstone stratum one layer above after 1,000 years (Fig. 4b). The magnitude of Pcth therefore has a significant impact on predictions of CO2 behavior after storage. Determining the variation range of a rock's Pcth remains an issue to be tackled in the future. As the next step, the authors are investigating the impact of particle configuration and mineral composition on Pcth, in addition to the effect of the particle size distribution arising from mixtures of particles of various sizes.
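The role of buoyancy in the low-Pcth case can be illustrated with a standard seal-capacity estimate: the maximum height of a continuous CO2 column that a caprock can retain is the threshold pressure divided by the buoyancy pressure gradient. The densities used below are assumed round numbers for brine and supercritical CO2 at roughly 10 MPa and 40 °C, not values from the cited simulation.

```python
G = 9.81  # m/s^2

def max_co2_column(pcth, rho_water=1000.0, rho_co2=650.0):
    """Maximum height (m) of a continuous CO2 column that a caprock with
    threshold pressure pcth (Pa) can hold back against buoyancy:
        h_max = Pcth / ((rho_water - rho_co2) * g).
    The densities are assumed round numbers for brine and supercritical CO2,
    not values taken from this chapter."""
    return pcth / ((rho_water - rho_co2) * G)

# Compare the two mudstone Pcth values used in the simulation above.
for pcth in (1.2e6, 150e3):
    print(f"Pcth = {pcth/1e3:6.0f} kPa -> max CO2 column ~ {max_co2_column(pcth):6.0f} m")
```

On this estimate, a 150 kPa seal retains only a few tens of meters of CO2 column, whereas a 1.2 MPa seal retains several hundred meters, which is consistent with the contrasting plume behavior described above.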

Conclusion of This Section
With respect to the sealing performance of rocks, the fundamental theory, measurement and analytical methods, and the application of the obtained data have been well studied, particularly in the field of petroleum exploration. The acquired knowledge is directly useful in evaluating the safety of CGS. However, some questions remain for CGS that did not need to be considered in petroleum exploration, such as the interaction between the non-wetting phase (CO2) and the wetting phase


Fig. 4 Impact of difference in Pcth of mudstone stratum on CO2 plumes (After Sorai et al., 2014b). The solid line represents the upper edge of the sandstone stratum, while the dotted lines represent the upper edge of the mudstone stratum. The Pcth of the mudstone stratum corresponds to (a) 1.2 MPa and (b) 150 kPa, respectively

(formation water) and the phase change of the non-wetting phase itself. Moreover, even for the existing evaluation methods, many issues remain, such as the development of accurate and efficient measurement methods and the establishment of a theory that can be applied to rocks with complex internal structures. Future work should involve new approaches, such as the modeling of Pcth using artificial samples with controlled internal structures, which can further develop the methods for evaluating the sealing performance for CGS.

Evaluation of Geochemical Processes
Introduction
Implementing CGS requires the quantitative evaluation of the behavior and effects of the injected CO2. The focus is generally on physical processes, such as CO2 migration, which start immediately after CO2 injection; this leaves many unknowns over the long term (1,000 years) (IPCC 2005), such as the effects of geochemical interactions. For instance, in storage aquifers, where the highest storage potential is expected, active geochemical processes are expected to occur due to the acidification of formation water caused by the dissolution of CO2. The time scale of these processes is expected to be long, ranging from several hundred years to several tens of thousands of years. However, some processes may start immediately after CO2 injection in areas near injection wells and in rocks containing highly reactive carbonate minerals.


In this section, the author presents the target processes and the methods for understanding geochemical processes related to CGS in aquifers, focusing on numerical simulations as the essential approach for evaluating long-term geochemical processes. To improve model reliability, the author examines the conditions and issues involved in improving the accuracy of input parameters. An example of reaction rate measurements for carbonates at hot springs is also included.

Approach to Evaluation of Geochemical Processes
For CGS in aquifers, geochemical processes on various temporal and spatial scales are predicted to occur in the components of the storage system, such as reservoirs, sealing layers, cracks, faults, wellbores, and upper aquifers. These processes include the dissolution of CO2 in formation water; reactions of the acidified water with surrounding rocks, pore-filling materials, and cement; carbonation of CO2 through reaction with mafic or ultramafic rocks; evaporation of formation water (dry-out); formation of CO2 hydrate; groundwater pollution; etc. These processes are directly related to the evaluation of important issues associated with implementing CGS, including the storage potential (geochemical trapping) based on long-term mechanisms, CO2 injectivity in reservoirs, leakage risks from caprock, cracks, faults, and wellbores, and environmental effects on shallow groundwater. Four types of approaches are applied to evaluate these issues: field tests, laboratory experiments, natural analogue studies, and simulation studies. Each method targets a different temporal and spatial scale. Laboratory experiments have significant constraints on the temporal and spatial scales, yet their conditions can be strictly controlled; they are therefore suitable for elucidating mechanisms and acquiring parameters for each fundamental process. In contrast, natural analogue studies make it possible to understand phenomena occurring on the geological time scale. However, the transitional history of the environmental conditions is not always clear; thus, they are not suitable for acquiring quantitative data related to reaction kinetics. Simulation studies, on the other hand, can deal with phenomena of all scales and play a crucial role in complementing the other approaches. In recent years especially, three-dimensional reactive transport simulations have become mainstream thanks to drastic advances in computing power (e.g., Xu et al. 2003; Johnson et al. 2004; White et al. 2005). However, as discussed later, there are many issues regarding the reliability of the calculation results. Field tests are quite efficient for the quantitative analysis of CO2 behavior under underground conditions. Starting with the Norwegian Sleipner Project in 1996 for the study of CGS in aquifers, demonstration projects of various scales are currently in progress around the world. However, the geochemical approach in these projects is mainly based on ex situ analyses of brine and rock samples; in other words, there is no effective tool for the in situ monitoring of geochemical processes.


The Role and Directionality of Simulation Studies
Geochemical processes are phenomena that occur on extremely long time scales; thus, the ultimate evaluation of these processes must rely on numerical simulations. However, as previous studies have indicated, current simulations are based on many assumptions with various uncertainties and unknown parameters. For the long-term evaluation of CGS, these uncertainties must therefore be reduced in order to increase the accuracy of the simulation results. Here, to improve the prediction accuracy of geochemical processes, the author examines the parameters that are especially important for simulation studies, drawing on contributions from each of the abovementioned approaches.

Equilibrium Parameters
Equilibrium parameters include the density, solubility, and enthalpy in the CO2–water system, among others (Gaus et al. 2008). These parameters need to be formulated as functions not only of temperature, pressure, and salinity but also of the effect of impurities and acidic gases other than CO2. Interfacial tension and wettability, which regulate the capillary pressure of CO2 in rocks, are also important parameters to which geochemical processes indirectly contribute. As for interfacial tension, a relatively large number of measurements have already been made for the CO2–water system (e.g., Bachu and Bennion 2009), and extension to multicomponent systems such as mixed CO2–H2S gas is partly underway (Shah et al. 2008). Wettability is difficult to evaluate because of its complex dependence on surface conditions; thus, subsurface mineral surfaces are often assumed to be completely water-wet (i.e., a contact angle of 0°). However, some recent studies have shown that the contact angles of mica and quartz change in the presence of CO2 (e.g., Chiquet et al. 2007). Therefore, additional data from laboratory experiments are desired.

Kinetic Parameters
Generally, the rate formula for a mineral reaction is expressed as the following Eq. 3 (e.g., Lasaga 1998):

Rate = k A exp(−E/RT) Πᵢ aᵢ^nᵢ f(ΔGr)    (3)

where k is the rate constant, A signifies the reactive surface area, E is the activation energy, R denotes the gas constant, T is the absolute temperature, aᵢ is the activity of chemical species i, and nᵢ stands for the reaction order with respect to aᵢ. The last term, f(ΔGr), is a function of the Gibbs free energy change of reaction, which is expressed in terms of the degree of saturation:

ΔGr = RT ln(Q/Keq)    (4)

where Q signifies the activity product and Keq is the equilibrium constant. As shown clearly in Eq. 3, the reaction rate is calculated as the product of each parameter;


thus, even one inaccurate parameter can reduce the overall reliability of the calculated results. The US Geological Survey (USGS) compiled values of k, E, and n for H+ in the acidic, neutral, and alkaline ranges, based on dissolution rate data for all the important minerals (Palandri and Kharaka 2004). This is currently one of the most useful reaction rate databases, but the values were not always measured under conditions consistent with those of CGS; as a result, reaction rates may differ because of differences in reaction mechanisms. The reactive surface area A has long been discussed as the parameter with the highest uncertainty (e.g., Lasaga 1998). In particular, there is no agreement on whether the commonly used BET method – based on the amount of adsorbed inert gas – expresses the actual reactive surface area. In addition, natural mineral surfaces differ from the fresh cleavage planes often used in laboratory experiments and may be influenced by the adhesion of clay minerals, coating by secondary products precipitated from leached components, etc. (Blum and Stillings 1995; White and Brantley 2003). These effects of surface conditions should also be investigated further. The term f(ΔGr) has mostly been overlooked in numerical simulations until now, as knowledge of dissolution mechanisms is especially limited. As a result, it has often been approximated as a linear function of Q/Keq based on transition state theory (e.g., Xu et al. 2005). However, dissolution experiments on feldspars in recent years have shown that f(ΔGr) follows a sigmoidal curve, or takes an irregular shape, depending on differences in the dissolution mechanism (Burch et al. 1993; Lüttge 2006; Hellmann and Tisserand 2006; Sorai and Sasaki 2010). On the other hand, the functional form of f(ΔGr) for precipitation should incorporate the nucleation process prior to growth. Most simulations assume that secondary minerals precipitate immediately after the solution becomes supersaturated (i.e., Q/Keq > 1). In fact, however, nucleation first occurs when a certain critical supersaturation is reached. In addition, the rates of homogeneous nucleation in free space and heterogeneous nucleation on pore walls differ; the heterogeneous nucleation rate, in particular, is predicted to change significantly depending on the properties of the wall surface. A factor that has not been considered much in previous numerical simulations is the activity of various impurities other than H+, i.e., their promoting and inhibiting effects on the reaction. It is unrealistic to examine the effects of all chemical species dissolved in the formation water for each mineral. However, in the precipitation of calcite – the most important CO2-fixing mineral – it has long been known that divalent cations such as Mg2+ act as inhibitors (e.g., Morse et al. 2007). Therefore, given that formation water generally contains these ions, the effects of impurities should be taken into account at least for reactions that are essential to the nature of the geochemical processes.
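As a minimal illustration of how Eq. 3 enters a simulation, the sketch below evaluates the rate law for a single mineral and a single (acid) mechanism, with f(ΔGr) replaced by the linear transition-state form 1 − Q/Keq discussed above. All numerical values are hypothetical placeholders, not entries from the USGS compilation.

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def mineral_rate(k, A, E, T, activities, orders, Q_over_Keq):
    """Evaluate Eq. 3, Rate = k A exp(-E/RT) * prod(a_i^n_i) * f(dGr),
    with f(dGr) approximated by the linear transition-state form (1 - Q/Keq)."""
    activity_term = 1.0
    for a_i, n_i in zip(activities, orders):
        activity_term *= a_i ** n_i
    f_dGr = 1.0 - Q_over_Keq          # linear approximation discussed in the text
    return k * A * math.exp(-E / (R_GAS * T)) * activity_term * f_dGr

# Purely illustrative parameter values (hypothetical, not from Palandri and Kharaka 2004)
rate = mineral_rate(k=1.0e-2,             # pre-exponential rate constant, mol m^-2 s^-1
                    A=50.0,               # reactive surface area, m^2
                    E=5.0e4,              # activation energy, J mol^-1
                    T=323.15,             # 50 degC
                    activities=[10 ** -4.5],  # H+ activity
                    orders=[0.5],
                    Q_over_Keq=0.1)       # strongly undersaturated
print(f"net dissolution rate ~ {rate:.2e} mol/s")
```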


Setting Conditions
Simulation results are strongly influenced by the choice of primary and secondary minerals. Primary minerals can be set to reflect the analysis of rock samples from the site, but the secondary minerals that will form cannot be determined from thermodynamically stable relationships alone. In other words, when conducting an analysis from a kinetics perspective, the setting of metastable phases as intermediate products and the deviation from the thermodynamically stable region caused by changes in environmental conditions must also be considered (Gaus et al. 2008). Simulations of CGS in sandstone aquifers often assume that the CO2 is mineralized into dawsonite. However, the thermodynamic and kinetic properties of dawsonite are not yet fully understood, apart from some thermodynamic parameters (Bénézeth et al. 2007). In this regard, dawsonite dissolution rates measured at 80 °C over a pH range of 3–10 indicate that dawsonite is a stable phase only under high CO2 pressure; it dissolves rapidly as the CO2 pressure decreases, with kaolinite forming as the main precipitate (Hellevang et al. 2005). Therefore, to avoid overestimating mineral trapping, care should be taken when assuming dawsonite. Similar issues exist for dolomite and magnesite, whose precipitation mechanisms are still unknown.

Measurement of Reaction Rates of Carbonates at Hot Springs
The CO2 stored in aquifers dissolves in formation water to form carbonic acid, which in turn dissolves mineral components in rocks over a long period of time. The reaction rates of minerals are ordinarily extremely slow, but carbonates such as calcite and aragonite (both calcium carbonate) are exceptions because of their high reactivity. When carbonates dissolve within alternating sandstone and mudstone layers, particularly where the individual strata are thin, a leakage path to the upper strata can potentially form. Conversely, precipitation of carbonates can occur when cations leached from minerals in a sandstone stratum combine with bicarbonate or carbonate ions; in such cases, the sealing performance would be enhanced by the clogging of the sandstone stratum. The reaction of carbonates is therefore one of the most important geochemical processes associated with CGS, and knowing its rate is essential for geochemical simulations of CGS. It has been pointed out that mineral reaction rates measured in the laboratory can be completely different from those in nature because of various factors, such as reaction time, surface area, surface conditions (defects and coatings), pore water compositions, mass transfer, and biological activity (Blum and Stillings 1995; White and Brantley 2003). To address this problem, the authors are attempting to obtain carbonate reaction rates at CO2-rich and bicarbonate-rich springs in various regions of Japan, regarding these sites as natural analogue fields for CGS. The selected sites contain waters that are either undersaturated or supersaturated with respect to carbonates.


Fig. 5 Differential interference microscope images for reacted calcite surface: (a) growth experiment at Masutomi Hot Springs (30 min after the start of the reaction) and (b) dissolution experiment at Shichirida Hot Springs (336 h after the start of the reaction)

At each site, cleaved single crystals of various carbonate species were immersed in the spring water for up to 1 month. Each sample was taken out at prescribed times for surface observations. Part of the sample surface was covered with a sputtered gold thin film or a silicone rubber to serve as an inert reference area. Figure 5a shows a calcite cleavage plane reacted in the supersaturated water at Masutomi Hot Springs in Yamanashi, Japan, as viewed with a differential interference microscope. Numerous pyramid-shaped features, referred to as growth hillocks, formed immediately after the start of the experiment; these growth hillocks grew in size and repeatedly coalesced as the reaction progressed. It is noteworthy that calcite single crystals in the shape of rhombohedra formed on the gold-coated plane. This signifies that the solute flux applied uniformly to the sample was directly incorporated on the non-coated bare calcite surface to form growth hillocks, whereas heterogeneous nucleation of calcite occurred on the gold-coated plane, since solutes


Fig. 6 The temporal profile of the height difference between the reacted and reference surface (gold coating or original surface): (a) growth experiment at Masutomi Hot Springs and (b) dissolution experiment at Shichirida Hot Springs

were not incorporated into the gold. In contrast, Fig. 5b corresponds to a sample reacted in the undersaturated water at Shichirida Hot Springs in Oita Prefecture. The length scale of the image differs from that of Fig. 5a, since the change in surface configuration was greater owing to the longer duration of the experiment. The undersaturated water produced numerous etch pits in the shape of inverted pyramids, a pattern characteristic of dissolution; these etch pits expanded and coalesced as the reaction progressed. A phase-shift interferometer with nanoscale vertical resolution and a laser microscope were used to measure the height difference Δh between the coated and non-coated areas of these calcite cleavage planes (Sorai and Sasaki 2010). Figure 6 presents the time course of Δh for the experiments shown in Fig. 5 (Sorai et al. 2014b). The Δh for the former is expressed as a positive value, representing growth above the reference plane, while that for the latter is expressed as a negative value, representing retreat from the reference plane.


The growth and dissolution rates were estimated to be 3.3 × 10⁻⁵ and 3.2 × 10⁻⁶ mol m⁻² s⁻¹, respectively, from the slopes of lines fitted to the temporal profiles. The growth and dissolution rates derived from the USGS database described above, under identical temperature and pH conditions, were 4.5 × 10⁻⁵ and 2.9 × 10⁻⁶ mol m⁻² s⁻¹, respectively (Palandri and Kharaka 2004); the experimentally obtained growth rate, in particular, was thus smaller by almost 30 %. The USGS database defines the reaction rate as a function of temperature, pH, and the degree of saturation, with no consideration given to the effects of impurities in the solution. It is therefore possible that the growth rate in the field experiment was reduced by trace dissolved components, primarily magnesium ions. In contrast, the effect of impurities on the dissolution of calcite is presumably small, because the measured and calculated dissolution rates are practically identical.
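For reference, the conversion from the measured slope of the Δh profile to a molar rate is straightforward: a surface advance (or retreat) rate dh/dt corresponds to (dh/dt)·ρ/M in mol m⁻² s⁻¹, where ρ and M are the density and molar mass of calcite. The minimal sketch below is illustrative only and is not the authors' data-processing code; the example slope is a hypothetical value.

```python
# Minimal sketch: convert a surface height-change slope into a molar surface rate
RHO_CALCITE = 2710.0    # calcite density, kg/m^3
M_CALCITE = 0.1001      # molar mass of CaCO3, kg/mol

def rate_from_height_slope(dh_dt_m_per_s):
    """Convert a surface height change rate (m/s) into mol m^-2 s^-1."""
    return dh_dt_m_per_s * RHO_CALCITE / M_CALCITE

# A hypothetical growth slope of ~1.2 nm/s gives a rate of order 10^-5 mol m^-2 s^-1
print(f"{rate_from_height_slope(1.2e-9):.1e} mol m^-2 s^-1")
```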

Conclusion of This Section
Geochemical processes improve the safety of a reservoir by stabilizing CO2 and changing its form on geological time scales; however, they can also have negative effects, such as increased leakage risks near injection wells and lowered injectivity. The evaluation of these geochemical processes tends to rely on numerical simulations owing to the long time scales involved. However, as many previous studies have pointed out, knowledge of reaction kinetics is still insufficient, especially compared with equilibrium theory. Therefore, to improve the reliability of simulation studies, data and assumptions with high uncertainty should be examined on a case-specific basis while referring to appropriate monitoring results from demonstration tests, laboratory experiments, natural analogue studies, etc.

Geophysical Monitoring and Modeling
Introduction
In CO2 geological storage projects, monitoring is indispensable for the verification of CO2 storage. Various techniques are used to monitor the location of the injected CO2 and the volumetric extent of CO2 migration and to detect possible leaks of CO2 at faults and seals. Monitoring must be implemented over a long time, beginning with the survey and development of a site, continuing throughout the CO2 injection period, and even after the site is closed. It is therefore essential to develop monitoring technology that is cost-effective. To make CO2 geological storage (CGS) projects acceptable to society, consideration of the long-term cost is also very important.


Monitoring technologies for CGS are divided into four main categories: (1) atmospheric monitoring tools, (2) near-surface monitoring tools, (3) subsurface monitoring tools, and (4) data integration and analysis technologies (Litynski et al. 2012). Here we describe geophysical monitoring and modeling technologies concerning the latter two categories.

Seismic Monitoring
Geophysical exploration methods that detect the two- or three-dimensional distribution of physical properties from the ground surface or sea surface are effective monitoring methods that can supplement well data (which provide direct information about the deep underground but consist only of linear or point measurements). A representative example is seismic monitoring using active seismic methods such as seismic reflection, vertical seismic profiling (VSP), and cross-hole seismic tomography; seismic monitoring is considered the standard monitoring method for CGS worldwide (e.g., Eiken et al. 2011). In these methods, elastic waves emitted from multiple source points by a moving artificial source are received by a multichannel seismometer array deployed in wellbores or on the ground surface or seabed and then analyzed to estimate the elastic properties of the medium (e.g., elastic wave velocity and attenuation) through which the waves propagate. The underground structure and physical state are estimated from these results. Since the outline of the underground region into which the injected CO2 has spread (the CO2 plume) is believed to be detectable through this process, active seismic exploration methods, especially seismic reflection, are considered extremely useful for CO2 monitoring. Active seismic exploration provides a “snapshot” that captures the underground CO2 plume at a given time; this process is repeated at appropriate time intervals to detect temporal variations between snapshots. In seismic reflection, a survey that obtains a “three-dimensional snapshot” using sources and geophones distributed on the ground surface is called the three-dimensional (3D) reflection method; it is also called the 4D reflection method when a time axis is added for monitoring. However, such seismic reflection surveys are expensive, particularly in the ocean, where a ship is required to deploy the sources and geophones. Because the sources and geophones are spread over a wide area, negotiations with local people and officials in the survey area are also required, which can influence social receptivity to CGS. Therefore, it is desirable to reduce the number of 3D seismic reflection surveys when monitoring needs to be conducted regularly. There are also fundamental limits: seismic waves can only detect elastic properties, and the detectable structures and resolution can change depending on the frequency, energy, and source location of the elastic waves. Considering these limitations, it is worthwhile to consider the use of other geophysical methods, in particular passive exploration methods, which can supplement seismic methods and reduce the overall monitoring cost.


Fig. 7 Geophysical postprocessors. Various computational postprocessors permit the user to calculate temporal changes that are likely to be observed if geophysical surveys of an operating and/or closed CO2 injection site are repeated from time to time. Results may be used to supplement conventional reservoir engineering measurements in history-matching studies undertaken during reservoir model development

Geophysical Modeling
To appraise the utility of geophysical techniques for monitoring CO2 injected into deep aquifers, Ishido et al. (2011) carried out numerical simulations using so-called geophysical postprocessors. Geophysical postprocessors calculate the temporal changes in geophysical observables that result from the changing underground conditions computed by reservoir simulations (e.g., Pritchett et al. 2000; Ishido et al. 2011, 2015). The purpose of their development is to enable monitoring data from repeat and/or continuous geophysical measurements to be used in history-matching studies (Fig. 7). The name postprocessor comes from the fact that there is no need to couple the governing equations for geophysical phenomena with those of the reservoir simulation in order to calculate changes in geophysical observables; rather, they can be solved “afterwards” using the results (snapshots) of the reservoir simulation. For example, the electrokinetic phenomena handled by the self-potential postprocessor (Ishido and Pritchett 1999) are current flows coupled with fluid flow within the pores of rocks: a potential difference (the streaming potential) is created in proportion to the pore pressure difference. The streaming potential simultaneously produces a secondary pressure difference through an electroosmotic effect, but this secondary pressure difference is extremely small and can be neglected.


Thus, the governing equations for electrokinetic phenomena do not need to be solved at the same time as the governing equations of the reservoir simulation. A simulation that couples “G” (geophysical observables) to the reservoir simulation, which is itself a TH coupled simulation in which the governing equations for the transport of T (heat) and H (fluid/chemical species) are coupled and solved simultaneously, is generally not required. However, when calculating underground stress changes and/or ground surface deformation due to pressure or temperature changes, a THM coupled simulation that includes M (mechanics) should be performed if changes in porosity and permeability due to stress changes are important. In applications to CGS, the geophysical postprocessors can be used for the following purposes:
1. Planning of an appropriate monitoring system: Effective monitoring methods and strategies can be chosen based upon the predicted changes in geophysical observables, which are calculated from the underground flow model and from potential risks associated with vertical faults, openings in the caprock, etc.
2. Quick understanding of the underground conditions: Whether or not the injected CO2 is stored as predicted can be judged by comparing the monitoring data obtained from actual geophysical measurements with the predicted changes. If behavior that differs from the predictions possibly arises from potential risks, the selection and deployment of effective monitoring methods can be examined.
3. Verification of the storage model: If the measured changes in geophysical observables differ from the predictions, the underground flow model is improved so as to reproduce the measured changes (history matching). The calibrated flow model provides a basis for reliable long-term predictions.
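To make the postprocessor idea concrete, the sketch below shows the simplest possible calculation of this kind: taking density-change snapshots from a flow simulation on a grid and forward-modeling the resulting change in surface gravity by treating each grid cell as a point mass. This is a minimal illustration of the concept only, not the postprocessors of Ishido et al. (2011); the station and cell values are made up.

```python
import numpy as np

G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change(stations, cell_centers, cell_volumes, delta_rho):
    """Vertical gravity change (microGal) at surface stations caused by the
    density changes of reservoir grid cells, each treated as a point mass.

    stations      (Ns, 3) station coordinates (x, y, z), z positive upward
    cell_centers  (Nc, 3) grid-cell centre coordinates
    cell_volumes  (Nc,)   cell volumes, m^3
    delta_rho     (Nc,)   density change of each cell between two snapshots, kg/m^3
    """
    dg = np.zeros(len(stations))
    dm = cell_volumes * delta_rho                  # mass change per cell, kg
    for i, s in enumerate(stations):
        r_vec = cell_centers - s                   # vectors from station to cells
        r = np.linalg.norm(r_vec, axis=1)
        # downward component of the attraction caused by each mass change
        dg[i] = G_CONST * np.sum(dm * (-r_vec[:, 2]) / r ** 3)
    return dg * 1e8                                # m/s^2 -> microGal

# Toy usage: one 100 m cube at 1,400 m depth losing 200 kg/m^3 (brine replaced by CO2)
stations = np.array([[0.0, 0.0, 0.0], [2000.0, 0.0, 0.0]])
cells = np.array([[0.0, 0.0, -1400.0]])
print(gravity_change(stations, cells, np.array([100.0 ** 3]), np.array([-200.0])))
```

The same pattern, snapshot in and observable out, applies in spirit to the self-potential, resistivity, and seismic postprocessors described below.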

Flow Simulation Based Upon Tokyo Bay Model
Here, illustrative computations carried out using various geophysical postprocessors are described. They are based upon numerical simulations of CO2 injection into an aquifer system underlying a portion of the Boso Peninsula/Tokyo Bay area, followed by calculations of the temporal changes in geophysical observables caused by the changing underground conditions computed by the reservoir simulation. The Tokyo Bay area in the southern Kanto plain is a representative industrial area of Japan, and a number of large-scale CO2 emission sources surround Tokyo Bay. From a geological point of view, the sedimentary strata underlying the Tokyo Bay area are believed to be suitable for open-aquifer CO2 storage (e.g., Okuyama et al. 2009). Late Pliocene to Middle Pleistocene Kazusa Group sediments are found below several hundred meters depth, comprising an alternation of turbidite sandstone and pelagic to hemipelagic mudstone. The sandstone is poorly consolidated, with high porosity and permeability. The mudstone, on the other hand, is well consolidated, having fairly high porosity but very low permeability. Below the Boso Peninsula, the Kazusa Group sediments are more than 2,000 m thick, almost undeformed, and dip to the west toward Tokyo Bay at very low angles (less than 10°).


Fig. 8 Study area (in the Boso Peninsula, Japan) considered in the numerical simulation. Also shown is a top view of the computational grid blocks used for flow simulations. “Inj-base” and “Inj-ese” indicate the location of CO2 injection site for the base and the ESE injection scenarios, respectively

The three-dimensional model covers a 25 × 15 km² area (Fig. 8) and represents 2,500 m of alternating sandstone- and mudstone-dominated formations, based broadly upon the geological structure underlying the Tokyo Bay area (Fig. 9). In the base-case model, the horizontal and vertical permeabilities are simply assumed to be 50 and 10 mD (1 mD ≈ 10⁻¹⁵ m²), respectively, for the sandstone-dominated (aquifer) formations and 10 and 1 mD, respectively, for the mudstone-dominated (seal) formations. In view of the relative thinness of the individual mudstone layers compared with the vertical size of the computational grid blocks (100 m), a relatively high average permeability was assumed for the mudstone-dominated seal formations. The porosity is assumed to be 0.2 for all formations. The relative permeabilities of CO2 gas and liquid water are represented by Corey-type curves (with 0.1 residual saturation) and van Genuchten-type curves (with 0.2 residual saturation), respectively (Fig. 10). The capillary pressure (gas or liquid CO2 phase vs. aqueous phase) is represented by a van Genuchten-type model, with a capillary pressure magnitude of 3.58 kPa when the water saturation is 0.8 in the base model (Fig. 11).
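The sketch below reproduces the general shape of these constitutive relations (a Corey-type gas curve, a van Genuchten-Mualem water curve, and a van Genuchten capillary pressure curve), using the residual saturations quoted above. The van Genuchten exponent m is an assumed value, since it is not given in the text, and the pressure scale P0 is simply calibrated so that Pc(Sw = 0.8) matches the quoted 3.58 kPa; the exact parameterization used in the STAR simulations may differ.

```python
import numpy as np

SWR, SGR = 0.2, 0.1   # residual water and gas saturations (from the text)
VG_M = 0.457          # assumed van Genuchten exponent m (illustrative)

# Calibrate the capillary pressure scale so that Pc(Sw = 0.8) = 3.58 kPa (base model)
_se08 = (0.8 - SWR) / (1.0 - SWR)
P0 = 3.58e3 / (_se08 ** (-1.0 / VG_M) - 1.0) ** (1.0 - VG_M)

def corey_krg(sw):
    """Corey-type relative permeability of the non-wetting (CO2) phase."""
    s = np.clip((sw - SWR) / (1.0 - SWR - SGR), 0.0, 1.0)
    return (1.0 - s) ** 2 * (1.0 - s ** 2)

def vg_krw(sw):
    """van Genuchten-Mualem relative permeability of the wetting (water) phase."""
    se = np.clip((sw - SWR) / (1.0 - SWR), 0.0, 1.0)
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / VG_M)) ** VG_M) ** 2

def vg_pc(sw):
    """van Genuchten capillary pressure in Pa."""
    se = np.clip((sw - SWR) / (1.0 - SWR), 1e-6, 1.0 - 1e-9)
    return P0 * (se ** (-1.0 / VG_M) - 1.0) ** (1.0 - VG_M)

sw = np.array([0.3, 0.5, 0.8, 0.95])
print("krg:", np.round(corey_krg(sw), 3))
print("krw:", np.round(vg_krw(sw), 3))
print("Pc (kPa):", np.round(vg_pc(sw) / 1e3, 2))
```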


Fig. 9 Distribution of rock formations in the WNW-ESE section (x-z plane) at j = 8. The Umegase formation is indicated by the dark blue color.


Fig. 10 Relative permeabilities for the non-wetting (carbon dioxide gas or liquid CO2) and wetting (water) phases. Residual saturation for the wetting (non-wetting) phase is 0.2 (0.1)

Fig. 11 Van Genuchten capillary pressure curves parameterized by delp0 (the capillary pressure magnitude when the water saturation is 0.8); curves are shown for delp0 = 0.0358, 0.62, and 5.0 bar

In the initial state, all of the pore space within the computational grid is filled with motionless liquid H2O (with a small amount of dissolved CO2). At the outer lateral boundaries, the fluid pressure distribution is maintained at the initial (hydrostatic) value. At the top boundary, the pressure and temperature are maintained at 1 bar and 15 °C, respectively. The bottom boundary is impermeable, and its temperature varies between 40 °C at x = 0 km and 60 °C at x = 25 km. In the numerical simulations, the “STAR” reservoir simulation code (Pritchett 1995, 2002) was used with the “SQSCO2” fluid constitutive module (Pritchett 2008), which represents the thermodynamics and thermophysical properties of H2O–NaCl–CO2 mixtures over the range from liquid-CO2 to supercritical-CO2 conditions, including the three-phase region. (For test problem #3 of a code intercomparison project (Pruess et al. 2002), STAR/SQSCO2 gave almost the same results (dry-out region development, dissolved CO2 mass fraction, etc.) as the TOUGH2/ECO2N simulation (Pruess 2005); see Fig. 12.) Similar to TOUGH2/ECO2M (Pruess 2011), STAR/SQSCO2 can describe all possible phase conditions for brine–CO2 mixtures, including transitions between super- and subcritical conditions and phase changes between liquid and gaseous CO2. Simulations were carried out for 50 years of injection (at a rate of ten million tons of CO2 per year) into a sandstone-dominated layer at 1,400 m depth (the Umegase formation) at the grid blocks (i = 10, 11; j = 8, 9; k = 9; “Inj-base” in Fig. 8), followed by 1,000 years of shut-in. The internal energy of the injected


Fig. 12 A result of the “STAR/SQSCO2” application to test problem #3 in a code intercomparison project (Pruess et al. 2002). Computed distribution of gaseous-phase saturation as a function of the similarity variable ξ (= r²/t) for freshwater (blue) and seawater (red) cases (after Pritchett 2008)

CO2 corresponds to a temperature and pressure of 34 °C and 144 bars (the same as the in situ conditions prior to injection). Here, the results for two variants of the base-case model, the “L-k” and “H-k” models, are presented, in which the horizontal/vertical permeabilities of the Umegase formation are 50 mD/10 mD and 500 mD/100 mD, respectively. Figure 13 shows the distributions of pressure, temperature, and phase saturations at t = 50 years (when injection ceases) and 1,050 years (after 1,000 years of shut-in) for the “H-k” model. At t = 50 years, the injected CO2 remains as a supercritical free phase with a saturation of 0.2 or more within the sandstone-dominated layers (the Umegase and underlying Otadai formations). After injection ceases, the CO2 density decreases (and its volume increases) due to the pressure release. The supercritical CO2 then gradually migrates upward for hundreds of


Fig. 13 “H-k” model: pressure (cyan), temperature (red), liquid CO2 (green) and gas (black) saturation contours in x-z plane at j = 8 at t = 50 and 1050 years. Contour interval is 2 MPa for pressure and 5  C for temperature. Liquid CO2 and gas saturation contour labels are 0.005, 0.105, 0.205 and 0.305. The region shown in the figure extends 2500 m in the vertical (z) direction and 10,000 m (i = 3, . . ., 19) in the x-direction

years due to buoyancy and penetrates into the overlying seal layer (the Kokumoto formation). However, the rising gaseous CO2 is then densified at shallower levels, as the temperature decreases below the critical temperature while still at sufficiently deep levels, and it becomes relatively immobile liquid CO2 condensate. As shown in Fig. 14, at t = 1,050 years about 17 % of the CO2 is trapped as immobile CO2 condensate at near-residual saturation, and the remaining CO2 is trapped as CO2 dissolved in the aqueous phase (41 %) and as supercritical residual gas below the seal layer (42 %). Other similar calculations have shown that if a much lower permeability (i.e., 0.1 mD) or a larger capillary pressure is assumed for the seal layer than in the base-case model, CO2 intrusion into the seal layer in the postinjection period becomes negligible.

Prediction of Changes in Geophysical Observables
Next, various geophysical postprocessors were used to calculate time-dependent earth-surface distributions of seismic observables (from reflection, VSP, or tomography surveys), microgravity, electrical self-potential (SP), and apparent resistivity (from either DC or MT surveys). The temporal changes are caused by the changing underground conditions (pressure, temperature, gas saturation, concentrations of dissolved species, flow rate, etc.) computed by the reservoir simulations. Figures 15 and 16 show seismic sections calculated by applying the seismic (reflection survey) postprocessor (Stevens et al. 2003) to the reservoir simulation results. The reflected waves correspond to the upper and lower boundaries of



Fig. 14 “H-k” model: total mass of carbon-dioxide, mass of liquid CO2, mass of CO2 dissolved in water, and mass of gaseous (and supercritical) CO2 in the computational grid

regions containing CO2 gas around the injection wells. In this case, the seismic postprocessor calculated the seismic velocity and Q factor in the water/gas two-phase regions using a “patchy saturation” model (e.g., Mavko et al. 2009), in which the low-frequency and high-frequency limiting bulk moduli are given by Gassmann’s relation and by Hill’s relation, respectively. The “standard linear solid” is applied to predict the velocity dispersion and attenuation, with a characteristic frequency that is inversely proportional to the square of the liquid cluster size, which in turn depends on the liquid-phase saturation. The reflection events for the “H-k” model (Fig. 15) correspond to the CO2 gas region at t = 50 years shown in Fig. 13. In comparison, the seismic events for the “L-k” model (Fig. 16) remain in a narrower region, corresponding to less horizontal expansion of the CO2 gas region due to the lower permeabilities of the injection aquifer (the Umegase formation) assumed for the “L-k” model. Figure 17 shows “sounding results” calculated using the magnetotelluric (MT) postprocessor for the “L-k” model. In this calculation, the postprocessor assumes that the pore fluid salinity is homogeneous at 0.01 below the upper surface of the seal layer (the Kokumoto formation), which gives a 2.5 S/m pore fluid


Fig. 15 “H-k” model: comparison of time-series results (blue) calculated for a seismic reflection survey after 50 years of CO2 injection with those of an initial survey (yellow) across the center of the study area (along the 24 km-long A–A′ line shown in Fig. 18). Where the two results overlap, they are shown in black

conductivity around the injection level. The change in pore fluid conductivity (σ) due to CO2 injection is prescribed such that σ is proportional to the square of the aqueous-phase saturation. At levels shallower than the seal layer, the bulk conductivity of the rock–fluid mixture is fixed at 0.01 S/m to represent freshwater regions. As seen in Fig. 17, although the apparent resistivity of the CO2-saturated region does not change very rapidly, it has increased by 15 % or more after 50 years. Figure 18 shows the change in earth-surface gravity at t = 50 years for the “H-k” model. The maximum decrease, which appears just above the injection zone, is about 75 μGal. The gravity disturbance grows almost linearly with time during the 50-year injection interval (Fig. 19). After shut-in, a further gravity decrease takes place up to about t = 55 years, arising from the upward buoyant migration of the CO2 (Fig. 19). However, the rate of further gravity change becomes very small after



Fig. 16 “L-k” model: comparison of time-series results (blue) calculated for a seismic reflection survey after 50 years of CO2 injection with those of an initial survey (yellow) across the center of the study area (along the 24 km-long A–A′ line shown in Fig. 18). Where the two results overlap, they are shown in black

t = 60 years, although the upward migration continues for hundreds of years after shut-in. This is because the supercritical CO2 entering the Kokumoto seal layer is densified by condensation as the temperature decreases below the critical temperature at still sufficiently deep levels, and the upward movement slows down. For the “H-k” model, a gravity increase and decrease continue at stations 1 and 2, respectively, after t = 55 years; this slight change reflects the movement of CO2 gas from the injection location to peripheral regions. Figure 20 shows the downhole borehole-gravity response at stations 1 and 2 (see Fig. 18 for the locations) for the “H-k” model. At the injection location (station 1), pronounced gravity changes appear as early as t = 5 years, corresponding to the thickness of the CO2 plume. At station 2, the change in earth-surface gravity is small even at t = 50 years (Fig. 19), but the borehole response is very apparent


particularly after 25 years or so, when the expanding CO2 plume engulfs the borehole location. Figure 21 shows the change in self-potential (SP) at t = 5 years for the “H-k” model; the maximum increase, which appears just above the injection zone, is more than 40 mV. The self-potential postprocessor calculates changes in subsurface electrical potential induced by pressure disturbances through electrokinetic coupling (Ishido and Pritchett 1999). Since the permeability of the seal layer (the Kokumoto formation) is relatively large in the present model, the pressure disturbance propagates to shallower levels, where a transition zone between the shallower fresh water and the deeper saline water is assumed to be present, as in the MT calculations. This transition zone acts as an interface between regions of different streaming potential coefficient. The pressure increases by about 2 bars around this interface, which is located at a depth of 1,000 m at the injection location, bringing about a positive change of 40 mV at the earth's surface (see the vertical section in Fig. 21). This pronounced SP change develops rapidly with the pressure increase during the first several years and persists until shut-in at t = 50 years. After shut-in, the SP disturbance gradually declines as pressures return to their original levels. As seen in Fig. 22, an even more pronounced SP disturbance appears for the “L-k” model, corresponding to the larger pressure buildup (about 5 bars) caused by CO2 injection into the lower-permeability formation.

Summary
Of course, the applicability of any particular method is likely to be highly site specific, but the calculations described here indicate that none of these techniques should be ruled out altogether. In addition to seismic methods (especially reflection surveys, e.g., Chadwick et al. 2009), microgravity surveys appear to be suitable for characterizing long-term changes, and SP measurements are quite responsive to short-term disturbances. The computed gravity changes suggest that microgravity monitoring can be used to characterize the subsurface flow of CO2 injected into underground aquifers. Gravity monitoring results are sensitive to the lateral migration of the CO2-rich phases (both liquid condensate and, particularly, gaseous CO2). Gravity monitoring may also be useful for assessing the suitability of particular disposal aquifers for CO2 sequestration. If the geothermal gradient is low, as is observed in a portion of the Tokyo Bay area, the predicted decrease in gravity is quite small considering the relatively large injection rate. Even if the (supercritical) gaseous CO2 gradually migrates upward for hundreds of years after injection, it will be densified as the temperature decreases below the critical temperature at still sufficiently deep levels and will become relatively immobile liquid condensate, which is much less likely to escape the aquifer than highly buoyant, low-viscosity CO2 gas. When this occurs, the gravity change is very slight during the 1,000-year postinjection period.

Fig. 17 Percentage apparent resistivity increase (upper) and phase angle increase (lower) as a function of frequency between t = 0 and 50 years along the 24 km long A–A′ line shown in Fig. 18


Fig. 18 Distribution of the gravity change between t = 0 and 50 years in the study area shown in Fig. 8 (extending from −15 km to 5 km in the east-west direction and from −10 km to 10 km in the north-south direction). The maximum decrease is 75 μGal near the injection site, centered at about −8 km east and 1 km north

Considering the current advanced technology for field measurements (e.g., Nooner et al. 2007; Alnes et al. 2008; Sugihara and Ishido 2008; Sugihara et al. 2013), microgravity monitoring is thought to be a very promising technique for evaluating CO2 geological storage. The self-potential postprocessor calculates changes in subsurface electrical potential induced by pressure disturbances through electrokinetic coupling. If the permeability of the seal layer overlying the injection zone is not too small, substantial SP changes will appear at the earth surface during the first few years of injection. At least in coastal and estuarine environments, this large change is produced by a pressure increase of several bars at the interface between the shallower fresh and deeper saline water layers, which acts as an interface between


Fig. 19 Change in gravity from t = 0 to 100 years for the “H-k” and “L-k” models at two stations 1 and 2, the locations of which are shown in Fig. 18

Fig. 20 Borehole gravity response for the “H-k” model at selected times at stations 1 and 2, the locations of which are shown in Fig. 18

regions of different streaming potential coefficient. If a discontinuity of streaming potential coefficient is present, SP monitoring can be an effective technique for monitoring pressure changes near the interface at depth.

Geomechanical Modeling
Introduction
When any fluid, such as CO2, is injected under pressure into an underground reservoir, as is done for geological CO2 storage, the pressure (pore pressure) of the fluids underground increases, and the underground stress distribution may change. Stress redistribution within and surrounding the reservoir and caprock system may lead to


Fig. 21 Distribution of self-potential at the earth’s surface (upper) and a vertical section (lower) for the “H-k” model at t = 5 years. The area represented is the same as that of Fig. 18. Contour interval is 5 mV, and the yellow color denotes positive SP


Fig. 22 Temporal variations of self-potential from t = 0 to 100 years for the “H-k” and “L-k” models at two stations 1 and 2, the locations of which are shown in Fig. 18

geophysical changes, microseismicity, and fault reactivation, and may even trigger large earthquakes (Giammanco et al. 2008; Lei et al. 2008; Miller et al. 2004; Yamashita and Suzuki 2009). For example, at the In Salah gas field in Algeria, where CO2 is injected under pressure in association with natural gas production, synthetic aperture radar observations from a satellite have indicated a ground uplift rate of about 1 cm/year around the CO2 injection wells, along with a similar amount of subsidence around the gas production wells (Onuma and Ohkawa 2009). In some gas fields in the Sichuan Basin, China, a number of seismic sequences, with sizable earthquakes ranging up to M4–5, have been observed following the injection of unwanted water into depleted gas reservoirs (Lei et al. 2008, 2013). In recent years, following the rapid increase of applications in which fluids are intensively forced into deep formations of the Earth’s crust, such as enhanced geothermal systems (EGS), shale gas fracking, and geological sequestration of CO2, injection-induced earthquakes and other risks related to injection-induced rock deformation and failure have attracted growing attention (Ellsworth 2013; Lei et al. 2013; Zoback et al. 2013; Zoback and Gorelick 2012). Geophysical changes and microseismicity are indeed useful for the monitoring and management required during and after a large-scale injection project. However, the risks related to fluid leakage and felt earthquakes may give rise to strong social impacts. The issue of noticeable or damage-causing earthquakes induced by artificial operations is controversial and has been the cause of delays and threatened cancelation of some projects, such as the EGS project at Basel (Deichmann and Giardini 2009). To carry out geological CO2 storage safely and for this technology to be accepted not only by the


inhabitants around the storage sites but also by society as a whole, technological developments that address such public concerns are essential. In addition, there is a strong desire to be able to control or predict the occurrence of damaging earthquakes. In this regard, geophysical/geomechanical modeling is key to site selection, injection operation, and postinjection management. The purpose of the following subsections is to provide a general framework for geophysical/geomechanical modeling and microseismicity analysis. Section “A General Framework of Geophysical/Geomechanical Modeling” introduces a general framework for modeling. Section “Numerical Simulation for THM Coupling Analysis” then introduces the numerical simulation technology used in coupled THM (heat transfer, fluid flow, rock mechanics) analysis. Postprocessing for history matching and fault stability analysis is presented in section “Fault Stability Analysis: Coulomb Failure and Slip Tendency.” A case study of a natural analogue is introduced in section “An Example of a Natural Analogue: The Matsushiro Seismic Swarm Driven by CO2-Quality Fluid Activity.” Finally, section “Data Processing and Analysis of Injection-Induced Seismicity” introduces some key technologies for processing and analyzing injection-induced seismicity.

A General Framework of Geophysical/Geomechanical Modeling
Figure 23 shows a schematic flowchart of modeling with coupled THM simulation and history matching using a geophysical postprocessor (see section “Geophysical Monitoring and Modeling” for details). First, existing geological and geophysical data should be integrated to build a conceptual geological model of the reservoir system. Then the mechanical and petrological properties of the major rocks in the geological model must be sufficiently investigated to create a numerical model; additional laboratory experiments might be required to collect data on specific rocks to improve the reliability of the numerical analysis. Finally, history matching is applied to refine the numerical reservoir model so that it reproduces the observed data. All data obtained through geophysical exploration methods, such as microgravity measurements, seismic exploration, and electrical or electromagnetic exploration, can be used in history matching to improve the accuracy of future forecasts. Geophysical postprocessors are used to convert the changes in pressure, temperature, salinity, CO2 saturation, etc., calculated by the reservoir simulation into changes in geophysical observables (Ishido et al. 2011, section 5). Since there are uncertainties in many aspects of the numerical model, such as small-scale inhomogeneity and upscaling, uncertainty analysis is necessary for probability-based prediction. In geomechanical modeling, discontinuous structures, including joints, fractures, and faults, play a governing role in fluid flow and rock stability. Studies of water-injection-induced seismicity in depleted gas/oil reservoirs show that earthquakes of relatively greater magnitude (M3 or greater) are mostly related to the reactivation of preexisting faults, favorably or unfavorably oriented, within or surrounding the reservoir (Lei et al. 2008, 2013). Therefore, estimating fault stability and


sustainable fluid pressures for the underground storage of CO2 is an important issue in geomechanical modeling (Rutqvist et al. 2008; Streit and Hillis 2004). Known major faults in or near the target aquifers can be avoided during site screening. However, since a fault with a dimension of a few hundred/thousand meters is sufficient to produce approximately M3/M5 earthquakes, as indicated by the empirical relationship between source dimension and earthquake magnitude (Utsu 2002), faults that are not resolvable by geophysical surveys must also be properly addressed (Mazzoldi et al. 2012).

Numerical Simulation for THM Coupling Analysis
Coupled THM analysis is currently attracting attention as an emerging technology. It is used to predict injection-induced changes in rock properties, formation deformation, stress redistribution, and fracture/fault stability. Although the simulation technology is still under development, there are a number of research-oriented and commercial software choices for reservoir simulation and/or stress analysis, such as TOUGH2 and FLAC3D. TOUGH2 is a multiphase reservoir simulation program developed by the Lawrence Berkeley National Laboratory (LBNL) in the US, and FLAC3D (Itasca 2000) is a commercial software package for stress analysis. As a promising combination, the “TOUGH-FLAC” approach with the couplers developed by Rutqvist et al. (2002) has proven useful in the analysis of deformation accompanied by fluid flow within hard and soft rocks in geothermal studies (Todesco et al. 2004), in CCS studies (Funatsu et al. 2013; Rinaldi and Rutqvist 2013; Rutqvist et al. 2008), and in natural analogues (Cappa et al. 2009). A schematic of the couplers and the physical quantities handled by the TOUGH-FLAC approach is shown in the center of Fig. 23. The reservoir is treated as a porous medium filled with formation water. If a fluid (CO2 in CGS) is injected under pressure into this reservoir, the pore fluid pressure increases, and flow occurs between the formation water and the injected fluid. Changes in pore fluid pressure lead to small deformations of the reservoir rock. In addition, if there is a temperature difference between the reservoir and the injected fluid, heat is transported as the fluid flows and spreads, and the temperature change causes rock deformation. The fluid flow simulator TOUGH2 calculates changes in the pore fluid pressure, temperature, degree of saturation, etc., and sends the results to the rock mechanics simulator FLAC3D. The rock mechanics simulator then calculates the solid deformation and sends the resulting changes in porosity, permeability, and capillary pressure back to the fluid flow simulator. The couplers consist of built-in functions in TOUGH2 and FISH routines in FLAC3D (Rutqvist et al. 2002). Such coupling approaches, termed sequential coupling, work well for problems in which the coupling is relatively weak. In a THM coupling simulation, the following parameters and models are required and should be investigated through laboratory experiments: (1) parameters governing the deformation and fracturing behavior of a given rock; (2) petrophysical models linking porosity, intrinsic permeability, relative permeability, and effective confining pressure; and (3) the permeability of fractures as a


Fig. 23 Flowchart of modeling with coupled THM simulation and history matching using a geophysical postprocessor

function of effective confining pressure. Rock properties and mechanical behaviors strongly depend on the individual rock types and the structures within the rock. Thus, the individual rocks of a target reservoir system should be fully investigated to obtain reliable parameters; if a typical rock of the reservoir system has not been investigated in earlier studies, additional experimental work is required. Furthermore, laboratory experimentation plays a twofold role in geomechanical modeling. On the one hand, it is the only way to obtain the physical/mechanical/hydraulic properties of a given rock and the constitutive laws required for constructing numerical models (Lei and Xue 2009; Lei et al. 2011a). On the other hand, a well-designed experiment is


useful for verifying and improving a related numerical model by matching the numerical results with the experimental results (Lei et al. 2015). Permeability might change greatly due to deformation and fracturing (Zhang et al. 2007). In brittle rocks, fault rupturing can lead to a permeability increase of two orders of magnitude, as estimated by in situ testing (Ohtake 1976) and laboratory experiments (Alam et al. 2014). Thus, permeability as a function of deformation should be properly considered. In the TOUGH-FLAC approach, permeability is revised within TOUGH2 at every time step by built-in functions. In some previous works, permeability has been expressed as a function of shear strain (εs) or volumetric strain (εv) (Cappa et al. 2009; Cappa and Rutqvist 2011; Chin et al. 2000):

k = k0 (1 + βΔεs)    (5)

k = k0 (ϕ/ϕi)^n,  ϕ = 1 − (1 − ϕi) exp(−εv)    (6)

where ϕ and k are the porosity and permeability, respectively, with the subscripts i and 0 denoting initial values. A β on the order of 10⁴, or an n of 30, results in a permeability increase of two orders of magnitude for a fully reactivated fault (Fig. 24). As seen from Fig. 24, however, Eqs. 5 and 6 result in quite different behaviors; this should be examined in future studies.
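A minimal numerical comparison of the two updates is sketched below, using assumed initial values (k0, ϕi) and coefficients of the order quoted above; the same nominal strain value is fed to both expressions purely to show how differently they respond (cf. Fig. 24).

```python
import math

K0, PHI_I = 1.0e-15, 0.1   # assumed initial permeability (m^2) and porosity
BETA, N = 1.0e4, 30.0      # coefficients of the order quoted in the text

def k_eq5(delta_eps_s):
    """Eq. 5: permeability update from a shear strain increment."""
    return K0 * (1.0 + BETA * delta_eps_s)

def k_eq6(eps_v):
    """Eq. 6: porosity from volumetric strain, then a power-law permeability update."""
    phi = 1.0 - (1.0 - PHI_I) * math.exp(-eps_v)
    return K0 * (phi / PHI_I) ** N

for strain in (1.0e-4, 1.0e-3, 1.0e-2):
    print(f"strain {strain:.0e}:  Eq. 5 -> {k_eq5(strain) / K0:7.1f} k0,"
          f"  Eq. 6 -> {k_eq6(strain) / K0:7.1f} k0")
```

With these values, Eq. 5 reaches a two-order-of-magnitude increase at a strain of about 10⁻², while Eq. 6 remains roughly an order of magnitude lower at the same strain, echoing the divergence noted above.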

Fault Stability Analysis: Coulomb Failure and Slip Tendency

In most cases, the stress tensor is not fully defined or is poorly defined, so rock failure analysis based on absolute stress tensors may lead to incorrect results. The Earth's crust is considered to be critically stressed; thus, a small change in stress may trigger earthquakes. The amplitude threshold of the Coulomb failure stress change (ΔCFS) required to trigger earthquakes has been estimated to range from 0.01 to 0.03 MPa (Brodsky and Prejean 2005; Cochran et al. 2004; King et al. 1994; Lockner and Beeler 1999; Stein 1999). Based on the Coulomb failure law, the critical condition for rupture on a preexisting fault is

τ = μσe = μ(σ − Pf)   (7)

where τ and σ are the shear and normal stresses acting on the fault plane, respectively, σe is the effective normal stress, Pf is the pore pressure, and μ represents the sliding friction coefficient of the fault plane. The change in Coulomb failure stress (ΔCFS) is defined as

ΔCFS = Δτ − μΔσe   (8)

The tendency of a planar discontinuous structure such as a fault to undergo slip under a given stress pattern depends on the frictional coefficient of the surface and the ratio of shear to normal stress acting on the plane. The slip tendency of the fault


Fig. 24 Rock permeability as a function of volumetric strain for two different models

is defined as the ratio of the shear stress to the effective normal stress (Morris et al. 1996), which at the critical condition of Eq. 7 equals the friction coefficient:

Ts = τ/σe   (9)

Slip-tendency analysis is a technique that visualizes the stress tensor in terms of its associated slip-tendency distribution and the relative likelihood and direction of slip on interfaces of all orientations (Morris et al. 1996). It can be used in assessing the risks of geological CO2 storage (Kano et al. 2014). Under a uniform regional stress field, the most optimally oriented fault has the maximum slip tendency, as faults with greater slip tendency values are easier to rupture. Under the principal stress coordinate system (σ1, σ2, σ3), the shear and normal stresses on a plane whose normal has direction cosines (l, m, n) can be calculated from the three principal stress magnitudes as

τ² = (σ1 − σ2)² l² m² + (σ2 − σ3)² m² n² + (σ3 − σ1)² n² l²
σ = σ1 l² + σ2 m² + σ3 n²   (10)
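A minimal sketch of how Eqs. 8–10 can be evaluated is given below; the principal stress magnitudes, pore pressure, friction coefficient, and fault orientation are illustrative assumptions, not values from any study cited here.

```python
# Resolved stresses on a plane (Eq. 10), slip tendency (Eq. 9), and Coulomb failure
# stress change (Eq. 8). All numerical values are illustrative.
import math

def resolved_stresses(s1, s2, s3, l, m, n):
    """Shear and normal stress on a plane whose normal has direction cosines (l, m, n)."""
    tau2 = ((s1 - s2) ** 2 * l ** 2 * m ** 2 +
            (s2 - s3) ** 2 * m ** 2 * n ** 2 +
            (s3 - s1) ** 2 * n ** 2 * l ** 2)
    sigma_n = s1 * l ** 2 + s2 * m ** 2 + s3 * n ** 2
    return math.sqrt(tau2), sigma_n

def slip_tendency(tau, sigma_n, pf=0.0):
    """Eq. 9: Ts = tau / (sigma_n - Pf), i.e. shear over effective normal stress."""
    return tau / (sigma_n - pf)

def delta_cfs(d_tau, d_sigma_n, d_pf, mu=0.6):
    """Eq. 8: dCFS = d_tau - mu * d_sigma_e, with sigma_e = sigma_n - Pf."""
    return d_tau - mu * (d_sigma_n - d_pf)

# Principal stresses in MPa; fault normal lying in the s1-s3 plane, 60 deg from s1.
s1, s2, s3 = 60.0, 45.0, 30.0
theta = math.radians(60.0)
l, m, n = math.cos(theta), 0.0, math.sin(theta)
tau, sigma_n = resolved_stresses(s1, s2, s3, l, m, n)
print(tau, sigma_n, slip_tendency(tau, sigma_n, pf=10.0))
print(delta_cfs(0.0, 0.0, d_pf=0.5))   # a 0.5 MPa pore pressure rise alone gives dCFS = +0.3 MPa
```

With μ = 0.6, a pore pressure increase of 0.5 MPa alone raises ΔCFS by 0.3 MPa, well above the 0.01–0.03 MPa triggering threshold quoted above.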

In some cases, only the directions of the principal stresses and the stress difference ratio (R), or equivalently the shape ratio (ϕ), are given (Etchecopar et al. 1981; Gephart and Forsyth 1984):

R = (σ1 − σ2)/(σ1 − σ3)   (11)

ϕ = 1 − R = (σ2 − σ3)/(σ1 − σ3)   (12)

In addition, σ1 − σ3 is not well constrained and can be expressed as an unknown parameter k. By further assuming that the frictional sliding envelope is tangential to the (σ1, σ3) Mohr circle, the principal stresses are given by


σ1 = k (1/sin(φ) + 1)/2
σ2 = σ1 − kR
σ3 = σ1 − k   (13)

where tan φ = 1/tan(2θ) = μ. Inserting Eq. 13 into Eq. 10 leads to the following equations for shear stress and normal stress:

τ = k [(1 − ϕ)² l² m² + ϕ² m² n² + n² l²]^(1/2)
σ = k [(csc(φ) + 1)/2 − (1 − ϕ) m² − n²]   (14)

Thus, the slip tendency is independent of the choice of the unknown parameter k, and we can get a slip tendency normalized by the maximum. For such a partially defined stress field, we can draw the 3-D Mohr circles for shear and normal stresses normalized by k or the maximum shear stress. It is convenient to define an overpressure coefficient λ for fluid pressure (Terakawa et al. 2013):

λ = (Pf − P0)/(Pmax − P0)   (15)

where P0 is the critical pore pressure required to initiate rupture on the optimally oriented fault for a given friction coefficient, and Pmax (= σ3) is the maximum pore pressure above which hydrofracturing occurs. In the geophysical/geomechanical modeling approach, one can develop a postprocessor to calculate ΔCFS and slip tendency; this can be done within FLAC3D by writing a simple FISH program. ΔCFS and slip tendency are especially useful for analyzing the effect of injection on preexisting nearby faults that are not explicitly included in the numerical model. Figure 25 shows an example of slip-tendency analysis for a location where only the directions of the principal stresses and the stress difference ratio R (= 0.6) are given. Faults whose strike and dip fall in the red zones have a relatively greater probability of being reactivated.
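The following sketch illustrates the k-independence noted above for a partially defined stress field (Eqs. 11–14); R, the friction angle, and the plane orientation are illustrative values, and the function is not the FISH postprocessor itself.

```python
# Normalized slip tendency for a partially constrained stress field (Eqs. 11-14).
import math

def tau_sigma_partial(k, R, friction_angle_deg, l, m, n):
    """Eq. 14: shear and normal stress on a plane with normal (l, m, n), parameterized by k."""
    shape = 1.0 - R                          # Eq. 12: shape ratio
    phi_f = math.radians(friction_angle_deg) # friction angle, tan(phi_f) = mu
    tau = k * math.sqrt((1.0 - shape) ** 2 * l ** 2 * m ** 2 +
                        shape ** 2 * m ** 2 * n ** 2 +
                        n ** 2 * l ** 2)
    sigma = k * ((1.0 / math.sin(phi_f) + 1.0) / 2.0 - (1.0 - shape) * m ** 2 - n ** 2)
    return tau, sigma

R, fa = 0.6, 30.0                    # stress difference ratio and an assumed friction angle
l, m, n = 0.5, 0.5, math.sqrt(0.5)   # any unit normal
for k in (1.0, 10.0):                # the unknown stress magnitude parameter
    tau, sigma = tau_sigma_partial(k, R, fa, l, m, n)
    print(k, tau / sigma)            # same slip tendency for both k values
```

Because τ and σ in Eq. 14 are both proportional to k, the ratio τ/σ, and hence the normalized slip tendency, does not depend on the choice of k.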

An Example of a Natural Analogue: The Matsushiro Seismic Swarm Driven by CO2-Rich Fluid Activity

In geological CO2 storage, it is important to clarify the mechanisms and geomechanical conditions of worst-case events, such as damaging earthquakes and reservoir leakage, so that they can either be avoided or mitigated. It would be most desirable to use an actual CGS site in which such events have actually occurred and have been well monitored. However, many pilot projects are sited in places with good conditions for safely injecting CO2 into the reservoir. Thus, it is valuable to carry out "natural analogue research," analyzing similar phenomena caused by the


Fig. 25 (a) 3D Mohr circles and (b) normalized slip tendency stereoplots (LSP lower sphere projection) under an overpressure coefficient of 0.1 and a local stress field in which only the directions of the principal stresses and the stress difference ratio R are given

activity of a natural CO2-rich fluid, in order to examine the modeling technology. Here, we briefly review studies on the fluid-driven earthquake swarm in Matsushiro, central Japan, as a natural analogue of seismicity induced by fluid injection.

In the Matsushiro area, which is located in the central and northern part of Nagano Prefecture, a series of more than 700,000 earthquakes occurred over a 2-year period (1965–1967). This swarm, termed the Matsushiro swarm, resulted in ground surface deformation (uplifts as large as 75 cm), cracking of the topsoil, enhanced spring outflows with changes in chemical composition, and CO2 degassing. Ten million tons of CO2-rich saltwater was estimated to have seeped out from underground along the cracks (Ohtake 1976). Thus, the Matsushiro swarm is believed to have been triggered and driven by high-pressure CO2-rich fluid from deep sources. Data observed during the Matsushiro swarm can therefore be used as a natural analogue for examining THM coupling analysis (Cappa et al. 2009; Funatsu et al. 2013).

In Matsushiro and the surrounding areas, subsurface geophysical surveys have been conducted frequently since the occurrence of the earthquake swarm, and underground data, such as seismic wave velocity structures, are abundant. In addition, the surface geology is relatively well understood. The geological model was developed on the basis of these existing data. Here, a new Matsushiro model is used, which is an improved version of the models of earlier studies (Cappa et al. 2009; Funatsu et al. 2013). In the new model, the boundaries along all four sides are enlarged to limit the effect of boundary conditions. It covers a 50 × 50 × 6 km volume with a focus dimension of 24 × 24 × 6 km


Fig. 26 Map showing basic features around the Matsushiro earthquake swarm. The model area is shown on the topographic map. Red dotted lines indicate the Matsushiro fault and the Southeast Boundary Fault (SEBF) of the Nagano basin. Contours show earthquake swarm migration (Modified from Hagiwara and Iwata 1968). The right plot shows a 3D model for numerical analysis viewed from the southwest

centered at the intersection of the Matsushiro fault and the SEBF (Fig. 26). In addition, the topography is incorporated into the new model to simulate subsurface groundwater flow. In order to better represent the deep structure in the area, a geological model with three lithology groups was constructed, taking into account the seismic profiles obtained so far. Two vertical faults that intersect at the center of the model are assumed. The faults are modeled as narrow zones of a new group termed "fault." The regional stress field has its maximum compression axis in the east–west direction and its minimum compression axis in the north–south direction. The model is discretized into a graded grid so that the fault zone is divided into small elements, while the surrounding matrix becomes coarser as the distance from the fault


Table 1 Mechanical properties

Property                        Aoki          Besyo         Basement      Fault/fault inters.
Bulk modulus (GPa)              1.96          4.42          7.85          3.16
Shear modulus (GPa)             1.55          3.49          6.20          2.42
Cohesion (MPa)                  –             –             –             1.5
Tensile strength (MPa)          –             –             –             0.0
Friction angle (°)              –             –             –             28.8
Dilation angle (°)              –             –             –             20
Biot's coefficient              0.9           0.8           0.8           0.6
Initial permeability k0 (m²)    1 × 10⁻¹⁷     1 × 10⁻¹⁸     1 × 10⁻¹⁸     1 × 10⁻¹⁵ / 5 × 10⁻¹⁵
Porosity                        0.05          0.05          0.01          0.01

Permeability is updated as k = k0(1 + βΔεs), with β = 30,000.

increases (the depth direction, however, is divided equally at 500 m intervals). The total number of elements is 9,248. The mechanical properties, which are set similar to those in Cappa et al. (2009), are listed in Table 1, and the geometry of the model is shown in Fig. 26. In order to incorporate strain-softening behavior into the model, after failure the cohesion and tensile strength are decreased with strain along two linear paths toward given residual values; the friction angle and dilation angle also change with the shear strain after failure (a schematic sketch of such a piecewise-linear softening rule is given below). The associated parameters are listed in Table 1. Note that some parameters used in this and earlier studies differ significantly from laboratory-derived data for intact rocks. For instance, the values of Young's modulus used in the numerical model appear too low. The real crust contains fractures and faults at all scales, and it is impossible to represent every individual fracture and small fault in a model, so some properties, such as the bulk and shear moduli, have to be adjusted as an upscaling technique. Similar techniques are used in core-scale simulations to account for preexisting microcracks in rock samples (Lei et al. 2015).

The calculated horizontal displacements and uplifts at the ground surface are shown in Fig. 27. The maximum uplift, 66 cm, is obtained at the point where the faults cross, 1 year after the beginning of fluid injection. This value is close to the observed maximum of 75 cm. The uplift pattern is asymmetric with respect to the SEBF, which agrees with observations. Left-lateral slip along the two faults is also identified. The results of the new model match the observed values better than those of previous studies. The uplift gradually stretches away from the intersection of the two faults along their extensions in a skewed rhombic pattern, indicating a fault-controlled pore pressure diffusion process. Figure 28 compares the Matsushiro earthquake swarm migration with the calculated distribution of ruptured zones along the Matsushiro and east Nagano earthquake faults 180 and 720 days after injection began. Except for fractures at the surface, all fractures exhibit shear mechanisms; surface fractures show tensile modes. The calculated surface uplift is plotted in Fig. 29. For comparison, some data estimated from field observations (Kasahara 1970; Tsukahara and Yoshida 2005) are also plotted.
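The piecewise-linear softening of strength parameters mentioned above can be sketched as follows; the break points and residual values are assumed for illustration and are not the calibrated Matsushiro parameters (only the peak cohesion and friction angle are taken from the fault column of Table 1).

```python
# Piecewise-linear strain softening of strength parameters after failure. Break points
# and residual values are illustrative; peak cohesion and friction angle follow Table 1.
import numpy as np

def softened(peak, mid, residual, eps_p, eps_mid=5.0e-3, eps_res=2.0e-2):
    """Two-segment linear decay from the peak value to a residual value with plastic strain."""
    return float(np.interp(eps_p, [0.0, eps_mid, eps_res], [peak, mid, residual]))

for eps_p in (0.0, 2.0e-3, 1.0e-2, 5.0e-2):          # accumulated plastic shear strain
    cohesion = softened(1.5e6, 0.5e6, 0.1e6, eps_p)  # Pa; peak from Table 1, others assumed
    friction = softened(28.8, 25.0, 20.0, eps_p)     # degrees; peak from Table 1, others assumed
    print(eps_p, cohesion, friction)
```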


Fig. 27 Calculated distributions of X- and Y-displacement and uplift of the ground surface 360 days after injection began

In conclusion, the numerical model and the coupled THM analysis using the TOUGH-FLAC approach successfully reproduce the major characteristics of the observed phenomena associated with the CO2-rich fluid-driven Matsushiro earthquake swarm.

Data Processing and Analysis of Injection-Induced Seismicity

Statistical Properties of Injection-Induced Seismicity Based on ETAS Modeling

It is well known that an earthquake triggers aftershocks following the modified Omori law. In the case of injection-induced seismicity, it is important to be able to discriminate induced activity from background seismicity and to statistically separate fluid-induced triggering from Omori-law-type aftershock triggering. The epidemic-type aftershock sequence (ETAS) model (Ogata 1992) incorporates Omori's law by


Fig. 28 A comparison of the Matsushiro earthquake swarm’s migration and calculated distribution of ruptured zones at 300, 360, and 450 days after injection began along the Matsushiro and east Nagano earthquake faults

assuming that each earthquake has a magnitude-dependent ability to trigger its own Omori-law-type aftershocks. The ETAS model is an appropriate tool for testing the significance of changes in seismic patterns (Ogata 1992, 2001), detecting minor stress changes (Helmstetter et al. 2003), and extracting a fluid signal from seismicity data (Hainzl and Ogata 2005). Thus, it is particularly useful for analyzing injection-induced seismicity (Lei et al. 2008, 2013). In the ETAS model, the total occurrence rate is described as the sum of the rate triggered by all preceding earthquakes and a forcing rate λ0(t) that represents the background activity:

λ(t) = λ0(t) + ν(t),   ν(t) = Σ_{i: ti < t} K0 exp[α(Mi − Mc)] (t − ti + c)^(−p)   (16)
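A minimal sketch of evaluating the ETAS rate of Eq. 16 for a small catalogue is given below; the parameter values, the synthetic events, and the constant background rate are illustrative, whereas in injection analyses λ0(t) is allowed to vary with time to capture the fluid forcing.

```python
# ETAS total occurrence rate of Eq. 16 for a small synthetic catalogue.
import math

def etas_rate(t, events, lam0=0.1, K0=0.02, alpha=1.5, c=0.01, p=1.1, Mc=1.0):
    """lambda(t) = lam0 + sum over earlier events of K0*exp(alpha*(Mi - Mc))*(t - ti + c)**(-p)."""
    nu = sum(K0 * math.exp(alpha * (Mi - Mc)) * (t - ti + c) ** (-p)
             for ti, Mi in events if ti < t)
    return lam0 + nu

catalog = [(1.0, 2.5), (1.2, 1.8), (3.0, 3.2)]   # (occurrence time in days, magnitude)
for t in (1.5, 3.5, 10.0):
    print(t, etas_rate(t, catalog))
```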