Encyclopedia of Environmental Health [6 Volume Set] [2 ed.] 9780444639516, 9780444639523, 0444639519, 0444639527, 9780444643278, 0444643273

Encyclopedia of Environmental Health, Second Edition presents the newest release in this fundamental reference that updates …

English · 4,940 pages · 2019 · 83 MB


Table of contents:
VOLUME 1 A-C
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
EDITORIAL BOARD
CONTRIBUTORS TO VOLUME 1
Copyright
GUIDE TO USE THE ENCYCLOPEDIA
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
CONTENTS OF VOLUME 1
PREFACE
PERMISSION ACKNOWLEDGMENTS
Advances in Analytical Methods for the Determination of Pharmaceutical Residues in Waters and Wastewaters
Conclusions
Agro-Industrial Waste Conversion Into Medicinal Mushroom Cultivation
Global Mushroom Production
Commercial Production and Medicinal Importance of the Most Cultivated Mushrooms Worldwide
Further Reading
Air Pollution and Lung Cancer Risks
Ambient Air Pollution and Lung Cancer
Ecological Fallacy
Indoor Air Pollution and Lung Cancer
Confounding Factors
Conclusions
Air Pollution Episodes
Important Air Pollution Episodes
France Summer 2003
The Nature of Episodes
Air Pollution From Solid Fuels
Global Consumption Pattern of Solid Fuels
Interventions
Interventions on the Source of Pollution
Improved stoves
Interventions on user behavior
Air Quality Legislation
Air Quality Legislation
International Air Quality Legislation
Air Quality Legislation in the United States
Concluding Remarks
Air Transportation and Human Health
Ambient Air Pollution
Estimates Using Mathematical Modeling
Predicting the Impact on Public Health
Airport-Related Noise
The Mandate of the Noise Control Act of 1972
Relevant Websites
Ambient Concentrations of Acrolein: Health Risks
Air Concentrations and Sources of Exposure
Ambient Concentrations and Emissions
An Ecological Disaster Zone with Impact on Human Health: Aral Sea
Effects on Climate
Political Aspects
Further Reading
Animal and Human Waste as Components of Urban Dust Pollution: Health Implications
Human Health Effects Associated With Polluted Urban Dust
Antimicrobial Resistance in Environmental Fecal Bacteria
Bacterial Endotoxins and Respiratory Tract Inflammation
Further Reading
Antarctic: Persistent Organic Pollutants and Environmental Health in the Region
Pollutants
Antibiotics Pollution in Soil and Water: Potential Ecological and Human Health Issues
Environmental Fate of Antibiotics: Tetracyclines as Model Antibiotic
Antibiotics in Soil
Ecological Impacts of Antibiotics and Resistant Bacteria
Further Reading
Application of Healthy Life Year Measures to Environmental Health Issues
Background to Measures of Population Health
Measures of Disease Burden
Composite Measures
Rationale
Types of Composite Summary Measures
Valuing Life and Social Values
Expectation of Life
Valuing Life Lived at Different Ages
Valuing Future Life Compared With Present Life: Discounting
Environmental Impact on Health
Definition of the Environment for Health
The Global Burden of Disease and the Environment
Regional Burden of Disease and Environment
National Burden of Disease and the Environment
The Use of Summary Measures of Population Health to Measure the Environmental Impact on Health
Measuring the Effectiveness of Environmental Interventions
Environmental Health and the Sustainable Development Goals
Aquatic Environment and Fish Welfare in Aquaculture
Future Considerations
Further Reading
Arsenic Exposure From Seafood Consumption
Arsenic in Seafood
Freshwater Organisms
Arsenic: Occurrence in Groundwater
Pregnancy Outcome
Asbestos Exposure and Autoimmune Disease
A Proposed Connection Between Asbestos, Systemic Autoimmune Disease and Interstitial Lung Disease
Assessing Indoor Air Quality
General Principles of Indoor Air Quality (IAQ) Measurement
Real-Time Monitoring
Measurement Methods of Particulate Constituents
Semigravimetric Methods
Measurement Methods of Vapor-Phase Constituents
Formaldehyde
Assessment of Human Exposure to Air Pollution
Exposure Assessment
Asthma: Environmental and Occupational Risk Factors
Environmental Sensitizers That Cause Asthma
Indoor Allergens
Bahama Archipelago: Environment and Health
History and Discovery of the Bahama Archipelago
Pirates and Buccaneers
The First Royal Governor, 1717 and Hurricane
The Development of Florida With Bahamian Emigration
Bias in Environmental Epidemiology
Types of Bias
Current Topics in Bias
Copollutant Confounding
Bioaerosols in Apartment Buildings
Bioaerosol Concentration Levels in Apartments
Effects of Electronic Appliances on Concentration Levels
Bioavailable Aluminum: Its Effects on Human Health
Alzheimer Disease (AD)
ADED Behaviors in Rats Chronically Exposed to Dietary Aluminum Additives
Pathological Change in ADED Rat Brains
Bioavailable Aluminum: Its Metabolism and Effects on the Environment
Forest Decline
Aluminum Toxicity in Fish and Other Aquatic Life
Aluminum and Birds
Biodiversity and the Loss of Biodiversity Affecting Human Health
Examples of Global Changes and Their Effects on Biodiversity and Feedback Loops to Human Health
Aquatic Biodiversity and Human Health
Microplastics damage both the health of marine organisms and human health
Biomarkers in Environmental Carcinogenesis
Main Categories of Biomarkers
Biomarkers of Effect
Biomarkers of Genetic Susceptibility
Biomarkers of Environmental Exposures in Blood
Blood as a Matrix for Biomonitoring
Which Matrix for Which Chemicals?
Sources of Variation
Blood Biomonitoring and Public Health: Case Studies
Serum Cotinine
Lead
Biomass Burning, Regional Air Quality, and Climate Change
Types of Biomass Burning
Savanna Fire
Characteristics and Compositions of Biomass Burning Emission
Methane (CH4)
VOLUME 2 D-E
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
Copyright
EDITORIAL BOARD
CONTRIBUTORS TO VOLUME 2
GUIDE TO USE THE ENCYCLOPEDIA
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
Target Organ Toxicity of Environmental Pollutants
CONTENTS OF VOLUME 2
PREFACE
PERMISSION ACKNOWLEDGMENTS
Dampness and Mold Hypersensitivity Syndrome as an Umbrella for Many Chronic Diseases—The Clinician’s Point of View
DMHS and Vaccination
Dichloromethane—A Paint Stripper and Plastic Welding Adhesive
Cancer in Humans
VOLUME 3 F-K
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
Copyright
EDITORIAL BOARD
CONTRIBUTORS TO VOLUME 3
GUIDE TO USE THE ENCYCLOPEDIA
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
CONTENTS OF VOLUME 3
PREFACE
PERMISSION ACKNOWLEDGMENTS
Floods as Human Health Risks
Definition and Epidemiology of Floods
Flood Risk: Hazard, Exposure and Vulnerability
Past and Future Trends in Flood Events
Health Impacts of Floods
Mental Health
Post-traumatic stress disorder (PTSD)
Healthcare Infrastructure
Fluorine: Human Health Risks
Detrimental Health Effects
Dental Fluorosis
Skeletal Fluorosis in Children
Skeletal Fractures
Functional ‘Omics and Molecular Analysis of a Subtropical Harmful Algal Bloom Species, Karenia brevis
Understanding K. brevis Biology for Better Modeling
Toxin Production
Gallium: Environmental Pollution and Health Effects
Applications
Gene-Environment Interactions: Cigarette Smoke, APC, DNA Damage Repair, and Human Health
Chemical Composition of Cigarette Smoke
Gene-Environment Interactions in Neurodegenerative Diseases
Alzheimer's Disease
Global Measures of the Environmental Burden of Disease (EBD)
Relevant Website
Household Energy Solutions in Low and Middle Income Countries
Effectiveness of HAP Interventions
Impact of Interventions on Household Pollution and Personal Exposures
Impacts of Clean Energy Interventions on Health Outcomes
Household Water Treatment and Safe Storage in Low-Income Countries
HWTS Methods
Chlorination
Filtration
Housing-Related Health Hazards: Assessment and Remediation
Housing Hazard Assessment
Radon
Safety and Poisoning
Carbon Monoxide (CO) and Other Combustion By-Products
Noise
Comparison of Three Bundled Housing Intervention Protocols
Mold Remediation
Lead Paint Hazard Control
Human Health and the State of the Pedosphere
Soil Contamination and Chemical Transformations
Human Health Implications of Personal Care Products: Breast Cancer and Other Breast-Related Diseases
Conclusions
Human Health Link to Invasive Species
Importance of NIS in Global Change
Impacts of NIS
Human Tick-Borne Diseases in Southern Europe: Present Status and Future Challenges
Viral TBDs
Tick-Borne Encephalitis
Crimean-Congo Hemorrhagic Fever
Bacterial TBDs
Spotted Fever Rickettsioses
Lyme Borreliosis
Hydraulic Fracturing and Public Health
Introduction
What Is Hydraulic Fracturing?
Concerns and Challenges
Exposure
Water pollution
Social and Economic Impacts
Health Impacts
Climate Change
Conclusion and Recommendations
Immunological Effects of the Chernobyl Accident
Immune Function after the Acute Radiation Injury
Early Effects
Immunological Effects in the Cleanup Workers
Early Effects
Indonesia: Threats to the Country’s Biodiversity
Terrestrial
Forest Fires
Deforestation
Infectious Processes and Medical Geology
Geologic Determinants of Earliest Microbes: Living Rocks and Primeval Soup
Earliest Microbes
Most Extreme Microbes
Insect Repellents: Assessment of Health Risks
Compounds
DEET
Local effects
Iodine in the Environment
Functions of Iodine in Humans
Fate of Iodine after Ingestion
Toxicity of Iodine
Japan Environment and Children’s Study (JECS): Concept, Protocol and Current Status
Study Overview
Study Objectives
Study Methods
Instruments
Kuwait: Before and After the Gulf War
Environmental Quality Before 1990
Land
VOLUME 4 L-O
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
Copyright
EDITORIAL BOARD
CONTRIBUTORS TO VOLUME 4
GUIDE TO USE THE ENCYCLOPEDIA
Cross-References
Example
Index
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
Disinfection By-products
Electromagnetic Fields
CONTENTS OF VOLUME 4
PREFACE
PERMISSION ACKNOWLEDGMENTS
Leishmaniasis and Environment in Argentina: An Eco-Epidemiological Approach
Cutaneous Leishmaniasis and Environment
Scale: Capture Station and Day-Month Dynamics
Long-Range Transport and Deposition of Air Pollution
Historical and Projected Emission Changes
Projected Emission Changes
Long-Term Effects of Particulate Air Pollution on Human Health
Other Effects of Particle Exposure
Lung Function
Respiratory Symptoms
Low-Frequency Magnetic Fields: Potential Environmental Health Impacts
Expert-Panel Reviews of Magnetic Fields and Environmental Effects
American Cancer Society (ACS) (2014) (http://www.cancer.org/cancer/cancercauses/radiationexposureandcancer/extremely-low-fr ...
EC: Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) (2015) (http://ec.europa.eu/health/scienti ...
National Cancer Institute (NCI) (2016) (https://www.cancer.gov/about-cancer/causes-prevention/risk/radiation/electromagneti ...
World Health Organization (WHO) (2007) (http://www.who.int/peh-emf/publications/elf_ehc/en/)
Malaria as an Environmental Disease
Mosquito Ecology and Limiting Factors on Vector Life Cycle
Environmental Factors That Increase Risk of Malaria Transmission
Factors That Affect Adult Mosquito Abundance
Factors That Affect Adult Mosquito Longevity
Habitation-Related Factors
Other Associated Risk Factors
Malaria, Bilharzia and Geo-Helminth Transmission in Kenya: Environmental Determinants
Environmental Factors Spreading Bilharzia and Geohelminths in Kenya
Ecofriendly Malaria Control Interventions
Insecticide-Treated Bed Nets
Antimalarial Medicines
Malaysia: Environmental Health Issues
Major Environmental Challenges
Minimizing Socioeconomic Divide
Preventing/Reducing the Occurrence of Natural and Man-Made Disasters
Tackling Air, Water, Noise and Congestion, and Waste Disposal Related Problems
Reversing the Trend of Land Degradation, Deforestation, Depletion of Natural Resources, and Loss of Biodiversity
Dealing With Global Environmental Issues
Strengthening Local Institutional Arrangements
Major Environmental Health Issues
Air Quality
Management and Export of Wastes: Human Health Implications
Globalization of Waste Management
Regulation of International Nonhazardous/Solid Waste Transfer
Regulation of International Hazardous Waste Transfer
Regulation of International Management of Nuclear Waste
Uncertainties
Recent Waste Management Strategy
Zero Waste
Use of Life Cycle of a Product Approach
Waste Recovery and Treatment Techniques
Landfilling
Insufficiencies in Waste Management and Risk to Environment and Human Health
Insufficiency in Collecting
Waste Transport
Manganese: Environmental Pollution and Health Effects
Introduction
Absorption and Toxicity of Manganese
Inhalation Route
Ingestion Route
Systemic Delivery of Manganese
Excretion and Biological Half-Life
Basic Mechanism of Toxicity
Measurement of Air Pollutants
Introduction
Measurement Units
Commonly Monitored Atmospheric Pollutants
Carbon Monoxide (CO)
Pollutant Measurement Techniques
Aerosol Composition Measurements
Measuring Noise for Health Impact Assessment
Noise Measurement: Common Practice
Environmental Noise Measurement
Personal Exposure
Where Should Exposure to Environmental Noise Be Determined?
Façade Exposure
Quiet Side
Indoor Exposure
Mechanisms of Immune Modulation by Xenobiotics
Introduction
Research Emphasis
Mechanisms of Immunomodulation by Heavy Metals
Cadmium
Vanadium
Conclusions and Outlook for Future Research
Medical Anthropology
Introduction and Relevance
Introduction
Relevance to Environmental Health
Key Concepts
From Risk Perception to Popular Epidemiology
The Tragedy of the Commons
Mercury and Children Health
Elemental Mercury (Hg0)
Absorption, Distribution, Metabolism, Excretion and Toxicity
Inorganic Mercury (Hg1+, Hg2+)
Sources of Exposure
Mercury in Air
Reduction and Oxidation Processes in the Atmosphere
Atmospheric Mercury Depletion Events in Polar Regions
Mercury Toxicity
Effect of Mercury
Systemic Effects
Modification of Mercury Toxicity
Metal-Induced Toxicologic Pathology: Human Exposure and Risk Assessment
Metal Fragments: Depleted Uranium
Human Health Effects
Monetary Valuation of Health Impacts From Noise
Further Reading
Nanoscale Titanium Dioxide: Environmental Health and Ecotoxicological Effects
Human Health
Carcinogenic Hazards
Epidemiology
Natural Disaster—Environmental Health Preparedness
Definitions and Overall Health Impact of Natural Disaster
Health Impacts of Disaster
Neurotoxicology
Defining Neurotoxicity
Nigeria: Environmental Health Concerns
Overview of Environmental and Public Health Issues in Nigeria
Implicating Heavy Metal Pollution in Cancer
Environmental Quality and Food Contamination
Noise and Cognition in Children
A Summary of Experimental Studies of Acute Noise
Reading, Memory, and Learning
Oil and Chemical Spills
Distribution of Heavy Metals in Marine Bivalves, Fish and Coastal Sediments in the Arabian Gulf and Gulf of Oman
Optimal Pollution: The Welfare Economic Approach to Correct Related Market Failures
Public Goods, Externalities, and Internalization
Public Goods
Externalities
Organochlorines and the Effect on Female Reproductive System
Epidemiological Studies of the Effects of Organochlorines on Female Reproductive System
Reproductive Hormones
Ovarian Morphology
Endometriosis
VOLUME 5 P-S
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
Copyright
EDITORIAL BOARD
GUIDE TO USE THE ENCYCLOPEDIA
CONTRIBUTORS TO VOLUME 5
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
CONTENTS OF VOLUME 5
PREFACE
PERMISSION ACKNOWLEDGMENTS
Parasite Zoonoses
Parasites, Hosts, and Life Cycles
Parasites
Hosts
Life Cycles
Environmental Change, Parasite Zoonoses, and Emerging Disease
New Incidence or Severity of Parasitic Disease
Demographic rates
Life Support Systems
Adaptation
A Caution: Check the Evidence!
Case Studies for Parasite Zoonoses and Environmental Change
Parasite Zoonoses in Wildlife
Parasite Zoonoses in Marine Systems
Particulate Matter and Public Health
Human Effects Associated With Exposure to PM
Cardiovascular Effect
Diabetes Mellitus
Birth Outcomes
Health Endpoints Associated With Exposure to PM
Long-Term Exposure
Short-Term Exposure
Air Quality Standards
Health Benefits From the Improvement of PM
Summary
Particulate Matter and Ultrafine Particles in Indoor Air
Behavior, Transport, and Fate of Particles in Indoor Spaces
Infiltration/Penetration
Deposition in Spaces
Resuspension
Particle Formation
Specific Indoor Sources
Cooking Activities
Exposure to Particles in Indoor Spaces
PM in Aboveground Traffic Modes
PM in Subway Metro Systems
Past, Present and Future of Malaria Prevalence and Eradication in the Light of Climate Change
Future Predictions for Malaria Prevalence During Climate Change
Could Malaria Eradication Successes Be Reversed?
PCBs
Background on PCBs
Toxicity of PCBs
Role of Metabolism
Perceptions and Physiological Responses to Indoor Air Quality
Perceptual Mechanisms
Influence of Air Temperature and Humidity
Ozone-Reactive Compounds and Olfaction
Sensory Irritations
Physiological Mechanisms
Enzymatic Biomarkers
Perfluorinated Substances
Commercial and Industrial Uses
Insulating Gas
Blanketing Gas
Fire Extinguisher or Fire Suppression Agent
Persistent Organohalogen Pollutants and Phthalates: Effects on Male Reproductive Function
Further Reading
Pesticide Exposure and Diabetes
Associations of Pesticide Exposure With Insulin Resistance
Further Reading
Pesticide Exposure and Human Cancer
Evidence from Epidemiology
Leukemia
Non-Hodgkin Lymphoma (NHL)
Prostate Cancer
Female Breast Cancer
Central nervous system cancer
Childhood Cancer
Limitations of Pesticides Epidemiology
Pesticides: Human Health Effects
Human Exposure to Pesticide
Health Effects
Acute toxicity
Long-Term Health Effects
Neurological effects
Carcinogenicity
Pharmacokinetic and Pharmacodynamic Considerations in Children's Human Health Risk Assessment
Absorption, Distribution, Metabolism, and Excretion of Xenobiotics in Children
Factors That Affect Chemical Metabolism in Children
Development of cytochrome P-450 isozymes
Phthalates: Occurrence and Human Exposure
Relevant Websites
Plants as a Tool for the Environmental Health Assessment
Use of Plant Bioassays for the Environmental Health Assessment
Short-Term Laboratory Studies
Platinum: Environmental Pollution and Health Effects
Bioaccumulation and Biological Availability
Health Effects
Acknowledgment
Polymorphism and Gene–Environment Interactions in Environmental Cancer
Definition of Genetic Variations and Polymorphisms
Gene–Environment Interactions in Relation to Cancer Risk
Functional Polymorphisms
Missense Polymorphisms
Promoter Polymorphism and Haplotype Context
Gene–Environment Interactions in Relation to Environmental Cancers
The Cytochrome P-450 Phase I and Phase II Enzymes
Prenatal Exposure to Polycyclic Aromatic Hydrocarbons (PAHs)
Clinical Outcomes: Disease Outcomes Associated With Prenatal Exposure to PAH
Adverse Birth and Developmental Outcomes
Further Reading
Principles of Medical Geology
Further Reading
Psychological and Mental Health Aspects of Ionizing Radiation Exposure
Comparisons with Antique Items
Why Not Just Tell Workers, “It is Safe?”
Quality of Life and Environmental Health Assessment
Environment Facets of WHO-QOL
Physical Environment (Pollution, Noise, Traffic, Climate)
The Importance of the Environment Domain in Assessing Overall Quality of Life
Radiation Exposures Due to the Chernobyl Accident
Environmental Radioactive Contamination
Environmental Countermeasures
Radio Frequency Electromagnetic Fields: Health Effects
Cancer
Exposure to Occupational RF-EMF
Exposure to RF-EMF from Radio/Television Transmitters
Conclusions
Reproductive Effects of Oil-Related Environmental Pollutants
Introduction
What Are Oil-Related Environmental Pollutants?
What Physiological Processes Affect Oil-Related Environmental Pollutants?
Do Environmental Pollutants Affect Reproduction?
Do Oil-Related Environmental Pollutants Affect Both Sexes?
How Oil-Related Environmental Pollutants Affect Female Reproduction?
How oil-related environmental pollutants affect ovarian morphology and fecundity?
How oil-related environmental pollutants affect female hypothalamic, pituitary and peripheral hormones?
Respiratory Effects of Chlorination Products
Animal Studies
Studies on Human Volunteers
Risk of Radiation Exposure to Children and Their Mothers
I. Risk of Ionizing Radiation Exposure to Children and Their Mothers
Childhood Malignancy Risk from Exposure to Ionizing Radiation
II. Risk of Nonionizing Radiation Exposure to Children and Their Mothers
Reproductive and Teratogenic Risk from Exposure to Nonionizing Radiation
Sanitation in Low-and Middle-Income Countries
Sanitation and Disease
Environmental Enteric Dysfunction
Shared Water Conflict and Cooperation
What Are Shared Waters?
Physical Nature
Short-Term Effects of Air Pollution on Health
Introduction and Background
Objective and Scope of This Review
Ambient Particulate Matter (PM) and Traffic Related Pollution
Mortality Outcomes
Health Impact Assessment
Soil Quality Criteria for Environmental Pollutants
Setting Environmental Soil Quality Standards for Chemicals
Development of SQSs
Exposure assessment
Solid Fuels: Health Effects
Nature of Solid Fuel Emissions and Exposures
Characteristics of Solid Fuel Smoke
Solid Fuel Use: Health Effects
Pollutant Concentrations and Personal Exposure
Emissions and Pollutants
Mechanisms
Range of Concentrations
Burden of Disease Attributable to Indoor Air Pollution
Sulfur Oxides
Health Effects of Sulfur Oxides
Epidemiology
Effects of short-term exposures
VOLUME 6 T-Z
Front Cover
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
Copyright
EDITORIAL BOARD
GUIDE TO USE THE ENCYCLOPEDIA
Cross-References
Example
Index
Contributors
CONTRIBUTORS TO VOLUME 6
SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)
Country- and Area-Specific Environmental Health Issues
Dietary Exposures and Food Quality
Disinfection By-products
Disparities and Social Determinants of Environmental Health
Electromagnetic Fields
Environmental Health Emergencies (Disasters)
Ethics in Environmental Health Research and Practice
Target Organ Toxicity of Environmental Pollutants
CONTENTS OF VOLUME 6
PREFACE
PERMISSION ACKNOWLEDGMENTS
Take-Home Route of Pesticide Exposure
Results and Discussion
Evidence of Take-Home Pathway
Using biomarkers in mothers/spouses to assess take-home pesticide exposure pathway
Using biomarkers in farmworker children to evaluate take-home pesticide exposure pathway
Take-home in comparison to other pathways of exposure
Seasonality
Thallium: Environmental Pollution and Health Effects
Toxicity
Toxicity to Animals
Toxicity to Humans
Therapies for Thallium Poisoning
Thallium Exposure
Occupational Exposure
Nonoccupational Exposure Routes
Biomarkers of Thallium Exposure
Tin: Environmental Pollution and Health Effects
Introduction
Physical and Chemical Properties
Production and Uses
Toxicity
Differential Toxicity of Organotins
Toxicity in Animals
Human Toxicity
Toenails for Biomonitoring of Environmental Exposures
Relating Biomarker Concentration to Human Exposure
Arsenic
Selenium
Other Elements
Relating the Toenail Biomarker Concentration to Other Biological Tissues
Arsenic
Selenium
Nickel
Temporal Variability in Toenail Elemental Concentrations
Toxicological Pathways of Relevance to Medical Geology
Exposure Pathways
Inhalation Exposure
Drinking Water Exposure
Exposure From Consumption of Contaminated Food
Exposure From Contaminated Soil
Exposure From Adsorption
Metabolism and Toxicological Effects of Xenobiotics
Metabolism of Xenobiotics
The Aryl Hydrocarbon Receptor Pathway
Genes, Cancer, and the Immune System
Toxicological Methods in Medical Geology
Isolation of Environmental Xenobiotics for Toxicological Testing
Toxicological Testing Using Cell Lines
Molecular Biological Methods
Toxicology of Chromium(VI)
Exposure to Chromium
Occupational Exposure
Environmental Exposure
Toxicokinetics of Chromium
Absorption
Inhalation
Dermal
Oral
Metabolism
Deposition
Health Effects
Gastrointestinal Effects
Immunological and Hematological Effects
Tuberculosis
Diagnosis
Tuberculin Skin Test and Interferon-Gamma Release Assays
Chest Radiograph and Computed Tomography of Thorax
Sputum Acid-Fast Bacilli Smear
Sputum Culture
M. bovis
Treatment of Drug-Resistant Tuberculosis
Drug-Resistant Tuberculosis
MDR-TB
Tuberculosis: Epidemiology and Global Impact, Including Extrapulmonary Tuberculosis Manifestation With Emphasis on Skeletal ...
Tuberculosis
Introduction
WHO End TB Strategy
Pathogenesis
Post-primary TB
Symptoms, Disease Appearance
Primary tuberculosis
Latent TB infection (LTBI)
Progressive primary TB forms
Post-primary tuberculosis (=secondary TB; so-called reactivation TB)
Differential diagnoses
Manifestation locations of TB (Fig. 5)
Diagnostics
Anamnesis and clinical examination
Routine blood test
Radiology
Indirect pathogen detection
Tuberculin skin test (TST) (tuberculin test, PPD (purified protein derivative) test, Mendel-Mantoux intradermal test)
Prophylaxis
Disinfection and hygiene
Vaccination (disposition prophylaxis)
Public health education
Extrapulmonary Tuberculosis (EPTB)
Introduction
Lymph Nodes Tuberculosis (Tuberculous Lymphadenitis)
Genitourinary Tuberculosis (GUTB)
Symptoms
Diagnostics
Therapy
Prognosis
Musculoskeletal TB (=Osteoarticular (OATB), Bone, Joint, Skeletal TB)
Therapy
Characteristics of tuberculous spondylodiscitis (Pott disease)
Characteristics of extraspinal TB
Symptoms
Skin TB
Inoculation form of skin TB (exogenous genesis)
Ultraviolet: Ocular Effects
Epidemiological Studies
Cortical Opacities
Pterygium and Droplet Keratopathies
Erythema
Ocular Cancers
Ultraviolet Radiation and the Skin
Basic Principles
Responses of Cells to UV
Cell Signaling and the Response to UV
Cellular Responses to UV
Long-Term Effects of UV in the Skin
Chronic UV Photodamage
Ultraviolet Radiation Protection
Health Effects of UVR
Acute Health Effects
Chronic Health Effects
Ultraviolet Radiation
Spectral Weighting
Protection Factors
Solar UVR
Factors Affecting Solar UVR
Occupational Exposures and Solar UVR
Personal Exposure to Ultraviolet Radiation
Protection against Solar UVR
Personal Protective Equipment
Sunscreens
Hats
Sunglasses
UN Convention on Wetlands (RAMSAR): Implications for Human Health
The Ramsar Convention as a Vehicle for Implementation of the Ecosystem Approach: Functioning and Efficiency of the Global N ...
Convergence/Divergence Between Targets of Wetland Conservation and Ecosystem Health
The Potential for Recharging of Aquifers
The Buffering Potential of Wetlands in Relation to Flooding
Wetlands as a Key Source of Food, Nutrition, and Medicine
UN Convention to Combat Desertification
The Definition of “Desertification” and the Background of the UN Convention to Combat Desertification
The Definition of Desertification
Achievements, Good Practices, and Experience to Combat Desertification
Implementation of UNCCD
Technical Measures and Experience to Combat Desertification
Prioritization of preventive measures for combating desertification
Combating desertification by afforestation and reafforestation
Establishment of desert–oasis protective shelterbelt systems
Establishment of protective shelterbelt systems in sandy land areas
Establishment of shelterbelt systems in sandy land areas
Function of vegetation protection project
The “3-circles”
Biological, mechanical, and chemical approaches to stabilize sands
Artificial earth dyke to stop and accumulate shifting sands
Ditches to accumulate shifting sands
Sand barriers for blocking sands
Mulching networks for controlling sands
Further Reading
Uranium: Environmental Pollution and Health Effects
Uranium in Humans
Pathway of Intake
Uranium Biokinetics in the Body
Health Effects
Acute Health Effects
High Level of Exposure
Alimentary tract
Uranium: Toxicity to Renal Cells and Osteoblasts
Mechanisms of Action
Renal Cells
Urban Environments and Health
Urban Environments and Health
Frameworks for Urban Environmental Health
Macro-level social processes
Urban Environments: Housing, Food Systems, and Transportation Systems
Housing
Food Systems and Food Security
Promoting urban health through equitable food environments
Transportation Systems
Promoting urban health through transportation environments
Urban Health Indicators: The Role of Data Disparities
Urban Transportation and Human Health
Inequalities and Social Justice
Loss of Alternative Uses of Street Space
Impact of Transportation and Other Policies on Health and Inequalities in Health
Measures to Reduce Traffic Speed: An Example of Policy Impacts on Health
Congestion
Traffic Reduction
Using Testate Amoebae Communities to Evaluate Environmental Stress: A Molecular Biology Perspective
Introduction
How to Indicate Environmental Health/Stress?
Vanadium: Environmental Pollution and Health Effects
The Presence of Vanadium in Living Organisms
Vanadium in Plants
Vanadium in Marine Organisms
Vanadium in Higher Vertebrates
Vanadium in Human Tissues
The Effects of Excessive Vanadium Exposure
Biochemical Effects
Vector Borne Disease and Climate Change
Possible Mechanisms of Climate Change and Vectorborne Diseases
Existing Evidence of Climate Sensitivity and Vectorborne Diseases
Tick-Borne Encephalitis
Ventilation
Consideration of Isolation Room Ventilation
Is Natural Ventilation an Option?
Hybrid Ventilation: Taking the Best of Both
War and Environmental Health: Chemical Warfare Agents
Exposure
Case Studies and Site-Specific Risk Assessment
Spring Valley (Washington DC)
White Sea
Beaufort’s Dyke
Waterborne Parasites in North African Environments
Prevalence, Transmission and Symptomatology
Protozoa
Occurrence of Parasites in Sewage and in Sludge
Tunisia
Algeria
Morocco
Water Environment Management in India
Water Use in India
Domestic Water Use
Water Use in Industries
Water Use for Hydropower Generation
Weather, Pollen Concentrations, and Allergic Rhinitis
Definition and Classification of Allergic Rhinitis
Allergenic Process
Effects of Climate Change and Air Pollution on Pollen Production and Public Health
Climate
Worldwide Regulatory Strategies and Policies for Drinking Water
Key Principles
Regulations Should Regulate Drinking Water from Catchment to Consumer, Using Multiple Barriers
Roles and Responsibilities of Stakeholders Should Be Clearly Delineated within Regulations
Zika Virus: A Compendium of the State of Knowledge
Control
Biological Control
Chemical Control
Zinc Toxicity in Humans
Acute Health Effects
Chronic and Subchronic Toxicity
Gastrointestinal Toxicity
Hemotoxicity
Diabetes
Epigenetic De-Regulation
Zinc—The Dark Horse of the Brain
Multiple Sclerosis
Vascular Dementia
Alzheimer’s Disease
Prion Disease
INDEX
AUTHOR INDEX

ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
SECOND EDITION

ENCYCLOPEDIA OF ENVIRONMENTAL HEALTH
SECOND EDITION

EDITOR-IN-CHIEF

Jerome Nriagu University of Michigan, School of Public Health, Ann Arbor, Michigan, United States

VOLUME 1

Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge MA 02139, United States

Copyright © 2020 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN 978-0-444-63951-6

For information on all publications visit our website at http://store.elsevier.com

Publisher: Oliver Walter
Acquisition Editor: Priscilla Braglia
Content Project Manager: Michael Nicholls
Associate Content Project Manager: Greetal Carolyn
Designer: Greg Harris

EDITORIAL BOARD

EDITOR-IN-CHIEF
Jerome Nriagu
University of Michigan, School of Public Health, Ann Arbor, Michigan, United States

SECTION EDITORS

GUIDELINES AND REGULATIONS
Choon Nam Ong
Director, NUS Environmental Research Institute; Professor, School of Public Health, National University of Singapore, Singapore, Republic of Singapore

DISPARITIES AND SOCIAL DETERMINANTS OF ENVIRONMENTAL HEALTH
Denise M. Rennie
Associate Dean Academic, School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom

WATER QUALITY AND QUANTITY
Hongliang Liu
Tianjin Municipal Bureau of Health Supervision, Heping District, Tianjin, People’s Republic of China

ENVIRONMENTAL EPIDEMIOLOGY
Jaymie R. Meliker
Professor, Program in Public Health, Department of Family, Population, & Preventive Medicine, Stony Brook University (SUNY), Stony Brook, New York, United States

METALS/METALLOIDS: EXPOSURE AND HEALTH EFFECTS
Jerome Nriagu
University of Michigan, School of Public Health, Ann Arbor, Michigan, United States

GLOBAL ENVIRONMENTAL HEALTH
Melissa Slotnick
Ann Arbor, Michigan, United States

AIR POLLUTION AND HUMAN HEALTH
Nicola Pirrone
Research Director, CNR - Institute of Atmospheric Pollution Research, Rende, Italy

POLLUTION SOURCES AND HUMAN HEALTH
Oladele Ogunseitan
Presidential Chair Professor, Department of Population Health and Disease Prevention, Program in Public Health, University of California, Irvine, Irvine, California, United States

COUNTRY AND AREA SPECIFIC ENVIRONMENTAL HEALTH ISSUES
Orish E. Orisakwe
Professor, African Centre of Excellence for Public Health and Toxicological Research, University of Port Harcourt, East-West Road, Choba, Rivers State, Nigeria

ENVIRONMENTAL MICROBIOLOGY
Panagiotis Karanis
State Key Laboratory of Plateau Ecology and Agriculture, Qinghai University; Director of the Center for Biomedicine and Infectious Diseases (CBID), Xining City, Qinghai Province, People’s Republic of China; and University of Cologne, Medical Faculty and University Hospital, Cologne, Germany

COST-BENEFIT ANALYSIS OF ENVIRONMENTAL HEALTH
Till M. Bachmann
European Institute for Energy Research, Karlsruhe, Germany

NOISE POLLUTION: EXPOSURE AND HEALTH EFFECTS
Toshihiro Kawamoto
Professor Emeritus, Department of Environmental Health, University of Occupational and Environmental Health, Yahatanishi-ku, Kitakyushu, Japan

CONTRIBUTORS TO VOLUME 1 Shahira A Ahmed Suez Canal University, Ismailia, Egypt A Åkesson Karolinska Institutet, Stockholm, Sweden; and Chaney Environmental, Beltsville, MD, United States EC Alexopoulos Hellenic Open University, Patras, Greece CF Amábile-Cuevas Fundación Lusara, México D.F., Mexico Laborde Amalia Unidad Pediátrica Ambiental (UPA), Montevideo, Uruguay; Poison Control Center (CIAT), Montevideo, Uruguay; and Toxicology Department, Faculty of Medicine, University of the Republic, Montevideo, Uruguay Heidi Amlund National Institute of Nutrition and Seafood Research (NIFES), Bergen, Norway R Andreoli University of Parma, Parma, Italy DA Axelrad US Environmental Protection Agency, Washington, DC, United States Lesa L Aylward Summit Toxicology, Falls Church, VA, United States; and University of Queensland, Brisbane, QLD, Australia W Babisch Federal Environment Agency, Berlin, Germany S-O Baek Yeungnam University, Gyeongsan, South Korea

S Barone, Jr US Environmental Protection Agency, Washington, DC, United States D Belpomme Association for Research and Treatments Against Cancer (ARTAC), Paris, France; European Cancer and Environment Research Institute (ECERI), Brussels, Belgium; and Paris V University Hospital, Paris, France Aurelian Bidulescu Indiana University Bloomington School of Public Health, Bloomington, IN, United States Françoise G Bourrouilh-Le Jan Agrégée de l’Université, Docteur ès Sciences d’État, Maître de Conférence h.c., Talence, France P Brimblecombe University of East Anglia, Norwich, United Kingdom AL Bronzaft Professor Emerita, City University of New York, New York, NY, United States M-N Bruné World Health Organization, Geneva, Switzerland E Calva Universidad Nacional Autónoma de México, Cuernavaca, Mexico C Carlarne University of South Carolina, Columbia, SC, United States Dipankar Chakrabarti Jadavpur University, Kolkata, India

EB Bakeas University of Athens, Athens, Greece

Rufus L Chaney Karolinska Institutet, Stockholm, Sweden; and Chaney Environmental, Beltsville, MD, United States

Micha Barchana University of Haifa, Haifa, Israel; and Ministry of Health, Jerusalem, Israel

Chi-Hsien Chen National Taiwan University (NTU) College of Medicine and NTU Hospital, Taipei, Taiwan

Claudio Cocheo Fondazione Salvatore Maugeri IRCCS, Padova, Italy

L Erdinger University of Heidelberg, Heidelberg, Germany

BS Cohen New York University School of Medicine, New York, NY, United States

Gary W Evans Cornell University, Ithaca, NY, United States

RD Cohn SRA International, Inc., Durham, NC, United States S Conzen University of Chicago, Chicago, IL, United States Simonetta Corsolini University of Siena, Siena, Italy Gaurav G Dastane Institute of Chemical Technology, Matunga, Mumbai, India MH Depledge Peninsula College of Medicine and Dentistry, Plymouth, United Kingdom Ketan S Desai Institute of Chemical Technology, Matunga, Mumbai, India Ningombam L Devi Central University of South Bihar, Patna, India Sarjerao B Doltade Institute of Chemical Technology, Matunga, Mumbai, India Jonathan Dubnov Ministry of Health, Haifa, Israel; and University of Haifa, Haifa, Israel K Ebi University of Washington, Seattle, WA, United States Ingrid Eckerman Swedish Doctors for the Environment (LfM), Stockholm, Sweden P Eckl University of Salzburg, Salzburg, Austria WM Edmunds Oxford University Centre for the Environment, Oxford, United Kingdom Jessie K Edwards University of North Carolina at Chapel Hill, Chapel Hill, NC, United States Fumio Eguchi Tokyo University of Agriculture, Tokyo, Japan A El-Gammal Mercy University Hospital, Cork, Ireland

Despo Fatta-Kassinos University of Cyprus, Nicosia, Cyprus Stacey A Fedewa American Cancer Society, Surveillance Research Department, Atlanta, GA, United States Kim T Ferguson Sarah Lawrence College, Bronxville, NY, United States A Fino National Research CouncildInstitute of Atmospheric Pollution Research (CNR-IIA), Monterotondo, Roma, Italy B Foos Office of Children’s Health Protection, Washington, DC, United States P-G Forkert Queen’s University, Kingston, ON, Canada Hermann Fromme Bavarian Health and Food Safety Authority, Munich, Germany L García-García Instituto Nacional de Salud Pública, Morelos, Mexico M Gauthier-Clerc Centre de Recherche de la Tour du Valat, Arles, France; and Université de Franche-Comté, Besançon, France T Geurden Ghent University, Merelbeke, Belgium LR Goldman Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States M Goldoni University of Parma, Parma, Italy Teufik Goletic Veterinary Faculty of the University of Sarajevo, Sarajevo, Bosnia and Herzegovina F Gore World Health Organization, Geneva, Switzerland J Grigg Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom

Yue L Guo National Taiwan University (NTU) College of Medicine and NTU Hospital, Taipei, Taiwan

Panagiotis Karanis Qinghai University, Xining, Qinghai, P.R. China; and University of Cologne, Cologne, Germany

VC Hammen Helmholtz Centre for Environmental Research-UFZ, Halle (Saale), Germany

Shankar B Kausley Institute of Chemical Technology, Matunga, Mumbai, India; and TCS Research, TRDDC, Pune, Maharashtra, India

Lijian Han State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing, China

Alexander P Keil University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

EJ Hanford Reno, NV, United States

M Korkmaz Celal Bayar University, Manisa, Turkey

C Hertzman Human Early Learning Partnership, The University of British Columbia, Vancouver, BC, Canada

Jyoti K Kumar Institute of Chemical Technology, Matunga, Mumbai, India

Mike Holland EMRC, Reading, United Kingdom

JC Lambert United States Environmental Protection Agency, Cincinnati, OH, United States

H Hollert RWTH Aachen University, Aachen, Germany MJ Hooth National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States Steven R Horbal Indiana University Bloomington School of Public Health, Bloomington, IN, United States Adnan A Hyder Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States Lida Ioannou-Ttofa University of Cyprus, Nicosia, Cyprus P Irigaray Association for Research and Treatments Against Cancer (ARTAC), Paris, France; and European Cancer and Environment Research Institute (ECERI), Brussels, Belgium Wan-Kuen Jo Kyungpook National University, Daegu, South Korea Moll Maria José Unidad Pediátrica Ambiental (UPA), Montevideo, Uruguay; Red de Atencion Primaria (RAP), Montevideo, Uruguay; and State Health Service Administration (ASSE), Montevideo, Uruguay Hillary Jufer Pace University, Pleasantville, NY, United States Arianne V Julian Central Luzon State University, Muñoz, Philippines

Peter Lercher Medical University Innsbruck, Innsbruck, Austria BS Levy Tufts University School of Medicine, Boston, MA, United States Hongyan Li The University of Hong Kong, Hong Kong, People’s Republic of China JC Lipscomb United States Environmental Protection Agency, Cincinnati, OH, United States Shuchang Liu Tsinghua University, Beijing, China Emmanuil E Malandrakis University of Thessaly, Volos, Greece V Meineke Bundeswehr Institute of Radiobiology, University of Ulm, Munich, Germany A Melhem University of Chicago, Chicago, IL, United States RL Melnick National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC, United States Marisela Méndez-Armenta Instituto Nacional de Neurología y Neurocirugía, Mexico, Mexico

P Mendola Eunice Kennedy Shriver National Institute of Child Health and Human Development, Rockville, MD, United States Jerry R Miller Western Carolina University, Cullowhee, NC, United States ZA Mohamed Department of Botany and Microbiology, Faculty of Science, Sohag University, Sohag, Egypt Elmer-Rico E Mojica Pace University, New York, NY, United States J Moya US Environmental Protection Agency, National Center for Environmental Assessment, Washington, DC, United States A Mutti University of Parma, Parma, Italy K Ndebele Jackson State University, Jackson, MS, United States

CJC Phillips University of Queensland, Gatton, QLD, Australia D Poli Italian Workers’ Compensation Authority (INAIL), Research Center at the University of Parma, Parma, Italy Boris A Portnov University of Haifa, Haifa, Israel SH Prankel University of Worcester, Worcester, United Kingdom J Pronczuk World Health Organization, Geneva, Switzerland Prasanthi Puvanachandra Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States Mohammad Mahmudur Rahman The University of Newcastle, Callaghan, NSW, Australia

Anastasia Nikolaou University of the Aegean, Mytilene, Greece

Md Harunur Rashid The University of Newcastle, Callaghan, NSW, Australia

Curtis W Noonan The University of Montana, Missoula, MT, United States

P Ravenscroft Entec UK Ltd, Cambridge, United Kingdom

Jerome Nriagu University of Michigan, Ann Arbor, MI, United States

Lauren Reilly Pace University, New York, NY, United States

TM O’Connor Mercy University Hospital, Cork, Ireland

F Renaud Ecologie, Génétique, Evolution et Contrôle (MIVEGEC), Montpellier, France

R O’Handley School of Animal and Veterinary Sciences, The University of Adelaide, SA, Australia

Renato G Reyes Central Luzon State University, Muñoz, Philippines

Jasmin Omeragic Veterinary Faculty of the University of Sarajevo, Sarajevo, Bosnia and Herzegovina

GE Rice United States Environmental Protection Agency, Cincinnati, OH, United States

AR Osornio-Vargas Department of Pediatrics, University of Alberta, Edmonton, AB, Canada

A Riecke Bundeswehr Institute of Radiobiology, University of Ulm, Munich, Germany

Panagiota Panagiotaki University of Thessaly, Volos, Greece

Camilo Rios Instituto Nacional de Neurología y Neurocirugía, Mexico, Mexico

Aniruddha B Pandit Institute of Chemical Technology, Matunga, Mumbai, India Jean C Pfau Montana State University, Bozeman, MT, United States E Phelan Mercy University Hospital, Cork, Ireland

CA Robledo University of Texas Rio Grande Valley, Harlingen, TX, United States I Rosas Universidad Nacional Autónoma de México, México D.F., Mexico

H-A Rother University of Cape Town, Cape Town, South Africa CG Ruf Bundeswehr Institute of Radiobiology, University of Ulm, Munich, Germany

Hongzhe Sun The University of Hong Kong, Hong Kong, People’s Republic of China K Takeda Tokyo University of Science, Chiba, Japan

Paolo Sacco Fondazione Salvatore Maugeri IRCCS, Padova, Italy

PB Tchounwou Jackson State University, Jackson, MS, United States

B Schoket National Institute of Environmental Health, Budapest, Hungary

LK Teuschler United States Environmental Protection Agency, Cincinnati, OH, United States

Veronika Sele National Institute of Nutrition and Seafood Research (NIFES), Bergen, Norway

F Thomas Ecologie, Génétique, Evolution et Contrôle (MIVEGEC), Montpellier, France

SG Selevan Consultant, Silver Spring, MD, United States

S Tong Shanghai Children’s Medical Center, Shanghai Jiao Tong University, Shanghai, China; Anhui Medical University, Hefei, China; and Queensland University of Technology, Brisbane, QLD, Australia

J Settele Helmholtz Centre for Environmental Research-UFZ, Halle (Saale), Germany Arun K Shanker Indian Council of Agricultural Research (ICAR), Hyderabad, India DG Shendell UMDNJ-School of Public Health, Piscataway, NJ, United States VW Sidel Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, NY, United States; and Weill Cornell Medical College, New York, NY, United States J Sidhu CSIRO Land and Water, Queensland Bioscience Precinct, Brisbane, QLD, Australia Sushant K Singh Virtusa Corporation, Irvington, NJ, United States of America Sukesh Narayan Sinha National Institute of Nutrition (ICMR), Hyderabad, India

S Toze CSIRO Land and Water, Queensland Bioscience Precinct, Brisbane, QLD, Australia Andrew Turner University of Plymouth, Plymouth, United Kingdom M Umezawa Tokyo University of Science, Chiba, Japan JL Valdespino Laboratorios de Biológicos y Reactivos de México, Secretaria de Salud, Distrito Federal, Mexico Cristina M Villanueva ISGlobal - Barcelona Institute for Global Health, Barcelona, Spain; Pompeu Frabra University, Barcelona, Spain; CIBER Epidemiology and Public Health, Madrid, Spain; and IMIM (Hospital del Mar Medical Research Institute), Barcelona, Spain Lionel F Villarroel Western Carolina University, Cullowhee, NC, United States

M Sliwinska-Kowalska Nofer Institute of Occupational Medicine, Medical University of Lodz, Lodz, Poland

M Vittecoq Centre de Recherche de la Tour du Valat, Arles, France; and Ecologie, Génétique, Evolution et Contrôle (MIVEGEC), Montpellier, France

Jens J Sloth National Institute of Nutrition and Seafood Research (NIFES), Bergen, Norway; and Technical University of Denmark, Kongens Lyngby, Denmark

JR Walton University of New South Wales, Sydney, NSW, Australia

Alexander D Stein-Alexandrescu San Diego, CA, United States

Runming Wang The University of Hong Kong, Hong Kong, People’s Republic of China

Shuxiao Wang Tsinghua University, Beijing, China Ellen M Wells Purdue University, West Lafayette, IN, United States

Ishwar C Yadav Tokyo University of Agriculture and Technology (TUAT), Fuchu-Shi, Tokyo, Japan Laura Zaratin Fondazione Salvatore Maugeri IRCCS, Padova, Italy

TJ Woodruff University of California, San Francisco, San Francisco, CA, United States

FB Zhan Texas State University, San Marcos, TX, United States

JM Wright United States Environmental Protection Agency, Cincinnati, OH, United States

Weiqi Zhou State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing, China

GUIDE TO USE THE ENCYCLOPEDIA

Structure of the Encyclopedia
The material in the Encyclopedia is arranged as a series of articles in alphabetical order. There are four features to help you easily find the topic you are interested in: an alphabetical contents list, a subject classification index, cross-references, and a full subject index.

Alphabetical Contents List
The alphabetical contents list, which appears at the front of each volume, lists the entries in the order that they appear in the Encyclopedia. It includes the page number of each entry.

Subject Classification Index
This index appears at the start of each volume and groups entries under subject headings that reflect the broad themes of Environmental Health. This index is useful for making quick connections between entries in different volumes and locating the relevant entry for a topic that is covered in more than one article.

Cross-References
All of the entries in the Encyclopedia have been extensively cross-referenced. The cross-references, which appear at the end of each entry, serve three functions:
i. To indicate if a topic is discussed in greater detail elsewhere.
ii. To draw the readers’ attention to parallel discussions in other entries.
iii. To indicate material that broadens the discussion.

Example
The following list of cross-references appears at the end of the entry “Application of Healthy Life Year Measures to Environmental Health Issues”:

See Also: Composite Measures of the Environmental Burden of Disease at the Global Level; Global Burden of Disease (GBD) Approach and the Use of Disability Adjusted Life Years (DALY) at the World Health Organization (WHO); Quality of Life and Environmental Health Assessment; Use of Years of Potential Life Lost (YPLL) for Risk Assessment at Hazardous Waste Sites.

Index
The index includes page numbers for quick reference to the information you are looking for. The index entries differentiate between references to a whole entry, a part of an entry, or a table or figure.

Contributors
At the start of each volume there is a list of the authors who contributed to the relevant volume of the Encyclopedia.

SUBJECT CLASSIFICATION (THEMATIC TABLE OF CONTENTS)

Air Pollution and Human Health
Air Pollution and Development of Children’s Pulmonary Function
Air Pollution and Lung Cancer Risks
Air Pollution From Solid Fuels
Assessment of Human Exposure to Air Pollution
Community Outdoor Air Quality: Sources, Exposure Agents, and Health Outcomes
Complex Air Pollution in China
Cyclic Volatile Methylsiloxanes: Occurrence and Exposure
Hazardous (Organic) Air Pollutants
Industrial Livestock Production Facilities: Airborne Emissions
Intercontinental Air Pollution Transport: Links to Environmental Health
Long-Range Transport and Deposition of Air Pollution
Long-Term Effects of Particulate Air Pollution on Human Health
Measurement of Air Pollutants
Mercury Emissions at Regional and Global Scale
Mercury in Air
Mobile Source Related Air Pollution: Effects on Health and the Environment
Mutagenicity of PM2.5
PM2.5 Sources and Their Effects on Human Health in China: Case Report
Pollen Allergens
Residential and Nonresidential Biomass Combustion: Impacts on Air Quality
Respiratory Effects of Short-Term Peak Exposures to Sulfur Dioxide
Short-Term Effects of Air Pollution on Health
Short-Term Effects of Particulate Air Pollution on Human Health
Sulfur Oxides

Assessment of Exposure to Environmental Risks (Methodologies)
Advances in Analytical Methods for the Determination of Pharmaceutical Residues in Waters and Wastewaters
Application of Healthy Life Year Measures to Environmental Health Issues
Bias in Environmental Epidemiology
Biomarkers in Environmental Carcinogenesis
Biomarkers of Environmental Exposures in Blood
Cumulative Environmental Risk
Dermal Exposures
Environmental Epidemiology
Environmental Health Tracking
Environmental Specimen Bank for Human Tissues
The Exposome: An Approach Toward a Comprehensive Study of Exposures in Disease
Exposure Guidelines and Radon Policy
Exposure Modeling and Measurement: Exposure Factors
Exposure Reconstruction Using Space-Time Information Technology
Exposure Science: Contaminant Mixtures
Exposure Science: Ingestion
Exposure Science: Monitoring Environmental Contaminants

Exposure Science: Pharmacokinetic Modeling
Exposure Science: Routes of Exposure - Inhalation
Frequency and Timing of Environmental Exposure
Global Measures of the Environmental Burden of Disease
Hair Elements for Biomonitoring of Human Exposure, Effects, and Health Hazards
How Hormesis Will Change the Risk Assessment Process
Methodologies for Assessing Bioaerosol Exposures
Methods for Estimating Exposure to Metals in Drinking Water
Pets as Sentinels of Human Exposure
Pharmacokinetic and Pharmacodynamic Considerations in Children’s Human Health Risk Assessment
Physiologically Based Pharmacokinetic Modeling and Risk Assessment
Physiologically Based Pharmacokinetic Modeling for Exposure and Risk Assessment
Toenails for Biomonitoring of Environmental Exposures
Tooth Biomarkers in Environmental Health Research
Toxicological Pathways of Relevance to Medical Geology
Waterborne Disease Surveillance

Children’s Health and Prenatal Exposures
Children’s Environmental Health: General Overview
Children’s Environmental Health in Developing Countries
Children’s Exposure to Environmental Agents
Children’s Health Risk Assessment: Issues and Approaches
Critical Windows of Children’s Development and Susceptibility to Environmental Toxins
Developmental and Reproductive Toxicity of TCDD, Lead, and Mercury
Developmental Immunotoxicants
Effect of Early Exposure on Reproductive Outcomes
Environmental Agents and Childhood Cancer
Environmental Chemicals in Breast Milk
Environmental Factors in Children’s Asthma and Respiratory Effects
Evidence for Endocrine Disruption in Children: Sensitive Developmental Endpoints
Issues and Challenges for Determining Environmental Risk Factors and Causes of Disease Among Children
Japan Environment and Children’s Study: Concept, Protocol, and Current Status
Malformations of Cortical Development and Epilepsy in Children
Mercury and Children Health
Pharmacokinetic and Pharmacodynamic Considerations in Children’s Human Health Risk Assessment
Prenatal Exposure to Industrial Chemicals and Pesticides and Effects on Neurodevelopment
Prenatal Exposure to Polycyclic Aromatic Hydrocarbons
Reproductive Effects of Oil-Related Environmental Pollutants

Climate Change and Human Health
Climate Change, Environmental Health, and Human Rights
Climate Change: Health Risks and Adaptive Strategies
Global Climate Changes and International Trade and Travel: Effects on Human Health Outcomes
Heat Wave and Mortality of the Elderly
Protozoan Waterborne Infections in the Context of the Actual Climatic Changes and Extreme Weather Events
Thermal Stress
Vector-Borne Disease and Climate Change
Weather, Pollen Concentrations, and Allergic Rhinitis

Country- and Area-Specific Environmental Health Issues
An Ecological Disaster Zone with Impact on Human Health: Aral Sea
Bahama Archipelago: Environment and Health
Briefing on Children Environmental Health in Uruguay
Bolivia: Mining, River Contamination, and Human Health
Cyanotoxins in Egypt and Saudi Arabia
Diabetes Mellitus in Albania: A Twofold Increase in the Last Decade
Environmental Conditions in the Estuarine Coast of Montevideo (Uruguay): Historical Aspects and Present Status: An Update
Environmental Health and Leishmaniasis by Indication on Afghanistan: A Review
Environmental Health Concerns in Cameroon
The Environmental Health of Children of Migrant Workers – An Example From China
Ghana: Ecology, Politics, Society, and Environmental Health
Human Tick-Borne Diseases in Southern Europe: Present Status and Future Challenges
Indonesia: Threats to the Country’s Biodiversity
Kuwait: Before and After the Gulf War
Lebanon: Health Valuation of Water Pollution at the Upper Litani River Basin
Leishmaniasis and Environment in Argentina: An Ecoepidemiological Approach
Malaria, Bilharzia, and Geohelminth Transmission in Kenya: Environmental Determinants
Malaysia: Environmental Health Issues
Mexican Epidemiological Paradox: A Developing Country with a Burden of “Richness” Diseases: An Update
Mozambique: Environment and Health in One of the World’s Poorest Nations
Mycotoxins and Public Health in Africa
Nigeria: Environmental Health Concerns
The Occurrence and Potential Health Risk of Microcystins in Drinking Water of Rural Areas in China
Oil Industry and the Health of Communities in the Niger Delta of Nigeria
PM2.5 Sources and Their Effects on Human Health in China: Case Report
Sandflies and Sandfly-Borne Zoonotic Infections in Greece
Sierra Leone: Environmental Health Concerns
Spain: Natural Hazards in the Country
Taiwan: Environmental Health Concerns
Tunisia: Water Resource Management and Sustainability of Agriculture
Waterborne Parasites in North African Environments
Water Environment Management in India

Dietary Exposures and Food Quality
Arsenic Exposure From Seafood Consumption
Bisphenol A
Blastocystis spp., Ubiquitous Parasites of Human, Animals, and Environment
Diet as a Healthy and Cost-Effective Instrument in Environmental Protection
Effects of Cooking on Dietary Exposure to Arsenic From Rice and Vegetables: Human Health Risks
Environmental Reservoirs of Antimicrobial Resistance of Foodborne Pathogens
Epigenetic Changes Induced by Environment and Diet in Cancer
Food Safety and Risk Analysis
Furfuryl Alcohol – A Food Additive
Mycotoxins in Foods
Mycotoxins in the Food Chain and Human Health Implications
Nutrition and Cancer – An Update on the Roles of Dietary Factors in the Etiology, Progression, and Management of Cancer
Pyridine
Vinylidene Chloride – Used to Produce Flexible Films for Food Packaging

Disinfection By-products
Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies
Carcinogenicity of Disinfection Byproducts in Laboratory Animals
Empirical Models to Predict Disinfection By-Products (DBPs) in Drinking Water: An Updated Review
Genotoxicity of Disinfection By-Products: Comparison to Carcinogenicity
Respiratory Effects of Chlorination Products
Risks of Disinfection Byproducts in Drinking Water: Comparative Mammalian Cell Cytotoxicity and Genotoxicity

Disparities and Social Determinants of Environmental Health
Climate Change, Environmental Health, and Human Rights
Community Environmental and Health Needs: Novel Approaches and Methods
The Definition of Refugees and Health Issues Related to Refugee Influx in Europe
The Environmental Health of Children of Migrant Workers – An Example From China
Environmental Justice and Interventions to Prevent Environmental Injustice in the United States
Environmental Justice: An International Perspective
Environmental Justice: An Overview
Gender Differences in Cancer Incidence
Health Impacts of Energy Poverty and Cold Indoor Temperature
Life Course Epidemiology and Inequality in Health
Maternal and Child Health Disparities: Environmental Contribution
Neighborhood Risk and Infant Health
Neighborhood “Social Infrastructure” for Health
Perceptions and Physiological Responses to Indoor Air Quality
Political and Social Violence: Health Effects
Social Coherence and Social Structure and Health
Social Environment: Overview
Social Support and Social Networks

Ecosystem Services and Human Health (Ecohealth)
Air Transportation and Human Health
Biodiversity and the Loss of Biodiversity Affecting Human Health
Biological Pathways between the Social Environment and Health
Carbon Farming
Desertification
Disabling Environments
Disentangling Physical, Chemical, Nutritional, and Social Environmental Influences on Asthma Disparities: The Promise of the Exposome
Eco Health: Stratospheric Ozone
Entomological Risks of Genetically Engineered Crops
Environmental Health, Planetary Boundaries, and Limits to Growth
Floriculture
Forest Transition and Zoonoses Risk
Global Climate Changes and International Trade and Travel: Effects on Human Health Outcomes
Human Health and the State of the Pedosphere
Human Health Link to Invasive Species
The Impact of Environmental and Anthropogenic Factors on the Transmission Dynamics of Vector-Borne Diseases
Land Quality: Environmental and Human Health Effects
Landscape Epidemiology of Human Onchocerciasis in Southern Venezuela
Medical Anthropology
Mollusc Bivalves as Indicators of Contamination of Water Bodies by Protozoan Parasites
Oceans and Human Health
Overview of How Ecosystem Changes Can Affect Human Health
Parasite Zoonoses
Plants as a Tool for the Environmental Health Assessment
Principles of Medical Geology
Psychobiological Factors in Environmental Health
Small and Medium Enterprises: Barriers and Drivers of Managing Environmental and Occupational Health Risks
Small-Scale Industries and Informal Sector Activity Premises: Environmental and Occupational Health Issues
Sustainable Management of Agricultural Systems: Physical and Biological Aspects of Soil Health
UN Convention on Wetlands (RAMSAR): Implications for Human Health
UN Convention to Combat Desertification
Using Testate Amoebae Communities to Evaluate Environmental Stress: A Molecular Biology Perspective

Electromagnetic Fields
Electromagnetic Fields: Environmental Exposure
Electromagnetic Fields From Mobile Phones and Their Base Stations: Health Effects
Low-Frequency Magnetic Fields: Potential Environmental Health Impacts
Radiofrequency Electromagnetic Fields: Health Effects

Emerging Environmental Contaminants
Nanomaterials in the Environment and their Health Effects
Nanomaterials: Potential Ecological Uses and Effects
Nanosilver: Environmental Health Effects
Perfluorooctanoic Acid – A Water and Oil Repellent

Environmental Cancers
Air Pollution and Lung Cancer Risks
Benzene: Environmental Exposure
Biomarkers in Environmental Carcinogenesis
Cancer and the Environment: Mechanisms of Environmental Carcinogenesis
Cancer Risk Assessment and Communication
Connecting Environmental Stress to Cancer Cell Biology through the Neuroendocrine Response
Environmental Agents and Childhood Cancer
Environmental Carcinogens and Regulation
Environmental Lung Cancer Epidemiology
Epigenetic Changes Induced by Environment and Diet in Cancer
Erionite Series Minerals: Mineralogic and Carcinogenetic Properties
Gene–Environment Interactions and Childhood Cancer
History of the Dose–Response
Human Health Implications of Personal Care Products: Breast Cancer and Other Breast-Related Diseases
Indoor Air Pollution Attributed to Solid Fuel Use for Heating and Cooking and Cancer Risk
Mutagenicity of PM2.5
Nutrition and Cancer – An Update on the Roles of Dietary Factors in the Etiology, Progression, and Management of Cancer
Pesticide Exposure and Human Cancer

Environmental Health Economics (Cost–Benefit Analysis)
Cost–Benefit Analysis and Air Quality–Related Health Impacts: A European Perspective
Decision-Making Under Uncertainty: Trade-Offs between Environmental Health and Other Risks
Economic Analysis of Health Impacts in Developing Countries
Economic Valuation of Health Impacts in Cost–Benefit Analyses of Infrastructure Projects in Europe
Estimating Environmental Health Costs: Monetary Valuation of Greenhouse Gases
Estimating Environmental Health Costs: Valuation of Children’s Health Impacts
General Introduction to Valuation of Human Health Risks
Monetary Valuation of Health Impacts from Noise
Monetary Valuation of Trace Pollutants
Optimal Pollution: The Welfare Economic Approach to Correct Related Market Failures
Social Cost–Benefit Analysis of Air Pollution Control Measures at Industrial Point Emission Sources: Methodological Overview and Guidance for the Assessment of Health-Related Damage Costs

Environmental Health Emergencies (Disasters)
1976 Trichlorophenol Reactor Explosion at Seveso, Italy
Bhopal Gas Catastrophe 1984: Causes and Consequences
Environmental Health and Bioterrorism
Floods as Human Health Risks
Fukushima Nuclear Disaster – Emergency Response to the Disaster
Fukushima Nuclear Disaster – Monitoring and Risk Assessment
Fukushima Nuclear Disaster: Multidimensional Psychosocial Issues and Challenges to Overcome Them
Human Health Effects of Landslides
Immunological Effects of the Chernobyl Accident
Natural Disasters – Environmental Health Preparedness
Natural Disasters – Mental Health Impacts
Oil and Chemical Spills
Other Environmental Health Issues: Oil Spill
Radiation Exposures due to the Chernobyl Accident
Road Traffic Injuries
Thyroid Cancer Associated with the Chernobyl Accident
Volcanic and Geothermal Processes: Health Effects
Volcanoes and Human Health
Volcanogenic Contaminants: Chronic Exposure

Environmental Health Management
Diet as a Healthy and Cost-Effective Instrument in Environmental Protection
Management and Export of Wastes: Human Health Implications
Noise Management: International Regulations
Risk Management in Environmental Health Decision
Sustainable Management of Agricultural Systems: Physical and Biological Aspects of Soil Health
Water Environment Management in India

Environmental Influence on Communicable Diseases
Biological Agents and Infectious Diseases in War and Terrorism
Biomass Smoke and Infection: Mechanisms of Interaction
Chagas Disease: Environmental Risk Factors
Cholera: Environmental Risk Factors
Cryptosporidiosis: An Update
Dampness and Mold Hypersensitivity Syndrome as an Umbrella for Many Chronic Diseases
Environmental Health Impacts on Ascariasis Infections by Indication of Afghanistan: A Review
Epidemiology of Infectious Diarrhea
Free-Living Amoebae and Their Multiple Impacts on Environmental Health
Hantavirus
Infectious Processes and Medical Geology
Landscape Epidemiology of Human Onchocerciasis in Southern Venezuela
Legionnaires’ Disease: Environmental Risk Factors
Leishmaniases
Lyme Disease
Lymphocystis Disease Virus in Aquatic Environment
Malaria as an Environmental Disease
Parasite Zoonoses
Past, Present, and Future of Malaria Prevalence and Eradication in the Light of Climate Change
Protozoan Waterborne Infections in the Context of the Actual Climatic Changes and Extreme Weather Events
Severe Acute Respiratory Syndrome
Schistosomiasis
Shigellosis
Tuberculosis
Tuberculosis: Epidemiology and Global Impact, Including Extrapulmonary Tuberculosis Manifestation With Emphasis on Skeletal Tuberculosis and a Rare Example of Shoulder Tuberculosis From Tibetan Plateau Area
Vermamoeba vermiformis – Global Trend and Future Perspective
West Nile Virus
Zika Virus: A Compendium of the State of Knowledge

Ethics in Environmental Health Research and Practice
Environmental Health Ethics in the Study of Children
Ethics in Pediatric Environmental Health Research
Objectivity and Ethics in Environmental Health Science
War and Environmental Health: Chemical Warfare Agents

Gene–Environment Interactions
Biomarkers of Disease and Genetic Susceptibility
Developmental Programming and the Epigenome
Entomological Risks of Genetically Engineered Crops
Epigenetic Changes Induced by Environment and Diet in Cancer
Epigenetic Effects of Nanomaterials
Epigenetics of Environmental Exposures
Fish and Genes: From Marine Ecology to Applied Hydrobiology and Beyond
Functional Genomics and Molecular Analysis of a Subtropical Harmful Algal Bloom Species
Gene–Environment Interactions and Childhood Cancer
Gene–Environment Interactions in Neurodegenerative Diseases
Genetically Modified Plants: Risks to Environment
Genetics is Involved in Everything, but not Everything is Genetic
Genome-Based Drug Design
Genome Effects and Mutational Risk of Radiation
Labeling of Genetically Modified Foods
Polymorphism and Gene–Environment Interactions in Environmental Cancer
Stress Axis as the Locus of Gene–Environment Interactions in Major Depressive Disorder

Globalization and Environmental Health
Asthma: Environmental and Occupational Risk Factors
Avian Influenza Viruses
Global Development and Population Health
Globalization and Environmental Health
Global Life Cycle Impacts of Consumer Products
Global Measures of the Environmental Burden of Disease
Health Issues of Travelers
International Environmental Agreements and Environmental Health
Trade, Trade Agreements, and the Environment

Guidelines and Regulations
Air Quality Legislation
Boron: Environmental Exposure and Human Health
Environmental Carcinogens and Regulation
Gallium: Environmental Pollution and Health Effects
Genetically Modified Organisms
Germanium: Environmental Pollution and Health Effects
Health, Exposure, and Regulatory Implications of Nitrate and Nitrite in Drinking Water
International Environmental Agreements and Environmental Health
Iodine in the Environment
Noise Management: International Regulations
Noise Management: Soundscape Approach
Palladium: Exposure, Uses, and Human Health Effects
Platinum: Environmental Pollution and Health Effects
Policy Responses to Climate Change
Soil Quality Criteria for Environmental Pollutants
Thallium: Environmental Pollution and Health Effects
Trade, Trade Agreements, and the Environment
Uranium: Environmental Pollution and Health Effects
Vanadium: Environmental Pollution and Health Effects
Worldwide Regulatory Strategies and Policies for Drinking Water

Historical Aspects
Environmental Health: An Overview on the Evolution of the Concept and Its Definitions
Environmental Pollution and Human Health in Ancient Times
History of the Dose–Response
Occupational Cancer: Modern History

Household Products: Health Risks
1,2-Dichloropropane – A Paint Stripper and Dry-Cleaning Component
1,3-Propane Sultone – A Common Additive for Detergents and Emulsifiers
1-tert-Butoxypropan-2-ol – A Component of Commercial Cleaner Formulations
Beta-Myrcene – A Fragrance and Flavoring Agent
Bisphenol A
Dichloromethane – A Paint Stripper and Plastic Welding Adhesive
Housing-Related Health Hazards: Assessment and Remediation
Perfluorinated Substances
Perfluorooctanoic Acid – A Water and Oil Repellent
Phthalates: Exposure and Health Effects
Phthalates: Occurrence and Human Exposure
Tetrafluoroethylene – For Production of Teflon, Fluoroplastics, and Fluoroelastomers
Tetrahydrofuran – Used to Manufacture Most Elastomeric Polyurethanes

Hydraulic Fracking
Hydraulic Fracturing and Public Health

Indoor Air Pollution: Health Effects
Assessing Indoor Air Quality
Bioaerosols in Apartment Buildings
Chronic Obstructive Pulmonary Disease
Cockroach Allergens: Exposure Risk and Health Effects
Dust Production Following Forest Disturbances: Health Risks
Gene–Environment Interactions: Cigarette Smoke, APC, DNA Damage Repair, and Human Health
Indoor Air Pollution Attributed to Solid Fuel Use for Heating and Cooking and Cancer Risk
Indoor Air Pollution: Unusual Sources
Indoor Radon Prevention and Mitigation
Inhaled Woodsmoke
Particulate Matter and Ultrafine Particles in Indoor Air
Perceptions and Physiological Responses to Indoor Air Quality
Productivity and Health Effects of High Indoor Air Quality
Radon: An Overview of Health Effects
Residential Radon Levels Around the World
Ventilation

Ionizing and Nonionizing Radiation: Health Effects
Clinical Consequences of Radiation Exposure
Electromagnetic Fields: Environmental Exposure
Electromagnetic Fields From Mobile Phones and Their Base Stations: Health Effects
Microwaves: Exposure and Potential Health Consequences
New Molecular Aspects of Ultraviolet-Induced Immunosuppression
Nuclear Energy and Human Health
Psychological and Mental Health Aspects of Ionizing Radiation Exposure
Radiofrequency Electromagnetic Fields: Health Effects
Retrospective Dosimetry and Dose Reconstruction of Ionizing Radiation
Risk of Radiation Exposure to Children and Their Mothers
Ultraviolet Exposure: Health Effects
Ultraviolet: Ocular Effects
Ultraviolet Radiation and the Skin
Ultraviolet Radiation Protection

Measures of Community Disease Burden
Application of Healthy Life Year Measures to Environmental Health Issues
Global Burden of Disease Approach and the Use of Disability-Adjusted Life Years at the World Health Organization
Global Measures of the Environmental Burden of Disease
Quality of Life and Environmental Health Assessment
YPLL: A Comprehensive Quantitative Tool to Evaluate Worker Risk Under Green and Sustainable Remediation

Metals/Metalloids: Environmental Exposure and Health Effects
Arsenic: Occurrence in Groundwater
Arsenic Pollution of Groundwater in Bangladesh
Beryllium: Environmental Geochemistry and Health Effects
Bioavailable Aluminum: Its Effects on Human Health
Bioavailable Aluminum: Its Metabolism and Effects on the Environment
Bismuth: Environmental Pollution and Health Effects
Cadmium and the Welfare of Animals
Cadmium Exposure in the Environment: Dietary Exposure, Bioavailability, and Renal Effects
Cadmium Neurotoxicity
Chromium VI Toxicology
Chromium: Environmental Pollution, Health Effects, and Mode of Action
Dental Amalgam Fillings: An Underinvestigated Source of Mercury Exposure
Depleted Uranium: Exposure and Possible Health Effects
Drosophila as a Model for Toxicogenomics of Lead
Environmental and Health Consequences of Nuclear, Radiological, and Depleted Uranium Weapons
The Export of Hazardous Industries
Gold and Amalgams: Environmental Pollution and Health Effects
Itai-itai Disease
Lead and Attention-Deficit Hyperactivity Disorder
Lead, Delinquency, and Criminal Offending
Lead Exposure and Caries in Children
Lithium: Environmental Pollution and Health Effects
Magnesium and Calcium in Drinking Water and Heart Diseases
Manganese: Environmental Pollution and Health Effects
Mercury and Children Health
Mercury Toxicity
Minamata Disease
Molybdenum in the Environment and its Relevance for Animal and Human Health
Nanoscale Titanium Dioxide: Environmental Health and Ecotoxicological Effects
Overview of Potential Aluminum Health Risks
Plutonium: Environmental Pollution and Health Effects
Renal and Neurological Effects of Heavy Metals in the Environment
Strontium in the Environment and Possible Human Health Effects
Tin: Environmental Pollution and Health Effects
Tungsten: Environmental Pollution and Health Effects
Uranium: Toxicity to Renal Cells and Osteoblasts
Water Consumption and Implications for Exposure Assessment
Zinc Deficiency in Human Health
Zinc Toxicity in Humans

Noise Pollution: Exposure and Health Effects
Cardiovascular Effects of Noise
Combined Exposures to Noise and Chemicals at Work
Combined Transportation Noise Exposure in Residential Areas
Effects of Low-Frequency Noise and Vibrations: Environmental and Occupational Perspectives
Environmental Noise
Expressing the Significance of Environmental Exposures in Disability-Adjusted Life Years: The Right Answer to Wrong Questions?
Measuring Noise for Health Impact Assessment
Mental Health Effects of Noise
Noise and Cognition in Children
Noise and Health: Annoyance and Interference
Noise Management: International Regulations
Noise Management: Soundscape Approach
Sleep Disturbance in Adults by Noise

Outdoor Air Pollution: Health Effects
Air Pollution Episodes
Ambient Concentrations of Acrolein: Health Risks
Animal and Human Waste as Components of Urban Dust Pollution: Health Implications
Antarctic: Persistent Organic Pollutants and Environmental Health in the Region
Automobile Exhaust: Detrimental Effects on Pulmonary and Extrapulmonary Tissues and Offspring
Biomass Burning, Regional Air Quality, and Climate Change
Chronic Obstructive Pulmonary Disease
Eco Health: Stratospheric Ozone
Risk to Populations Exposed from Atmospheric Testing and Those Residing Near Nuclear Facilities
Vehicular Exhausts

Persistent Organic Pollutants
Dioxins
Dioxins: Health Effects
Estrogenic Chemicals and Cardiovascular Disease
Polychlorinated Biphenyls
Persistent Organohalogen Pollutants and Phthalates: Effects on Male Reproductive Function
Prenatal Exposure to Polycyclic Aromatic Hydrocarbons

Personal Care Products and Pharmaceuticals
Diethylstilbestrol Exposure in Mothers and Offspring
Human Health Implications of Personal Care Products: Breast Cancer and Other Breast-Related Diseases
Natural Health Products
Pharmaceuticals: Environmental Effects

Pesticides: Human Exposure and Toxicity
Challenges in Pesticide Risk Communication
Diazinon – An Insecticide
Glyphosate – A Herbicide
Insect Repellents: Assessment of Health Risks
Malathion
Organochlorines and the Effect on Female Reproductive System
Organophosphate Insecticides: Neurodevelopmental Effects
Parathion – An Insecticide
Pesticide Exposure and Diabetes
Pesticide Exposure and Human Cancer
Pesticides: Human Health Effects
Prenatal Exposure to Industrial Chemicals and Pesticides and Effects on Neurodevelopment
Pyrethroid Insecticides: An Update
Take-Home Route of Pesticide Exposure
Tetrachlorvinphos – An Insecticide

Pollution-Specific Sources and Human Health
Dust Production Following Forest Disturbances: Health Risks
Environmental Health Issues for Railroads
Environmental Risks Associated with Waste Electrical and Electronic Equipment Recycling Plants
Mineral and Fuel Extraction: Health Consequences
Mining Activities: Health Impacts
Mobile Source–Related Air Pollution: Effects on Health and the Environment
Nuclear Energy and Human Health
Power Generation and Human Health
Shooting Ranges: Environmental Contamination
Sick Building Syndrome
Solid Fuel: Health Effects
Solid Fuel Use: Health Effects
Solid Waste Incinerators: Health Impacts
Volcanogenic Contaminants: Chronic Exposure

Recent Technological Advancements in Environmental Health Sciences
Biotechnology and Advances in Environmental Health Research
Diffusive Gradients in Thin Films: An Effective and Simple Tool for Assessing Contaminant Bioavailability in Waters, Soils, and Sediments
Environmental Health Engineering: Rationale, Technologies, and Practices for Various Needs
Household Energy Solutions in Low- and Middle-Income Countries
Household Water Treatment and Safe Storage in Low-Income Countries

Soil/Dust Exposure
Antibiotics Pollution in Soil and Water: Potential Ecological and Human Health Issues
Bioaccessibility of Trace Metals in Household Dust
Contamination of Soil and Vegetation with Developing Forms of Parasites
Groundwater and Soil Pollution: Bioremediation
Impact of Natural Dusts on Human Health

Target Organ Toxicity of Environmental Pollutants
Asbestos Exposure and Autoimmune Disease
Cadmium Exposure in the Environment: Dietary Exposure, Bioavailability, and Renal Effects
Cadmium Neurotoxicity
Cardiotoxicity
Cardiovascular Effects of Noise
Chemically Induced Respiratory Toxicities
Developmental and Reproductive Toxicity of TCDD, Lead, and Mercury
Environmental Liver Toxins
Mechanisms of Immune Modulation by Xenobiotics
Metal-Induced Toxicologic Pathology: Human Exposure and Risk Assessment
Neurodevelopmental Toxicants
Neurotoxicology
New Molecular Aspects of Ultraviolet-Induced Immunosuppression
Organochlorines and the Effect on Female Reproductive System
Organophosphate Insecticides: Neurodevelopmental Effects
Oxidation–Antioxidation–Reduction Processes in the Cell: Impacts of Environmental Pollution
Renal and Neurological Effects of Heavy Metals in the Environment
Splenic Toxicology
Stress Axis as the Locus of Gene–Environment Interactions in Major Depressive Disorder

Urban Environment and Human Health
Built Environment and Mental Health
Physical Infrastructure Service and Environmental Health Deficiencies in Urban and Peri-urban Areas
Urban Environments and Health
Urban Health
Urban Health Indicators: The Role of Data Disparities
Urban Planning, the Natural Environment, and Public Health
Urban Transportation and Human Health

Waste, Wastewater, Sludge, and Human Health
Agro-Industrial Waste Conversion into Medicinal Mushroom Cultivation
Biosolids: Human Health Impacts
Electronic Waste and Human Health
Infectious/Medical/Hospital Waste: General Characteristics
Management and Export of Wastes: Human Health Implications
Microbial Risks Associated with Biogas and Biodigestor Sludge

Water Quality and Quantity
Antibiotics Pollution in Soil and Water: Potential Ecological and Human Health Issues
Aquatic Environment and Fish Welfare in Aquaculture
Arsenic: Occurrence in Groundwater
Arsenic Pollution of Groundwater in Bangladesh
Blastocystis spp., Ubiquitous Parasites of Human, Animals, and Environment
Clean Water for Developing Countries: Feasibility of Different Treatment Solutions
Drinking Water: Nitrate and Health
Drinking Water Treatment and Distribution Systems: Their Role in Reducing Risks and Protecting Public Health
Effects of Iodine and Fluorine in Drinking Water on Human Health
Essential Nature of Water for Health: Water as Part of the Dietary Intake for Nutrients and the Role of Water in Hygiene
Fluoride in Drinking Water: Effect on Liver and Kidney Function
Fluorine: Human Health Risks
Fluorosis
Giardia and Cryptosporidium: Occurrence in Water Supplies
Groundwater and Soil Pollution: Bioremediation
Heterotrophic Bacteria in Bottled Water
Microbes and Water Quality in Developed Countries
Microorganisms in Beach Sand: What Do We Still Not Know?
The Occurrence and Potential Health Risk of Microcystins in Drinking Water of Rural Areas in China
Particulate Matter and Public Health
Perfluorooctanoic Acid – A Water and Oil Repellent
Recreational Exposure to Cyanobacteria
Sanitation in Low- and Middle-Income Countries
Shared Water Conflicts
Status of Water Resources and Human Health in the Middle East and North African Region: An Integrated Perspective

CONTENTS OF VOLUME 1

Preface  xxxv

Advances in Analytical Methods for the Determination of Pharmaceutical Residues in Waters and Wastewaters
Despo Fatta-Kassinos, Anastasia Nikolaou, and Lida Ioannou-Ttofa  1

Agro-Industrial Waste Conversion Into Medicinal Mushroom Cultivation
Arianne V Julian, Renato G Reyes, and Fumio Eguchi  13

Air Pollution and Development of Children’s Pulmonary Function
Jonathan Dubnov, Boris A Portnov, and Micha Barchana  21

Air Pollution and Lung Cancer Risks
Shuxiao Wang and Shuchang Liu  29

Air Pollution Episodes
P Brimblecombe  41

Air Pollution From Solid Fuels
Sukesh Narayan Sinha  49

Air Quality Legislation
A Fino  61

Air Transportation and Human Health
BS Cohen and AL Bronzaft  71

Ambient Concentrations of Acrolein: Health Risks
TJ Woodruff and DA Axelrad  82

An Ecological Disaster Zone with Impact on Human Health: Aral Sea
L Erdinger, H Hollert, and P Eckl  87

Animal and Human Waste as Components of Urban Dust Pollution: Health Implications
I Rosas, CF Amábile-Cuevas, E Calva, and AR Osornio-Vargas  95

Antarctic: Persistent Organic Pollutants and Environmental Health in the Region
Simonetta Corsolini  103

Antibiotics Pollution in Soil and Water: Potential Ecological and Human Health Issues
Hillary Jufer, Lauren Reilly, and Elmer-Rico E Mojica  118

Application of Healthy Life Year Measures to Environmental Health Issues
Prasanthi Puvanachandra and Adnan A Hyder  132

Aquatic Environment and Fish Welfare in Aquaculture
Panagiota Panagiotaki and Emmanuil E Malandrakis  143

Arsenic Exposure From Seafood Consumption
Heidi Amlund, Veronika Sele, and Jens J Sloth  147

Arsenic: Occurrence in Groundwater
Dipankar Chakrabarti, Sushant K Singh, Md Harunur Rashid, and Mohammad Mahmudur Rahman  153

Arsenic Pollution of Groundwater in Bangladesh
P Ravenscroft  169

Asbestos Exposure and Autoimmune Disease
Jean C Pfau and Curtis W Noonan  181

Assessing Indoor Air Quality
S-O Baek  191

Assessment of Human Exposure to Air Pollution
Claudio Cocheo, Paolo Sacco, and Laura Zaratin  199

Asthma: Environmental and Occupational Risk Factors
Chi-Hsien Chen and Yue L Guo  207

Automobile Exhaust: Detrimental Effects on Pulmonary and Extrapulmonary Tissues and Offspring
M Umezawa and K Takeda  217

Avian Influenza Viruses
M Vittecoq, F Thomas, F Renaud, and M Gauthier-Clerc  223

Bahama Archipelago: Environment and Health
Françoise G Bourrouilh-Le Jan  231

Benzene: Environmental Exposure
D Poli, R Andreoli, A Mutti, EC Alexopoulos, EB Bakeas, and M Goldoni  252

Beryllium: Environmental Geochemistry and Health Effects
WM Edmunds  262

Bhopal Gas Catastrophe 1984: Causes and Consequences
Ingrid Eckerman  272

Bias in Environmental Epidemiology
Alexander P Keil and Jessie K Edwards  288

Bioaccessibility of Trace Metals in Household Dust
Andrew Turner  301

Bioaerosols in Apartment Buildings
Wan-Kuen Jo  307

Bioavailable Aluminum: Its Effects on Human Health
JR Walton  315

Bioavailable Aluminum: Its Metabolism and Effects on the Environment
JR Walton  328

Biodiversity and the Loss of Biodiversity Affecting Human Health
VC Hammen and J Settele  340

Biological Agents and Infectious Diseases in War and Terrorism
BS Levy and VW Sidel  351

Biological Pathways Between the Social Environment and Health
C Hertzman  359

Biomarkers in Environmental Carcinogenesis
B Schoket  366

Biomarkers of Environmental Exposures in Blood
Lesa L Aylward  376

Biomass Burning, Regional Air Quality, and Climate Change
Ishwar C Yadav and Ningombam L Devi  386

Biomass Smoke and Infection: Mechanisms of Interaction
J Grigg  392

Biosolids: Human Health Impacts
S Toze and J Sidhu  397

Biotechnology and Advances in Environmental Health Research
PB Tchounwou and K Ndebele  405

Bismuth: Environmental Pollution and Health Effects
Runming Wang, Hongyan Li, and Hongzhe Sun  415

Bisphenol A
Ellen M Wells  424

Blastocystis spp., Ubiquitous Parasite of Human, Animals and Environment
Shahira A Ahmed and Panagiotis Karanis  429

Bolivia: Mining, River Contamination, and Human Health
Jerry R Miller and Lionel F Villarroel  436

Boron: Environmental Exposure and Human Health
M Korkmaz  456

Briefing on Children Environmental Health in Uruguay
Moll Maria José and Laborde Amalia  460

The Built Environment and Mental Health
Kim T Ferguson and Gary W Evans  465

Cadmium and the Welfare of Animals
CJC Phillips and SH Prankel  470

Cadmium Exposure in the Environment: Dietary Exposure, Bioavailability and Renal Effects
A Åkesson and Rufus L Chaney  475

Cadmium Neurotoxicity
Camilo Rios and Marisela Méndez-Armenta  485

Cancer and the Environment: Mechanisms of Environmental Carcinogenesis
P Irigaray and D Belpomme  492

Cancer Risk Assessment and Communication
Stacey A Fedewa  503

Carbon Farming
Jerome Nriagu  509

Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies
Cristina M Villanueva  517

Carcinogenicity of Disinfection Byproducts in Laboratory Animals
RL Melnick and MJ Hooth  528

Cardiotoxicity
Aurelian Bidulescu, Alexander D Stein-Alexandrescu, and Steven R Horbal  535

Cardiovascular Effects of Noise
W Babisch  543

Chagas Disease: Environmental Risk Factors
EJ Hanford and FB Zhan  553

Challenges in Pesticide Risk Communication
H-A Rother  566

Chemically-Induced Respiratory Toxicities
P-G Forkert  577

Children’s Environmental Health: General Overview
LR Goldman  589

Children’s Environmental Health in Developing Countries
J Pronczuk, M-N Bruné, and F Gore  593

Children’s Exposure to Environmental Agents
J Moya and LR Goldman  603

Children’s Health Risk Assessment: Issues and Approaches
S Barone, Jr and B Foos  610

Cholera: Environmental Risk Factors
JL Valdespino and L García-García  616

Chromium: Environmental Pollution, Health Effects and Mode of Action
Arun K Shanker  624

Chronic Obstructive Pulmonary Disease
A El-Gammal, E Phelan, and TM O’Connor  634

Clean Water for Developing Countries: Feasibility of Different Treatment Solutions
Shankar B Kausley, Gaurav G Dastane, Jyoti K Kumar, Ketan S Desai, Sarjerao B Doltade, and Aniruddha B Pandit  643

Climate Change, Environmental Health, and Human Rights
C Carlarne and MH Depledge  653

Climate Change: Health Risks and Adaptive Strategies
S Tong and K Ebi  661

Clinical Consequences of Radiation Exposure
CG Ruf, A Riecke, and V Meineke  670

Cockroach Allergens: Exposure Risk and Health Effects
RD Cohn  678

Combined Exposures to Noise and Chemicals at Work
M Sliwinska-Kowalska  686

Combined Transportation Noise Exposure in Residential Areas
Peter Lercher  695

Community Outdoor Air Quality: Sources, Exposure Agents and Health Outcomes
DG Shendell  713

Complex Air Pollution in China
Lijian Han and Weiqi Zhou  728

Connecting Environmental Stress to Cancer Cell Biology Through the Neuroendocrine Response
A Melhem and S Conzen  735

Contamination of Soil and Vegetation With Developing Forms of Parasites
Jasmin Omeragic and Teufik Goletic  742

Cost-Benefit Analysis and Air Quality Related Health Impacts: A European Perspective
Mike Holland  755

Critical Windows of Children’s Development and Susceptibility to Environmental Toxins
CA Robledo, P Mendola, and SG Selevan  767

Cryptosporidiosis: An Update
T Geurden and R O’Handley  781

Cumulative Environmental Risk
JC Lambert, LK Teuschler, GE Rice, JM Wright, and JC Lipscomb  789

Cyanotoxins in Egypt and Saudi Arabia
ZA Mohamed  796

Cyclic Volatile Methylsiloxanes: Occurrence and Exposure
Hermann Fromme  805


PREFACE

We live in a time of tumultuous change in which economic interdependence is increasing rapidly, information technology is accelerating the spread of ideas, human influence on natural cycles and processes has become evident on a global scale, and the spread of an infectious disease around the globe is only a plane ride away. This process of interlocking economic, social, technological, political, and cultural changes that have emerged around the world has been called globalization, a phenomenon that is shrinking space and increasing the speed of interaction, changing our views of the world and of ourselves, and breaking down national and cultural barriers. Globalization and collateral human activities are now transforming the Earth’s natural systems in ways that are profound, pervasive, and accelerating. The transformational forces associated with the rise of the human population to 7 billion people, rapid growth in per capita consumption of goods and services, and the oversized footprint of human activities on ecosystems have resulted in major changes to the planet’s land cover, rivers and oceans, climate system, and biogeochemical cycles, and they generate vast amounts of industrial and human wastes that are voided into the air, water, and land. Ecosystem services on which life on the Earth depends are increasingly being jeopardized as the environment is modified to suit human needs. Never before in the history of the Earth have the activities of a single species threatened the well-being of the entire planet.

Environmental Health emerged as a scientific discipline in response to the need for a systematic and comprehensive approach to understanding the health impacts of human–environment interactions, so as to better inform decision-making in land-use planning, environmental conservation, and public health protection. First reports on the connections between ecosystemic change and human health outcomes can be traced back to ancient times – in Western societies to Hippocrates, who wrote On Airs, Waters, and Places, and to much earlier eras in Eastern societies. Historically, however, environmental health as a scientific discipline has increasingly focused on quantifying the exposure–response relationships for contaminants encountered in human-dominated environments: from heavy metals and radiation to multitudes of organic pollutants. Within this framework, Environmental Health was defined as the study of health problems that are related to environmental exposures and transcend national boundaries, with a goal of improving health for all people by reducing the environmental exposures that lead to avoidable disease, disabilities, and deaths (https://www.niehs.nih.gov/research/programs/geh/index.cfm). In my view, this definition captures but one dimension of the human–environment relationship. The development of the field under such an epistemological framing has tended to be limited, segmented, and incomplete. For some threats it may be possible to establish clear causal linkages and effects, but where the health hazard is the result of environmental change, the risk bundles are likely to embrace interactions among streams of fundamental human processes, including public policies, economic activities, technological applications, and varying lifestyles. Dealing with environmental risks invariably involves coping with the uncertain, the unknowable, and the inherently indeterminable.

The situation is not helped by the fact that understanding of the mechanisms relating developmental hazards, environmental exposures, and health is generally lacking, and integrated databases and information systems to support policy and decision-making, planning, and evaluation are rarely available at relevant spatial and temporal scales. Many of the environmental health programs and policies of recent decades have therefore been driven by political expediency, the scientific weight of evidence, or precautionary principles rather than based on sound scientific principles. The biomedically driven paradigm of environmental health was, however, a useful and pragmatic framework for identifying and quantifying risks to human health in the environment so that the threats could be addressed. Removing lead from gasoline worldwide and the Clean Air Act and Clean Water Act in the law books of most countries are among the prominent successes of this paradigm.


Recent concepts of environmental health posit that the epidemiological dynamics and the actions of stakeholders that determine the health of human (and animal) populations need to be studied in their interconnected ecological, socioeconomic, and political contexts. They emphasize the importance of participatory, whole-system approaches to understanding and promoting health and well-being in the context of social and ecological interactions. What differentiates these approaches from earlier frameworks is the increased recognition of the linkages between ecosystem health and human well-being, defined as covering the physical, psychological, and social aspects of wellness and including the presence of positive emotions and moods (e.g., contentment, happiness), the absence of negative emotions (e.g., depression, anxiety), satisfaction with life, fulfillment, resilience, and positive functioning. The new paradigm also values social and citizen dimensions and holds that issues of equity (gender, socioeconomic class, age, and even species) and research-to-action are important to fully understand and resolve environmental health problems. The relatively new branch of environmental health embodied in this encyclopedia places increased emphasis on the impacts of changes in the structure and function of natural and human-dominated systems on health outcomes at both the individual and population levels. It evolved from studies that repeatedly show that degrading nature comes with several costs to the human population through loss of “ecosystem services” (the health benefits that ecosystems provide).

Major impetus for the new concepts came from the United Nations (UN) Conference on the Human Environment held in Stockholm in 1972, which placed human health in the context of larger environmental processes – lifting environmental health out of the shadows of the sanitary sciences and “community” health. The subsequent UN Conference on Environment and Development (UNCED) held in Rio de Janeiro in 1992 was remarkable in recognizing the link between healthy people and a healthy environment as a prerequisite to sustainable development. The first principle of the Rio Declaration proclaimed that “human beings are at the center of concerns for sustainable development. They are entitled to a healthy and productive life in harmony with nature.” The implied notion that environmental health is a basic human right is increasingly being embraced by governments in various parts of the world. The UNCED concretized the fact that people everywhere are beginning to view the world they live in in a more restrained, less belligerent, and more realistic way. People in the environmental movement have come to realize that life without industries in the modern world is impossible, while business leaders no longer have to be told that environmental stewardship on their part has economic benefits and is good for customer relations. As many developing countries increasingly emphasize the benefits of environmental regulations and controls in their march to industrialized-nation status, the less developed nations now look to them for leadership and are beginning to emulate their growing environmental awareness and concerns. By the end of the 20th century, the links between environment, health, and development had become a matter of interest and concern in most nations, both developed and developing.

By the time the World Summit on Sustainable Development was held in Johannesburg in 2002, achieving sustainable environmental health had become a “high table” goal in international affairs and a feature item on many local, regional, and national socioeconomic agendas. The World Congress on Sustainable Development of 2012 (often referred to as Rio+20) and the subsequent UN Agenda for Sustainable Development of 2015 further concretized ecosystem change as a necessary adjunct of human health and well-being. The benefits that people get from their environment (ecosystem services) have been categorized into four types: provisioning services, such as food, fiber, and genetic resources; regulating services, such as water and air quality; supporting services, such as primary production and water and nutrient cycling; and cultural services, such as recreation and religious sites. According to this typology, an ecosystem services perspective can inform strategies for identifying and addressing health disparities among socioeconomic and racial/ethnic groups who depend heavily on natural resources.

It is well documented that (i) the critical determinants of environmental health in many countries are increasingly global and outside the responsibility of individual nations, and (ii) huge disparities in morbidity and mortality persist between the developed and less developed nations, due primarily to the close interlocking of environmental risks and poverty. Poverty hinders the development of clean water and proper sanitation; drives migration into overcrowded cities with substandard housing and high air pollution levels; is related to indoor air pollution from the burning of biofuels or urban solid wastes; increases exposure to intentional and unintentional injuries and the risk of lead poisoning; and is primarily responsible for undernutrition, with far-reaching effects. The transboundary movement of health hazards from the developed countries, including polluting industries and industrial wastes, pesticides, heavy and inefficient use of energy, and the plundering of natural resources and spoliation of the environment, adds to existing environmental risk factors in many communities, especially in the developing countries. The realization that ecosystem services are linked to health gains and economic growth is now beginning to shape national and global policies that bear directly on the environment–health interdependence.


Current inequities and human vulnerability in sub-Saharan Africa illustrate some of the challenges facing human populations with degraded ecosystem services. The poverty of most sub-Saharan African countries and their near-total dependence on nature’s goods and services for livelihood increase and extend their vulnerability to environmental change and to risks from new technologies. The “winds of change” that began in Africa in the 1960s seem to have worsened the poverty level, and both environmental and human capital have continued on a downward spiral, making the health of the population more susceptible to environmental risk factors. Over 60% of the population lives in ecologically vulnerable areas characterized by a high degree of sensitivity and a low degree of resilience. Rapid population growth and overexploitation of natural resources, deepening poverty, and increasing food insecurity have brought about environmental changes that have taken a toll on the public’s health. Mismanagement of natural resources, the impacts of disasters and civil strife, and responses to external pressures (such as economic adjustment plans) have decimated ecosystem services and exacerbated the environmental health risks in the region. Other factors, such as weak institutional and legal frameworks, corruption, and poor economic performance, have left most countries in the region with limited choices and low coping capacity to deal with environmental threats. It is easy to see why the highest rates of environmentally attributable diseases are concentrated in this part of the world, and why the need for environmental health research is greatest there.

This brings us to the fundamental question: What is the definition of Environmental Health (EH)? As a field of academic pursuit, environmental health is an outgrowth of the global environmental movement and straddles the traditional disciplines of public health and environmental protection. In practice, it involves relevant elements of ecology, conservation, economics, human behavior, ethics, and genomics. As a consequence, the scientific literature is peppered with varying definitions of environmental health colored by the authors’ own disciplinary perspectives. Those who have been willing to move beyond their own academic domain would not disagree with a definition of environmental health as the theory and practice of assessing, correcting, controlling, managing, and preventing the physical, chemical, biological, social, and psychosocial factors in the environment that can adversely affect human health, including quality of life. The breadth of this definition reflects the fact that environmental health is an interdisciplinary field that borrows techniques from emerging and more traditional fields of study and brings together diverse perspectives and sources of knowledge. Until environmental health matures further as an academic discipline, it remains different things to different people. In the aggregate, however, the academic umbrella known as environmental health includes three domains: an area of research, an arena of applied public health practice, and a milieu for education and training. All three domains are covered to varying degrees in this encyclopedia.

This second edition of the encyclopedia comes at a time when Environmental Health is at a crossroads. Within the context of the external factors that define its boundaries, environmental health has evolved over time into a complex and multidisciplinary field that provides a framework for understanding the natural world, how we affect it, and the effects it has on our health. Paradoxically, extensive human alteration of the natural world has also resulted in remarkable improvements in most health indices globally. The apparent contradiction stems from the fact that many of the key determinants of, and solutions to, environmental health lie outside the direct realm of health and are strongly dependent on environmental changes, water and sanitation, industrial development, education, employment, trade, tourism, agriculture, urbanization, energy, housing, culture, and national security. Environmental risks, vulnerability, and variability manifest themselves in different ways and at different timescales and can impact human health in many dimensions. While there are shared global and transnational problems, each community, country, or region faces its own unique environmental health problems, the solution of which depends on the circumstances surrounding its resources, customs, institutions, values, and environmental vulnerability. This important dimension is covered severally in the group of articles on Country- and Area-Specific Environmental Health Issues. This encyclopedia has managed to include many issues and topics, especially on the social determinants of health, that are not typically covered in existing environmental health textbooks and compendia, and hence it provides an expanded umbrella for the field.

A goal of the Second Edition of the Encyclopedia of Environmental Health is to examine ways of conceptualizing, identifying, organizing, and addressing key environmental health problems at the local, regional, and global scales. A number of disciplines have brought powerful concepts, methodologies, and experience to these tasks and are constantly creating new frontiers in the field. The focus of this edition is to provide a critical assessment of advancements in aligned research fronts that have occurred since the First Edition was published, which can be used to enrich existing theories, syntheses, and the analytic structure of the growing field of environmental health. Special emphasis has been given to recent developments in the areas of epigenetics


(environmental inheritance), the health consequences of environmental disasters (natural and human-made), health disparities and the social determinants of environmental health, and newer environmental contaminants. The underlying view is that while environmental health must deal with threats and how to minimize them, it has also created a wonderful framework for developing new scientific paradigms to address emerging local, national, and global environmental concerns. Environmental challenges and our knowledge of them are constantly evolving, and this major work in the field needs to be constantly updated to reflect the current state of research and practice.

The Encyclopedia of Environmental Health is a collection of thoughtful and critical reviews written by leading experts in the fields about which they write. The articles were written at a level that allows advanced undergraduate and postgraduate students access to the material, while providing environmental health practitioners, active researchers, and public and private sector employees in related disciplines with a ready resource for information on all aspects of environmental health. There are no unrealistic claims for answers in these volumes, however. Confronting the determinants of environmental health is not a matter for a guidebook or blueprint; premature or undue prescriptions or programmatic approaches are likely to be misleading. Articles in the encyclopedia seek to conceptualize the issues more clearly, to describe the best available scientific methods that can be used in characterizing and managing environmental health risks, to extend the field of environmental health through new theoretical perspectives and a heightened appreciation of social, economic, and political contexts, and to encourage a richer analysis in the field through examples of diverse experiences in dealing with the health–environment interface. In this regard, articles in the encyclopedia cut across several disciplines and should be of interest to a large spectrum of readers in the biomedical, natural, and social sciences.

Preparation of this encyclopedia was driven by the need to (i) provide a framework to structure the existing and widely scattered knowledge base; (ii) place environmental health and risks in the broader context of environmental change and the associated drivers of change; (iii) identify and assess potential interventions to prevent or remediate the risks; (iv) identify and assess major uncertainties in exposure assessment, risk characterization, and health impact determination; and (v) provide context for defining priorities for further research. The Encyclopedia of Environmental Health has followed the guidelines for previous encyclopedias in the Elsevier series of major reference works. The articles have been clustered into well-defined subject sections at the beginning of the reference work (Volume 1) for the readers’ convenience. Within the volumes, however, articles are arranged alphabetically by title rather than thematically. Articles range in length from about 3000 to over 10,000 words, reflecting the diversity in topics covered and the level of understanding of the subject material. Authors were encouraged to use tables, diagrams, and illustrations whenever necessary. Each article contains a “Further Reading” list designed to provide the reader with critical literature on the topic. Readers may also find the web resources at the end of many chapters to be useful. Concerted effort was made to cross-reference the chapters as much as possible.

The production of the first and second editions of this major reference work took many dedicated years, not surprising for a massive enterprise that involved hundreds of authors from many disciplines and in many countries. The foci for articles were developed by an international and multidisciplinary team of editors, associate editors, issue editors, and consultants. Peer reviewing of the articles took more time and effort than was expected. For a large project that involved many collaborators over a period of years, the dropout of contributors, reviewers, and issue editors was a problem that had to be managed carefully. This edition of the Encyclopedia of Environmental Health would not have been possible without the dedicated staff and editors at Elsevier Press and the distinguished panel of Associate Editors. Appreciation and admiration are extended to the Associate Editors, who put in incredible amounts of effort to ensure that articles in their purview were completed to the highest quality. Ultimately, any success of this encyclopedia belongs to the outstanding group of authors and coauthors from many disciplines and different institutions who contributed their scholarship, knowledge, and hard work to the endeavor.

Jerome O. Nriagu
Editor-in-Chief
School of Public Health, University of Michigan
Ann Arbor, MI 48109, USA

PERMISSION ACKNOWLEDGMENTS

The following material is reproduced with kind permission of Taylor & Francis (www.taylorandfrancisgroup.com):
Table 4, Residential and Non-Residential Biomass Combustion: Impacts on Air Quality
Table 6, Industrial Livestock Production Facilities: Airborne Emissions

The following material is reproduced with kind permission of Oxford University Press (www.oup.com):
Table 2, Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies
Figure 4, Human Exposure to Cyclic Volatile Methylsiloxanes
Figure 3, Impact of Natural Dusts on Human Health
Figure 1, 1976 Trichlorophenol Reactor Explosion at Seveso, Italy
Table 4, 1976 Trichlorophenol Reactor Explosion at Seveso, Italy
Figure 1, Human Health Link to Invasive Species
Figure 2, Polymorphism and Gene–Environment Interactions in Environmental Cancer
Text, The Impact of Environmental and Anthropogenic Factors on the Transmission Dynamics of Vector-Borne Diseases

The following material is reproduced with kind permission of the American Association for the Advancement of Science (www.aaas.org):
Figure 1, Impact of Natural Dusts on Human Health
Figure 1, Volcanoes and Human Health
Text, The Impact of Environmental and Anthropogenic Factors on the Transmission Dynamics of Vector-Borne Diseases

The following material is reproduced with kind permission of Nature Publishing Group (http://www.nature.com):
Figure 2, Exposure Modeling and Measurement: Exposure Factors
Figure 4, Occurrence of Particles in Indoor Air (PM and PNC)
Text, The Impact of Environmental and Anthropogenic Factors on the Transmission Dynamics of Vector-Borne Diseases


Advances in Analytical Methods for the Determination of Pharmaceutical Residues in Waters and Wastewaters
Despo Fatta-Kassinos, University of Cyprus, Nicosia, Cyprus
Anastasia Nikolaou, University of the Aegean, Mytilene, Greece
Lida Ioannou-Ttofa, University of Cyprus, Nicosia, Cyprus
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
dSPE Dispersive solid-phase extraction
GAC Green analytical chemistry
HPLC High-performance liquid chromatography
LPME Liquid-phase microextraction
MEPS Microextraction by packed sorbent
MMLLE Microporous membrane liquid–liquid extraction
mSPE Magnetic solid-phase extraction
Q-TOF Quadrupole-time-of-flight
QuEChERS Quick, Easy, Cheap, Effective, Rugged and Safe
SBSE Stir-bar sorptive extraction
SLE Supported liquid extraction
SPE Solid-phase extraction
SPME Solid-phase microextraction
UPLC Ultra-performance liquid chromatography

Introduction

A large number of different chemical classes of pharmaceuticals are consumed in human medicine, animal husbandry, and aquaculture. Most pharmaceutical compounds are complex molecules with different functionalities and physicochemical and biological properties. Two important characteristics of these compounds are their ionic nature and inherent biological activity. Their molecular weights typically range from 300 to 1000, and they can have either basic or acidic functionalities. Pharmaceuticals can be classified into different categories based on their chemical structure and mode of action on the target organs. The main categories of pharmaceuticals and their modes of action are given in Table 1. After their consumption, pharmaceuticals are metabolized in the organism and are then excreted either in their parent form or as metabolites. Metabolic transformation modifies the chemical structure of the active molecules, which in turn often results in a change in their physicochemical and pharmaceutical properties. Metabolism may lower a pharmaceutical's activity or enhance its water solubility to facilitate excretion from the body. In most cases, however, metabolism is incomplete. There are two important pathways of metabolism. Phase I metabolites result from the modification of the active compound itself by hydrolysis, oxidation, reduction, alkylation, and dealkylation. Phase II metabolites are phase I metabolites that have been modified by glucuronidation or sulfation to enhance excretion. Therefore, an administered parent compound may be excreted (i) unchanged, (ii) as a glucuronide or sulfate conjugate, (iii) as a major metabolite, or (iv) as a complex mixture of many metabolites. It is also important to note that under environmental conditions pharmaceutical molecules can be neutral, cationic, anionic, or zwitterionic. As comparatively large and chemically complex molecules, the heteroatom content and multifunctional composition of pharmaceuticals make them polar and ionizable; these properties depend largely on the pH of the solution. The metabolites of pharmaceuticals can be subjected to further transformation in sewage treatment plants or in surface water and/or groundwater, since biotic or abiotic processes, such as hydrolysis and photolysis, may also degrade pharmaceutical substances. The transformation products (TPs) are of major concern, because they are often more persistent than, and can exhibit toxicity similar to or even higher than, the parent compounds. As Fig. 1 illustrates, pharmaceuticals and their metabolites enter environmental aqueous and soil matrices mainly through excretion and disposal via wastewater.

Change History: April 2018. Lida Ioannou-Ttofa prepared the update. Affiliations, Keywords, and Figure 3 have been updated. This is an update of D. Fatta-Kassinos, S. Meric and A. Nikolaou, Advances in Analytical Methods for the Determination of Pharmaceutical Residues in Waters and Wastewaters, In Encyclopedia of Environmental Health, edited by J.O. Nriagu, Elsevier, 2011, Pages 9–16.

Encyclopedia of Environmental Health, 2nd edition, Volume 1. https://doi.org/10.1016/B978-0-12-409548-9.11247-3

Table 1 Main categories of pharmaceuticals and mode of action

Analgesics: Relieve pain.
Antacids: Relieve heartburn and indigestion by neutralizing stomach acid.
Antianxiety drugs: Suppress anxiety and relax muscles; also called anxiolytics, sedatives, or minor tranquilizers.
Antiarrhythmics: Control heartbeat irregularities.
Antibacterials: Treat infections.
Antibiotics: Combat bacterial infection.
Anticoagulants and thrombolytics: Prevent blood from clotting, and help to dissolve and disperse blood clots; may be prescribed for patients with recent arterial or venous thrombosis.
Anticonvulsants: Prevent epileptic seizures.
Antidepressants: Mood-lifting agents.
Antidiabetics: Stabilize and control blood glucose levels among people with diabetes.
Antidiarrheals: Relieve diarrhea.
Antiemetics: Prevent nausea and vomiting.
Antifungals: Treat fungal infections.
Antihistamines: Counteract the effects of histamine, one of the chemicals involved in allergic reactions.
Antihypertensives: Lower blood pressure; include diuretics, β-blockers, calcium channel blockers, centrally acting antihypertensives, and sympatholytics.
Antiinflammatories: Reduce inflammation (the redness, heat, swelling, and increased blood flow found in infections and in many chronic noninfective diseases, such as rheumatoid arthritis and gout).
Antineoplastics: Treat cancer.
Antipsychotics: Treat symptoms of severe psychiatric disorders (also called major tranquilizers).
Antipyretics: Reduce fever.
Antivirals: Treat viral infections or provide temporary protection against infections, such as influenza.
Beta-blockers: Reduce the oxygen needs of the heart by reducing the heartbeat rate.
Bronchodilators: Open up the bronchial tubes within the lungs when the tubes have become narrowed by muscle spasm, as in asthma.
Cold cures: Relieve the aches, pains, and fever that accompany a cold, for example with aspirin or acetaminophen.
Corticosteroids: Antiinflammatories in arthritis or asthma, or immunosuppressives; also act to prevent some malignancies or to compensate for a deficiency of natural hormones.
Cough suppressants: (1) Alter the consistency or production of phlegm, such as mucolytics and expectorants; (2) suppress the coughing reflex, such as codeine (narcotic cough suppressants), antihistamines, dextromethorphan, and isoproterenol (nonnarcotic cough suppressants).
Cytotoxics/cytostatics: Kill or damage cells; used as antineoplastics (to treat cancer) and also as immunosuppressives.
Decongestants: Reduce swelling of the mucous membranes that line the nose by constricting blood vessels, thus relieving nasal stuffiness.
Diuretics: Increase the quantity of urine produced by the kidneys and passed out of the body, ridding the body of excess fluid; reduce waterlogging of the tissues caused by fluid retention in disorders of the heart, kidneys, and liver; useful in treating mild cases of high blood pressure.
Hormones: Hormone replacement.
Immunosuppressives: Prevent or reduce the body's normal reaction to invasion by disease or by foreign tissues; used to treat autoimmune diseases and to help prevent rejection of organ transplants.
Lipid regulators: Reduce the levels of fats (lipids), such as cholesterol, in the blood; elevated lipid levels are called hyperlipidemia.
Sex hormones (female): Estrogens and progesterone, responsible for the development of female secondary sexual characteristics; used to treat menstrual and menopausal disorders and also as oral contraceptives. Estrogens may be used to treat cancer of the breast or prostate, and progestins (synthetic progesterone) to treat endometriosis.
Sex hormones (male): Androgenic hormones, responsible for the development of male secondary sexual characteristics; may be used to treat breast cancer in women, but synthetic derivatives called anabolic steroids, which have less marked side effects, or specific antiestrogens are often preferred. Anabolic steroids also have a "body-building" effect that has led to their use in competitive sports, by both men and women.
Sleeping drugs: Induce sleep; the main groups are benzodiazepines and barbiturates. Benzodiazepines are used more widely than barbiturates because they are safer, their side effects are less marked, and there is less risk of eventual physical dependence.
Tranquilizers: Have a calming or sedative effect. Minor tranquilizers should be called antianxiety drugs, and the drugs that are sometimes called major tranquilizers should be called antipsychotics.

Source: FDA/Center of Drug Evaluation and Research (2009).

Owing to their incomplete elimination in wastewater treatment plants (WWTPs), pharmaceutical residues are found in surface waters, in soils irrigated with treated wastewater, and in groundwater replenished with treated wastewater or in aquifers that communicate with surface water already loaded with such compounds. It is therefore apparent that municipal as well as hospital wastewater effluents are the most important sources of human pharmaceutical compounds, with contributions from the effluents of pharmaceutical manufacturing companies and from landfill leachates, as well as from the disposal of unused medicines into the environment. Additionally, veterinary pharmaceuticals enter the environment after their administration to farm animals and the subsequent runoff of manure, as well as after direct application in aquaculture. Pharmaceuticals, present in the environment at microgram-per-liter to nanogram-per-liter levels, are of particular concern because of both their


Fig. 1 Sources and fluxes of pharmaceutical residues into the environment. Adapted from Nikolaou, A., Meric, S., and Fatta, D. (2007). Occurrence patterns of pharmaceuticals in environmental matrices. Analytical and Bioanalytical Chemistry 387, 1225–1234.

ubiquity in the aquatic environment and their potential health effects. Pharmaceutical residues have been detected worldwide in many environmental matrices, including water, wastewater, sediments, and sludge. The property of a pharmaceutical compound that determines whether it will remain in the aquatic environment or adsorb onto solid particles is its hydrophilicity, which is quantified by the biosolids/water distribution coefficient, Kbiomass or Kp. Some of the most frequently detected pharmaceutical compounds in water and wastewater are shown in Fig. 2. An abundance of antibiotic compounds, potentially due to their overuse, misuse, and incomplete removal by conventional WWTPs, accompanied by the presence of antibiotic-resistant bacteria (ARB) and their associated antibiotic resistance genes (ARGs), has been observed in treated wastewater effluents over the last decades. It is widely accepted that the conventional biological processes (i.e., conventional activated sludge) currently applied in WWTPs create an environment potentially conducive to

Fig. 2 Most frequently detected pharmaceuticals in aqueous matrices: analgesics/antiinflammatories (diclofenac, ibuprofen, paracetamol, naproxen, ketoprofen); antiepileptics (carbamazepine); antibiotics (ofloxacin, sulfamethoxazole, erythromycin, trimethoprim); beta-blockers (propranolol, atenolol, metoprolol); steroids (17-β-estradiol, estrone, estriol, 17-α-ethinylestradiol); lipid regulators (gemfibrozil, fenofibrate, bezafibrate, clofibric acid).

antibiotic resistance development, since environmental and commensal bacterial communities are in close contact, thus facilitating the generation and proliferation of new resistant strains via horizontal gene transfer. It is therefore of great importance to improve the tertiary treatment processes applied in WWTPs for disinfection (e.g., chlorination, UV oxidation, ozonation). However, little is still known about the operating parameters that may influence the removal mechanisms of ARB and ARGs during the application of these processes.
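The solid–water partitioning mentioned earlier in this introduction can be written compactly. As a minimal sketch, using generic symbols rather than notation from any particular study, the distribution coefficient is the ratio of the sorbed to the dissolved concentration at equilibrium:

\[
K_d \;=\; \frac{C_{\text{sorbed}}}{C_{\text{aqueous}}} \quad [\text{L kg}^{-1}],
\]

where \(C_{\text{sorbed}}\) is the analyte concentration on the biosolids (e.g., ng g\(^{-1}\)) and \(C_{\text{aqueous}}\) the concentration in the water phase (e.g., ng mL\(^{-1}\)). Compounds with a high \(K_d\) (the \(K_\text{biomass}\) or \(K_p\) referred to above) are largely retained on sludge and sediments, whereas hydrophilic compounds with a low \(K_d\) pass into the aquatic environment.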

Current Status With Respect to Sampling, Sample Preparation and Extraction Methods

The quantification of pharmaceuticals in human biological matrices, such as urine and blood, has long been feasible. Quantification in environmental matrices, however, has become feasible only recently, owing to the very low concentrations at which these compounds occur and, most importantly, to the fact that environmental samples are very complex matrices containing a large number of different organic molecules. The presence of pharmaceuticals in WWTP effluents and in surface, drinking, and groundwater has become an issue of great interest, although it is probable that these compounds have been entering surface and groundwater systems for as long as people have been using them. The challenge of modern environmental analytical chemistry is therefore a continuous effort to detect a continuously increasing number of "new" contaminants, including pharmaceutical residues, at trace levels. Some years ago, analytical techniques and equipment capable of measuring milligram-per-liter concentrations in water samples were considered state of the art. Currently, a number of analytical techniques are available that are capable of measuring concentrations down to parts-per-trillion levels. In parallel with the remarkable developments in chromatographic resolution, detection sensitivity, and specificity has come the ability to extract and enrich compounds of interest from extremely complex matrices, such as wastewater, soil, sediment, and sludge. Fig. 3 shows the sample preparation procedures and the most common analytical methods used for the analysis of pharmaceutical compounds in aqueous matrices. The analytical process for complex aqueous samples contains several steps that can significantly affect the accuracy of the final result, namely sampling, sample pretreatment, extraction, identification, and data processing, as shown in Fig. 3. During sampling, a representative sampling point and the appropriate sampling method (i.e., continuous mode (e.g., flow-proportional and constant samples) or discrete mode (e.g., time-proportional, flow-proportional, volume-proportional, and grab samples)) should be carefully chosen. The times and frequencies of sampling can be properly decided after detailed preliminary work. Since rainfall events are well known to influence contaminant concentrations in surface water and WWTP effluents, it is recommended that samples be taken during dry weather conditions. Finally, the storage, preservation, and transport of the aqueous sample from the point of collection to the analytical laboratory must occur without any changes in the sample's physicochemical properties. For complex aqueous matrices, a sample pretreatment step is necessary to provide

Fig. 3 Common workflow for pharmaceutical sample pretreatment, analysis, and quantification: sample collection → sample transport → sample pretreatment and cleanup (filtration, pH adjustment, etc.) → extraction (LLE, SPE, LPME, SPME, SBSE, QuEChERS, MEPS, etc.) → identification (LC-MS, LC-MS/MS, UPLC-MS/MS, Q-TOF-MS, GC-MS, GC-MS/MS, LC×LC, GC×GC, NMR, etc.) → data processing.

a sample aliquot that is relatively free of interferences, will not damage the analytical instrument, and is compatible with the analytical method to be used (as described in detail in the following paragraphs). During the extraction step, the target analyte(s) are separated from the sample, while during the identification/quantitation step the unknown analyte(s) can be confirmed and their concentrations determined (for more details please refer to the Section "Recent Developments in Chromatographic Systems and Techniques"). Finally, the data processing step provides the analytical results for the target analyte(s). All the above-mentioned analytical steps follow one another, meaning that no step can begin before the preceding one has finished. The sample preparation procedure is a crucial step of the analytical process. In the case of pharmaceuticals containing acidic groups in their structure, which exist largely in their ionized form at neutral pH, acidification of the water samples is necessary. Another important factor is the presence of natural organic matter in the samples, which may reduce the extraction efficiency. In general, water samples are filtered through 1.7, 1.0, 0.45, and/or 0.22 μm glass-fiber filters prior to the extraction step. Several techniques have been developed and optimized in recent years, and these are described in detail in the following paragraphs. For decades, sample preparation has been performed by liquid–liquid extraction (LLE) and solid-phase extraction (SPE). LLE is one of the oldest and most widely employed sample preparation techniques, mainly because it is a simple, easy-to-use, and rapid method offering high reproducibility and high sample capacity. It relies on the separation of analyte(s) based on their relative solubilities in two immiscible liquids (i.e., water/organic solvent). After the addition of the extracting solvent to the sample, the mixture is agitated by vortexing or shaking, and once thorough mixing of the two liquid phases has been achieved, they are allowed to separate. However, various drawbacks of LLE, such as the use of large sample volumes and high volumes of potentially toxic solvents, emulsion formation, the generation of large amounts of waste, and the fact that it is a labor-intensive procedure with limited selectivity, make this process time-consuming and environmentally unfriendly, with poor potential for automation and online connection to analytical instruments. A rarely used alternative to LLE is supported liquid extraction (SLE), sometimes also referred to as solid-supported LLE, which offers many benefits compared to conventional LLE: it is an easy method, with reduced solvent requirements, higher recoveries and reproducibility, and improved sample cleanliness and sensitivity, it prevents the formation of emulsions, and it can be completely automated. In SLE, the aqueous sample is absorbed onto an inert diatomaceous earth support (in a column or cartridge format), and the analytes are subsequently eluted with the same water-immiscible organic solvents used in LLE.
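The partitioning that underlies LLE can be made quantitative. The following is a hedged sketch with generic symbols (not taken from a specific source in this article): the fraction of analyte extracted in a single step follows from the distribution constant and the phase-volume ratio,

\[
E \;=\; \frac{K_D\,(V_{\text{org}}/V_{\text{aq}})}{1 + K_D\,(V_{\text{org}}/V_{\text{aq}})},
\qquad
f_{\text{remaining}}(n) \;=\; \left(\frac{1}{1 + K_D\,V_{\text{org}}/V_{\text{aq}}}\right)^{\!n},
\]

where \(K_D = C_{\text{org}}/C_{\text{aq}}\) is the distribution constant, \(V_{\text{org}}\) and \(V_{\text{aq}}\) are the organic and aqueous phase volumes, and \(n\) is the number of successive extractions. The second expression shows why several small-volume extractions recover more analyte than a single large-volume one, at the price of the high solvent consumption criticized above.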
Continuous-flow LLE can be achieved using a hydrophobic membrane, in the well-known membrane extraction processes. Supported liquid membrane (SLM) extraction and microporous membrane liquid–liquid extraction (MMLLE) are the most widely used membrane extraction techniques for the selective extraction of pharmaceutical compounds from environmental aqueous samples, and they have been found to be attractive alternatives to conventional techniques. SLM is based on a three-phase system (aqueous phase–organic phase–aqueous phase), in which the organic phase is immobilized in a porous, thin, hydrophobic polymer membrane, combining the extraction and stripping processes in a single unit operation. Hence, this process is a combination of three simultaneous processes: (i) extraction of the target analyte(s) into an organic phase, (ii) membrane transport, and (iii) re-extraction. It is a simple and fast method, with low analysis cost, very low consumption of organic solvents, and small sample volumes, which can be easily combined online with various analytical instruments, providing cleaner extracts and achieving lower detection limits than conventional LLE and SPE. Various configurations of SLM extraction have been used, such as flat sheet, spiral wound, and hollow fiber (HF), with the last being the most frequently used, mainly because of the HF membrane's high stability, easy preparation, and significantly higher surface area per volume of sample, despite its higher cost compared to the other configurations. MMLLE is a two-phase membrane extraction method based on an aqueous phase and an organic phase, with the latter supported by a hydrophobic membrane. It allows LLE to be carried out in a closed system, eliminating the main problem of conventional continuous LLE, that is, the need for phase separators. Two types of membranes (flat sheet and HF) have been used in MMLLE, with HF having the larger extraction surface and thus leading to improved extraction efficiency. In addition to liquid–liquid partitioning, size exclusion also takes place in MMLLE, increasing the selectivity of the extraction. MMLLE has several advantages over LLE, such as higher selectivity, higher volume ratios and enrichment factors, and easy handling, while it is an almost solventless process that is easier to automate. SPE, on the other hand, has a number of advantages over LLE (i.e., it is faster and less labor-intensive, uses less solvent, achieves higher concentration factors, easily accommodates large sample volumes, allows multiple extractions, and has high potential for automation) that have led to its rapid development and increasing usefulness as a sample preparation technique. SPE is a separation process by which compounds that are dissolved or suspended in a liquid mixture are separated from other compounds in the mixture according to their physical and chemical properties. The separation ability of SPE is based on the preferential affinity of desired or undesired solutes in a liquid mobile phase for a solid stationary phase through which the sample is passed. Impurities in the sample are washed away while the analyte(s) of interest are retained on the stationary phase, or vice versa. SPE is thus used for quick, selective sample preparation and purification before chromatographic analysis. Selectivity is controlled by applying liquid chromatography principles. The various available phase chemistries are packed into an array of glass tubes, for example, or into 96-well plates or 47- or 90-mm flat disks, and are then processed using specially designed vacuum manifolds, as shown in Fig. 4. The manifold allows many samples to be processed simultaneously.
A typical cartridge SPE manifold can accommodate up to 24 cartridges, whereas a typical disk SPE manifold can accommodate 6 disks. Most SPE manifolds are equipped with a vacuum port. Application of vacuum speeds up the extraction process by pulling the liquid sample through the stationary phase. The analytes are collected in sample tubes inside or below the manifold after they pass

Fig. 4 SPE manifold.

through the stationary phase. Analytes retained on the stationary phase can then be eluted from the SPE cartridge with an appropriate solvent. A variety of SPE sorbents (stationary phases) have been assessed for the preconcentration, as well as the cleanup, of pharmaceuticals in water samples. ENV+, Oasis HLB, Strata-X, Lichrolut C18, and Lichrolut EN sorbents are employed most often because they give better recovery of both polar and nonpolar compounds and have greater capacity than alkyl-bonded silicas. A common analytical method for the determination of pharmaceutical residues in wastewater samples includes the use of octadecylsilica, polymeric, or hydrophilic–lipophilic balanced (HLB) supports for online SPE of water samples, with either disks or, most frequently, cartridges at low pH (typically pH 2). The selection of an appropriate solid phase is a difficult task, since the recoveries obtained for some compounds can be low. This problem is more evident in the case of multiresidue methods that determine several classes of pharmaceuticals simultaneously. In those cases, a compromise must be made among the solid phases providing the best recoveries for each class of compounds. SPE is typically performed manually, but this approach has some significant disadvantages: (1) manual (off-line) SPE is time-consuming, owing to the limited speed and mass diffusion of the analyte(s) in the sorbent mass packed in the cartridge, as well as labor- and cost-intensive, and it compromises productivity; (2) it is cumbersome to perform, since it often requires a series of steps before reaching a concentrated extract suitable for chromatographic analysis; (3) exposure to hazardous or infectious matrices (such as urban wastewater) involves safety issues; (4) the recovery of the analyte(s) can vary from batch to batch, causing reproducibility problems; and (5) during off-line sample preconcentration there is also the risk of losses due to evaporation or degradation. By automating the process, these problems can be eliminated and the following benefits gained: (1) direct injection of untreated samples; (2) conditioning, washing, elution, and analyte(s) enrichment take place automatically in online systems; (3) elimination of the conventional manual sample pretreatment steps; (4) a faster procedure, reducing the sample preparation time and therefore increasing sample throughput; (5) methods less prone to errors, resulting in overall better performance, improved precision and accuracy, and better reproducibility; (6) reduced health risks; (7) samples can be run unattended, for instance overnight or over the weekend; (8) higher sensitivity in online configurations, owing to the transfer and analysis of the whole extracted volume; (9) the analysis of the whole volume during online SPE leads to lower limits of detection (LODs) and, alternatively, smaller sample volumes may be used to obtain sufficient sensitivity for a large variety of compounds; (10) low solvent consumption, thereby decreasing the cost of organic solvent waste disposal; and (11) SPE coupled online to LC also has the advantage that it is not necessary to remove all residual water from the cartridges, because the elution solvents are compatible with the LC separation methods.
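The preconcentration gains referred to in points (8) and (9) above can be stated with a simple relation. As an illustrative sketch using generic symbols (not taken from a named method), the enrichment factor achievable by SPE is bounded by the ratio of loaded sample volume to elution volume, scaled by the recovery:

\[
EF \;=\; \frac{C_{\text{eluate}}}{C_{\text{sample}}} \;=\; R\,\frac{V_{\text{sample}}}{V_{\text{eluate}}},
\]

where \(R\) is the fractional recovery (0–1). For example, loading 500 mL of sample, eluting in 1 mL, and assuming \(R = 0.9\) gives \(EF = 450\), which is what turns a 1 ng/L ambient concentration into a measurable 450 ng/L extract.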
In summary, online procedures are particularly attractive in situations where, for example, large numbers of samples or sample series have to be analyzed routinely with high sensitivity, or when hazardous or highly infectious materials have to be processed, whereas off-line procedures are favorable for their applicability to on-site sampling and for the opportunity to inject the same extract several times. Among other aspects of SPE, significant efforts have been devoted to developing new, advanced sorbent materials (e.g., dual-phase polymeric sorbents, molecularly imprinted polymers (MIPs), immunosorbents, carbon nanotubes (CNTs), electrospun nanofibers (NFs), etc.), in an endeavor to improve selectivity and/or specificity toward the target analyte(s), sorptive capacity, detectability, and chemical stability. Moreover, SPE speed disks are used, although less frequently, as an alternative to SPE cartridges: since the particle size of the sorbent in the disks is smaller (8–12 μm) than in conventional cartridges (40–80 μm), they have a higher surface area for sorbent/sample contact, thus enabling work at higher flow rates and shorter extraction times and making them more suitable for isolating analyte(s) from high-volume samples. Moreover, they offer lower detection and quantification limits and a higher concentration ratio compared to conventional SPE cartridges. Dispersive SPE (dSPE) is an extraction method that requires significantly less sample preparation time than conventional SPE, because the sorbent (e.g., primary-secondary amine (PSA), C18, graphite, etc.) is efficiently dispersed in the sample matrix instead of the sample passing slowly through a cartridge, as in conventional SPE. It is a highly selective method and can be easily applied to different water matrices, while the recovery rates achieved are quite similar to those of SPE.


Magnetic SPE (mSPE) is a new and promising extraction method based on the use of magnetic adsorbents; in recent years it has received considerable attention, mainly because of its high speed, compatibility, high selectivity and extraction efficiency, and reduced consumption of sample and toxic solvents. In mSPE, the sorbents are dispersed in the sample solution instead of being packed into SPE cartridges, and phase separation is achieved by applying an external magnetic field outside the sample container. The analyte(s) adsorb onto the magnetic adsorbent, and the adsorbent bearing the adsorbed analyte(s) can then easily be isolated from the solution by means of the external magnetic field. Recent research has been directed toward developing efficient, economical, and miniaturized sample preparation methods, since these have several distinct advantages, such as faster analysis, low cost, smaller sample volumes, high sensitivity, portability, and reduced consumption of reagents and organic solvents (a few μL), among others. To this end, liquid-phase microextraction (LPME) was developed and has successfully overcome many drawbacks of conventional LLE and SPE methods. The three extraction modes of LPME are the following: (i) Single-drop microextraction (SDME): SDME consists of suspending a drop of solvent at the tip of a microsyringe needle in contact with the aqueous sample (i.e., direct immersion) or its headspace. The main advantages of this approach are its low cost, its low consumption of sample and solvents, the combination of extraction, preconcentration, and sample introduction in one step, and the fact that the possibility of carryover between analyses is negligible. Its disadvantages include the potential evaporation of the extracting solvent drop (especially when samples are stirred vigorously to speed up the extraction), the formation of air bubbles, the fact that it is a time-consuming and not very robust extraction process, and that in many cases equilibrium cannot be attained even after a long time. (ii) Hollow fiber-based LPME (HF-LPME): HF-LPME is a simple, robust, and inexpensive extraction process, with high potential for automation, in which the extracting phase is placed inside the lumen of a porous polypropylene HF or a semipermeable membrane, where the extraction solvent is protected and stabilized. Hence, the extraction phase is not in direct contact with the sample solution, minimizing loss of the extraction phase even when the solution is stirred vigorously, while the interfacial area between solvent and aqueous sample, and hence the extraction efficiency, is increased. Other advantages of HF-LPME are the reduced consumption of solvents and its high sensitivity and enrichment of analytes, while the small pore size of HF modules prevents large particles from entering the accepting phase, yielding very clean extracts. (iii) Dispersive liquid–liquid microextraction (DLLME): DLLME is based on a ternary solvent system, as in conventional LLE; an appropriate mixture of extraction and disperser solvents is rapidly injected by syringe into the aqueous sample, and a cloudy solution forms. Phase separation is then performed by centrifugation, and the target analyte(s) in the settled phase can be determined by conventional analytical techniques.
It is a simple, fast, low-cost, and eco-friendly extraction technique using reduced volumes of extracting solvents (μL), with very short extraction times (a few seconds) and high recovery and enrichment factors. Solid-phase microextraction (SPME), on the other hand, first presented in the early 1990s at the University of Waterloo, has proved to be a considerable advancement over conventionally used preconcentration techniques. Besides time and cost efficiency, SPME is a simple technique with no demand for organic solvents that can also be easily automated with other analytical instruments. Another advantage of SPME is that it does not alter the chemical composition or the concentrations of the analyte(s), because only a very small amount of the target analyte(s) is removed from the sample. The SPME device is a fiber coated with an extracting phase (usually < 1 μL), which can be a liquid (polymer) or a solid (sorbent), and which extracts different kinds of analytes, from volatile to nonvolatile, from different kinds of media, in either the liquid or the gas phase. The quantity of analyte(s) extracted by the fiber is proportional to its concentration in the sample, provided equilibrium is reached or, in the case of short-time pre-equilibrium sampling, with the help of convection or agitation. After extraction, the SPME fiber is transferred to the injection port of a separation instrument, such as a GC, where desorption of the analyte takes place and the analysis is carried out. To date, the most practical configuration of SPME uses a small fused-silica fiber, usually coated with a polymeric phase. SPME is ideal for field monitoring, since there is no need to measure the volume of the extracted sample; the SPME device can therefore be exposed directly to the investigated system in order to quantify the target analyte(s). There are three basic modes of SPME: (i) direct extraction, where the coated fiber is inserted into the sample and the target analyte(s) are transported directly from the sample matrix to the extracting phase; (ii) headspace extraction, where the analyte(s) are extracted from the gas phase equilibrated with the sample, so that the fiber is protected from adverse effects caused by matrix interferences (e.g., nonvolatile, high-molecular-weight substances) and matrix modifications (e.g., pH adjustment) are possible without damaging the fiber; and (iii) extraction with membrane protection, where the fiber is separated from the sample by a selective membrane that lets the analyte(s) through while blocking the interferences. Fiber coatings (e.g., polydimethylsiloxane (PDMS), divinylbenzene (DVB), carboxen (CAR), polyethylene glycol (PEG), Carbowax (CW), PDMS/DVB, PDMS/CAR and CW/DVB, molecularly imprinted polymers (MIPs), ionic imprinted polymers, immunosorbents, etc.) are designed to be quite hydrophobic, in order to exclude water but still efficiently extract the analyte(s). The physicochemical characteristics of the analyte(s), the required concentration and detection levels, the temperature, and the solution pH are some of the main factors that should be considered when selecting SPME sorbents. Despite the many advantages of SPME mentioned above, its main drawback is that the extraction fiber is expensive and fragile; LPME, on the other hand, appears more affordable and offers better repeatability than SPME, owing to the use of only small amounts of organic solvent (μL) and the absence of sample carryover.
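The proportionality between the extracted amount and the sample concentration noted above can be made explicit. A commonly quoted form of the equilibrium SPME relation, reproduced here as a sketch with generic symbols, is:

\[
n \;=\; \frac{K_{fs}\,V_f\,V_s}{K_{fs}\,V_f + V_s}\,C_0,
\]

where \(n\) is the amount of analyte extracted, \(K_{fs}\) the fiber coating/sample distribution constant, \(V_f\) the coating volume, \(V_s\) the sample volume, and \(C_0\) the initial analyte concentration. When \(V_s \gg K_{fs} V_f\), this reduces to \(n \approx K_{fs} V_f C_0\), which is independent of the sample volume; this is why the volume of the extracted sample need not be measured in field sampling, as mentioned above.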


Other alternatives to SPME used in the environmental analysis of various pharmaceutical compounds in aqueous samples are stir-bar sorptive extraction (SBSE) and microextraction by packed sorbent (MEPS). SBSE is a novel solventless, simple, and fast extraction method, with good repeatability and low detection limits, which is based on the same principles as SPME. In SBSE, analytes are extracted from the aqueous sample into a polymer coating (i.e., polydimethylsiloxane (PDMS)) on a magnetic stir bar. The analyte is extracted into the extraction phase according to its octanol–water partitioning coefficient (Kow) and the phase ratio. After a predetermined extraction time (typically between 30 and 240 min), the analyte(s) can be introduced quantitatively into the analytical system by thermal desorption (for GC) or liquid desorption (for LC). The main advantage of SBSE is the significantly larger volume of coating compared to SPME (i.e., 25–100 μL of PDMS in SBSE instead of 0.5 μL in SPME), resulting in higher recoveries and a higher sample capacity. MEPS is a miniaturized format of SPE designed to handle small sample volumes (i.e., 10 μL) in a sorbent bed incorporated into a syringe (100–250 μL). MEPS can combine the sample processing, extraction, and injection steps in a fully automated, online sampling/injection device for GC and/or LC. In MEPS, the packing is integrated directly into the syringe and not into a separate column, as in commercial SPE. The MEPS technique has many advantages: it is a simple, easy-to-use, rapid, cheap, low-solvent, fully automated online procedure and can achieve the same sensitivity as SPE and LLE methods, while the extraction time required is significantly shorter (only about 5 min). It is noteworthy that the packed syringe can be used several times (even up to 200 extractions) without any loss of extraction capacity. Finally, QuEChERS (quick, easy, cheap, effective, rugged and safe), developed in 2003 by Anastassiades et al., is a simple sample preparation technique based on liquid partitioning with an organic solvent followed by a dispersive SPE (dSPE) cleanup. The QuEChERS procedure involves an initial extraction with acetonitrile, followed by a partitioning step after the addition of a salt mixture. An aliquot of the raw extract is then cleaned up by dSPE, and the final extract in acetonitrile is directly amenable to determinative analysis by LC and/or GC. It has several advantages over traditional extraction procedures, such as its simplicity, ease of use, low cost, flexibility, high recoveries (higher than 85%), accuracy, low consumption of (nonchlorinated) solvents, low generation of waste, and high selectivity and sensitivity, and it does not require advanced analytical expertise or laboratory equipment. It should be noted that this technique is very flexible and serves as a template that can be modified depending on the analyte(s) properties, the matrix composition, and the equipment and analytical techniques available in each laboratory.
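The dependence of SBSE recovery on Kow and the phase ratio mentioned above is often approximated as follows. This is a sketch with generic symbols, and it assumes the PDMS/water partition coefficient can be approximated by Kow:

\[
\beta \;=\; \frac{V_{\text{sample}}}{V_{\text{PDMS}}},
\qquad
\text{Recovery} \;=\; \frac{K_{ow}/\beta}{1 + K_{ow}/\beta}.
\]

For a 10 mL sample and a 50 μL PDMS coating, \(\beta = 200\); an analyte with \(\log K_{ow} = 4\) would then be recovered at about \(50/51 \approx 98\%\), whereas a polar analyte with \(\log K_{ow} = 2\) would reach only about \(0.5/1.5 \approx 33\%\). The larger coating volume of SBSE lowers \(\beta\), which is what yields the higher recoveries relative to SPME noted above.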

Recent Developments in Chromatographic Systems and Techniques

The application of advanced analytical systems, such as gas chromatography–mass spectrometry (GC-MS) and GC tandem MS (GC-MS/MS), or liquid chromatography–mass spectrometry (LC-MS) and LC tandem MS (LC-MS/MS), in environmental chemical analysis has made the determination of a wider range of substances, including pharmaceuticals, feasible and has thus permitted a more comprehensive evaluation of the presence of environmental contaminants. LC tandem MS is more commonly used in pharmaceutical analysis because it is characterized by high sensitivity and the ability to provide compound confirmation, compared, for example, with conventional LC with UV detection. Separation and detection of compounds having the same molecular mass but different product ions is feasible with LC tandem MS, even if they coelute. Therefore, most investigations nowadays apply tandem MS detection, since it provides increased analytical sensitivity and selectivity in complex matrices, such as wastewater effluents. Nevertheless, both GC-MS and LC-MS techniques have some drawbacks. Before GC-MS analysis, derivatization of polar pharmaceuticals is necessary. This is performed with various derivatization agents, depending on the compounds investigated. These agents can be highly toxic and carcinogenic, such as diazomethane, or, less frequently, acid anhydrides, benzyl halides, and alkyl chloroformates. The derivatization step can influence the accuracy of the method, as losses of analytes or incomplete derivatization reactions can occur. Some polar pharmaceutical compounds, such as the β-blockers atenolol and sotalol, can only be analyzed by LC-MS and not by GC-MS techniques, because of incomplete derivatization of their functional groups. However, when analyzing highly contaminated samples, such as sewage effluents, by LC-electrospray ionization (ESI)-MS/MS, suppression of the ESI is likely to occur. To confront such analytical problems and obtain accurate and reproducible data, the addition of a cleanup step during sample preparation is required. To control the procedure and verify its accuracy, the addition of an appropriate surrogate standard before SPE can be very useful. In general, slightly higher limits of detection (LODs) have been reported with LC-MS/MS methods than with GC-MS methods. However, the advantages of the LC-MS methodology, in terms of versatility, less complicated sample preparation (i.e., no derivatization is needed), and being the only method capable of detecting some very polar compounds, make it a very attractive alternative for pharmaceutical analysis. To determine unknown ions and identify chemical structures, exact mass analysis can be performed in tandem-in-time instruments, which are typically ion trap mass spectrometers. In such systems it is possible to configure electric and magnetic fields so that ions can be held in stable orbits for a period of time long enough to perform useful measurements on them. Two forms of mass spectrometers derive from this idea, the omegatron and the Fourier transform spectrometer; both make use of the cyclotron principle. The sensitivity obtained with these systems is very high, because they can record a complete mass spectrum of each pulse of ions introduced into the trapping volume. Both triple quadrupole and ion trap MS instruments may provide different product ions for specific applications.
The principle of operation of a triple quadrupole is to direct an isolated precursor ion into a collision cell and accelerate it through an inert gas via a voltage offset. In this way, fragmentation occurs for every ion that either enters the collision cell (precursor ion) or is formed in the collision cell (product ion). Thus, a triple quadrupole is

Fig. 5 Quadrupole ion trap. From http://www.specmetcrime.com/instrumentation.htm.

selective in isolating precursor ions. The principle of operation of an ion trap (Fig. 5) is fragmentation at a resonant frequency specific to the isolated precursor ion. In this way, the fragment ions are unable to fragment further. Thus, the ion trap technique is selective in both isolating and activating precursor ions. The resolution of the ion trap technique ranges from 100,000 to 1,000,000. The resolution is high for small molecules (< 200 m/z) and decreases with increasing mass (e.g., at 700 m/z). The unique feature of quadrupole ion trap MS/MS compared with other MS techniques, such as Q-TOF or triple quadrupole, is the capability of performing MS^n analysis, typically MS^3 or MS^4 for most unknown compounds, a very useful characteristic for structural elucidation by tracing the fragmentation pathway within ion fragments. Q-TOF MS/MS, described below, is by contrast unique in its ability to give accurate mass measurements (within 1–2 millimass units) of the fragment ions ejected from the collision chamber, thus providing high assurance of correct identification of unknowns, as well as an empirical formula for the fragment ions. The triple quadrupole MS/MS has the unique feature of neutral-loss scanning, in which quadrupoles 1 and 3 scan in tandem; it is used for identifying unknowns in the chromatogram that are structurally related to one another by fragmentation losses within the molecule. Time-of-flight MS (TOF-MS) is an interesting alternative technique that, compared with the previously described techniques, has the advantages of increased selectivity and the avoidance of false-positive findings. The principle of operation of TOF-MS is the use of differences in transit time through a drift region to separate ions of different masses. In this pulsed mode of operation, ions must be produced or extracted in pulses. The ions are accelerated by an electric field into a field-free drift region with a kinetic energy of qV, where q is the ion charge and V is the applied voltage. Since the ion kinetic energy is 0.5 mv^2, lighter ions acquire a higher velocity than heavier ions and reach the detector at the end of the drift region sooner. The resolution of modern Q-TOF instruments is on the order of 10^5, lower than that of the triple quadrupole, which is in the 10^8 range. In the case of Q-TOF, however, the resolution is not affected by scan speed or m/z; in other words, it is a "dynamic range," with the same resolution for low and high m/z at the same sensitivity. Q-TOF also offers the capability of adding an "ion mobility" dimension, which provides unrivaled separation and selectivity by combining chromatography, ion mobility, and mass spectrometry. The overall sensitivity of LC-TOF-MS operated in accurate-mass mode has often approached that obtained by the triple quadrupole operated in selected reaction monitoring (SRM) mode for the analysis of several pharmaceutical compounds (including antimicrobials) in wastewater effluents. However, the LC-TOF-MS technique has the disadvantage of a significantly narrower effective linear dynamic range than that provided by quadrupole instruments. The new quadrupole-orthogonal acceleration TOF-MS (Q-TOF-MS) is of particular interest for the confirmation of analyte identities in complex matrices, because it provides accurate masses for both parent and product ions, as well as a full-scan product-ion spectrum.
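The flight-time relation that follows from the kinetic-energy balance described above can be written out explicitly (a standard textbook derivation, restated here with generic symbols):

\[
qV \;=\; \tfrac{1}{2}mv^{2}
\;\Rightarrow\;
v \;=\; \sqrt{\frac{2qV}{m}}
\;\Rightarrow\;
t \;=\; \frac{L}{v} \;=\; L\sqrt{\frac{m}{2qV}},
\]

so the measured flight time \(t\) over a drift length \(L\) maps directly onto the mass-to-charge ratio via \(m/q = 2Vt^{2}/L^{2}\): ions with twice the m/q arrive \(\sqrt{2}\) times later.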
Although triple quadrupole instruments can be used for confirmation, Q-TOF-MS has the advantage of allowing both screening and confirmation of analytes, because the relevant ions can be extracted from the MS/MS spectra and accurate masses are provided for the product ions. Concerning the MS ionization mode, there is a tendency to prefer ESI, as it provides very good results for both polar and nonpolar compounds, as well as for compounds with poor thermal stability. Nowadays, Q-TOF-MS is most commonly used in pharmaceutical analysis, while there is also a noted increase in the use of linear ion traps and high-mass-resolution mass spectrometers (e.g., Orbitrap™) for the identification and quantification of pharmaceuticals and related compounds, and, as such, some hybrid technologies are emerging. However, triple quadrupole instruments remain in use where the greatest sensitivity and reproducibility are needed for conjugate quantification. Moreover, the multifaceted combination of Q-TOF-MS and triple quadrupole will be of great value. A novel approach to chromatographic separation is ultra-performance liquid chromatography (UPLC). UPLC uses columns packed with sub-2-μm particles, which enable elution of sample components in much narrower, more concentrated bands, resulting in better chromatographic resolution and increased peak capacity. Reducing the particle diameter from 5 or 3 μm (typical high-performance liquid chromatography, HPLC, columns) to 1.7 μm (UPLC) results in a multifold increase in linear velocity (speed) and efficiency (peak capacity). However, reducing the particle size by a factor of three increases the back pressure by a factor of 27. Therefore, to achieve the benefits of operating at higher linear velocities it is necessary to run at higher pressures (10,000–15,000 psi) and to use specially designed instruments. UPLC produces superior peak shapes and sensitivity compared with conventional HPLC. One of the major advantages of UPLC is the ability to shorten analysis time without reducing peak resolution. This is achieved by scaling the separation from the existing LC methodology to UPLC while keeping the ratio of column length to particle size (L/dp) constant, which increases throughput without compromising analytical performance.
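The 27-fold back-pressure figure quoted above follows from the standard column pressure-drop relation, sketched here with generic symbols (φ is the flow resistance factor, η the mobile-phase viscosity, L the column length, u the linear velocity, and dp the particle diameter):

\[
\Delta P \;=\; \frac{\phi\,\eta\,L\,u}{d_p^{2}},
\qquad
u_{\text{opt}} \propto \frac{1}{d_p}
\;\Rightarrow\;
\Delta P \propto \frac{1}{d_p^{3}} \ \ (\text{fixed } L),
\]

so reducing the particle size by a factor of three while operating at the optimal velocity raises the pressure by \(3^{3} = 27\); going from 5 μm to 1.7 μm gives \((5/1.7)^{3} \approx 25\), consistent with the figure above.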

Two-dimensional GC and LC systems have also been gaining interest in recent years, in an attempt to increase the accuracy and lower the detection limits of the analysis. Two-dimensional GC (GC×GC) chromatography employs a pair of GC columns (generally a nonpolar and a polar column) connected in series through a modulator. Effluent from the first column is trapped in the modulator for a fixed period of time (the modulation time) before being focused and injected into the second column. The chromatograms obtained through repeated trapping and injection are rendered in two dimensions, providing a two-dimensional chromatogram with boiling point and polarity on the respective axes. The most important outcome of GC×GC has been the great increase in the peak capacity of the GC experiment. Therefore, for a complex sample it is possible to present many more peaks in a chromatogram, because of the expansion of the available separation space, yielding better separations as well. Two-dimensional GC-MS methods have been successfully applied to the analysis of pharmaceutical residues in water and wastewater, with detection limits down to the ng/L range. Similarly, two-dimensional liquid chromatography (LC×LC) techniques have been adopted for pharmaceutical analysis in order to improve selectivity and peak capacity in the analysis of complex mixtures. The ability to switch easily between selective and comprehensive LC×LC, combined with software that enables enhanced instrument control and visualization of the multidimensional data, has renewed interest in these techniques for pharmaceutical analysis. The structural similarity of impurities or degradation products requires subtle changes in selectivity to resolve a critical pair without compromising the resolution of the other components in the mixture. Combining reversed-phase columns with orthogonal selectivities in LC×LC is an effective approach for separating structurally similar impurities. Recently, the state of the art for achiral–chiral separations was dramatically improved by the development of new column technologies for ultrafast chiral separations. For biopharmaceutical (large-molecule) applications, LC×LC is the most powerful technique to address the complex mixtures (including hundreds or even thousands of chemically distinct analytes) often encountered. The use of LC×LC for the concurrent analysis of biopharmaceuticals, using combinations of reversed-phase LC, size-exclusion chromatography (SEC), ion-exchange chromatography, and capture–elution chromatography (such as with protein A), has been successfully applied in both targeted and comprehensive LC×LC separations.
Nuclear magnetic resonance (NMR) spectroscopy is another interesting analytical technique that has developed rapidly in the field of pharmaceutical analysis, especially after 2002, and it continues to show great potential for the determination of the 3-D structure of macromolecules, as well as for the detection and quantification of pharmaceuticals and related compounds. Rapid quantification of mixtures by proton NMR spectroscopy is feasible because the relationship between molar concentration and integrated signal area is generally linear, minimizing the need for calibration graphs. High-resolution NMR spectra are very useful for the identification of unknown compounds, and the use of NMR in conjunction with the newly developed LC, GC, and MS techniques is expected to play an important role in pharmaceutical analysis.
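The linear area–concentration relationship mentioned above is the basis of quantitative NMR (qNMR). In a commonly used form, sketched here with generic symbols, the concentration of an analyte follows from comparison with an internal standard of known concentration:

\[
c_{\text{analyte}} \;=\; c_{\text{std}} \cdot \frac{I_{\text{analyte}}}{I_{\text{std}}} \cdot \frac{N_{\text{std}}}{N_{\text{analyte}}},
\]

where \(I\) is the integrated signal area of the chosen resonance and \(N\) the number of protons giving rise to that resonance. Because the response per proton is essentially the same for all compounds, a single internal standard suffices and no compound-specific calibration curve is needed.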

Multiresidue Methods

During the past several years, a number of studies have focused on the analysis of specific compounds or of specific therapeutic groups, for example, steroids, antibiotics, antiinflammatories, and β-blockers. Currently, there is an intense effort within the research community to develop multiresidue methods that facilitate the simultaneous analysis of a number of different pharmaceutical groups. Analyzing a wide spectrum of compounds simultaneously is difficult, since it is not possible to achieve the best analysis conditions for all the compounds at the same time. Conversely, some procedures have allowed pharmaceuticals to be measured at trace levels, but with very laborious and tedious protocols requiring large sample volumes. The actual need, however, is to be able to screen samples for a large number of compounds while saving time and cost, since chromatographic analyses are expensive and very time-consuming. In 2006, one of the first multiresidue studies was published, reporting the development of a sensitive multiresidue analytical method, based on off-line SPE followed by LC-ESI-MS/MS (QQQ), for the simultaneous analysis of an extended list of 29 pharmaceuticals in both surface waters and wastewaters. This list incorporates different therapeutic classes. The target compounds were selected on the basis of their occurrence and ubiquity in the aquatic environment, according to the information available in the literature, as well as their high human consumption worldwide. The pharmaceuticals investigated were analgesics and antiinflammatories (ketoprofen, naproxen, ibuprofen, indomethacin, diclofenac, mefenamic acid, acetaminophen, and propyphenazone), lipid regulators and cholesterol-lowering statin drugs (clofibric acid, gemfibrozil, bezafibrate, pravastatin, and mevastatin), psychiatric drugs (carbamazepine, fluoxetine, and paroxetine), an antiulcer agent (lansoprazole), histamine H1 and H2 receptor antagonists (loratadine, famotidine, and ranitidine), antibiotics (erythromycin, azithromycin, sulfamethoxazole, trimethoprim, and ofloxacin), and β-blockers (atenolol, sotalol, metoprolol, and propranolol). More recently, in 2015, the occurrence of 67 pharmaceutical and antifungal residues in the Danube river, Romania, was studied using solid-phase extraction (SPE) and LC-Q Exactive Orbitrap high-resolution MS in both full-scan (FS) MS and targeted MS/MS


modes. The FS method showed satisfactory analytical performance. The evaluation of recovery concluded that 75% of the compounds showed recoveries between 85% and 115%, and 10% of the compounds showed recoveries between 65% and 85%. The level of detection was lower than 5 ng/L for 66% of the compounds, between 5 and 10 ng/L for 22%, and between 10 and 25 ng/L for 14% of the compounds. The coefficients of determination (R²) were higher than 0.99 for 79% of the compounds, over a linearity range of 2.5–50 ng/L. Targeted MS/MS analysis was performed for confirmatory purposes. The analysis of water samples from the Danube river revealed the occurrence of 23 compounds, including diclofenac, carbamazepine, sulfamethoxazole, tylosin, indomethacin, ketoprofen, and piroxicam, together with antifungals such as thiabendazole and carbendazim. Carbamazepine was detected at a maximum concentration of 40 ng/L. The highest concentration detected was 166 ng/L, for diclofenac.

Implementation of "Green Analytical Chemistry"

Although analytical methodologies are applied in various laboratories all over the world to assess the state of pollution in environmental aqueous matrices, soils, sediments, and sludge, they are also characterized by high and in many cases uncontrolled consumption of hazardous reagents and toxic organic solvents, which can negatively influence the environment. In some cases, the chemicals employed for analysis are as toxic as, or even more toxic than, the compounds being determined. General public concern over environmental protection has induced the chemistry community, and especially analytical chemistry laboratories, to examine the side effects of their analytical methodologies and procedures, looking for alternatives that could reduce their environmental impacts. Thus, "green chemistry" was first defined at the beginning of the 1990s as the use of chemistry methodologies in a way that achieves a reduction or elimination of the use and/or generation of feedstocks, products, by-products, solvents, reagents, etc. that are hazardous to human health and/or the environment. The main objectives of "green analytical chemistry" (GAC) are to develop new analytical methodologies, or to modify old ones, so as to incorporate cleaner and environmentally friendly procedures that use and produce smaller amounts of hazardous chemicals, without compromising overall efficiency. Despite the green analytical strategy, the efficiency, accuracy, and sensitivity of the new or modified "greener" analytical techniques should not be diminished, guaranteeing their reliability and effectiveness even at low concentrations of analyte(s) in complex environmental samples. At the end of the 1990s, Anastas set out twelve principles of green (analytical) chemistry, which are as follows: (1) direct analysis avoiding sample preparation, where possible; (2) reduction of sample volume; (3) reduction of energy consumption; (4) reduced use of hazardous/toxic reagents; (5) waste prevention and management; (6) reduced or no use of derivatization; (7) use of automated and miniaturized methodologies; (8) in situ analysis; (9) use of renewable sources; (10) development of methods for the simultaneous analysis of multiple analytes; (11) use of integrated processes in order to reduce the consumption of reagents and the energy requirements; and (12) increased safety of the chemist(s)/analyst(s). These "rules" were designed to help chemists and analysts working in analytical laboratories to achieve the goal of sustainability, and they are still applied today. Nowadays, the pursuit of "green chemistry" is becoming a great challenge for chemists, who must develop new reagents and methodologies that achieve the necessary social, economic, and environmental objectives regarding environmental safety, pollution prevention, and sustainability. The development of modern sample preparation techniques, such as SPME, SBSE, MEPS, and MMLLE, among others, is very promising for reducing solvent consumption and waste generation, while new sorbent materials, such as graphene, CNTs, MIPs, immunosorbents, etc., can potentially offer greener approaches in the field of sample preparation. Miniaturization is another way to minimize the side effects of analytical methodologies, since the sample and solvent volumes are significantly reduced, which also reduces the time required for the analysis and the energy requirements.
Moreover, the replacement of organic solvents in chromatographic techniques by renewable solvents, supercritical fluids, ionic liquids, superheated water, etc., represents the modern green approach to eluents and mobile phases in chromatographic analysis. Concerning chromatographic equipment, some analytical techniques, such as GC, are considered greener than others, such as HPLC, since they require smaller amounts of solvents and sample and less time for analysis.

Conclusions

Among the many contaminants for which scientific interest has emerged during the past years, pharmaceuticals are of particular interest and concern because they enter the water cycle on a continuous basis through their use in human and animal health care. Pharmaceutical consumption and environmental concentrations show varied patterns, attributable to differences between countries in wealth, habits, and practices regarding drug use, and also to the sewage collection and treatment schemes implemented. Many analytical techniques and extraction procedures, matched to the relevant chemical groups, had to be developed over the past several years to determine the most heavily consumed compounds. These efforts are expected to continue, since the list of chemical contaminants detectable down to the nanogram-per-liter level will keep growing, keeping pace with technological advances. Another important topic is the identification of pharmaceutical metabolites and of the intermediate products formed by the biotic or abiotic processes in which the parent compounds participate once released into the environment or during urban wastewater treatment; there is still a long way to go in this field of research. A series of advances in existing techniques and systems is expected. Given the recent examples of the development of UPLC-MS/MS and Q-TOF systems, as well as two-dimensional GC and LC techniques and NMR spectroscopy, it is apparent that new systems and more sensitive instrumental combinations will soon evolve, allowing even faster and easier identification, elucidation, and quantification of pharmaceutical residues in environmental matrices within the framework of green analytical chemistry, providing both environmental and economic benefits.

See also: Exposure Science: Monitoring Environmental Contaminants; Infectious/Medical/Hospital Waste: General Characteristics; Methodologies for Assessing Bioaerosol Exposures; Methods for Estimating Exposure to Metals in Drinking Water; Pharmaceuticals: Environmental Effects; Waterborne Disease Surveillance.

Further Reading
Aziz-Zanjani, M.O., Mehdinia, A., 2014. A review on procedures for the preparation of coatings for solid phase microextraction. Microchimica Acta 181 (11–12), 1169–1190.
Bester, K., 2007. Personal care compounds in the environment: Pathways, fate and methods for determination. Wiley-VCH, Germany.
Białk-Bielińska, A., Kumirska, J., Borecka, M., Caban, M., Paszkiewicz, M., Pazdro, K., Stepnowski, P., 2016. Selected analytical challenges in the determination of pharmaceuticals in drinking/marine waters and soil/sediment samples. Journal of Pharmaceutical and Biomedical Analysis 121, 271–296.
Chitescu, C., Kaklamanos, G., Nicolau, A., Stolker, A., 2015. High sensitive multiresidue analysis of pharmaceuticals and antifungals in surface water using U-HPLC-Q-Exactive Orbitrap HRMS. Application to the Danube River Basin on the Romanian territory. Science of the Total Environment 532, 501–511.
Gros, M., Petrovic, M., Barcelo, D., 2006. Development of a multi-residue analytical methodology based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for screening and trace level determination of pharmaceuticals in surface and wastewaters. Talanta 70, 678–690.
Kümmerer, K. (Ed.), 2004. Pharmaceuticals in the environment: Sources, fate, effects and risks, 2nd edn. Springer, Germany.
Lacina, P., Mravcová, L., Vávrová, M., 2013. Application of comprehensive two-dimensional gas chromatography with mass spectrometric detection for the analysis of selected drug residues in wastewater and surface water. Journal of Environmental Sciences 25 (1), 204–212.
Nikolaou, A., Meric, S., Fatta, D., 2007. Occurrence patterns of pharmaceuticals in environmental matrices. Analytical and Bioanalytical Chemistry 387, 1225–1234.
Ribeiro, C., Ribeiro, A.R., Maia, A.S., Gonçalves, V.M., Tiritan, M.E., 2014. New trends in sample preparation techniques for environmental analysis. Critical Reviews in Analytical Chemistry 44 (2), 142–185.
Rodriguez-Mozaz, S., Lopez de Alda, M.J., Barcelo, D., 2007. Advantages and limitations of on-line solid phase extraction coupled to liquid chromatography-mass spectrometry technologies versus biosensors for monitoring of emerging contaminants in water. Journal of Chromatography A 1152, 97–115.
Souza-Silva, É.A., Jiang, R., Rodríguez-Lafuente, A., Gionfriddo, E., Pawliszyn, J., 2015. A critical review of the state of the art of solid-phase microextraction of complex matrices I. Environmental analysis. TrAC Trends in Analytical Chemistry 71, 224–235.
Ternes, T.A., Joss, A. (Eds.), 2006. Human pharmaceuticals, hormones and fragrances: The challenge of micropollutants in urban water management. IWA Publishing, UK.

Relevant Websites
http://www.specmetcrime.com/instrumentation.htm - JEOL Scientific Instruments Worldwide.
http://www.fda.gov/cder - US Food and Drug Administration.
https://www.copybook.com/companies/rssl/articles/nmr-and-pharmaceutical-analysis - Recent advances in two-dimensional liquid chromatography for pharmaceutical and biopharmaceutical analysis.

Agro-Industrial Waste Conversion Into Medicinal Mushroom Cultivation
Arianne V Julian and Renato G Reyes, Central Luzon State University, Muñoz, Philippines
Fumio Eguchi, Tokyo University of Agriculture, Tokyo, Japan
© 2019 Elsevier B.V. All rights reserved.

Introduction

Improper waste management in agro-industrial operations is among the world's major environmental problems. Annually, the earth produces around 200 billion tons of organic matter. However, most of this organic material is not directly fit for human or animal consumption; thus, agricultural and industrial wastes can become a serious source of environmental pollution if left untreated and improperly handled. In addition, these residues contain high proportions of lignocellulosic material whose complex structure makes decomposition difficult. Lignin prevents the degradation of cellulose and hemicellulose fibers, which inhibits the use of these materials as a source of bioenergy. Several chemical and physical approaches have been applied to facilitate the removal of lignin from cellulose and hemicellulose, such as the use of alkali, acid, steam explosion, and radicalization. However, these techniques not only require high energy input but also generate pollutants. Consequently, there has been increasing interest in the bioconversion of these materials as renewable feedstocks in energy-saving and environment-friendly biotechnological processes. Waste products generated from agro-forestry and agro-industrial production, such as rice straw, wheat straw, sawdust, sugarcane bagasse, coconut and banana residues, coffee husk, and corn cob, can be considered inexpensive and readily available natural carbon sources. Moreover, most agro-industrial wastes contain bioactive compounds, which makes them excellent alternative feedstocks for the production of mushrooms, enzymes, vitamins, antioxidants, antibiotics, animal feed, biofuels, and biofertilizer through solid state fermentation (SSF). At present, mushroom cultivation can be considered the most cost-effective and efficient practice for the biotransformation of agro-industrial wastes into protein-rich food.

Mushrooms were regarded as agents of immortality and food for the gods by the Egyptians and Romans in the ancient period. For centuries, the Japanese and Chinese have used wild mushrooms as food and medicine, whereas cultivation in the West was first recorded in 1650 in Paris, France, after Agaricus bisporus was discovered growing in melon crop compost. Since then, several commercial cultivation techniques have been practiced and various materials used as substrates. Over the past decades, mushroom research has focused on nutritional values, medicinal properties, and the successful utilization of agro-industrial residues as substrate. Some edible mushrooms possess a variety of hydrolyzing and oxidizing enzymes capable of degrading lignin, thereby using its components as nutrients to support fruiting body growth. The expanding attention given to mushroom research and cultivation can be attributed to its low-cost production and nutritional value. Wild and cultivated mushrooms are known to be excellent sources of protein, minerals, carbohydrates, vitamins, and fiber. In addition to their nutritional value, some mushrooms also possess bioactive compounds, and the medicinal properties of several edible mushrooms have been well documented: antiviral, antifungal, cholesterol-reducing, antitumor, and anticancer activities, among others. Mushroom cultivation therefore plays a vital role in producing protein-rich food from agro-industrial wastes while providing economic and health benefits.

Bioconversion of Agro-Industrial Residues Through Mushroom Cultivation

The activities of all living systems, primarily mankind, result in the production of various waste materials. Among these residues are agro-industrial wastes with a high lignocellulosic proportion, which are the most abundant renewable resources. Through various biotechnological practices using fungi, these residues can be converted into improved ecological products and processes, including mushroom, enzyme, and antibiotic production, bioremediation of toxic materials, biofuel production, and single-cell protein and biosurfactant production, among many others. It is estimated that the annual lignocellulosic biomass in the world exceeds 200 billion tons, of which 4 billion tons are residues from 27 food crops. Among these food crops, cereal by-products account for 3 billion tons per year; cereal residues include rice straw, wheat straw, barley residues, and sorghum, maize, and millet stalks. It is projected that an increase of approximately 15% in the food supply could be generated if waste materials were reduced by 30%–50%. As of 2000, the world generated more than 700 and 670 million metric tons of wheat straw and rice straw residues, respectively (Table 1). Owing to their expanding agricultural production, China and India have become the leading generators of crop residues; China in particular accounts for the highest volume of agricultural residues, including rice, wheat, and corn by-products. The abundance of these materials in the environment has led to various bio-based industrial applications. Among the different applications of microbial technology, cultivation of edible and medicinal mushrooms demonstrates the most significant and cost-effective biotransformation of lignocellulosic wastes into valuable products. One kilogram of dry lignocellulosic waste can produce about 1 kg of fresh mushrooms. These dry materials roughly triple in weight upon the addition of water; hence, 330 kg of dry waste would become about 1000 kg of moist substrate, from which 200–300 kg of fresh mushrooms could be harvested at 20%–30% biological efficiency (the worked calculation is sketched below).
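The worked arithmetic above can be restated as a short, hedged Python sketch; the function names are hypothetical, and, following the paragraph's own convention, biological efficiency is applied here to the moistened substrate weight.

```python
# Restatement of the substrate arithmetic above (illustrative names and figures).

def wet_weight(dry_kg: float, factor: float = 3.0) -> float:
    """Dry lignocellulosic waste roughly triples in weight when moistened."""
    return dry_kg * factor

def fresh_yield(wet_kg: float, biological_efficiency: float) -> float:
    """Estimated fresh mushroom harvest from moist substrate at a given BE."""
    return wet_kg * biological_efficiency

wet = wet_weight(330.0)          # ~1000 kg of moist substrate from 330 kg dry
print(fresh_yield(wet, 0.20))    # ~200 kg of fresh mushrooms at 20% BE
print(fresh_yield(wet, 0.30))    # ~300 kg of fresh mushrooms at 30% BE
```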

Table 1  Worldwide availability of wheat straw and rice straw residues in million metric tons (as of 2000)

Continent   Country          Wheat straw residues   Rice straw residues
Asia        China            132.0                  231.5
Asia        India            79.2                   146.6
Asia        Turkey           25.2                   –
Asia        Indonesia        –                      55.5
America     United States    83.3                   9.8
America     Canada           29.3                   –
America     Argentina        12.1                   –
Europe      France           47.8                   –
Europe      Russia           32.3                   –
Europe      Germany          23.8                   –
Oceania     Australia        26.1                   1.6
Africa      Egypt            7.4                    6.6
            Total            709.2                  673.3

Source: Mantanis, G. (2000). Worldwide availability of agriwaste. Greece: MARLIT Ltd.

Meanwhile, the by-product of cultivation, termed spent mushroom substrate, can be utilized as organic fertilizer or as silage material for farm animals. The macromolecular components of lignocellulosic materials are cellulose, lignin, and hemicellulose. Lignin, which constitutes about 26%–29% of lignocellulose, encrusts the cellulose and hemicellulose, forming a seal around these structures. These components make lignocellulosic materials physically hard and difficult to degrade; hence, enzyme systems are involved in the process. These include the hydrolytic and oxidative systems: the hydrolytic enzyme system synthesizes hydrolases that enable cellulose and hemicellulose degradation, while the oxidative enzyme system is responsible for lignin breakdown. Some wood-degrading mushrooms, such as Lentinula edodes and Pleurotus ostreatus, and other white-rot fungi possess these necessary enzymes, enabling them to break down lignin and utilize the bioenergy of lignocellulose as nutrients. Lignin in particular is more difficult to degrade than cellulose and hemicellulose because of its complex structure; several enzymes, such as laccase, lignin peroxidase (LiP), and manganese peroxidase (MnP), are involved in its degradation. Litter-decomposing mushrooms, which act like brown-rot fungi, have at least two of these ligninolytic enzymes (laccase and MnP); however, their degradation rate is lower than that of white-rot fungi. White-rot basidiomycetes, including L. edodes, Ganoderma spp., and Pleurotus spp., degrade lignocellulosic materials most efficiently owing to their ability to release both hydrolytic and oxidative enzymes. Agricultural residues can be crop-based or processing-based. Crop-based residues, generated in the field after removal of the main crops, include straw, leaves, fibrous materials, stalks, roots, twigs, and branches. Processing-based residues are by-products of postharvest processes such as crop cleaning, crushing, or threshing, which yield various forms including dust, husk, or stalks. Agro-industrial wastes used as substrates in mushroom production include both crop-based and processing-based residues. However, different mushroom species exhibit different growth responses on various substrates, reflecting their differing abilities to degrade lignocellulosic materials through enzyme production. Also, since mushrooms require particular nutrients for successful growth and fruiting body formation, several supplements containing starch, sugar, and fats are typically added to the substrate. Organic materials such as soybeans, cereal grains, molasses, rice bran, wheat bran, millet, and coffee wastes are used in mushroom cultivation as additives to improve nutritional composition and increase growth rate and yield. Several mushroom species can be successfully grown on a range of substrates, such as Pleurotus spp., Schizophyllum commune, Calocybe indica, Collybia reinakeana, Hericium erinaceus, Agrocybe aegerita, Coprinus comatus, Ganoderma spp., Grifola frondosa, Hypsizygus marmoreus, Lepista nuda, Pholiota nameko, and Stropharia spp. Mushroom cultivation is thus an ingenious approach that effectively reduces waste disposal problems while providing a low-cost and profitable route to nutritious and medicinal food.

Global Mushroom Production

The mushroom industry generates an annual global market value of $45 billion. According to FAO statistics, China remains the top mushroom producer, accounting for 70% of world production in 2009 (Table 2). However, 95% of the mushrooms produced in China are sold locally; per capita consumption there is estimated at about 10 kg/person/year, remarkably higher than the roughly 3 kg/person/year of the United States and several European countries. Asia contributes 69.94% of world mushroom production, followed by Europe and the Americas with 21.65% and 7.39%, respectively (Fig. 1). FAO statistics also show that global mushroom production increased significantly, from 2.9 million metric tons in 1996 to 10.9 million metric tons in 2016 (Fig. 2), an almost fourfold increase in the span of two decades (see the sketch below). This remarkable rise in production worldwide shows that mushroom cultivation is a profitable and ecological industry, and the most efficient biotechnological process for utilizing agro-industrial residues.
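The growth figures quoted above translate into the following arithmetic; this is a hedged restatement of the cited FAO totals, not additional data.

```python
# Fold increase and average annual growth of global mushroom production,
# computed from the FAO figures cited above (2.9 Mt in 1996, 10.9 Mt in 2016).
start, end, years = 2.9e6, 10.9e6, 20     # metric tons and elapsed years

fold = end / start                        # ~3.76-fold rise over two decades
cagr = fold ** (1 / years) - 1            # compound annual growth rate

print(f"{fold:.2f}-fold increase; CAGR about {cagr:.1%} per year")
```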

Table 2  World production of mushrooms (metric tons)

Country             1999        2009
China               2,183,006   4,680,726
United States       387,550     369,257
Netherlands         250,000     235,000
Poland              106,483     176,569
Spain               93,600      136,000
France              151,889     117,934
Italy               61,623      105,000
Canada              69,280      77,017
Japan               70,511      64,143
Indonesia           24,000      63,000
Ireland             64,800      57,747
Germany             60,000      52,000
United Kingdom      104,700     45,000
Australia           37,568      43,416
Belgium             –           42,208
India               14,000      38,930
Republic of Korea   19,774      28,000
Iran                13,000      26,708
Hungary             15,901      21,950
Vietnam             14,000      20,091
Other countries     153,606     134,846
World               3,895,291   6,535,542

Source: Table 50, World mushroom and truffles: Production, 1999–2009; United Nations, Food and Agriculture Organization, FAOStat (4/17/2018).

Fig. 1  Production share of mushrooms and truffles by region, average 1996–2016: Asia 69.94%, Europe 21.65%, Americas 7.39%, with Africa and Oceania accounting for the remaining 0.78% and 0.24%. United Nations, Food and Agriculture Organization, FAOStat.

Fig. 2  Total mushroom production in the world, 1996–2016, in million tons. Source: United Nations, Food and Agriculture Organization, FAOStat.

Phases of Mushroom Cultivation

Mushroom production is a solid state fermentation (SSF) technique that effectively converts lignocellulosic wastes into protein-rich food; considering the scale of global production, it is the most efficient application of SSF. Its economic value can be attributed to low production cost combined with high yields of a nutritious food source. Mushroom cultivation can be divided into three primary phases: grain spawn production, substrate preparation, and fruiting body development.

Grain Spawn Production

The quality of the spawn, or inoculum, determines the success of mycelial growth and fruiting body formation. Spawning involves inoculating actively growing mycelia into sterilized grains in polypropylene bags plugged with cotton to ensure proper air circulation. Cereal grains such as unhulled rice, sorghum, wheat, millet, or rye are often used. Spawn production is a complex procedure performed using sterile technique; good-quality spawn, that is, the colonized grain, should be axenic, or free from contamination. The spawn is then inoculated into mushroom substrates.

Substrate Preparation

The main carbon sources for mushrooms are cellulose, hemicellulose, and lignin, which are readily available from agro-industrial residues. Mushroom growth depends on the composition of the substrate used, and different mushroom species have different nutritional requirements; hence, appropriate materials and methods should be applied to ensure successful mycelial colonization. For instance, A. bisporus, a litter decomposer, must be grown on composted plant litter and requires a rich nitrogen source. Conversely, Pleurotus spp. and L. edodes, which are both white-rot mushrooms, should be cultivated on noncomposted lignocellulosic materials, and optimum growth is obtained with a lower nitrogen but higher carbon supply. Depending on the species, the ideal pH of a mushroom substrate ranges from 6 to 8. Selection of materials is the first step in growth medium preparation. Mushroom substrates are typically prepared from agricultural crop residues and industrial and forest by-products such as rice straw, wheat straw, sawdust, and manure. Adequate water must also be incorporated to ensure optimum mycelial growth, maintaining a moisture content between 35% and 60% for wood-based substrates and between 60% and 80% for other types. Substrate preparation varies according to species. For some mushrooms, such as L. edodes, substrate inoculation is direct (using logs) and usually does not involve extensive pretreatment. Other mushrooms, however, require an axenic environment to thrive; for these, pretreatment of the substrate is necessary, by pasteurization or by sterilization above atmospheric pressure in an autoclave.

Fruiting Body Development

Mushroom growth has a two-phase life cycle: the vegetative phase and the reproductive phase. During the vegetative phase, mycelia colonize the substrate and produce extensive enzymes to biodegrade the lignocellulosic materials, absorbing the dissolved nutrients to support fruiting body formation. Environmental factors such as temperature, humidity, ventilation, and light influence the transition from the vegetative to the reproductive phase, that is, the development of fruiting bodies. In general, mushroom mycelia grow successfully between 20°C and 30°C, while during primordial and fruiting body formation the humidity level should range from 90% to 95%. Some mushrooms, for example Pleurotus spp., grow best in the dark, while others require light to develop fruiting bodies; often, brief exposure to natural or artificial light is sufficient to stimulate mycelial growth and fruiting body maturation. A. bisporus, on the other hand, does not require light for growth. Since mushrooms are aerobic fungi, proper ventilation is particularly important during the reproductive phase: a high concentration of carbon dioxide (CO2) may promote mycelial growth but impedes fruiting body development, so aeration is increased to reduce the CO2 level. Temperatures higher than the ideal for a given species may inhibit mycelial growth and increase the chance of contamination, while lower temperatures slow mycelial colonization. The growing conditions during incubation determine the success of primordial development into mature fruiting bodies (Fig. 3), and appropriate environmental control during incubation ensures the desired yield and quality (a rough logic sketch follows).
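As a rough translation of these guideline ranges into explicit logic, the sketch below flags conditions outside the cited windows; the function and thresholds simply restate the text and are not a published control algorithm.

```python
# Hypothetical incubation-condition check based on the ranges cited above.

def incubation_advice(temp_c: float, humidity_pct: float,
                      co2_high: bool) -> list:
    """Return suggested adjustments for the fruiting (reproductive) phase."""
    advice = []
    if not 20 <= temp_c <= 30:
        advice.append("bring temperature into the 20-30 degree C range")
    if not 90 <= humidity_pct <= 95:
        advice.append("bring humidity into the 90%-95% fruiting range")
    if co2_high:
        # High CO2 favors mycelial growth but impedes fruiting body development.
        advice.append("increase aeration to lower CO2")
    return advice

print(incubation_advice(32.0, 85.0, co2_high=True))
```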

Commercial Production and Medicinal Importance of the Most Cultivated Mushrooms Worldwide

Of about 14,000 known mushroom species, at least 2000 are considered edible, but only some 35 are commercially cultivated around the world. Among these, the most cultivated globally are A. bisporus, P. ostreatus, L. edodes, Auricularia auricula, Flammulina velutipes, and Volvariella volvacea. The cultivation process using agro-industrial residues and the medicinal importance of the top three most cultivated mushrooms are presented below.

Fig. 3  Primordial to fruiting body development of mushroom.

Agaricus bisporus

A. bisporus, or button mushroom, is a litter-decomposing basidiomycete belonging to the family Agaricaceae. These mushrooms grow naturally in forests and grasslands. Today, it remains the most cultivated mushroom worldwide, produced mainly in China, North America, Europe, and Australia. The common substrate for A. bisporus is a combination of animal manure and straw-based compost, prepared in two stages. The first stage involves composting the raw materials, such as horse or chicken manure and wheat or rice straw, for about 3 weeks. During this time, bacteria and fungi degrade part of the complex lignocellulose structure, and their biological activity raises the temperature of the compost, creating an environment conducive only to heat-tolerant microorganisms. The compost is then pasteurized and inoculated with spawn. Luxuriant mycelial growth of A. bisporus requires high humidity and a temperature of 25°C. Following full mycelial colonization of the substrate, after about 2–3 weeks, a casing layer containing clay soil is placed over the compost. This is a method unique to A. bisporus cultivation and a critical step for the successful initiation of fruiting bodies. When mycelia begin to appear in the casing layer, it is carefully mixed to disrupt the mycelia and facilitate colonization of the surface. Once mycelia reach the surface of the casing layer, the temperature is reduced to 16–18°C, and humidity and CO2 are regulated to stimulate primordial formation, which usually occurs 2 weeks after casing. Fruiting bodies of A. bisporus are collected before their caps expand.

Nutritional and medicinal importance

Aside from being the most cultivated mushroom in the world, A. bisporus has been the subject of much research into its nutritional and medicinal properties. A. bisporus is an excellent source of carbohydrate, protein, and fiber. The amino acids found in mushroom proteins have been reported to be similar to those of animal proteins, making mushrooms a good alternative to meat. Button mushrooms are also rich in minerals, particularly potassium, phosphorus, magnesium, calcium, sodium, iron, copper, zinc, and selenium. Several reports indicate that A. bisporus contains vitamins, including niacin, riboflavin, and vitamins B1 and B3. These mushrooms are also a natural source of vitamin D (984 IU/g) when cultivated with light exposure, and they contain ergosterol, a precursor of vitamin D2, at 39.5–56.7 mg per 100 g fresh weight. Button mushrooms have a low fat content but possess some fatty acids, primarily linoleic acid, along with palmitic, stearic, caprylic, oleic, erucic, and eicosanoic acids, which the body needs for many vital functions.

Several bioactive compounds have been isolated from various mushroom species, and developing functional foods has correspondingly become one of the goals of mushroom research. A. bisporus has been used in traditional therapies for many decades, and numerous studies have elucidated its medicinal significance, including antimicrobial, antioxidant, antihyperlipidemic, antidiabetic, and anticancer properties. Various studies suggest that methanol extracts of A. bisporus demonstrate antimicrobial activity against some bacteria, dermatophytic molds, and yeasts, and its aqueous total protein extracts have shown antibacterial activity against Staphylococcus aureus and methicillin-resistant S. aureus (MRSA). These mushrooms, particularly brown A. bisporus (portabella), also have relatively high antioxidant content: ethanol extracts contain phenolic compounds at about 100.32–100.78 mg per 100 g fresh weight, and phenolic compounds are the primary source of antioxidant activity in edible mushrooms. Tocopherols, which are fat-soluble antioxidants, have also been detected in A. bisporus fruiting bodies. Hyperlipidemia, or a high cholesterol level, a major risk factor for serious health problems such as cardiovascular disease and atherosclerosis, may also be countered by consumption of A. bisporus: phytosterols identified in button mushrooms can decrease the absorption of cholesterol, and these macrofungi also contain about 564.4 mg/kg of lovastatin, a hypolipidemic agent. Its antidiabetic property has likewise been reported by several researchers; one study indicated that a high dose of A. bisporus powder (200 mg/kg of body weight) given to streptozotocin-induced diabetic rats for 3 weeks reduced blood glucose concentration by up to 24.7%. Meanwhile, polysaccharides isolated from A. bisporus exhibit immunomodulating, antitumor, and anticancer properties: alpha-glucan, one of its main polysaccharides, can suppress the production of tumor necrosis factor by up to 69%. Regular intake of the fruiting bodies may also improve mucosal immunity, and extracts have demonstrated the ability to inhibit the proliferation of some leukemia cells by initiating apoptosis. Intake of A. bisporus as a dietary supplement is highly recommended for cancer patients because the fruiting bodies also contain arginine, which can delay metastasis of growing cancer cells; in particular, proliferation of breast cancer cells can be inhibited by the anti-aromatase phytochemicals present in A. bisporus. Its extracts have also been reported to inhibit acetylcholinesterase and butyrylcholinesterase, making them a potential adjunct treatment for Alzheimer's disease.

Pleurotus spp.

Pleurotus species, belonging to the family Pleurotaceae, are edible macrofungi cultivated worldwide, particularly in Southeast Asia, Europe, and Africa. These basidiomycetes rank second in global mushroom production. Their popularity can be attributed to simple, low-cost production, palatability, and high biological efficiency. Commonly known as oyster mushrooms, they are considered white-rot fungi for their white mycelium and efficient degradation of wood and other noncomposted lignocellulosic materials. There are about 70 identified Pleurotus species, but only a few are commercially cultivated, such as P. ostreatus, Pleurotus eryngii, Pleurotus djamor, Pleurotus pulmonarius, Pleurotus cystidiosus, Pleurotus florida, and Pleurotus sajor-caju. In comparison with other mushrooms, Pleurotus spp. have a shorter growth time. Oyster mushrooms grow efficiently at temperatures of 20–30°C and 55%–70% humidity on various lignocellulosic substrates, such as cotton wastes, rice and wheat straw, wood sawdust and chips, corn cobs, sugarcane bagasse, maize and sorghum stover, different leaves, and other residues. Along with L. edodes, Pleurotus spp. are among the most efficient white-rot lignocellulose degraders. Unlike the technology applied in Agaricus production, substrate preparation for Pleurotus spp. requires pasteurization or sterilization, and no casing is necessary. The conversion of substrate into fruiting bodies is expressed as biological efficiency (BE), calculated by dividing the fresh weight of harvested mushrooms by the initial dry weight of the substrate (see the sketch below). For Pleurotus spp., a BE of over 50% is considered profitable. One study found that using softwood residues, paper wastes, coffee pulp, and cardboard as substrate materials for P. ostreatus and P. pulmonarius production resulted in BEs of more than 100%, while BEs of 75%–100% were recorded for mushrooms grown on cotton wastes and wheat straw. In addition, several reports have shown the calcium absorption efficacy of some Pleurotus spp. grown on other waste materials: P. eryngii cultivated on a sawdust medium supplemented with 2% calcinated oyster shell powder showed increased calcium content in its fruiting body, and 2%–10% eggshell powder incorporated into a rice straw-based substrate improved the Ca, Mg, Na, Si, Cl, and S contents of P. ostreatus.
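The biological efficiency formula described above reduces to a single expression; the hedged sketch below restates it (the function name is illustrative) with invented harvest figures.

```python
def biological_efficiency(fresh_harvest_kg: float, dry_substrate_kg: float) -> float:
    """BE (%) = fresh weight of harvested mushrooms / dry substrate weight * 100."""
    return 100.0 * fresh_harvest_kg / dry_substrate_kg

# Invented example: 11 kg of fresh oyster mushrooms from 10 kg of dry substrate.
print(biological_efficiency(11.0, 10.0))  # 110.0 -> BE above 100% is possible,
                                          # as reported for some substrates above
```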

Nutritional and medicinal importance

Oyster mushrooms possess a wide range of nutritional and medicinal properties. Like most edible mushrooms, they are an excellent source of protein, carbohydrate, and fiber, and are low in fat. The mineral composition of mushrooms varies according to species and substrate; generally, Pleurotus spp. contain Ca, Mg, P, K, Fe, Na, Zn, Mn, and Se. They are also a source of B vitamins, including thiamin (B1), riboflavin (B2), pyridoxine, and niacin. Pleurotus spp. have been considered functional foods owing to their positive effects on human health. Several reports have elucidated their antimicrobial and antiviral properties: their methanolic extracts have inhibited the growth of Bacillus megaterium, S. aureus, E. coli, Candida glabrata, Candida albicans, and Klebsiella pneumoniae. Ubiquitin, an antiviral protein, has also been detected in the fruiting body of Pleurotus spp. Specifically, P. ostreatus and Pleurotus tuberregium contain ribonucleases that could potentially degrade the genetic material of human immunodeficiency virus (HIV), and hot water extracts of P. pulmonarius and P. sajor-caju have demonstrated inhibition of HIV-1 reverse transcriptase activity; Pleurotus citrinopileatus showed similar action through a lectin, a type of protein, isolated from its fruiting body. Polysaccharides isolated from mycelia of P. ostreatus, P. citrinopileatus, and P. sajor-caju have demonstrated antineoplastic activity: in one report, there was a 76% decrease in neoplastic cells when polysaccharide from P. ostreatus culture broth was administered intraperitoneally to female Swiss albino mice. Remarkably, extracts from oyster mushrooms have exhibited antitumor activity against some types of lung and cervical sarcomas, and antioxidant levels in their fruiting bodies are reported to be higher than in other commercial mushrooms. Oyster mushrooms also exhibit antilipidemic and antihyperglycemic properties: mevinolin in P. ostreatus and P. citrinopileatus has shown cholesterol-lowering activity, and guanide, a compound typically found in antidiabetic medicines, has been identified in oyster mushrooms. A study showed that oral intake of aqueous extracts of P. pulmonarius by diabetic mice decreased blood glucose levels. Many Pleurotus species possess bioactive compounds such as glucans, vitamin C, and phenols, which intensify the action of certain enzymes responsible for reducing hepatic cell necrosis. Extracts from oyster mushrooms have also been reported to lower blood pressure and to possess immunomodulatory and antiaging activity.

Lentinula edodes

The third most cultivated mushroom worldwide is L. edodes, or shiitake, widely known for its flavor and nutraceutical importance. In Asia, particularly Japan and China, shiitake has long been used as a medicinal food. These mushrooms can be produced by traditional or artificial cultivation methods. The traditional method entails extensive and laborious preparation: freshly cut hardwood logs are drilled with holes to facilitate insertion of mycelial plugs, and the inoculated logs are then stored in a controlled environment. Mycelial colonization takes up to 4–6 months, and 1–2 years pass before the first flush, or harvest. This approach is still practiced because it yields high-quality mushrooms; however, because of its time-consuming fruiting cycle and the threat it poses to natural forests, alternative techniques have been developed over the past decades. These involve plant residue-based substrates such as sawdust, cereal straws, corn cobs, sunflower seed hulls, cotton straw, and sugarcane bagasse, usually supplemented with rice bran, wheat bran, rye, or millet as sources of nitrogen, sugar, minerals, and vitamins. After sterilization or pasteurization, the bags are inoculated with spawn; the mycelium usually colonizes the substrate within 2 months, and the first fruiting bodies mature after another 3–4 weeks. In addition to a shorter fruiting cycle, artificial substrates also give higher BE: the natural method typically produces mushrooms at 20%–30% BE, while artificial logs can reach 50% BE or more depending on the substrate. For instance, sunflower seed hulls and sugarcane bagasse yielded 107.5% and 87.4% BE, respectively.

Nutritional and medicinal importance

Similar to other edible and nutritious mushrooms, L. edodes is rich in protein, carbohydrates, and fiber, and is a good source of several vitamins, including B1, B2, B12, C, D, and E. It also contains Fe, Mn, Ca, K, Zn, and Cd. The demand for shiitake has grown over the years with increasing consumer interest in foods with nutraceutical benefits. Several reports have shown the antioxidant and antimicrobial activities of L. edodes extracts: lenthionine, a compound isolated from shiitake, exhibits inhibitory effects against E. coli, S. aureus, and Bacillus subtilis, and some researchers have documented the ability of its extracts to inhibit oral pathogens. Submerged culture of L. edodes mycelium reduced plasma glucose levels in rats with induced diabetes, and lowered cholesterol and triglyceride levels were also detected; a hepatoprotective effect of L. edodes has been documented as well. One of the best-studied polysaccharides isolated from L. edodes is lentinan, known for many years for its antitumor properties: one study showed a 97.3% regression of tumors in Swiss albino mice bearing Sarcoma 180 cancer cells after administration of lyophilized hot water extracts of shiitake, and several reports indicate anticancer activity of lentinan against stomach, pancreatic, breast, cervical, and ovarian cancers. Recent studies have also identified antiviral activity of shiitake against poliovirus type 1 (PV-1) and bovine herpesvirus type 1 (BoHV-1). Similarly, lentinan can suppress the activity of HIV-1 reverse transcriptase, and when combined with azidothymidine, an antiretroviral drug, lentinan exhibited more effective anti-HIV activity in vitro.

See also: Electronic Waste and Human Health; Infectious/Medical/Hospital Waste: General Characteristics; Management and Export of Wastes: Human Health Implications.

Further Reading
Atila, F., Owaid, M.N., Shariati, M.A., 2017. The nutritional and medical benefits of Agaricus bisporus: A review. Journal of Microbiology, Biotechnology and Food Sciences 7 (3), 281–286.
Barshteyn, V., Krupodorova, T., 2016. Utilization of agro-industrial waste by higher mushrooms: Modern view and trends. Journal of Microbiology, Biotechnology and Food Sciences 5 (6), 563–577.
Eguchi, F., Dulay, R., Kalaw, S., et al., 2014. Antihypertensive activities of a Philippine wild edible white rot fungus (Lentinus sajor-caju) in spontaneously hypertensive rats as models. Advances in Environmental Biology 8 (24), 74–81.
Israilides, C., Philippoussis, A., 2003. Bio-technologies of recycling agro-industrial wastes for the production of commercially important fungal polysaccharides and mushrooms. Biotechnology and Genetic Engineering Reviews 20 (1), 247–260.
Julian, A., Umagat, M., Reyes, R., 2017. Mineral composition, growth performance and yield of Pleurotus ostreatus on rice straw-based substrate enriched with natural calcium sources. In: Kallel, A., Ksibi, M., Dhia, H., Khélifi, N. (Eds.), Proceedings of Euro-Mediterranean Conference for Environmental Integration (EMCEI-1), Advances in Science, Technology & Innovation. Springer International Publishing, Tunisia, pp. 1573–1575.
Mantanis, G., Nakos, P., Berns, J., Rigal, L., 2000. Turning agricultural straw residues into value-added composite products: A new environmentally friendly technology. In: Proceedings of the 5th International Conference on Environmental Pollution, Thessaloniki, Greece, pp. 840–848.
Patel, Y., Naraian, R., Singh, V.K., 2012. Medicinal properties of Pleurotus species (oyster mushroom): A review. World Journal of Fungal and Plant Biology 3 (1), 1–12.
Philippoussis, A., 2009. Production of mushrooms using agro-industrial residues as substrates. In: Singh, P., Pandey, A. (Eds.), Biotechnology for agro-industrial residues utilization, 1st edn. Springer Science + Business Media B.V., Dordrecht, pp. 163–196.
Reyes, R., Nair, M., 2016. Ligninolytic and leaf litter degrading mushrooms from the Philippines with antioxidant activities. International Journal of Pharmaceutical Research & Allied Sciences 5 (4), 67–74.
Reyes, R., Kalaw, S., Dulay, R., Eguchi, F., et al., 2013. Philippine native and exotic species of edible mushrooms grown on rice-straw-based formulation exhibit nutraceutical properties. Philippine Agricultural Scientist 96 (2), 198–204.
Reyes, R., Umagat, M., Umagat, M.R., et al., 2016. Comparative elemental composition and antioxidant activity of the fruiting bodies of Pleurotus djamor cultivated on sawdust and rice straw-based formulations. International Journal of Biology, Pharmacy and Allied Sciences 5 (10), 2572–2580.
Sadh, P.K., Duhan, S., Duhan, J.S., 2018. Agro-industrial wastes and their utilization using solid state fermentation: A review. Bioresources and Bioprocessing 5 (1), 1–15.
Stamets, P., 2005. Notes on nutritional properties of culinary-medicinal mushrooms. International Journal of Medicinal Mushrooms 7, 103–110.
Valverde, M., Hernández-Pérez, T., Paredes-López, O., 2015. Edible mushrooms: Improving human health and promoting quality life. International Journal of Microbiology 2015, 1–14.
Wani, B., Bodha, R., Wani, A., 2010. Review: Nutritional and medicinal importance of mushrooms. Journal of Medicinal Plant Research 4 (24), 2598–2604.

Air Pollution and Development of Children's Pulmonary Function
Jonathan Dubnov, Ministry of Health, Haifa, Israel; and University of Haifa, Haifa, Israel
Boris A Portnov, University of Haifa, Haifa, Israel
Micha Barchana, University of Haifa, Haifa, Israel; and Ministry of Health, Jerusalem, Israel
© 2019 Elsevier B.V. All rights reserved.

Nomenclature
C  Elemental carbon
CO2  Carbon dioxide
ETS  Environmental tobacco smoking
FEV1  Forced expiratory volume at the first second
FVC  Forced vital capacity
LF  Lung function
MEF25–75  Mid expiratory flow between 25% and 75% of the forced vital capacity
MMEF  Maximal mid expiratory flow
NO2  Nitrogen dioxide
NOx  Nitrogen mono- and dioxides (NO, NO2)
O3  Ozone
PEF  Peak expiratory flow
PEFR  Peak expiratory flow rate
PF  Pulmonary function
PFT  Pulmonary function test
PM  Particulate matter
PM10  Particulate matter with a cut-off aerodynamic diameter of up to 10 µm
PM2.5  Particulate matter with a cut-off aerodynamic diameter of up to 2.5 µm
Raw  Airway resistance
SLFG  Slower lung function growth
SO2  Sulfur dioxide
TLC  Total lung capacity
TSP  Total suspended particulates

Introduction

Three early episodes of air pollution, which occurred in the Meuse Valley, Belgium, in 1930, in Donora, Pennsylvania, in 1948, and in London in 1952, corroborate that air pollution is a formidable health risk factor. When heavy fog, combined with air pollution from local coal-fired power plants and private homes, descended on London on 5 December 1952 and lasted until 9 December 1952, the event resulted, according to different estimates, in approximately 4000–12,000 deaths. Infant mortality during this period also doubled. Notably, the deaths occurred before the flu epidemic and were clearly attributable to unusually high levels of air pollution. The 1952 London air pollution event and its massive health effects are traditionally regarded as the starting point of "air pollution epidemiology." However, two decades earlier, in December 1930, stagnant atmospheric conditions and air pollution from various industrial sources in the Meuse Valley of Belgium led to a severe accumulation of pollutants in the air; more than 60 people died during the following 2 days, increasing the local mortality rate nearly 10-fold. During the following decades, epidemiologists analyzed and reanalyzed the health effects of these and similar air pollution episodes, but in-depth ecological studies of air pollution events were conducted only sporadically. From the early 1990s onwards, epidemiological research on the health-related effects of air pollution took another, more systematic approach. During this period, several key epidemiologic studies (including the Harvard Six Cities and American Cancer Society (ACS) prospective cohort studies) were conducted in the United States. These studies reported significant health effects at relatively low concentrations of ambient air pollutants.

Change History: June 2018. The section editor updated the references. This is an update of J. Dubnov, B.A. Portnov, M. Barchana, Air Pollution and Development of Children's Pulmonary Function. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 17–25.

Follow-up studies have concentrated on the long-term effects of air pollution exposure; on systemic inflammation and oxidative stress as pathophysiological mechanisms of cardiopulmonary morbidity and mortality; and on interactions between genetic and environmental factors. As these studies demonstrated, air pollution does not affect all population groups equally. Children in particular were found to be at risk, owing to their unique anatomical, physiological, and behavioral characteristics, which can be summarized as follows:

• The number of children's alveoli increases about 10-fold from birth to the age of four, and their lung tissues develop with greater permeability of the epithelium layer than those of adults.
• Both the lung surface area and the air intake per kilogram of lung tissue are 50% greater in children than in adults.
• Lastly, children tend to spend more time outdoors and perform outdoor activities with greater lung ventilation rates than adults.

Several early studies of air pollution effects on children focused on the exacerbation of existing respiratory diseases, respiratory symptoms, and the short-term, usually reversible, effects of air pollution on children's pulmonary function (PF), especially those linked to ozone (O3). Large-scale longitudinal studies initiated in the early 1990s, such as the Children's Health Study, added several other air pollutants to the list of environmental factors likely to affect the development of children's PF: particulate matter (PM), nitrogen dioxide (NO2), sulfur dioxide (SO2), and elemental carbon (C).

Performance Measures

The primary function of the lungs is to enable efficient gaseous exchange between the body and the environment: as ambient oxygen is absorbed, carbon dioxide (CO2) is released from the lungs. Ambient air pollution affects the air-supplying system by causing physical obstruction of the airways. These disturbances can act directly, by depositing pollutants and narrowing the lumen of the small air ducts, or by triggering their contraction. The common outcome is a reduction in lung ventilation capacity (and gaseous exchange) through a physical reduction of the volume of air inhaled. Several techniques are designed to assess the actual volume of air inhaled and possible obstacles to the flow. The most common noninvasive dynamic technique is spirometry, which is widely used in epidemiological surveys. This test consists of breathing into and out of a device capable of measuring inspired and expired gas volume, registering the time and volume of the air inhaled or exhaled. Key parameters examined by spirometric techniques are the total lung capacity (TLC), forced vital capacity (FVC), and forced expiratory volume during the first second (FEV1). TLC is the total amount of gas contained in the lungs after a maximal inspiration, whereas FVC is the maximal volume of air exhaled after a maximal inhalation, expressed in liters; FEV1 is the maximal volume of air exhaled in the first second of a forced expiration. Each test is repeated three times and the best results (largest FVC and largest FEV1) are taken. For clinical purposes, tests can also be performed after inhaling short-acting drugs to assess their effects when reversible airflow limitation is suspected. Total lung volume differs among individuals on the basis of physical characteristics such as age, weight, and height. Height is the most important factor, but in children and adolescents, growth of the lung tissue may lag behind the increase in height, as noted in several empirical studies. Appropriate selection of reference values for lung volume and dynamic tests is therefore pivotal for correct interpretation. The tests most commonly used in epidemiological practice are TLC and FVC (measures of static lung volume and capacity) and FEV1, MEF25–75, and peak expiratory flow (PEF) (dynamic lung volumes and flow rates). FEV1, and especially MEF25–75, reflect the status of the small airways and are sensitive indicators of early, nonfixed airway obstruction.
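The best-of-three convention described above is easy to state precisely; in the hedged sketch below (trial values are invented), the largest FVC and the largest FEV1 are selected across trials, possibly from different maneuvers, and the widely used FEV1/FVC ratio is reported alongside.

```python
# Illustrative spirometry summary following the best-of-three rule above.
trials = [
    {"FVC": 3.10, "FEV1": 2.60},   # liters; invented values
    {"FVC": 3.25, "FEV1": 2.55},
    {"FVC": 3.20, "FEV1": 2.65},
]

best_fvc = max(t["FVC"] for t in trials)    # largest FVC across the trials
best_fev1 = max(t["FEV1"] for t in trials)  # largest FEV1, possibly another trial

print(f"FVC = {best_fvc:.2f} L, FEV1 = {best_fev1:.2f} L, "
      f"FEV1/FVC = {best_fev1 / best_fvc:.0%}")
```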

Short- and Long-Term Effects of Air Pollution

Studies conducted in the early 1990s indicated a significant deterioration of PF performance, among both healthy and asthmatic children, during acute episodes of air pollution, especially with elevated exposure to PM and O3. Thus, the large-scale Harvard Six Cities study found a significant association between decreased PF performance in children and their O3 exposure. Empirical studies carried out in Europe also detected significant associations between children's PF performance and their exposure to PM and gaseous pollutants. Detels et al. (1991), who retested the same cohort of children and young adults for PF performance at a 5-year interval, found that FEV1 test results were on average 40%–60% lower among children residing in highly polluted areas, especially among females. NO2, SO2, and PM share a common origin in combustion processes and often correlate with each other; these correlations limit the ability to attribute decreasing PF to a specific pollutant, leading to general conclusions about the adverse effects of the "pollution mix." In the late 1980s and early 1990s, large cross-sectional studies carried out in North America (e.g., the Harvard Six Cities Study, the 24 Cities Study, the NHANES II survey, and the 12 Southern California Communities Study) attempted to assess the long-term effect of exposure to air pollutants on children's PF. In all these studies, a negative association was found between children's PF performance and annual concentrations of total suspended particulates (TSP), NO2, and O3.

Thus, in the study of 24 communities in the United States and Canada (the 24 Cities Study), exposure to particle strong acidity, PM2.1, and PM10 was also found to be associated with PF decrements. The FVC decrement was larger, albeit not significantly different, for children who were lifelong residents of their communities. The odds of low PF performance (FVC ≤ 85% of predicted) increased up to 2.5-fold with exposure to particle strong acidity. A survey by Horak et al. (2002) of Austrian schoolchildren in the second and third grades also found a negative association between air pollution and PF growth, with decrements in the growth of lung function (notably MEF25–75) per 10 µg m−3 increase in PM10 levels. The most prominent research into the influence of air pollution on children's PF development was initiated in 1993 and is currently underway in 12 southern California communities. This survey, known as the Children's Health Study, revealed deficits in PF growth of 2.5% for FVC and 3.4% for FEV1 associated with elevated concentrations of PM10, NO2, and acid vapor. After 8 years of follow-up, the deficits in PF growth were found to be even more substantial, that is, 4.7%–5.7% for FEV1, with similar figures for FVC. In addition, the proportion of low FEV1 (<80% of predicted) was 4.9 times higher in the most air-polluted community than in the least air-polluted one. The results of other empirical studies dealing with long-term exposure to air pollution and PF growth are generally consistent with the Children's Health Study, indicating that air pollutants (even at low exposure levels) are likely to have adverse effects on children's PF development, with magnitudes varying widely according to air pollution exposure (see Table 1).

Methodological Issues

Exposure–Response Relationships

Early studies of air pollution and mortality, conducted in the 1980s and 1990s, indicated a nonthreshold relationship between air pollution and mortality, that is, a monotonic increase in mortality with rising pollution levels. Thus, Daniels et al. (2000), who analyzed daily patterns of PM-associated mortality in the 20 largest US cities, concluded that for all-cause mortality and for cardiopulmonary mortality separately, linear models without thresholds fit the PM–mortality association better than threshold models. The findings of several other empirical studies led to a similar conclusion, showing a nearly linear relationship between children's health status and their air pollution exposure. More recent studies have also demonstrated a dose-dependent linear association between the carbon content of airway macrophages and PF performance: the study of Kulkarni et al. (2006), carried out using sputum induction and microscopy, showed a causal association between PM inhalation and deteriorating PF in healthy children (see Table 1). However, there are some indications that the association between children's PF growth and their exposure to air pollution may be nonlinear, with disproportionately greater damage to children's PF performance occurring under high concentrations of air pollutants than under moderate and low concentrations. According to a study carried out in the vicinity of a coal-fired power station in Northern Israel, this nonlinear relationship was captured best by an exponential "exposure–response" function (see Table 1 and the illustrative sketch below). An important implication of this conclusion for public health policy is that there are no "safe" thresholds of air pollution and that continuous efforts must be made to improve air quality, especially in highly polluted residential areas.
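To make the linear-versus-exponential distinction concrete, the sketch below fits both functional forms to invented exposure–response data with scipy and compares residuals; it is a conceptual illustration, not a reanalysis of any cited study.

```python
# Conceptual comparison of linear vs. exponential exposure-response fits.
import numpy as np
from scipy.optimize import curve_fit

exposure = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])   # pollutant level (a.u.)
deficit = np.array([0.5, 1.0, 2.2, 5.5, 11.0, 22.0])       # % PF growth deficit
# (invented numbers that worsen disproportionately at high exposure)

def linear(x, a, b):
    return a + b * x

def exponential(x, a, b):
    return a * np.exp(b * x)

p_lin, _ = curve_fit(linear, exposure, deficit)
p_exp, _ = curve_fit(exponential, exposure, deficit, p0=(0.5, 0.05))

def rss(model, params):
    """Residual sum of squares of a fitted model over the sample."""
    return float(np.sum((deficit - model(exposure, *params)) ** 2))

print("linear RSS:     ", rss(linear, p_lin))       # larger residuals
print("exponential RSS:", rss(exponential, p_exp))  # smaller: the curve bends up
```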

Appropriate Study Design

The effect of environmental factors on childhood morbidity is often time-lagged, that is, the adverse effects of environmental risk factors may take a long period of time to accumulate. Therefore, the "exposure–response" relationship, especially in the case of children's PF performance, may be better detected by a longitudinal approach (i.e., by panel studies based on repeated testing of the same individuals) than by a cross-sectional comparison, in which individual air pollution exposure estimates are compared with static "snapshots" of the health status of the study population at a given point in time. Disregarding the latency effects of air pollution may be the main reason why no clear association between PF development and ambient air pollution is sometimes detected. To illustrate this point, consider a hypothetical example of four adjacent townships with different air pollution levels (Table 2). In year1, children residing in these subareas underwent PF tests, and the average values indicated significant differences (PFT_year1: P < .05; Table 2). In year1+n, the PF test was rerun (PFT_year1+n; Table 2). Although the changes in PFT between year1 and year1+n are negatively correlated with air pollution levels (ΔPFT: r = −0.970; P < .05), there are no differences whatsoever in the average values of the repeat test (PFT_year1+n = 95%; Table 2). Using the latter numbers for a cross-sectional comparison would therefore suggest the absence of any clear relationship between air pollution and children's PF performance (a numerical sketch follows). The migration of children's families from one place of residence to another may likewise become an essential confounding factor: children who moved across the 12 southern California communities showed remarkable changes in their maximal mid expiratory flow (MMEF), peak expiratory flow rate (PEFR), and FEV1 in response to the PM10 differences between "origin" and "destination" communities.
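A hedged numerical sketch of this hypothetical example follows; the figures are invented to mimic the pattern described in the text (the chapter's Table 2 reports r = −0.970).

```python
# Invented data mimicking the four-township example: baseline PF differs,
# the repeat test converges to 95% everywhere, and the change in PF
# correlates negatively with pollution.
import numpy as np

pollution = np.array([10.0, 20.0, 30.0, 40.0])    # relative pollution levels
pft_year1 = np.array([96.0, 99.0, 101.0, 106.0])  # baseline PFT, % of predicted
pft_later = np.full(4, 95.0)                      # repeat test: identical means

delta = pft_later - pft_year1                     # change over the follow-up
r = np.corrcoef(pollution, delta)[0, 1]
print(f"r = {r:.3f}")   # strongly negative (about -0.98 with these numbers)

# A cross-sectional look at pft_later alone shows no township differences,
# whereas the longitudinal change reveals the pollution gradient.
```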

Risk of Ecological Bias

Epidemiological studies investigating the effect of air pollution on children's PF are often based on data for aggregate geographic units, such as townships, city neighborhoods, census blocks, or other areal units established for the specific purpose of data

Table 1  Brief summary of selected studies of the effect of air pollution on the development of children's pulmonary function
(For each study: contingent, spatial resolution, and time span; environmental risk factors; dependent variable(s); covariate(s); main findings.)

Brunekreef and Hoek (1993). Over 800 Dutch children surveyed in 1987–91. Risk factors: winter concentrations of PM10, NO2, SO2. Dependent variable(s): repeated measurements of peak expiratory flow (PEF). Covariates: age, gender, race, height, environmental tobacco smoking (ETS), chronic respiratory symptoms, asthma. Main findings: pulmonary function was found to decrease among healthy children and children with chronic respiratory symptoms with increasing levels of air pollutants, especially particulate matter.

Dubnov et al. (2007). 1492 schoolchildren of second and fifth grades living in the vicinity of a major coal-fired power station in Northern Israel, surveyed in 1996 and 1999. Risk factors: accumulated numbers of NOx and SO2 "air pollution events" exceeding local reference levels in 1996–99. Dependent variable(s): percent changes of observed FVC and FEV1, referenced to FVC and FEV1 values predicted for a healthy child, adjusted for age group and gender. Covariates: height, ETS, asthma, chronic bronchitis, housing conditions, parental education. Main findings: an FEV1 decline of up to 10.2% in the most polluted area; similar results for FVC.

Frye et al. (2003). 2493 schoolchildren of sixth grade surveyed in three consecutive regional cross-sectional surveys in 1992–93, 1995–96, and 1998–99 in East Germany. Risk factors: annual and daily averages of total suspended particles (TSP) and SO2 in 1991–98. Dependent variable(s): percent changes of FVC and FEV1. Covariates: age, gender, BMI, height, race, ethnic origin, parental income and education, socioeconomic status, asthma, ETS, gas stove cooking. Main findings: a 4.7% FVC decrease for a 50 µg m−3 annual increase in TSP pollution, and a 4.9% FVC decrease for a 100 µg m−3 increase in SO2 levels.

Gauderman et al. (2000). 1498 schoolchildren of fourth grade surveyed in 12 southern California communities, United States, in 1993–97 (Children's Health Study, Cohort I). Risk factors: annual averages of PM10, NO2, O3, PM2.5, and acid vapor in 1994–96. Dependent variable(s): differences in estimated average annual percent growth rates of FVC, FEV1, MMEF, and FEF75 per year between the most and the least polluted communities (annual measurements with 4-year follow-up). Covariates: age, gender, height, season of the year, lung function equipment, parental education, parental atopy, ETS, low birth weight, molds, heating, gas cooking, carpeting, pets in the house. Main findings: deficits of pulmonary function growth (FVC: −2.5%, FEV1: −3.4%, FEF75: −6.1%) for PM10, NO2, and acid vapor.

Gauderman et al. (2004). 1759 schoolchildren of fourth grade surveyed in 12 southern California communities, United States, in 1993–2001 (Children's Health Study, 1993–2001). Risk factors: annual averages of PM10, NO2, O3, PM2.5, acid vapor, elemental and organic carbon in 1994–2000. Dependent variable(s): differences in estimated average annual growth rate (FVC, FEV1, MMEF) per year between the most and the least polluted communities (annual measurements with 8-year follow-up). Covariates: age, gender, race, height, body mass index (BMI), ETS, and the presence of asthma. Main findings: deficits of pulmonary function growth (FEV1: −4.7% to −5.7%) for PM2.5, NO2, acid vapor, and elemental carbon; the proportion of low FEV1 (less than 80% of predicted) was reported to be 4.9 times higher in the most air-polluted community than in the least air-polluted one (7.9% vs. 1.6%, P < .01).

Gauderman et al. (2007). 3677 schoolchildren of fourth grade surveyed in 12 southern California communities, United States, in 1993–2001 and 1996–2004 (Children's Health Study, Cohorts I and II; 1993–2004). Risk factors: proximity of a child's residence to the nearest freeway or major road; model-based estimates of traffic-related air pollution at the place of residence (PM10, NO2, O3, PM2.5, acid vapor, and elemental carbon). Dependent variable(s): differences in the estimated lung function (LF) average annual growth rate (FVC, FEV1, MMEF) (annual measurements with 8-year follow-up). Covariates: age, gender, race, height, BMI, ETS, asthma, exposure to gas stove, home heating, parental education, and presence of pets in the house. Main findings: deficits of FEV1 and MMEF (−81 mL and −127 mL s−1, respectively) compared to children living at least 1500 m from a freeway.

Horak et al. (2002). 975 schoolchildren of second and third grades in eight Austrian communities surveyed in 1994–97. Risk factors: seasonal averages of O3, NO2, SO2, and PM10 in 1994–97. Dependent variable(s): absolute changes in lung growth (FVC, FEV1, and MEF25–75) (two annual measurements with 3-year follow-up). Covariates: gender, atopy, height and height difference, ETS, baseline lung function, home heating, and parental education. Main findings: negative association between air pollution and pulmonary function growth for winter (NO2, O3) and summer (PM10), with an average annual decrease of FEV1 growth by 84 mL and by 329 mL per year for MEF25–75 corresponding to a 10 µg m−3 increase in PM10 pollution level.

Jedrychowski et al. (1999). 1000 schoolchildren of third grade in Krakow, Poland, surveyed in 1995–97. Risk factors: annual averages of suspended particulate matter (SPM) and SO2 in 1991–95. Dependent variable(s): absolute changes in pulmonary function growth (FVC, FEV1), annual measurements with 2-year follow-up; proportion of children with slower lung function growth (SLFG). Covariates: gender, height, ETS, molds, home heating, and parental education. Main findings: negative association between air pollution level and pulmonary function growth (FVC and FEV1) for both SPM and SO2; proportions of children with slower lung function growth were significantly higher among boys in the polluted area (odds ratios for FVC and FEV1 were 2.15 (1.25–3.69) and 1.90 (1.12–3.25), respectively).

Kulkarni et al. (2006). 64 healthy 8–15-year-old schoolchildren surveyed in Leicester, United Kingdom, in 2002–03. Risk factors: carbon content of airway macrophages in µm2; modeled annual mean of PM10 in µg m−3 at the child's home address. Dependent variable(s): FVC, FEV1, FEF25–75. Covariates: age, height, weight, body mass index, sex, race, birth order, number of siblings, exercise measures, and cotinine levels in saliva (active and passive smoking). Main findings: negative association between median carbon concentration in macrophages and pulmonary function, with a 1 µm2 increase in carbon content associated with a reduction of FEV1 by 17%, FVC by 12.9%, and FEF25–75 by 34.7%.

Raizenne et al. (1996). Over 10,000 schoolchildren in 24 communities in the United States and Canada in 1989–91. Risk factors: annual averages of PM10, NO2, O3, SO2, and particle strong acidity in 1988–91. Dependent variable(s): FVC, FEV1, FEF25–75, proportion with FVC ≤85% of predicted. Covariates: age, sex, weight, height, and the interaction of sex and height. Main findings: a difference of 52 nmol m−3 in annual mean particle strong acidity was associated with a 3.5% decrement in FVC and 3.1% in FEV1; the odds of low pulmonary function (FVC ≤85% of predicted) were 2.5 times higher among children exposed across the observed range of particle strong acidity.

Sugiri et al. (2006). 2574 nonasthmatic 6-year-old schoolchildren surveyed in West and East Germany in 1991–2000. Risk factors: annual and daily averages of total suspended particles (TSP) and SO2 in 1991–2000; proximity of the child's residence to the nearest busy street. Dependent variable(s): annual and daily means of total lung capacity (TLC) and airway resistance (Raw). Covariates: age, gender, BMI, height, parental education, ETS, home heating, gas stove cooking. Main findings: TLC decreased by 6.2% per 40 µg m−3 increase in annual mean TSP, in both East and West Germany, for children living less than 50 m from a busy road.

Table 2  Hypothetical case illustrating differences between PFT and ΔPFT values

Township    Air pollution level    PFT(year 1)    ΔPFT    PFT(year 1+n)
1           20                     89             6.0     95
2           30                     90             5.0     95
3           50                     100            −5.0    95
4           60                     102            −7.0    95

PFT, pulmonary function test; ΔPFT, PFT change between year 1 and year 1+n.

In most cases, such aggregated data are more readily available to researchers than individual estimates; they are easy to process, analyze, and link to other information sources (such as population enumerations and household surveys), and they may nevertheless provide sufficiently accurate indications of the relationships that may be expected, or that warrant in-depth follow-up investigations. However, the use of these data may lead to erroneous estimates of "air pollution–health effect" linkages due to a phenomenon known as "ecological fallacy" or "ecological bias." Greenland and Morgenstern (1989) identified two main sources of ecological bias: omitted regional (intra-group) confounders and effect modification. The former refers to "the failure of a crude (or partially adjusted) association to properly reflect the magnitude of the exposure effect, due to differences in the distribution of extraneous risk factors among exposed and unexposed individuals" (Greenland and Morgenstern, 1989, p. 269). Environmental smoking and nutritional deficiency are examples of ecological confounders whose levels may vary across groups or regions, causing biased estimates of health effects attributed to exogenous environmental risk factors. Ecological bias analogous to confounding may also occur when the background rate of disease varies across unexposed populations or in the presence of an "effect modifier," that is, a factor that is not necessarily a risk factor itself (e.g., nutritional deficiency) but one that may modify the effect of the risk factor under study (e.g., smoking) due to a covariance between the two. Two other known sources of ecological bias are "selection bias" and "information bias." The former refers to the way in which research subjects are selected from the study population, that is, lost subjects or missing data. The latter refers to "information loss" due to aggregation or measurement inaccuracy that may distort the effect estimates. According to Greenland and Morgenstern (1989), "there will be no ecological bias if both the background (unexposed) rate of disease and the exposure effect do not vary across groups, and there is no confounding within group." However, a study by Portnov et al. (2007) indicates that ecological bias may arise even when none of these conditions is violated. Fig. 1, showing four regions with different distribution patterns of an environmental risk factor (gray cones) and individuals under study (small black dots), helps to illustrate this point. Although there may be no confounding at the individual level and no differences in the exposure effect within the groups, an ecological bias may nevertheless occur upon data aggregation. Although Regions 1 and 4 are equally exposed to the health-risk factor, only in Region 4, where individual exposure levels differ, can the expected link between the risk factor and its health effects in the study population be detected.

Fig. 1  Hypothetical example illustrating a possibility of ecological bias emanating from the aggregation of individual-level data.

Differences between individual-level and group-level correlations (i.e., ecological bias) are not a necessary outcome of areal data aggregation, and situations in which such differences are likely to occur are often detectable at the outset of the analysis by scrutinizing the distribution maps, as Fig. 1 demonstrates. In particular, an essential precondition for avoiding ecological bias (attributed to data aggregation over geographic areas) is that exposure levels differ within each "reference" unit (as in Region 4) and that average exposure levels differ across units. If these preconditions are upheld, the relationships observed at the individual level are likely to emerge at the aggregated level as well.
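A small simulation can illustrate this precondition. The sketch below is not taken from Portnov et al. (2007); it simply assumes an individual-level linear exposure effect and shows that the slope is recoverable from areal means only when mean exposure differs across units.

```python
# A minimal simulation (not from the cited studies) of the aggregation
# effect described above: the individual-level exposure-response slope
# is recoverable from area means only when exposure varies across units.
import numpy as np

rng = np.random.default_rng(0)
slope_true = 2.0  # assumed individual-level effect per unit of exposure

def area_means(exposures):
    """Simulate one areal unit; return its mean exposure and mean response."""
    response = slope_true * exposures + rng.normal(0, 1, exposures.size)
    return exposures.mean(), response.mean()

# Case A: exposure varies within and across units -> slope recoverable
units_a = [rng.uniform(low, low + 5, 100) for low in (0, 4, 8, 12)]
# Case B: every unit uniformly exposed at the same level (cf. Region 1)
units_b = [np.full(100, 6.0) for _ in range(4)]

for label, units in (("varying", units_a), ("uniform", units_b)):
    xs, ys = zip(*(area_means(e) for e in units))
    if np.ptp(xs) > 0:  # ecological slope exists only with exposure contrast
        slope = np.polyfit(xs, ys, 1)[0]
        print(f"{label}: ecological slope = {slope:.2f} (true = {slope_true})")
    else:
        print(f"{label}: no exposure contrast across units; association undetectable")
```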

Elevated Risk Groups Studies conducted since the early 1990s found a significant prevalence of respiratory symptoms and a decrement of PF (PEF or PEFR) among children with asthma and other respiratory diseases. Comparisons revealed that air pollution exposure had more significant effects on PF for asthmatic children than for healthy ones. Notably, using medications to relieve asthma symptoms did not prevent the decrease of PEF or the respiratory symptoms attributed to air pollution. Thus, Mortimer et al. (2000, 2002), who studied a cohort of asthmatic children (aged 4–9), found that prematurely born children with low birth weight tended to exhibit a greater decline in their PF performance as a result of air pollution exposure than did children with normal gestation or normal birth weight. Recent publications of birth cohort studies carried out in the Netherlands and of the Children's Health Study suggest that air pollution is associated with newly diagnosed asthma (see Table 1). Children without asthma or wheeze had a greater hazard ratio for new-onset asthma in more polluted communities than in less polluted ones. In the first publications of the 12 Southern California Communities (Children's Health) Study, the association of PF with air pollution was found to be more significant among girls with asthma than among girls without asthma. However, those findings were not confirmed during the 8 years of follow-up: effects of air pollution on PF were not significantly different among the 457 children who had a history of asthma, although the small sample size of this subgroup might have influenced the results.

Other Risk Factors Apart from air pollution, several types of factors may influence PF development: biological, genetic, environmental, and socioeconomic. According to the results of epidemiological studies worldwide, the main factors that affect children's PF are age, gender, race/ethnicity, height, body mass index, diagnosed asthma, environmental tobacco smoking (ETS), overcrowding, and exercise activity. Additional factors that may influence children's PF growth are nutritional status, in utero exposure to maternal smoking, pets in the house, and parental education. According to several empirical studies, children living in overcrowded housing and/or exposed to ETS are likely to suffer from airway obstruction, decreased PF, and an increased risk of new-onset asthma. Toxic effects of smoking on the developing fetus include morphological changes in the placenta, upregulation of nicotinic acetylcholine receptors in fetal lungs, and chronic hypoxic stress with changes in lung development and function.

Implications for Research and Public Health Policy Although the association between air pollution and respiratory diseases has been known for decades, only recently has research evolved in the direction of public policies concerning standards for, and benefits of, reducing air pollution exposure. Early studies focused on the role of ozone and its influence on lung function, whereas more advanced air monitoring and sampling techniques have allowed researchers to explore the role of fine PM and the mechanisms of its action in relation to lung function and respiratory morbidity. It has also been demonstrated that PM, especially particles with a small aerodynamic diameter that are breathed deeply, may carry various carcinogenic chemicals absorbed onto particle surfaces, causing pulmonary and systemic oxidative stress, chronic inflammation, and progression of atherosclerosis. However, the exact pathophysiologic way in which these substances act on pulmonary tissue is yet to be explored. Individual susceptibility to air pollution varies with the overall health condition of the individual. Newborns and children seem to be more vulnerable, owing to the developmental status of their respiratory system and their relatively large volume of gas exchange, and thus constitute a group at risk. There are two main directions for research in this field: studying the pathways of pollutants' action in tissues and eventually seeking ways to prevent or repair the damage; and studying the effects of air pollution in populations at risk. Many empirical studies in this field use mortality as an end point. Data gathered in this way reveal little about the mechanisms by which air pollutants act, limiting the conclusions that can be drawn. An essential function of modern society is to defend those who need more protection than others. Possible practical implications of this principle include limiting traffic in neighborhoods populated by elderly citizens, near schools, and around other places where vulnerable individuals are concentrated, as well as locating new potentially polluting industries away from such sites.


Another conclusion is that air pollution standards often fail to guarantee reasonable protection (with a special focus on vulnerable populations) because the proposed directives do not adequately reflect scientific knowledge. Exposure–response relationships between air pollution and PF growth, especially in the context of low exposure levels, thus have important public health implications for the application of the World Health Organization (2005) air quality guidelines.

See also: Air Pollution Episodes; Air Pollution From Solid Fuels; Air Pollution and Lung Cancer Risks; Air Transportation and Human Health; Automobile Exhaust: Detrimental Effects on Pulmonary and Extrapulmonary Tissues and Offspring; Chronic Obstructive Pulmonary Disease; PM2.5 Sources and Their Effects on Human Health in China: Case Report; Respiratory Effects of Short Term Peak Exposures to Sulfur Dioxide.

References
Brunekreef, B., Hoek, G., 1993. The relationship between low-level air pollution exposure and short-term changes in lung function in Dutch children. Journal of Exposure Analysis and Environmental Epidemiology 1 (Suppl. 3), 117–128.
Daniels, M.J., Dominici, F., Samet, J.M., Zeger, S.L., 2000. Estimating particulate matter-mortality dose-response curves and threshold levels: An analysis of daily time-series for the 20 largest U.S. cities. American Journal of Epidemiology 152, 397–406.
Detels, R., Tashkin, D.P., Sayre, J.W., et al., 1991. The UCLA population studies of CORD: X. A cohort study of changes in respiratory function associated with chronic exposure to SOx, NOx, and hydrocarbons. American Journal of Public Health 81 (3), 350–359.
Dubnov, J., Barchana, M., Rishpon, S., et al., 2007. Estimating the effect of air pollution from a coal-fired power station on the development of children's pulmonary function. Environmental Research 103, 87–98.
Frye, C., Hoelscher, B., Cyrys, J., Wjst, M., Wichmann, H.E., Heinrich, J., 2003. Association of lung function with declining ambient air pollution. Environmental Health Perspectives 111 (3), 383–387.
Gauderman, W.J., McConnell, R., Gilliland, F., et al., 2000. Association between air pollution and lung function growth in Southern California children. American Journal of Respiratory and Critical Care Medicine 162, 1383–1390.
Gauderman, W.J., Avol, E.L., Gilliland, F., et al., 2004. The effect of air pollution on lung development from 10 to 18 years of age. New England Journal of Medicine 351, 1057–1067.
Gauderman, W.J., Vora, H., McConnell, R., et al., 2007. Effect of exposure to traffic on lung development from 10 to 18 years of age: A cohort study. Lancet 369, 571–577.
Greenland, S., Morgenstern, H., 1989. Ecological bias, confounding, and effect modification. International Journal of Epidemiology 18 (1), 269–274.
Horak Jr., F., Studnicka, M., Gartner, C., et al., 2002. Particulate matter and lung function growth in children: A 3-yr follow-up study in Austrian schoolchildren. European Respiratory Journal 19 (5), 838–845.
Jedrychowski, W., Flak, E., Mróz, E., 1999. The adverse effect of low levels of ambient air pollutants on lung function growth in preadolescent children. Environmental Health Perspectives 107 (8), 669–674.
Kulkarni, N., Pierse, N., Rushton, L., Grigg, J., 2006. Carbon in airway macrophages and lung function in children. New England Journal of Medicine 355 (1), 21–30.
Mortimer, K.M., Tager, I.B., Dockery, D.W., Neas, L.M., Redline, S., 2000. The effect of ozone on inner-city children with asthma: Identification of susceptible subgroups. American Journal of Respiratory and Critical Care Medicine 162 (5), 1838–1845.
Mortimer, K.M., Neas, L.M., Dockery, D.W., Redline, S., Tager, I.B., 2002. The effect of air pollution on inner-city children with asthma. European Respiratory Journal 19 (4), 699–705.
Portnov, B.A., Dubnov, J., Barchana, M., 2007. On ecological fallacy, assessment errors stemming from misguided variable selection, and the effect of aggregation on the outcome of epidemiological study. Journal of Exposure Science & Environmental Epidemiology 17 (1), 106–121.
Raizenne, M., Neas, L.M., Damokosh, A.I., et al., 1996. Health effects of acid aerosols on North American children: Pulmonary function. Environmental Health Perspectives 104 (5), 506–514.
Sugiri, D., Ranft, U., Schikowski, T., Krämer, U., 2006. The influence of large-scale airborne particle decline and traffic-related exposure on children's lung function. Environmental Health Perspectives 114 (2), 282–288.
World Health Organization, 2005. WHO air quality guidelines global updates 2005. Available from: http://www.euro.who.int/Document/E87950.pdf (accessed February 2010).

Further Reading
Anon, 1995. Standardization of spirometry: 1994 update. ATS Statement. American Journal of Respiratory and Critical Care Medicine 152, 1107–1136.
Avol, E.L., Gauderman, W.J., Tan, S.M., London, S.J., Peters, J.M., 2001. Respiratory effects of relocating to areas of differing air pollution levels. American Journal of Respiratory and Critical Care Medicine 164, 2067–2072.
Dockery, D.W., Speizer, F.E., Stram, D.O., Ware, J.H., Spengler, J.D., Ferris Jr., B.G., 1989. Effects of inhalable particles on respiratory health of children. American Review of Respiratory Diseases 139 (3), 587–594.
Gordon, S., Mortimer, K., Grigg, J., Balmes, J., 2018. In control of ambient and household air pollution: how low should we go? The Lancet Respiratory Medicine 6 (12), 918–920.
Islam, T., McConnell, R., Gauderman, W.J., Avol, E., Peters, J.M., Gilliland, F.D., 2008. Ozone, oxidant defense genes, and risk of asthma during adolescence. American Journal of Respiratory and Critical Care Medicine 177 (4), 388–395.
Kajekar, R., 2007. Environmental factors and developmental outcomes in the lung. Pharmacology and Therapeutics 114 (2), 129–145.
Landrigan, P.J., et al., 2018. The Lancet Commission on pollution and health. The Lancet 391 (10119), 462–512.
Nkansah, M.A., Darko, G., Dodd, M., Opoku, F., Essuman, T.B., Antwi-Boasiako, J., Fantke, P., 2017. Assessment of pollution levels, potential ecological risk and human health risk of heavy metals/metalloids in dust around fuel filling stations from the Kumasi Metropolis, Ghana. Cogent Environmental Science 3 (1). https://doi.org/10.1080/23311843.2017.1412153.
O'Connor, G.T., Neas, L., Vaughn, B., Kattan, M., Mitchell, H., Crain, E.F., Evans III, R., Gruchalla, R., Morgan, W., Stout, J., Adams, G.K., Lippmann, M., 2008. Acute respiratory health effects of air pollution on children with asthma in US inner cities. Journal of Allergy and Clinical Immunology 121 (5), 1133–1139.e1. https://doi.org/10.1016/j.jaci.2008.02.020.
Pope 3rd, C.A., Dockery, D.W., 2006. Health effects of fine particulate air pollution: Lines that connect. Journal of the Air and Waste Management Association 56 (6), 709–742.
Schwartz, J., 2004. Air pollution and children's health. Pediatrics 113 (Suppl. 4), 1037–1043.

Air Pollution and Lung Cancer Risks
Shuxiao Wang and Shuchang Liu, Tsinghua University, Beijing, China
© 2019 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/B978-0-12-409548-9.11823-8

Change History: April 2019. Shuxiao Wang and Shuchang Liu updated sections/table: Introduction, Lung Cancer Risk of Ambient Air Pollution by Area: China, Table 1. This is an update of S. Wang, Y. Zhao, Air Pollution and Lung Cancer Risks, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 26–38.

Introduction Lung cancer is an increasingly serious health problem and remains the top contributor to cancer-related mortality. According to the Global Burden of Disease (GBD) study, deaths due to lung cancer worldwide increased from 1.44 million in 2006 to 1.71 million in 2016. In developing countries such as China, lung cancer mortality has continued to increase in both cities and rural areas. In the early 1970s, lung cancer mortality in China was 7 per 100,000 people (12.61 for urban areas), and it increased to 47.84 per 100,000 in 2016. Lung cancer can be considered a pathology of multifactorial etiopathogenesis, affected by several factors including outdoor and indoor air pollution, active and passive smoking, and a variety of occupational agents, together with genetic factors and living habits. Although smoking is the leading cause of lung cancer, it cannot explain the relatively high rates of lung cancer observed in nonsmoking populations. The International Agency for Research on Cancer (IARC) has established that some environmental agents, such as outdoor air pollution, engine exhaust, asbestos, and residential radon, can also lead to lung cancer. Epidemiological studies that have taken into account tobacco smoking, as well as occupational and other risk factors, have reported increases in lung cancer associated with air pollution. Based on such epidemiological evidence, health impact assessments can be conducted to quantify the specific health impact of air pollution on lung cancer in terms of metrics such as premature deaths and disability-adjusted life years (DALYs). The GBD 2015 study estimated that 283,300 deaths and 6,209,100 disability-adjusted life years due to tracheal, bronchus, and lung cancer annually were attributable to ambient air pollution, while household air pollution from solid fuels resulted in 158,380 deaths and 3,664,000 disability-adjusted life years from these cancers. According to the WHO's estimates, 14% and 17% of lung cancers are attributable to ambient air pollution and household air pollution, respectively. In particular, it is reported that among all lung cancer cases of women in East and South Asia, 83% occur in nonsmokers, and solid fuel combustion is thought to be the major cause, especially in developing countries. Since coal, wood, and other biomass fuels such as crop residues and dung remain the primary heating and cooking fuels in developing countries, particularly in rural areas, small cities, and less-developed periurban areas of large cities, indoor air pollution there is often even more serious than outdoor pollution. Household air pollution in these areas therefore has a disproportionate impact on lung cancer, affecting a large share of the population who spend a large percentage of their time indoors.

Ambient Air Pollution and Lung Cancer Ambient air pollution has been implicated as a cause of various health effects, including lung cancer. Air pollution is a complex mixture of different gaseous and particulate components, and the mechanisms underlying its genetic and related effects are complicated. It is generally believed that fine particles can act as carriers of toxic and carcinogenic pollutants (e.g., SO2, PAHs, and heavy metals), bringing them into human lungs, while many gaseous components of air pollution are themselves carcinogenic or toxic. In addition, insoluble particles can potentially lead to tumor formation as they deposit in the human body and are taken up by phagocytes and other cells. Although the specific biological mechanisms involved are still unclear, there is strong evidence from experiments and observations that air pollution is genotoxic and contributes to the development of tumors by inducing sustained inflammation. The air pollution mix varies greatly by locality and time. In recent decades, emissions and air concentrations of traditional industrial air pollutants, such as SO2 and coarse particles, have decreased, whereas there is an increasing or continuing problem with air pollution from vehicles, with emissions of engine combustion products including volatile organic compounds, nitrogen oxides (NOx), and fine particulate matter (PM2.5), as well as secondarily increased ozone levels. The epidemiological evidence regarding outdoor air pollution and lung cancer has been reported by a number of cohort studies as well as case–control studies. In these studies, relative risks (RRs) or odds ratios (ORs) were used for the risk assessment.

Methods of Risk Assessment In statistics and mathematical epidemiology, the risk of an event (or of developing a disease) relative to exposure, the relative risk (RR), can be calculated as the ratio of the probability of the event occurring in the exposed group to that in a non-exposed group. RR is used frequently in the statistical analysis of binary outcomes where the outcome of interest has a relatively low probability. It is thus often


suited to clinical trial data, where it is used to compare the risk of developing a disease. It is particularly attractive because it can be calculated by hand in simple cases, but it is also amenable to regression modeling, typically in a Poisson regression framework. In a simple comparison between an experimental group and a control group, an RR > 1 means the event is more likely to occur in the experimental group than in the control group, whereas an RR < 1 means the event is less likely to occur in the experimental group. As a consequence of the delta method, the log of the RR has a sampling distribution that is approximately normal, with a variance that can be estimated by a formula involving the number of subjects in each group and the event rates in each group. This allows the construction of a confidence interval (CI) that is symmetric around log(RR). Taking the antilog of the two bounds of the log CI gives the high and low bounds of an asymmetric CI around the RR. The odds ratio (OR) is a measure of effect size particularly important in Bayesian statistics and logistic regression. It is defined as the ratio of the odds of an event occurring in one group to the odds of it occurring in another group, or a sample-based estimate of that ratio. These groups might be men and women, an experimental group and a control group, or any other dichotomous classification. Similarly, an OR > 1 indicates that the condition or event is more likely in the first group, and vice versa. In medical research, the OR is favored for case–control studies and retrospective studies, whereas the RR is used in randomized controlled trials and cohort studies. The hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions described by two levels of an explanatory variable. For example, if the mortality per unit time among the exposed group is larger than among the unexposed, the HR will be larger than 1, indicating a higher hazard of death from the exposure. Since the HR represents instantaneous risk over the chosen time period, and differs from the RR and OR in that those measures are cumulative to a study-specified endpoint, it is less affected by selection bias due to the choice of endpoints.
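As an illustration of these definitions, the following minimal sketch computes an RR with its delta-method CI and an OR from a 2 × 2 table; the counts are invented for the example.

```python
# A minimal sketch of the RR and OR calculations defined above, with the
# delta-method confidence interval for log(RR). Counts are illustrative.
import math

def relative_risk_ci(a, n1, b, n0, z=1.96):
    """RR of exposed (a events / n1 subjects) vs. unexposed (b / n0),
    with an asymmetric 95% CI built on the log scale."""
    rr = (a / n1) / (b / n0)
    se_log = math.sqrt((1 - a / n1) / a + (1 - b / n0) / b)
    lo, hi = (math.exp(math.log(rr) + s * z * se_log) for s in (-1, 1))
    return rr, lo, hi

def odds_ratio(a, c, b, d):
    """OR from a 2x2 table: odds a/c in one group vs. odds b/d in the other."""
    return (a / c) / (b / d)

# Illustrative counts: 30/1000 cases among exposed, 20/1000 among unexposed
print(relative_risk_ci(30, 1000, 20, 1000))   # RR = 1.5 with asymmetric CI
print(odds_ratio(30, 970, 20, 980))           # OR slightly above the RR
```

For rare outcomes, as in the printed example, the OR closely approximates the RR, which is one reason case–control studies can stand in for cohort designs.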

Lung Cancer Risk From Ambient Air Pollution by Area

North America

Several cohorts have been set up in North America to examine the association between lung cancer mortality and outdoor air pollution (mainly particulate matter (PM) and ozone), including the Adventist Health Study on Smog (AHSMOG), the Harvard Six Cities Study, the American Cancer Society (ACS) Study, the Canadian Census Health and Environment Cohort (CANCHEC), and the Ontario Tax Cohort Study. There have been several follow-ups of these cohort studies, and newly published analyses covering more extended time periods further substantiate the original findings and provide much clearer, stronger evidence of the relationship between ambient air pollution exposure and increased lung cancer risk. The AHSMOG was based on 6338 California Seventh-Day Adventists followed from 1977 through 1992. PM10 was measured only during the final 5 years of follow-up; during most of the study, PM10 concentrations were estimated from measurements of total suspended particles (TSP). The analysis yielded very mixed results for lung cancer mortality. The investigators reported substantial increases in RRs of lung cancer mortality among men in relation to long-term ambient concentrations of PM10 (RR = 3.36, 95% CI: 1.57–7.19, associated with an interquartile range of 24.08 µg m−3) but not for females (RR = 1.33, 95% CI: 0.60–2.96). In contrast, such cancer deaths were significant for mean NO2 only for females (RR = 2.81, 95% CI: 1.15–6.89, associated with an interquartile range of 19.78 ppb) but not for males (RR = 1.82, 95% CI: 0.93–3.57). Lung cancer metrics for mean SO2 were significant for both males (RR = 1.99, 95% CI: 1.24–3.20, associated with an interquartile range of 3.72 ppb) and females (RR = 3.01, 95% CI: 1.88–4.84). Although the mean ozone concentration was not associated with lung cancer incidence in men or women, there was an association of ozone with lung cancer risk in males when the exposure metric was formulated as the number of hours per year with elevated ozone concentrations (RR = 4.19, 95% CI: 1.81–9.69 for at least 551 h per year over 100 ppb ozone, controlling for pack-years of cigarette smoking, educational level, and current alcohol use). However, as suggested by the wide CIs, these results were based on very few cases (18 lung cancer deaths for females and 12 for males). The AHSMOG-2 was a subpopulation of the Adventist Health Study-2 (AHS-2), which includes about 96,000 participants from all 50 US states and 5 provinces of Canada. After excluding participants not linked with cancer registries or lacking a complete address, prevalent cancers other than non-melanoma skin cancer, and records with missing values on important confounders, the AHSMOG-2 included about 80,285 participants from the United States followed from 2002 through 2011. Monthly surface PM2.5 and O3 concentrations were estimated by interpolating monitored concentrations across America from January 2000 through December 2001. The investigators reported that for each 10 µg m−3 increment in PM2.5, the adjusted hazard ratio (HR) for lung cancer incidence was 1.43 (95% CI: 1.11–1.84) in the two-pollutant multivariable model with ozone, after controlling for sex, education level, race, and smoking. For people who spent more time outdoors, the risk of lung cancer was potentially higher: among those who spent > 1 h/day outdoors, the HR was 1.68 (95% CI: 1.28–2.22) per 10 µg m−3 increase in PM2.5.
The Harvard Six Cities Study was based on 8111 residents of six US cities, with follow-up beginning in 1974. Three follow-ups of the cohort have been published: through 1989, through 1998, and through 2009. The earliest and second follow-ups found positive associations of lung cancer mortality with PM2.5. For the latest follow-up, exposure was estimated on the basis of average levels of pollution in each city over the risk period, assuming residential stability, so there was no within-city variation. HR estimates were adjusted for age, gender, smoking habits, body mass index, and education. The total number of lung cancer deaths was reported as 7.8% of 4495, and the adjusted HR was 1.37 (95% CI: 1.07–1.75, associated with an increase of 10 µg m−3). The ACS Study linked ambient air pollution data with risk factor data throughout the United States for nearly 1.2 million adults enrolled in the ACS Cancer Prevention Study II (CPS-II) since 1982. Several analyses linking air pollution with lung cancer have been published using these data. One recent publication followed never-smokers in the cohort through 2008 and found a positive


association between PM2.5 exposure and lung cancer. The investigators controlled for individual differences in age, sex, race, education, body mass index, marital status, diet, and other exposures such as passive smoke, and found that every 10 µg m−3 increase in PM2.5 concentration was associated with a 15%–27% increase in lung cancer mortality. Another recent study focused on subjects residing in California through 2000 and found that HRs for lung cancer were elevated for PM2.5 (HR = 1.06; 95% CI: 0.95–1.18, associated with an interquartile increase of 5.3037 µg m−3) and NO2 (HR = 1.11; 95% CI: 1.02–1.21, associated with an interquartile increase of 4.1167 ppb) but not for ozone (HR = 0.86; 95% CI: 0.75–0.99, associated with an interquartile increase of 24.1782 ppb). CANCHEC was based on adults over 25 years of age who were usual residents of Canada from 1991 through 2011, including approximately 3.6 million participants. The study explored spatial variation across regions by dividing the study area into different zones according to atmospheric conditions. After adjusting for socioeconomic characteristics, individual confounders, and ozone, the study reported increases in the HR of lung cancer mortality in relation to ambient concentrations of PM2.5 (HR = 1.49, 95% CI: 1.23–1.88, associated with an increase of 10 µg m−3), while a higher HR of 1.54 (95% CI: 1.27–1.87, associated with an increase of 10 µg m−3) was found after consideration of spatial variation. A few studies have focused on the relationship between ambient volatile organic compounds (VOCs) and lung cancer. One of them is a cohort study based on the Ontario Tax Cohort, randomly selected from income tax filings of Canadians residing in the province of Ontario. The cohort comprised 58,760 Toronto residents followed from 1982 through 2004. The investigators reported that adjusted HRs for lung cancer were potentially elevated for benzene (HR = 1.05; 95% CI: 0.96–1.14, associated with an interquartile increase of 0.13 µg m−3), n-hexane (HR = 1.03; 95% CI: 0.97–1.10, associated with an interquartile increase of 1.2 µg m−3), and total hydrocarbons (HR = 1.04; 95% CI: 0.98–1.10, associated with an interquartile increase of 9.02 µg m−3).

Europe

European populations have a wide range of different exposures and living habits (particularly diet and smoking prevalence), which could act as effect modifiers; conducting studies in Europe is therefore extremely valuable. The European Study of Cohorts for Air Pollution Effects (ESCAPE) is a pooled analysis of 17 European cohorts involving 312,944 individuals from nine European countries with large differences in exposure levels. The investigators used random-effects models for meta-analysis; the mean follow-up time was 12.8 years, during which 2095 incident lung cancer cases were diagnosed. The adjusted HRs for lung cancer were increased for PM10 (HR = 1.22; 95% CI: 1.03–1.45, associated with an increase of 10 µg m−3) and PM2.5 (HR = 1.18; 95% CI: 0.96–1.46, associated with an interquartile increase of 5 µg m−3), but not for NO2 (HR = 1.01; 95% CI: 0.95–1.07, associated with an interquartile increase of 20 µg m−3). It is worth mentioning that there was no evidence of heterogeneity among the hazard ratios for the 17 cohorts.

China

In China, there have also been a series of studies evaluating lung cancer risks. However, most of them used simple regression methods to indicate the association between lung cancer and ambient levels of some pollutants, such as TSP, SO2, and NOx. Very few studies have provided quantitative risk values for lung cancer caused by ambient air pollution, or exposure–response functions. One cohort study published in 2017 provided HR values for lung cancer from outdoor air pollution across China. The study population was randomly selected from China's 145 Disease Surveillance Points, with 224,064 participants covering both urban and rural areas. All were resident men older than 40 years, followed from 1990/1991 through 2006. The investigators reported increases in the HR of lung cancer mortality in relation to ambient concentrations of PM2.5 (HR = 1.12, 95% CI: 1.07–1.14, associated with an increase of 10 µg m−3), after adjusting for individual confounders, socioeconomic characteristics, and other exposures such as household solid-fuel use.

Integrated exposure–response (IER) model

The IER model was developed within the GBD study for estimating the global burden of disease attributable to ambient fine particulate matter exposure; it was used in the GBD 2010 and GBD 2015 projects and has been widely adopted by researchers. The IER model was motivated by the fact that most risk assessment studies of PM2.5 are based on cohorts from North America and Europe, where concentrations are relatively low, typically from approximately 5 to 30 µg m−3, whereas in developing countries PM2.5 concentrations are commonly above 100 µg m−3. Some studies tried to extrapolate existing exposure–response relationships based on linear or log-linear models to estimate the risk of exposure to higher PM2.5 concentrations; however, these extrapolations yielded implausibly high values of relative risk at higher concentrations. To assess the risk of ambient fine particulate matter exposure globally, the GBD study developed the IER model, which constrains the shape of the exposure–response relationship using epidemiological data for other kinds of exposure with higher PM2.5 concentrations, such as household air pollution from solid fuel combustion, passive tobacco smoke, and active tobacco smoking. The basic assumption is that the toxicity of PM2.5 is determined only by its concentration, and that its composition does not affect the exposure–response relationship. In addition, to combine all types of prescribed exposure, it is assumed that the different exposure types have no impact on each other for any cause of mortality. Using the IER model, the investigators established exposure–response relationships between PM2.5 concentrations over the global range and increased mortality from major diseases, including lung cancer. According to the GBD study, the form of the IER model is:

RR_IER(z) = 1, for z < z_cf
RR_IER(z) = 1 + α{1 − exp[−γ(z − z_cf)^δ]}, for z ≥ z_cf

where z is the exposure to PM2.5 in micrograms per cubic meter and z_cf is the assumed counterfactual concentration with no additional risk. The power of PM2.5, δ, is included to predict risk over a very large range of concentrations. RR_IER approaches 1 + α as z approaches infinity, and RR_IER(z_cf + 1) is approximately 1 + αγ. Thus, γ = [RR_IER(z_cf + 1) − 1]/[RR_IER(∞) − 1] can be interpreted as the ratio of the RR at low to high exposures. The recently published Chinese cohort study mentioned above compared its results with the IER. The investigators found that the IER may currently underestimate the relative risk where population exposure to PM2.5 is comparatively high, which may contribute to a future update of the IER. However, given the limited number of studies in developing countries with high PM2.5 exposure, more research is still needed to characterize the exposure–response relationship in those areas.
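For illustration, the IER functional form above can be implemented in a few lines. The parameter values in the sketch below are placeholders chosen to show the saturating shape, not fitted GBD coefficients.

```python
# A minimal implementation of the IER functional form given above.
# alpha, gamma, delta, z_cf are illustrative placeholders, not GBD values.
import numpy as np

def rr_ier(z, alpha, gamma, delta, z_cf):
    """IER relative risk at PM2.5 concentration z (ug/m3)."""
    z = np.asarray(z, dtype=float)
    rr = 1.0 + alpha * (1.0 - np.exp(-gamma * np.maximum(z - z_cf, 0.0) ** delta))
    return np.where(z < z_cf, 1.0, rr)  # no additional risk below z_cf

# The RR rises with concentration but flattens toward 1 + alpha at high z
for z in (5, 25, 100, 400):
    print(z, rr_ier(z, alpha=1.8, gamma=0.07, delta=0.6, z_cf=5.8))
```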

Ecological Fallacy An ecological fallacy, or ecological bias, is an error in the interpretation of statistical data in an ecological study, whereby inferences about the nature of specific individuals are based solely on aggregate statistics collected for the group to which those individuals belong. Ecological studies are especially susceptible to ecological fallacy because they assume that individual members of a group have the average characteristics of the group at large; if there were no within-area individual variability there would be no ecological fallacy, but such variability always exists. Several studies have emphasized that ecological analysis of lung cancer mortality rates and air pollution is burdened by the ecological fallacy. For instance, epidemiological studies typically utilize one or a few central monitoring stations as a proxy for personal exposure to air pollution. Recent studies indicate that there might be large differences between population exposure estimated from fixed ambient monitoring stations and personal exposure measured on individuals. It has therefore been suggested that robust and comprehensive spatiotemporal analysis methods should be developed and applied to evaluate PM–lung cancer associations; such approaches are believed to compensate for residual variability resulting from spatial variation in parameters not included in traditional analysis methods. To address this criticism, a study should select a set of place-specific variables that measure determinants of health ranging from the biophysical environment to the social environment. Population data categorized by enumeration area, the smallest geographic unit for which such information is available, may also be required to perform finer analyses and to reduce the ecological fallacy. For example, a study may investigate variables such as socioeconomic characteristics, occupations, and smoking habits, which may help to understand the actual exposure of an individual. Other confounders, such as diet, age, and sex, may also affect personal health responses to exposures. By analyzing those individual confounders, the ecological fallacy can be reduced to some extent. In addition, using concentration surfaces generated by models and satellites has become popular among epidemiological studies in recent years. This has the advantage that finer exposure estimates can be considered instead of relying on the limited monitoring stations in an area; however, model uncertainty remains a main concern. Most of the epidemiological studies mentioned above performed stratified analyses according to individual confounders, and some of them used concentration surfaces generated by models and satellites. Moreover, to understand the ecological fallacy, it is useful to inspect the exposure–response relationship with more individual information, such as individual confounders and within-area exposure variability, in relatively small areas with smaller populations, so that the outcome can be carefully analyzed at the individual level; such analyses may inform the results for larger areas. For example, the ACS investigators performed specific analyses in cities or states such as Los Angeles and California that took intra-city exposure variability into consideration, such as proximity to roads, which implies greater impact from traffic. Their results generally support the outcomes of the national cohorts, while finding that traffic pollution was positively associated with lung cancer. Moreover, the concentration–response relationship in Los Angeles was found to be steeper than in the national study or in the rest of the state, indicating that the population of Los Angeles is more susceptible or that the air pollution there is more toxic. Such studies are helpful for better understanding results from larger cohorts that could not consider individual variations in detail and are therefore susceptible to the ecological fallacy.

Indoor Air Pollution and Lung Cancer Indoor Combustion Based on the observation of very high lung cancer rates in some regions of China and elsewhere among women and children who spend much of their time at home, exposure to indoor air pollution from combustion sources, especially household solid fuel combustion for heating and cooking, as well as high levels of cooking oil vapors resulting from some cooking methods, has been identified as a risk factor for lung cancer. In most cases, air pollution arising from domestic activities such as cooking, heating, and lighting is classified as household air pollution (HAP); the major concern regarding HAP is household solid fuel use in low- and middle-income countries, particularly in South Asia and sub-Saharan Africa. IARC concluded that HAP from coal use is carcinogenic to the lung, while the evidence for other solid fuels such as biomass still requires further research. Several factors influencing HAP have been studied: for heating fuel, the type of fuel, type of stove or central heating, ventilation, heating location and duration, and subjective smokiness; and for cooking fuel, the type of fuel, type of stove or open pit, kitchen ventilation, cooking location and duration, frequency of cooking, and smokiness. Epidemiologic studies, mostly


case-control studies, have examined the relationship between household emissions and lung cancer risk. One study systematically reviewed primary studies reporting relationships between lung cancer and household solid fuel use and conducted a meta-analysis including 28 case-control studies (17 from China, three from Taiwan, two from India, and one each from Japan, Mexico, the United States, Canada, and Europe). A significant relationship between lung cancer and solid fuel use was found for biomass smoke (OR = 1.50; 95% CI: 1.17–1.94) and coal smoke (OR = 1.82; 95% CI: 1.60–2.06); in addition, the association with solid fuel use was reported to be greater in females (OR = 1.81; 95% CI: 1.54–2.12) than in males (OR = 1.16; 95% CI: 0.79–1.69). Few cohort studies have assessed the effects of exposure to HAP on lung cancer. A large retrospective cohort study investigated the risk of lung cancer associated with domestic use of coal. The follow-up was from 1976 to 1996 and involved 42,422 participants, all residents of Xuanwei County in China. The investigators reported a significantly increased HR of lung cancer mortality for users of smoky coal relative to users of smokeless coal, both in men (HR = 36; 95% CI: 20–65) and in women (HR = 99; 95% CI: 37–266). Apart from household solid fuel use, cooking oil fume, especially from Chinese-style cooking, is another kind of indoor pollution and has also been suggested to have an effect on lung cancer. Several epidemiologic studies have investigated the relationship between cooking oil fume and lung cancer risk. A meta-analysis including 2 retrospective cohorts in China and 21 case-control studies (18 from China, two from Canada, and one from Singapore) summarized the evidence on cooking oil fume as a risk factor for lung cancer. The investigators reported an increased OR of lung cancer among nonsmoking women who cook (OR = 1.98; 95% CI: 1.54–2.54). Most of the research on indoor air pollution has been conducted in developing countries; limited data supporting a similar effect of exposure to cooking- and heating-derived indoor air pollution are available from other regions of the world. This may be because, in most parts of Europe and North America, frying is less common than in China, and kitchens are generally larger, better ventilated, and separated from the living quarters; central heating is increasingly common, and open combustion sources indoors are infrequent. However, given that lung cancer induction may span several decades, earlier living conditions may still play a role today in the risk of lung cancer among the middle-aged and older generations in Europe, although their importance should be waning.

Environmental Tobacco Smoke Environmental tobacco smoke (ETS) is composed of sidestream and mainstream smoke, in which known, probable, or possible human carcinogens are present. The association between exposure to ETS and lung cancer has been demonstrated extensively. The GBD study estimates that second-hand smoke exposure resulted in 27,350 deaths and 615,930 disability-adjusted life years due to tracheal, bronchus, and lung cancer in 2016. Several studies have been conducted in developed countries such as the United States and the European Union. In a meta-analysis of 55 studies, the RR of lung cancer attributable to spousal ETS was 1.27 (95% CI: 1.17–1.37) for never-smoking women; the RR for studies in North America was 1.15 (95% CI: 1.03–1.28), in Asia 1.31 (95% CI: 1.16–1.48), and in Europe 1.31 (95% CI: 1.24–1.52). There have also been a series of studies focusing on developing countries such as China. A cohort study of women from Shanghai, China, reported a significantly increased HR of lung cancer mortality in relation to second-hand smoke at work (HR = 1.79, 95% CI: 1.09–2.93). IARC has evaluated the evidence of a carcinogenic risk from exposure to ETS. After taking confounders such as dietary, occupational, and social class-related factors into consideration, ETS was classified as an established human carcinogen.

Meta-Analysis on Lung Cancer Risk Estimates of Indoor Air Pollution in China As noted in the preceding text, China, the largest developing country in the world, attracts much attention with respect to the impacts of indoor air pollution on lung cancer, particularly in rural regions. As a tool for extracting more information from a series of individual studies by pooling their results using statistical methods, meta-analysis was applied to integrate existing studies and establish more general quantitative estimates.

Method In statistics, a meta-analysis combines the results of several studies that address a set of related research hypotheses in order to obtain a more accurate estimate. A meta-analysis was carried out based on the studies listed in Table 1. Pooled ORs were calculated by the random-effects model based on the DerSimonian and Laird method. Heterogeneity was defined as a P-value of Cochran's Q-test < 0.05 or I2 > 50%. All analyses were conducted with STATA.
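For readers who want to reproduce this kind of pooling, the sketch below implements DerSimonian-Laird random-effects pooling of ORs together with Cochran's Q and I². The input ORs and CIs are illustrative, not rows of Table 1, and this is a simplified stand-in for the STATA routines actually used.

```python
# A minimal sketch of DerSimonian-Laird random-effects pooling of odds
# ratios, with Cochran's Q and I^2 as used above. Inputs are illustrative.
import numpy as np
from scipy import stats

def dl_pool(ors, los, his):
    """Pool ORs given their 95% CIs; returns pooled OR, CI, Q, p, I2 (%)."""
    y = np.log(ors)                           # per-study log odds ratios
    se = (np.log(his) - np.log(los)) / (2 * 1.96)  # SE back-calculated from CI
    w = 1.0 / se**2                           # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)   # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))
    w_re = 1.0 / (se**2 + tau2)               # random-effects weights
    mu = np.sum(w_re * y) / w_re.sum()
    se_mu = 1.0 / np.sqrt(w_re.sum())
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    p = stats.chi2.sf(q, df)
    return (np.exp(mu), np.exp(mu - 1.96 * se_mu),
            np.exp(mu + 1.96 * se_mu), q, p, i2)

# Illustrative example with three hypothetical case-control ORs
print(dl_pool(np.array([2.1, 1.5, 3.0]),
              np.array([1.3, 1.0, 1.6]),
              np.array([3.4, 2.2, 5.6])))
```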

Results of Indoor Air Pollution and Lung Cancer The results of the meta-analysis are shown in Table 2. Regarding coal consumption, the pooled OR values obtained using the random-effects model were 2.06 (95% CI: 1.34–3.18), 1.53 (95% CI: 1.02–2.30), 3.01 (95% CI: 1.22–7.43), and 1.68 (95% CI: 1.39–2.03) for nonsmoking women, women, men, and both sexes, respectively. These results indicate a significant association between coal consumption and lung cancer risk. The higher risk in men may be due to the limited number of studies, most of which were conducted in Xuanwei, where people are possibly more susceptible to lung cancer owing to genetic susceptibility and the toxicity of the local coal. For ETS exposure, the pooled OR values were 1.36 (95% CI: 1.22–1.52) and 1.39 (95% CI: 1.17–1.67) for nonsmoking women and both sexes, respectively, indicating an association between ETS and lung cancer risk.

Table 1  Results selected for the meta-analysis from epidemiological studies in China
(Each row: OR (95% CI); period; study area; sample size (cases/controls); reference.)

Coal, both sexes
3.72 (0.88–15.71); 1986–1993; Nanjing; 263/263; Shen et al. (1996)
7.6 (3.7–15.7); 1990–1991; Fuzhou; 102/306; Luo et al. (1996)
2.22 (1.28–3.86); 1996–1999; Harbin; 206/618; Sun et al. (2002)
2.4 (1.3–4.4); 1995–1996; Xuanwei; 122/122; Lan et al. (2000)
1.29 (1.03–1.61); 1994–1998; Gansu; 846/1740; Kleinerman et al. (2002)
1.51 (1.2–1.91); 1994–1998; Gansu; 740/1628; Kleinerman et al. (2004)
2.333 (1.187–4.588); 2007–2008; Dalian; 200/200; Wang (2008)
2.264 (1.919–2.673); 2006–2014; Nanjing; 1374/1374; Liu et al. (2017)
7.4 (4.1–13.1); 1985–1990; Xuanwei; 498/498; Lan et al. (2008)

Coal, women
5.81 (1.67–20.22); 1992–1993; Harbin; 120/120; Dai et al. (1996)
1.3 (0.3–5.8); 1992–1993; Taiwan; 117/117; Ko et al. (1997)
1.03 (0.66–1.63); 1994–1998; Gansu; 220/459; Kleinerman et al. (2002)
1.33 (0.83–2.14); 1994–1998; Gansu; 180/420; Kleinerman et al. (2004)
6.4 (2.9–14.5); 1985–1990; Xuanwei; 238/238; Lan et al. (2008)

Coal, nonsmoking women
1.03 (0.84–1.26); 1996–2009; Shanghai; 429/71320; Kim et al. (2014)
3.18 (2.55–3.97); 2000–2002; Shenyang; 618/872; Fan (2004)
4.08 (2.17–7.67); 2005–2007; Taiyuan; 119/119; Mu et al. (2013)
98.8 (36.8–265.6); 1976–1996; Xuanwei; 1128/16443; Baroneadesi et al. (2012)
1.2 (0.95–1.52); 1996–2000; Shanghai; 332/74609; Kim et al. (2016)

Coal, men
1.6 (1.22–2.1); 1994–1998; Gansu; 560/1208; Kleinerman et al. (2004)
6.31 (2.85–13.94); 1985–1990; Xuanwei; 260/260; Kim et al. (2014)
36.2 (20.3–64.7); 1976–1996; Xuanwei; 1079/17054; Baroneadesi et al. (2012)
8.4 (3.6–19.6); 1985–1990; Xuanwei; 260/260; Lan et al. (2008)

Cooking oil fume, women
1.84 (1.11–3.03); 2004–2006; Zhuhai; 131/131; Liang et al. (2009)
2.404 (1.258–4.593); 2007–2008; Dalian; 98/87; Wang (2008)
8.11 (3.13–21.05); 1996–1999; Harbin; 206/618; Sun et al. (2002)
3.81 (1.06–13.73); 1986–1993; Nanjing; 263/263; Shen et al. (1996)
1.61 (1.01–2.56); 1996–1998/2005–2008; Singapore; 268/203; Tang et al. (2010)

Cooking oil fume, both sexes
2.87 (1.56–5.29); 2002–2004; Hong Kong; 139/164; Chiu et al. (2010)
1.71 (1.41–2.08); 1992–1995; Tianjin; 264/250; Wang et al. (2001)
1.67 (1–2.5); 1994–1998; Gansu; 233/459; Metayer et al. (2002)
4.53 (2.09–9.94); 1991–1995; Shenyang; 72/72; Zhou et al. (2000)
1.78 (1.14–2.78); 2002–2009; Taiwan; 248/226; Lo et al. (2013)
0.89 (0.68–1.16); 1996–1998/2005–2009; Singapore; 427/1357; Tang et al. (2010)

Cooking oil fume, nonsmoking women
6.15 (2.16–17.55); 2002–2004; Hong Kong; 80/128; Yu et al. (2006)
1.59 (1.13–2.23); 2004–2010; Shenyang; 524/524; Yin et al. (2014)
2.51 (1.8–3.51); 2002–2006; Shenyang; 350/350; Li et al. (2008)
3.18 (2.55–3.97); 2000–2002; Shenyang; 618/872; Fan (2004)
3.79 (2.29–6.27); 1992–1994; Shenyang; 135/135; Wang et al. (1996a)
1.86 (1.39–2.47); 1992–1995; Tianjin; 264/250; Wang et al. (2001)
1.56 (1–2.5); 1994–1998; Gansu; 230/459; Kleinerman et al. (2000)
2.32 (1.59–3.41); 1992–1993; Shanghai; 498/595; Liu et al. (2001)
2.15 (1.2–3.21); 1992–1993; Shanghai; 504/601; Zhong et al. (1995)
1.84 (1.12–3.02); 1992–1994; Shanghai; 504/601; Zhong et al. (1999b)
2.5 (1.4–4.3); 1993–1996; Taiwan; 131/262; Ko et al. (2000)
2.05 (1.1–3.84); 2006–2010; Fujian; 72/147; Lin and Cai (2012)
2.91 (1.02–8.27); 2006–2010; Fujian; 31/80; Lin and Cai (2012)

ETS, both sexes
2.515 (2.141–2.953); 2006–2014; Nanjing; 1374/1374; Liu et al. (2018)
2.3 (1.5–3.6); 1993–1994; Heilongjiang; 128/128; Yu et al. (1996)
1.06 (0.63–1.8); 1996; Beijing; 350/350; Li et al. (2008)
1.19 (0.7–2); 1994–1998; Gansu; 886/1765; Kleinerman et al. (2000)
2.4 (1.1–5.1); 1990–1991; Fuzhou; 102/306; Luo et al. (1996)
1.79 (1.08–2.97); 1990–1993; Guangdong; 390/390; Wang et al. (1996a)
1.98 (1.12–3.51); 2005–2007; Taiyuan; 92/117; Mu et al. (2013)

ETS, nonsmoking women
1.39 (1.17–1.67); 2002–2009; Taiwan; 1221/1221; Lo et al. (2013)
2.76 (1.84–4.13); 2006–2010; Fujian; 208/255; Lin and Cai (2012)
1.15 (0.64–2.06); 1991–1994; Shenyang; 166/166; Wang et al. (1996b)
2.52 (1.03–6.44); 1990–1993; Beijing; 116/464; Zheng et al. (1997)
1.65 (1.1–2.47); 1992–1993; Shanghai; 498/595; Liu et al. (2001)
2.7 (1.49–4.88); 1985–1987; Harbin; 114/114; Wang et al. (1996c)
1.19 (0.66–2.16); 1986; Guangzhou; 229/229; Lei et al. (1996)
1.7 (1.3–2.3); 1992–1994; Shanghai; 504/601; Zhong et al. (1999a)
1.3 (0.7–2.5); 1992–1993; Taiwan; 117/117; Ko et al. (1997)
3.14 (1.97–5.01); 1990–1994; Guangdong; 200/200; Dai et al. (1997)
0.94 (0.45–1.97); 1991–1995; Shenyang; 72/72; Zhou et al. (2000)

Table 2  Odds ratios resulting from the meta-analysis (random-effect model). Each row gives pooled OR (95% CI); heterogeneity Q-statistic; d.f.; p; I² (%).

Coal, nonsmoking women: 2.06 (1.34–3.18); Q = 134.94; d.f. = 4; p = 0; I² = 97.04
Coal, women: 1.53 (1.02–2.30); Q = 19.81; d.f. = 4; p = 0.001; I² = 79.81
Coal, men: 3.01 (1.22–7.43); Q = 100.12; d.f. = 3; p = 0; I² = 97
Coal, both sexes: 1.68 (1.39–2.03); Q = 56.42; d.f. = 8; p = 0; I² = 85.82
ETS, nonsmoking women: 1.36 (1.22–1.52); Q = 26.4; d.f. = 11; p = 0.006; I² = 58.33
ETS, both sexes: 1.39 (1.17–1.66); Q = 16.18; d.f. = 5; p = 0.006; I² = 69.09
Indoor cooking oil, nonsmoking women: 1.52 (1.32–1.74); Q = 69.78; d.f. = 13; p = 0; I² = 81.37
Incense: only three studies, each focusing on a different population; no meta-analysis conducted.

The meta-analysis results indicate that there is an association between ETS and lung cancer risk. For cooking oil fume exposure, the pooled OR value obtained using the random-effect model was 1.52 (95% CI: 1.32–1.74) for nonsmoking women. For incense use there were only three studies, each focusing on a different population, so they were not included in the meta-analysis. High heterogeneity was found between studies in all categories. Since the definition and measurement of 'exposure' and 'nonexposure' differ considerably between studies, it is difficult to set strict criteria for selecting studies for meta-analysis, which may partly explain the high heterogeneity. For example, among the 14 collected studies on the risk of cooking oil fume among nonsmoking women, four gave no detailed description of the exposure definition, another four classified exposure simply as heavy versus medium, or frequent versus rare, without stating the classification criteria, and three defined exposure as much fume versus little fume. Only three studies described quantified classification criteria, either annual cooking frequency or so-called 'dish-years', a measure related to the number of dishes cooked by the participant. Implicit classification criteria for 'exposure' and 'nonexposure' impede meta-analysis, since it is hard to identify equivalent exposure levels across studies; quantified criteria are therefore preferable. However, because exposure is influenced by many factors, such as cooking method and frequency and ventilation, efforts are still needed to develop quantitative exposure standards for epidemiological studies of indoor air pollution. In general, according to the meta-analysis, indoor air pollution, including coal combustion, cooking oil fumes, and ETS, plays a very important role in increasing the risk of lung cancer. Since Chinese women commonly stay indoors longer than men, indoor air pollution derived from cooking fuel and household coal consumption has been established as a major risk factor for female lung cancer deaths.
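The random-effect pooling used above follows the standard DerSimonian–Laird approach, which is straightforward to reproduce. A minimal Python sketch is given below; it works on the log-OR scale, and the input ORs and confidence limits are illustrative placeholders rather than values taken from Table 1:

import numpy as np

def pool_random_effects(or_values, ci_low, ci_high):
    # DerSimonian-Laird random-effects pooling of odds ratios.
    # The SE of each log-OR is recovered from the reported 95% CI
    # as (ln(upper) - ln(lower)) / (2 * 1.96).
    y = np.log(or_values)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                          # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = 1.0 / (se**2 + tau2)            # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100        # I2 heterogeneity (%)
    ci = np.exp(y_pooled + np.array([-1.96, 1.96]) * se_pooled)
    return np.exp(y_pooled), ci, q, i2

# Illustrative inputs only (not the Table 1 values):
or_hat, ci, q, i2 = pool_random_effects(
    np.array([2.2, 1.5, 3.0, 1.3]),
    np.array([1.3, 1.0, 1.2, 0.9]),
    np.array([3.7, 2.3, 7.4, 1.9]))
print(f"pooled OR {or_hat:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), Q = {q:.1f}, I2 = {i2:.1f}%")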

Test of Publication Bias

The meta-analysis is based on results published in peer-reviewed journals. These may not represent all available information, because some relevant reports may be published in other formats or not published at all. Publication bias refers to the tendency for findings that support a particular hypothesis (in this case, that air pollution has an adverse effect on lung cancer) to be published preferentially in peer-reviewed journals. It might lead to an inaccurate estimate or conclusion regarding the degree of support in the literature for the hypothesis.


Funnel plots, in which OR values are plotted on a logarithmic scale against their SE, are applied in this study to assess the validity of the meta-analysis results. The assumption is that precision improves as the sample size of a component study increases: results from small studies scatter widely at the bottom of the graph, with the spread narrowing among larger studies. If the data lack publication bias, the resulting scatter should be symmetric, like an inverted funnel. Several other factors can also lead to asymmetry of funnel plots. One is a difference in methodological quality between small studies and larger ones: compared with large studies, smaller ones tend to be less rigorous in methodology and more likely to report larger effects of the exposure, which is why the test for significant publication bias is sometimes called a test for 'small-study effects'. In addition, heterogeneity between studies may also produce asymmetric funnel plots; for example, studies whose participants are more susceptible to the exposure will yield larger effects than other studies. Fig. 1 shows the funnel plots of the meta-analysis based on the data listed in Table 1. For coal consumption, marked asymmetry with a significant so-called 'small-study effect' was found, and most of the studies falling outside the 95% CI lines on the right are studies in Xuanwei, indicating that the asymmetry probably originates from heterogeneity between the Xuanwei studies and those from other places, which might in turn reflect the susceptibility of the subject population or the toxicity of the local coal. Fig. 2 shows the funnel plot for coal combustion after deleting the Xuanwei studies; the asymmetry is much reduced and the 'small-study effect' becomes insignificant. For ETS and cooking oil fume, Fig. 1 shows that the studies are distributed quite symmetrically around the pooled estimate, with no significant 'small-study effects'.
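The 'small-study effects' test referred to above is commonly formalized as Egger's regression, in which the standardized effect is regressed on precision and an intercept far from zero signals funnel-plot asymmetry. The article does not state which test was used, so the following Python sketch is offered only as one common implementation:

import numpy as np

def egger_test(or_values, ci_low, ci_high):
    # Egger's regression test for funnel-plot asymmetry: regress the
    # standardized effect (ln OR / SE) on precision (1 / SE); an
    # intercept far from zero indicates 'small-study effects'.
    y = np.log(or_values)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    z = y / se
    precision = 1.0 / se
    X = np.column_stack([np.ones_like(precision), precision])
    coef, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coef
    s2 = resid @ resid / (len(y) - 2)        # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t_intercept = coef[0] / np.sqrt(cov[0, 0])
    return coef[0], t_intercept              # compare t with n - 2 d.f.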

Confounding Factors

Apart from air pollution, there are many other lung cancer risk factors, such as occupational exposure to carcinogens, hormonal factors (in women), environmental exposure to radon, infectious agents, socioeconomic status, medical history, dietary factors, and personal habits such as alcohol drinking and smoking, all of which can increase the uncertainty of epidemiological studies. Smoking is the most important risk factor for lung cancer, especially for men: according to IARC it leads to about 0.85 million lung cancer deaths annually worldwide, and it is a significant confounder in epidemiological studies of the relationship between lung cancer and environmental exposure. In addition, lung cancer risk is generally considered to be negatively associated with intake of β-carotene, flavonoids, and vitamin C-rich foods, and some studies suggest that green tea drinking and consumption of fruit and vegetables may also protect against lung cancer. Occupational exposure is a further factor: people engaged in certain kinds of work, including smelting, steel-making, and coal production and gasification, may be more prone to lung cancer than others. All these confounders can obscure the true relationship between the environmental exposure under investigation and lung cancer, so it is important for epidemiological studies to record confounders among participants and to exclude their impact.

Health Impact Assessment of Air Pollution on Lung Cancer

As epidemiological studies have established the relationship between air pollution and lung cancer, the health impact of air pollution on lung cancer can be calculated. First, pollution indicators must be chosen to quantify the impact of a complex air pollution exposure on lung cancer. Outdoor air pollution comprises many pollutants, such as PM2.5, ozone, and NOx, and indoor combustion generates several pollutants, such as CO, PAHs, PM2.5, and SO2. PM2.5 is commonly used for health impact assessment these days because epidemiological studies have so far found that it shows the strongest relationship with lung cancer, and the concentration–response function of PM2.5 and lung cancer has been widely accepted. The concentration of outdoor PM2.5 is determined through monitoring, modeling, or satellite retrievals, while the concentration of PM2.5 from indoor combustion is estimated from information on fuel type, ventilation, stove technology, and so on. Next, a concentration–response relationship established by epidemiological studies should be surveyed and selected. For diseases of low incidence like lung cancer, it is particularly important to study a large population, because a small population may yield no lung cancer cases at all. Sometimes areas with high lung cancer incidence, such as Xuanwei mentioned before, are chosen to study the exposure effect; these may not be representative, since people there might be especially susceptible to lung cancer, and the resulting concentration–response relationship may not be applicable to other places. For this reason the integrated exposure–response (IER) function, which combines epidemiological studies from across the world, has been widely accepted and used to assess the health impact of air pollution. The number of lung cancer cases related to air pollution can then be calculated using the population attributable fraction (PAF), the proportional reduction in population disease or mortality that would occur if exposure to a risk factor were reduced to an alternative ideal exposure scenario:

PAF = Pe(RR − 1) / [Pe(RR − 1) + 1]

where Pe is the proportion of the population exposed and RR is the relative risk of lung cancer at the exposure level. The health impact can then be calculated as:

ΔY = y0 × P × PAF

Fig. 1 Funnel plots for analysis of publication bias in OR estimates of lung cancer from indoor air pollution: (A) coal consumption; (B) ETS; (C) cooking oil fume. Each panel plots SE against ln(OR) with pseudo 95% confidence limits; SE is the standard error of ln(OR).


Fig. 2 Funnel plot for analysis of publication bias in OR estimates of lung cancer from coal combustion after deleting studies in Xuanwei. The plot shows SE against ln(OR) with pseudo 95% confidence limits; SE is the standard error of ln(OR).

where ΔY is the number of lung cancer cases caused by the exposure, y0 is the lung cancer morbidity or mortality of the whole population, and P is the size of the whole population. It is worth mentioning that RR must be consistent with y0: if RR is the ratio of morbidity between exposed and unexposed people, then y0 is a morbidity and the calculated ΔY is the number of persons developing lung cancer because of the exposure; if premature deaths are to be calculated, RR should be a mortality ratio and y0 a mortality. Two metrics are typically used to assess the health impact of air pollution. One is premature deaths, which is relatively easy for the public to understand but neglects the disability and pain due to lung cancer that affect quality of life and capacity for work. The other is the disability-adjusted life year (DALY), a measure of overall disease burden that combines mortality and morbidity in a single metric by adding together the years of life lost (YLL) and the years lost due to disability (YLD). YLL is based on the life expectancy at the time of death, while YLD measures the burden of living with a disease or disability using the formula:

YLD = ΔY × DW × L

where DW is the disability weight of the specific condition, L is the average duration of the case until remission or death (in years), and ΔY is the number of incident cases in the population. Sometimes age-standardized DALYs are used, in which the value of each year of life depends on age; years lived as a young adult are typically valued more highly than years spent as a young child or older adult, since the productivity of young adults is the highest over the life course. Many studies have assessed the premature deaths and DALYs from lung cancer related to air pollution, such as the GBD study mentioned before. Apart from assessing the global health impact of air pollution, the health impact of individual emission sectors can also be quantified through modeling to inform air pollution control. For example, it is estimated that aerosols from power generation lead to 1.3 million premature deaths annually in China from ischemic heart disease, cerebrovascular disease, chronic obstructive pulmonary disease, and lung cancer, a substantial impact on public health, indicating that control of power plants is necessary for protecting human health in China.
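Putting the PAF and DALY formulas together, a back-of-the-envelope calculation can be sketched as follows; every numerical input is a made-up placeholder for illustration, not a value from this article:

# Illustrative health-impact calculation following the formulas above.
p_e = 0.6           # proportion of the population exposed (placeholder)
rr = 1.4            # relative risk of lung cancer mortality (placeholder)
y0 = 3.0e-4         # baseline lung cancer mortality per person-year (placeholder)
population = 1.0e7  # population size (placeholder)

paf = p_e * (rr - 1) / (p_e * (rr - 1) + 1)   # attributable fraction
deaths = y0 * population * paf                # attributable deaths per year

# DALY = YLL + YLD; RR and y0 above refer to mortality, so a separate
# (placeholder) incidence figure is used for the morbidity part.
yll = deaths * 15.0                           # 15 years of life lost per death (placeholder)
incident_cases = deaths * 1.1                 # placeholder incidence
yld = incident_cases * 0.3 * 2.0              # DW = 0.3, L = 2 years (placeholders)
print(f"PAF = {paf:.3f}, attributable deaths = {deaths:.0f}, DALY = {yll + yld:.0f}")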

Conclusions

Although the increased risk associated with air pollution is small compared with that from cigarette smoking, the epidemiological evidence suggests a considerable association between air pollution and lung cancer. However, studies are still needed to quantify the health response to air pollution at different pollution levels and among different populations in order to reduce the uncertainty, since the health impact of a unit of air pollution differs at different concentrations and human responses to air pollution vary between populations. In addition, as the pollution situation in developing countries such as China is more complicated than that experienced by developed ones, the composition of air pollutants such as PM2.5 may be more complex and their toxicity may differ, so more epidemiological studies are needed in those countries. According to reports of the WHO and the GBD study, the health impact of air pollution is unevenly distributed, with a large part of it borne by the populations of highly polluted cities in developing countries, which account for far more than 60% of the world's burden of air pollution-attributed disease. In Chinese cities, where air pollution levels are much higher than those in the cities of developed countries, outdoor air pollution may contribute to as much as 10% of lung cancer overall, and perhaps a larger proportion in nonsmoking women. Therefore, opportunities to strengthen the scientific evidence on air pollution and lung cancer should be pursued, especially in developing countries, where the estimated health impact of air pollution and the need for accurate risk estimates are greatest. Furthermore, the health impact of air pollution needs further source apportionment, especially in developing countries where air pollution and pollutant emissions are most serious. Such apportionment will inform pollution-control policy in terms of public health, which is the top concern of environmental protection.

See also: Air Pollution Episodes; Cigarette Smoke, DNA Damage Repair, and Human Health; Effect of Air Pollution on Human Health; Hazardous (Organic) Air Pollutants; Indoor Air Pollution Attributed to Solid Fuel Use for Heating and Cooking and Cancer Risk; Long-Term Effects of Particulate Air Pollution on Human Health; Mutagenicity of PM2.5; PM2.5 Sources and Their Effects on Human Health in China: Case Report; Radon: An Overview of Health Effects; Risk to Populations Exposed from Atmospheric Testing and Those Residing Near Nuclear Facilities.

References

Baroneadesi, F., et al., 2012. Risk of lung cancer associated with domestic use of coal in Xuanwei, China: Retrospective cohort study. BMJ 345, e5414.
Chiu, Y., Wang, X., Qiu, H., Yu, I.T., 2010. Risk factors for lung cancer: A case-control study in Hong Kong women. Cancer Causes & Control 21 (5), 777–785.
Dai, W., et al., 1997. Fraction analysis of the involvement of multiple risk factors in the etiology of lung cancer: Risk factor interactions in a case-control study for lung cancer in females. Chinese Journal of Epidemiology 18, 341–344 [in Chinese].
Dai, X., Lin, C., Sun, X., Shi, Y., Lin, Y., 1996. The etiology of lung cancer in nonsmoking females in Harbin, China. Lung Cancer 14 (Suppl. 1), S85–S91.
Fan, L., 2004. Risk factors of lung cancer in non-smoking women in Shenyang: A case-control study. Doctoral dissertation, China Medical University [in Chinese].
Kim, C., et al., 2014. Home kitchen ventilation, cooking fuels, and lung cancer risk in a prospective cohort of never smoking women in Shanghai, China. International Journal of Cancer 136 (3), 632–638.
Kim, C., et al., 2016. Cooking coal use and all-cause and cause-specific mortality in a prospective cohort study of women in Shanghai, China. Environmental Health Perspectives 124 (9), 1384–1389.
Kleinerman, R.A., et al., 2000. Lung cancer and indoor air pollution in rural China. Annals of Epidemiology 10 (7).
Kleinerman, R.A., et al., 2002. Lung cancer and indoor exposure to coal and biomass in rural China. Journal of Occupational and Environmental Medicine 44 (4), 338–344.
Kleinerman, R.A., et al., 2004. Lung cancer and indoor exposure to coal and biomass in rural area. In: China Preventive Medicine [in Chinese].
Ko, Y., et al., 1997. Risk factors for primary lung cancer among non-smoking women in Taiwan. International Journal of Epidemiology 26 (1), 24–31.
Ko, Y., et al., 2000. Chinese food cooking and lung cancer in women nonsmokers. American Journal of Epidemiology 151 (2), 140–147.
Lan, Q., et al., 2000. Indoor coal combustion emissions, GSTM1 and GSTT1 genotypes, and lung cancer risk: A case-control study in Xuan Wei, China. Cancer Epidemiology, Biomarkers & Prevention 9 (6), 605–608.
Lan, Q., et al., 2008. Variation in lung cancer risk by smoky coal subtype in Xuanwei, China. International Journal of Cancer 123 (9), 2164–2169.
Lei, W., et al., 1996. Some lifestyle factors in human lung cancer: A case-control study of 792 lung cancer cases. Lung Cancer 14 (Suppl. 1), 121–136.
Li, Q., et al., 2008. A case-control study of risk factors for lung cancer in Beijing. Bull Chin Cancer 9 (2), 83–85 [in Chinese].
Liang, X., et al., 2009. Case-control study of lung cancer in Zhuhai. South China Journal of Preventive Medicine (5), 31–34 [in Chinese].
Lin, Y., Cai, L., 2012. Environmental and dietary factors and lung cancer risk among Chinese women: A case-control study in Southeast China. Nutrition and Cancer 64 (4), 508–514.
Liu, E., et al., 2001. Risk factors for lung cancer among non-smoking females in urban Shanghai: A population-based case-control study. Tumor 21 (6), 421–425 [in Chinese].
Liu, Z., et al., 2017. Residential environment, indoor air pollution and risk of lung cancer: A case-control study. Chinese Journal of Public Health (9).
Lo, Y., et al., 2013. Risk factors for primary lung cancer among never smokers by gender in a matched case-control study. Cancer Causes & Control 24 (3), 567–576.
Luo, R., Wu, B., Yi, Y., Huang, Z., Lin, R., 1996. Indoor burning coal air pollution and lung cancer: A case-control study in Fuzhou, China. Lung Cancer.
Metayer, C., et al., 2002. Cooking oil fumes and risk of lung cancer in women in rural Gansu, China. Lung Cancer 35, 111–117.
Mu, L., et al., 2013. Indoor air pollution and risk of lung cancer among Chinese female non-smokers. Cancer Causes & Control 24 (3), 439–450.
Shen, X., Wang, G., Huang, Y., Xiang, L., Wang, X., 1996. Analysis and estimates of attributable risk factors for lung cancer in Nanjing, China. Lung Cancer.
Sun, X., Dai, X., Shi, Y., Lin, Y., 2002. A case-control study on the relationship among indoor air pollution, depression and oncogenesis of lung cancer. Chinese Journal of Lung Cancer 5 (2), 101–103.
Tang, L., et al., 2010. Lung cancer in Chinese women: Evidence for an interaction between tobacco smoking and exposure to inhalants in the indoor environment. Environmental Health Perspectives 118 (9), 1257–1260.
Wang, F., et al., 1996. A case-control study of childhood and adolescent exposure to environment tobacco smoke (ETS) and the risk of female lung cancer. Lung Cancer 14 (Suppl. 1), 238.
Wang, M., 2008. Matched case-control study on risk factors of primary lung cancer. Doctoral dissertation, Dalian Medical University [in Chinese].
Wang, Q., et al., 2001. The risk factors of female lung cancer in Tianjin. Tumor 10 (2), 99–100 [in Chinese].
Wang, S., et al., 1996a. A comparative study of the risk factors for lung cancer in Guangdong, China. Lung Cancer.
Wang, T., et al., 1996b. The case-control study on lung cancer risk in nonsmoking women in Shenyang. Chinese Public Health Transactions 15, 257–259 [in Chinese].
Wang, T., Zhou, B., Shi, J., 1996c. Lung cancer in nonsmoking Chinese women: A case-control study. Lung Cancer.
Wang, Z., et al., 2001. The risk factors of female lung cancer in Tianjin. Tumor 10 (2), 99–100 [in Chinese].
Yin, Z., et al., 2014. Genetic polymorphisms of TERT and CLPTM1L, cooking oil fume exposure, and risk of lung cancer: A case-control study in a Chinese non-smoking female population. Medical Oncology 31 (8).
Yu, I.T., Chiu, Y., Au, J.S., Wong, T.W., Tang, J., 2006. Dose-response relationship between cooking fumes exposures and lung cancer among Chinese nonsmoking women. Cancer Research 66 (9), 4961–4967.
Yu, Z., Li, K., Lu, B., Hu, T., Fu, T., 1996. Environmental factors and lung cancer. Lung Cancer.
Zheng, H., et al., 1997. Studies on relationship between passive smoking and lung cancer in non-smoking women. Chinese Journal of Preventive Medicine 31, 163–164 [in Chinese].
Zhong, L., et al., 1995. The association of cooking air pollution and lung cancer risk: Results from a case-control study in nonsmoking women in Shanghai. Tumor 15, 313–317 [in Chinese].
Zhong, L., Goldberg, M.S., Gao, Y.T., Jin, F., 1999a. A case-control study of lung cancer and environmental tobacco smoke among nonsmoking women living in Shanghai, China. Cancer Causes & Control 10 (6), 607–616.
Zhong, L., et al., 1999b. Lung cancer and indoor air pollution arising from Chinese style cooking among nonsmoking women living in Shanghai, China. Epidemiology 10 (5), 488–494.
Zhou, B., Wang, T., Guan, P., Wu, J.M., 2000. Indoor air pollution and pulmonary adenocarcinoma among females: A case-control study in Shenyang, China. Oncology Reports 7 (6), 1253–1262.

Further Reading

Burnett, R.T., et al., 2014. An integrated risk function for estimating the global burden of disease attributable to ambient fine particulate matter exposure. Environmental Health Perspectives 122 (4), 397.
Cakmak, S., et al., 2018. Associations between long-term PM2.5 and ozone exposure and mortality in the Canadian census health and environment cohort (CANCHEC), by spatial synoptic classification zone. Environment International 111, 200–211.
Cohen, A.J., et al., 2017. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: An analysis of data from the Global Burden of Diseases Study 2015. Lancet 389 (10082), 1907–1918.
Gharibvand, L., et al., 2017. The association between ambient fine particulate air pollution and lung cancer incidence: Results from the AHSMOG-2 study. Environmental Health Perspectives 125 (3), 378.
Gordon, S.B., et al., 2014. Respiratory risks from household air pollution in low and middle income countries. The Lancet Respiratory Medicine 2 (10), 823–860.
Hu, J.L., et al., 2017. Premature mortality attributable to particulate matter in China: Source contributions and responses to reductions. Environmental Science & Technology 51 (17), 9950–9959.
Jerrett, M., et al., 2013. Spatial analysis of air pollution and mortality in California. American Journal of Respiratory and Critical Care Medicine 188 (5), 593–599.
Lepeule, J., et al., 2012. Chronic exposure to fine particles and mortality: An extended follow-up of the Harvard Six Cities Study from 1974 to 2009. Environmental Health Perspectives 120 (7), 965.
Prüss-Üstün, A., Maria, N., 2016. Preventing disease through healthy environments: A global assessment of the burden of disease from environmental risks. World Health Organization.
Raaschou-Nielsen, O., et al., 2013. Air pollution and lung cancer incidence in 17 European cohorts: Prospective analyses from the European Study of Cohorts for Air Pollution Effects (ESCAPE). The Lancet Oncology 14 (9), 813–822.
Smith, T.R., Wakefield, J.C., 2016. Ecological modeling: General issues. In: Lawson, A., Banerjee, S., Haining, R., Ugarte, M. (Eds.), Handbook of Spatial Epidemiology. Chapman and Hall/CRC, New York, pp. 112–130.
Wang, N., et al., 2018. Lung cancer and particulate pollution: A critical review of spatial and temporal analysis evidence. Environmental Research 164, 585–596.
Wang, S.X., Zhao, Y., 2011. Air pollution and lung cancer risks. In: Nriagu, J.O. (Ed.), Encyclopedia of Environmental Health. Elsevier Science, pp. 26–38.
Yin, P., et al., 2017. Long-term fine particulate matter exposure and nonaccidental and cause-specific mortality in a large national cohort of Chinese men. Environmental Health Perspectives 125 (11).

Air Pollution Episodes
P Brimblecombe, University of East Anglia, Norwich, United Kingdom
© 2019 Elsevier B.V. All rights reserved.

Introduction

The development of our understanding and the regulation of air pollution have been strongly influenced by episodes. The Great Fire of London in 1666 is a historical illustration of this. John Evelyn, the author of Fumifugium (1661), the first English book on air pollution, wrote in his diary: all about so hot and inflamed, that at the last one was not able to approach it, and were forced to stand still, and let the flames burn on, for nearly 2 miles in length and 1 in breadth. The clouds also of smoke were dismal, and reached near 50 miles in length (John Evelyn's Diary, 3 September 1666).

His work goes on to hint at an interest in calculating the dispersal of smoke over great distances. In the modern period, the developing literature on air pollution, such as the World Health Organization monograph Air Pollution of 1961, shows a continued focus on episodes. This focus was driven by a number of then-recent serious air pollution episodes, such as the one in Donora in 1948 and London's Great Smog of 1952. It was during these periods of high air pollutant concentrations that health outcomes were revealed most clearly to early investigators. The usual definition of an air pollution episode is nicely illustrated by the one provided by the UK National Air Quality Information Archive: an episode "is the term used for a period of poor air quality, lasting up to several days, often extending over a large geographical area. Concentrations of all the measured species may increase at the same time, or only one species may be affected." This article discusses episodes more broadly; hence, in addition to seeing them as events of a sporadic or occasional nature, accidents that may have more localized impacts, such as that at Seveso, are discussed. In contrast, the accident at Chernobyl saw radioactivity spread worldwide. Additionally, episodes that derive from an underlying heterogeneity in the emission and meteorology of air pollutants are considered. This forces us to consider heterogeneity not only in a temporal sense but also spatially, particularly when considering indoor air pollutants.

Important Air Pollution Episodes

It is useful to begin by considering a number of notable air pollution events and episodes as case studies before considering some of the general principles. We will consider a mix of disasters and episodes as illustrations of the way in which understanding, policy, and health have been affected. The list is illustrative rather than definitive. We should note that the early episodes are winter events, whereas the final one relates to modern photochemical pollution of the summer. Episodes were well known in Victorian London, and even the popular magazine The Idler (November, 1892) contained a short story "The Doom of London," where its author Robert Barr was able to write: "During the fog there was always a marked increase in the death rate, and on this occasion the increase was no greater than usual."

Meuse Valley 1930

The earliest major episode that provoked a substantial body of scientific research occurred between 1 and 5 December 1930, when there was fog along the Meuse valley in Belgium between Liège and Huy. Pollutants from numerous coke ovens, steel plants, and factories accumulated under an inversion that formed in conditions of light winds. Understanding of the episode is hampered by the lack of any contemporary measurements of the air pollutants, but health data were obtained by questioning family physicians and patients after the fog. Some 60 people died in the week of the episode, along with much livestock. The incident was widely reported in the press, both local and international. The Royal Prosecutor of Liège started a judicial inquiry and appointed a committee of experts, which included Jean Firket of the University of Liège, whose work has become indelibly associated with the incident. Although sulfur dioxide was seen as the key pollutant in the incident, questions about the importance of fluorides and nitrogen oxides were raised. The scientific study of this episode was significant in demonstrating the impact of air pollution on mortality and morbidity. Its novelty lay in the attempt to identify causative agents and its recognition of the role played by meteorological conditions.

Change History: September 2018. P. Brimblecombe made changes to the text and references. This is an update of P. Brimblecombe, Air Pollution Episodes, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 39–45.


https://doi.org/10.1016/B978-0-12-409548-9.11638-0


Donora 1948

Similar meteorological conditions characterize the serious incident at Donora, Pennsylvania, in the last week of October 1948. Anticyclonic weather and light winds along the Monongahela River allowed pollutants from a large steel mill and a zinc smelter to accumulate. The marked inversion and fog were associated with 17 deaths. The US Public Health Service assisted local authorities in an investigation that, as with the Meuse valley incident, relied on questioning people. Correlation of engineering and meteorological data with the health records attempted to establish the pollutants responsible. The general view regarded sulfur dioxide, its oxidation products, and particulate matter as responsible; sulfur dioxide was estimated at 1500–5700 µg m−3. Although nearby Pittsburgh had long been concerned about its air pollution, the potential for problems in Donora does not seem to have been seen as an urgent issue by either the manufacturers or public health professionals. The investigation of this incident resulted in the first meaningful federal and state laws to control air pollution and marked the beginning of modern efforts within the United States to assess and deal with the health threats from air pollution. Justin Shawley, a high school student who studied the incident in the 1990s, campaigned with such enthusiasm that the event was commemorated 50 years on: a plaque dedicated by the Pennsylvania Historical and Museum Commission in memory of the victims was unveiled in an official ceremony in Donora on 28 October 1998.

London 1952

London experienced deadly winter smog in the late Victorian period, but it was the Great Smog of 1952 that triggered groundbreaking environmental legislation. In early December 1952, a stationary high-pressure system over western Europe allowed coal smoke to accumulate at low wind speeds under a temperature inversion (50–150 m). From 5 December visibility dropped to as low as 10 m in a fog that lasted until 10 December. A performance of La Traviata at Sadler's Wells theater had to be abandoned, cattle died at Smithfield market, and many people developed respiratory illnesses within 12 h. It was apparent that emergency services had been stretched to the limit; a week after the fog had cleared, the Ministry of Health placed its estimate of the number of deaths for the week at 4703, compared with 1852 the previous year. Parliament pressed for the fog episode to be taken seriously, and although there was preexisting legislation (e.g., within the Public Health Act, 1936), the government was obliged to assemble the Beaver Committee, whose report paved the way for the Clean Air Act, 1956. This new Act was more elaborate than earlier legislation. It fostered practical approaches to control by focusing on smoke, exempting some activities from regulation, and funding research. Its detailed provisions were largely found in memoranda on chimney heights, smoke control areas, and industrial premises, which provided much needed guidance on implementation. The importance of the incident and the effectiveness of the Act have been much debated. It is apparent that the 4000 deaths attributed to the Great Smog may have been underestimated by as much as a factor of 3, and the success of the Act seems likely to have been overstated. There has certainly been a great reduction in the classic air pollutants in UK cities, but it can be questioned whether this arose from the Clean Air Act, 1956: the improvements have been variously attributed to the burning of cleaner fuels, especially gas; tall stacks on power stations; the decline of heavy industry; and a shift to all-electric homes and gas-fired central heating in the domestic sector.

Auckland 1973

A little known but interesting event occurred in late February 1973, when the cargo ship Good Navigator arrived in Auckland, New Zealand, leaking defoliant. Some of the cargo, in badly damaged barrels, was brought ashore, and a noticeable odor pervaded the central parts of the city. The local population was informed that the compound was extremely toxic, although it is unlikely that the component merphos (tributyl phosphorotrithioite) in the barrels would have warranted such strong concerns. The announcement of its danger was coupled with rumors that it was a nerve gas. Some 400 workers and nearby residents started exhibiting symptoms: breathing difficulty, eye irritation, headache, and nausea. There was widespread concern and even panic. A subsequent Commission of Inquiry recognized that merphos had a low toxicity and blamed the widely experienced symptoms on butyl mercaptan. Nevertheless, many questions have remained as to whether this incident was one of poisoning or mass panic. The way this episode was handled has remained something of an embarrassment. Concerns heightened by the media and the authorities may have catalyzed preexisting public fears of pollution, radioactivity, and chemical weapons. In such situations, those who do not access the media can show symptoms very different from those who do.

Seveso 1976

Social and political forces can also be seen in a potentially far more serious incident that occurred at Seveso, Italy, on 10 July 1976. An industrial accident at a plant producing the disinfectant hexachlorophene resulted in the release of a large amount of material that contained dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin) as an impurity. Dioxin was already under suspicion because it had been found as a trace contaminant in Agent Orange, used as a defoliant during the Vietnam War. However, information was released only slowly after the accident, and it took time to reveal that large areas had been contaminated with dioxin. Small animals started to die, and people living in the more highly contaminated area developed an acne-like complaint, chloracne. The directors of the chemical plant and local government were badly coordinated, so delay and inaction caused considerable concern. Some of these concerns were fuelled by political, religious, and environmental interests that lay well outside the immediate locality. The teratogenic, mutagenic, and carcinogenic properties of chlorinated dibenzo-dioxins and furans triggered fears for unborn children, which meant that some pregnant women chose to have abortions, a difficult choice in a Catholic country of the 1970s. Despite early fears, the Seveso incident led to no identifiable disastrous health outcomes. Nevertheless, it had severe psychological, social, and economic effects. It has also had regulatory consequences, as reflected in the industrial safety regulations of the European Community known as the Seveso Directive (1982) and the Seveso II Directive (1999, 2005), which were designed to lessen the environmental impact of major industrial accidents. The outcomes may not have been so positive at Bhopal or Chernobyl.

France Summer 2003

The European summer of 2003 was marked by the worst heat wave in more than 500 years. In France there were approximately 15,000 deaths, largely among elderly people. As we age, our body becomes less efficient at temperature regulation and thus struggles to maintain its ideal temperature of 37°C. However, these deaths can also be seen in a social context. In modern Europe the 35-h working week can severely affect the time available to doctors, especially when family practitioners vacation in August. Additionally, there is an increasing tendency for old people to live alone rather than in an extended family, and a desire for lengthy summer holidays among their children often results in elderly relatives being left behind without adequate support. There may have been more at play than the heat. The stagnant conditions and long hours of sunlight during the heat wave caused high ozone concentrations to build up; in France, these exceeded 60 ppb for most of the first half of August. Although studies suggest that ozone had an effect on mortality in French cities, they did not permit the relative balance of the effects of temperature and ozone to be assessed. Furthermore, the elderly were probably indoors, so the effects may have been related to indoor as much as outdoor ozone, and we have much less knowledge about indoor pollutant concentrations during this episode.

The Nature of Episodes

Weather plays an important role in many episodes. Air pollution episodes such as those described earlier have required action. Even in the absence of deaths at Seveso, concern over the potential for disaster was so strong that political change seemed essential. The intensity of the social and political issues is such that friction and disagreement are often prevalent, and the outcomes can frequently be challenged. It is possible for these to cascade over time and ultimately be interpreted in a mythology of conspiracy and cover-up. London's Great Smog became an episode of UK Channel Four's Secret History; the number of deaths that occurred in the London smog has been much disputed, as has the success of the subsequent legislation. In recent years, there have been accusations that the Donora incident was covered up. Such statements are often made close to commemorations of the events and often surface despite an abundance of formal historical accounts. This may arise because academic writing, rather than shared social memory, has effectively become the repository of knowledge of these episodes. The impact of air pollution falls strongly on vulnerable sections of the population. The elderly, the young, and those who are unwell can be disproportionately affected, whereas the poor may have less recourse to avoid the pollution and less resilience in coping with its impact, so issues of social justice frequently surface.

Pollutant Types

Smog (Smoke Plus Fog)

We can also think of episodes in terms of particular types of pollution. Classically this is found in the episodes that occurred in the Meuse valley, Donora, and London, where pollutants accumulated under stable winter inversions with fog, and sulfur dioxide and smoke were found at high concentrations: classical smog (smoke plus fog) conditions. The particles within these smogs are increasingly blamed for their health impact, but the high concentrations of sulfur dioxide are likely to oxidize to form sulfuric acid droplets, so there was also a potential impact from acid aerosols.

Photochemical Smog

In the 20th century, the transition from solid fuels burnt in stationary furnaces to liquid fuels burnt within the internal combustion engine has entirely transformed the air of our cities. A new kind of air pollution, photochemical smog, has emerged. This smog, which is neither smoke nor fog, arises particularly when volatile organic compounds from liquid fuels are oxidized in a sunlit atmosphere and promote the production of nitrogen dioxide that leads to high ozone concentrations. Ozone is the most important of a range of secondary pollutants formed from the reactions of primary pollutants emitted directly into the atmosphere. Photochemical smog was first recognized in Los Angeles during the Second World War. It was seen as unique and to some extent different from the type of pollution observed elsewhere; so it came to be called Los Angeles Smog. Its unusual character was apparent when conventional smoke abatement techniques failed to make any impression, and smoke abatement experts were initially baffled. It was Haagen-Smit, a biochemist studying vegetation damage, who finally realized that the Los Angeles smog was caused by reactions of automobile exhaust vapors in sunlight.


High ozone episodes became a feature of Southern California and dominated its air pollution concerns for half a century. Improvements were hard won and involved a range of measures that included lowering the emissions from automobiles and reducing the loss of volatile organic compounds from a wide range of sources (Fig. 1). These photochemical episodes were not limited to the Los Angeles basin and were soon recognized as quite frequent elsewhere in summer months. In the drought of 1976, Britain experienced high ozone concentrations. The high levels are often observed in rural areas, away from sources of primary pollutants and are the product of the reactions of pollutant precursors. Photochemical pollution is now experienced worldwide as liquid fuels have come to dominance. Summer smog and high ozone concentrations characterized pollution of the heat wave of Europe in 2003. This could potentially get worse with a changing climate as the severity and duration of summertime regional pollution episodes in the mid-western and north-eastern United States may increase significantly through the 21st century. This change arises because mid-latitude cyclones and their associated cold fronts, which disperse the air pollutants, will decrease in frequency in a warmer climate. Pollutant concentrations during these episodes increase by 5%–10% and the mean episode duration may double. Clearly a greater reduction in precursors to photochemical smog will be required to maintain air quality.

Winter Nitrogen Oxide Smogs

In cities, the winter atmosphere of the past was typified by sulfur dioxide and smoke-laden fogs from the combustion products of coal; that has now largely changed. The contemporary winter atmosphere is laden with nitric oxide, so modern urban smog can show itself as a very different type of episode. Under these situations, it is nitrogen dioxide that accumulates. In winter there is insufficient ozone to oxidize nitric oxide to nitrogen dioxide in the normal way:

NO + O3 → NO2 + O2

Yet in cold conditions and at high nitric oxide concentrations the termolecular reaction

2NO + O2 → 2NO2

can be significant. This, together with an additional oxidation promoted by the conjugated dialkenes present in the polluted atmosphere, results in very high nitrogen dioxide concentrations.
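To see why this route matters only at very high nitric oxide levels, its rate can be estimated. The rate constant used below is an approximate literature value (about 2 × 10−38 cm6 molecule−2 s−1 near room temperature, and larger in cold air), so the output should be read as an order-of-magnitude sketch in Python:

# Rough magnitude of NO2 formation via 2NO + O2 -> 2NO2.
k = 2.0e-38                      # cm^6 molecule^-2 s^-1 (assumed literature value)
air = 2.5e19                     # molecules cm^-3 at about 1 atm and 288 K
o2 = 0.21 * air
for no_ppb in (10, 100, 1000):   # from clean air to a heavily polluted kerbside
    no = no_ppb * 1e-9 * air
    rate = 2 * k * no**2 * o2    # d[NO2]/dt, molecules cm^-3 s^-1
    ppb_per_hour = rate / (1e-9 * air) * 3600
    print(f"[NO] = {no_ppb:5d} ppb -> NO2 production ~ {ppb_per_hour:.2e} ppb per hour")

Because the rate is quadratic in [NO], a hundredfold rise in nitric oxide raises NO2 production ten-thousand-fold, which is why the reaction only becomes noticeable in heavily polluted, stagnant winter air.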

Forest Fires

Large-scale forest fires have been well known for many decades in both South America and South East Asia. The fires of 1997, which attracted the greatest attention, were probably associated as much with wider issues of perception, with millennial fears, economic problems, and a severe El Niño event, as with the fires themselves. The smoke raised great public and political concern in Malaysia, especially in terms of its impact on respiratory health. This focus on a single event made it all too easy to ignore earlier events. An example would be the extensive fires of 1877 and 1878, reported in the Sarawak Gazette of 24 April 1878: "Fires are bursting forth in all directions, and such a dense haze ... for months that traffic ... stop[ped]." The article went on to compare the haze to a November English fog, perhaps hinting at a relationship with respiratory health reminiscent of the polluted Victorian London fogs. Forest fires were largely a product of forest clearance in both South America and South East Asia; in other parts of the world, they can often be related to changes in forestry practice. A range of studies suggest that there are health impacts and, naturally enough, these are severe in those who must fight the fires. Interior air quality can also be affected, which is relevant given the advice that sensitive individuals should remain indoors during periods of high smoke concentrations.

Fig. 1 The number of hours per year that ozone exceeded 100 ppb at Crestline, California, 1960–2000, showing the increasing frequency of high ozone concentrations followed by an improvement in recent years. Data from Lee, E.H., Tingey, D.T., Hogsett, W.E., Laurence, J.A. (2003). History of tropospheric ozone for the San Bernardino Mountains of Southern California. Atmospheric Environment 37, 2705–2717.


Haze can also have far-reaching effects on climate, and the brown cloud that hangs over Asia is increasingly seen as responsible for a process known as global dimming. This smoke effectively cools the world at a time when many other processes raise global temperature. In some ways this may sound like good news, but global dimming has the potential ultimately to worsen the problems of global climate change as we lower smoke concentrations in the atmosphere by reducing emissions.

Vog

Volcanic eruptions emit large quantities of toxic material, which can pose hazards as severe as the rocks and lava. Individuals close to the eruptions can die very rapidly of asphyxiation after inhaling hot ash-laden air. Further away, or where concentrations are lower, there may still be more continuous exposures. The vog (volcano plus fog) can form as a haze whose particles are acidic and pose risks to health. The eruptions of Laki may have spread a harmful vog over Europe that increased mortality in France in 1783–84. Today, the island of Hawaii suffers from fairly frequent air pollution episodes locally called vog, in which the southeastern part of the island is covered by a thick, acrid haze for several days. These represent natural air pollution episodes caused by sulfur dioxide gas and other pollutants emitted from Kilauea volcano, which can be a problem when light winds blow the volcanic plume across inhabited areas or the national parks. Reactions within the plume can lead to the production of sulfuric acid, and such acid aerosols pose a threat to health. There are also concerns about health effects arising from traces of toxic metals in the volcanic plume. Vog limits visibility, which troubles drivers and is a problem for air traffic control. At times, there have been pressures to keep schoolchildren indoors and to cancel outdoor events in the parks during periods of vog.

Dust Events

Episodes with high levels of wind-blown dust take on a seasonal pattern in many parts of the world. In northern latitudes they can be rarer, but spectacular events can occur where large amounts of dust move from the Sahara desert to be deposited, often as colored rain, over Europe. On the Asia Pacific rim, the events are most frequent in the spring and have been associated with respiratory health concerns. In particular it is feared that the frequency of these events has increased because of changing agricultural practices within China, and intensive industrialization has meant that the dust absorbs pollutants when it passes through industrial regions.

Roadside

High pollutant concentrations are also found along busy roads. The concentrations can be especially high at motorway interchanges; the Gravelly Hill Interchange near Birmingham, when opened in 1972, rapidly came to be known as "Spaghetti Junction" because its intertwining loops and ramps resemble a bowl of spaghetti. The term has come to be applied generally (e.g., to the convergence of interstates I-65, I-64, and I-71 at Louisville, Kentucky, examined as part of the 8664 Feasibility Study) and associated with the noise and air pollution problems these junctions create. Road tunnels also become polluted and thus require ventilation to keep traffic fumes at tolerably low values, but the vents can create localized high pollution concentrations. There is a correlation between traffic flow and the concentrations of NO, SO2, and CO in busy cities, although this depends a little on the local situation. What we encounter is not only temporal variation but also spatial differences, with heavily trafficked streets exhibiting higher concentrations. It also means that the exposure of commuters and pedestrians correlates poorly with the air quality measured at fixed monitoring sites. Ozone is very reactive in urban air, so it can be depleted through reactions with nitric oxide and roadside concentrations may be reduced, whereas in large open spaces such as parks, where most pollutants are usually found at lower concentrations, it can be enhanced.

Fireworks

Some pollution episodes are driven by cultural activities such as Diwali, Guy Fawkes night, and New Year celebrations, when large numbers of fireworks are used. Such annual events often give rise to the highest particle loads in the atmosphere of some cities. The issue has become so serious that there have been active attempts to ban or reduce the number of fireworks that are burnt. An editorial of February 1999 from the Honolulu Star Bulletin points to New Year's Eve particulate concentrations of 1510 µg m−3. This is well in excess of the old US National Ambient Air Quality Standard for PM10 of 150 µg m−3 for a 24-h mean or the European Union 24-h limit value of 50 µg m−3. Governor Cayetano described the situation in Hawaii as "utter madness" while accepting the argument that the celebrations were a valued local tradition: "the practice had now grown so blatantly out of hand ... and affected the rights of all people to breathe comfortably."

Indoor Heterogeneity

The indoor environment is also very heterogeneous with regard to both space and time. Regular indoor activities such as cooking can influence the concentration of combustion-derived pollutants and those derived from food (e.g., ethanol and acetic acid). There are diurnal and seasonal effects as we change ventilation patterns between summer and winter. There is also a balance between indoor and outdoor sources of pollutants.

46

Air Pollution Episodes

Uptake of pollutants by the surfaces of indoor spaces means that some gases such as ozone or sulfur dioxide are typically found at lower concentrations indoors than outdoors. Other components such as tobacco smoke are typically at much higher concentrations indoors because these have indoor sources. There is a strong contemporary interest in the chemistry of indoor air, particularly the production of particles through the reaction of ozone with the terpenes found as fragrances in many consumer products.

Pollutant Frequency Distribution

As we have seen in the earlier sections, pollutant concentrations are highly variable over time. This can be due to changes in emission strength, the speed and direction of the wind, inversion height, turbulence in the air, and changes in the chemical reactions in the atmosphere. Episodes are usually characterized by unusual meteorological conditions. In addition to random fluctuations, pollution concentration often shows periodic variations. These can be summer and winter changes, weekly cycles due to different emissions at the weekend, or diurnal changes that can reflect meteorology or emission patterns (some hints of these are seen in Fig. 2A). As most analytical methods require a finite time to determine pollution concentration, the results they give represent an average over the sampling period. Averaging times thus smooth out peak values, and the longer the averaging time the lower the maximum concentration found in the data set. Although averaging times conceal short-term peaks, it is not always a problem because health responses also require a finite time of exposure, and it is these times that are embodied in air pollution regulations. Several common distributions can be used to fit observed frequency distributions of air pollutant concentration data. The normal or Gaussian distribution can be a problem as the variation of pollutant concentration is often so high that it suggests a significant number of negative concentrations. Furthermore, it does not handle extreme values very well. A lognormal distribution is often seen as a good representation of concentration (see Fig. 2B). The success of this distribution has been explained on the basis of the near lognormality of wind speed distribution, although this explanation does not establish that wind speed distributions are solely responsible for observed concentration distributions. Other distributions such as the gamma and Weibull distributions are popular, and one should also note that wind speeds can be treated with the Weibull distribution. These distributions have no values less than zero, and the Weibull distribution has a very simple cumulative distribution function, which makes it easy to calculate the number of times the pollutants are likely to exceed a given concentration, and extremes (as seen in Fig. 2C) can be well represented too. In recent years, multifractal descriptions of air pollutant concentration time series have also been popular. Heterogeneity also has a spatial dimension. For example, the concentration of air pollutants indoors is very skewed with a few dwelling spaces showing extremely high values. Analyses of the concentration distributions suggest that the gamma distributions are capable of representing organic compounds found indoors. Describing the pollutant distribution in this way makes it possible to give estimates of the probability of receiving certain levels of exposure to those living in such homes.
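The practical appeal of the Weibull form mentioned above is its simple closed-form cumulative distribution, which turns an exceedance question into a one-line calculation. A minimal SciPy sketch follows; the synthetic lognormal series merely stands in for a real hourly monitoring record:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conc = rng.lognormal(mean=np.log(40), sigma=0.6, size=8760)  # synthetic "hourly" data

# Fit two-parameter Weibull and lognormal forms; the location is fixed
# at zero because concentrations cannot be negative.
shape_w, _, scale_w = stats.weibull_min.fit(conc, floc=0)
shape_ln, _, scale_ln = stats.lognorm.fit(conc, floc=0)

limit = 143.0  # e.g., an hourly limit value in ug m-3
p_w = stats.weibull_min.sf(limit, shape_w, loc=0, scale=scale_w)
p_ln = stats.lognorm.sf(limit, shape_ln, loc=0, scale=scale_ln)
print(f"expected hours above {limit} ug/m3 per year: "
      f"Weibull {p_w * 8760:.0f}, lognormal {p_ln * 8760:.0f}")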

Policy Response

The most obvious air pollution episodes are driven by unusual meteorological conditions, but there are also more regular seasonal processes that lead to high air pollution concentrations over broad geographical areas. Events or accidents are characterized by very high emissions over a relatively short period. There is also a spatial heterogeneity to air pollution that makes some places exhibit high pollutant concentrations. Notable air pollution episodes have sparked legislation. The Clean Air Act, 1956, was an example of relatively simple legislation that sought to control emissions but had to grapple with issues of personal freedom, in terms of what people burnt as fuel to cook food and heat their homes. Control strategies have to be more complex when secondary photochemical smog is dominant, so this increasingly means adopting air quality management (AQM). This strategy has become apparent in regulatory approaches to improving air quality. It is complex and involves understanding of models, monitoring of both meteorology and pollutant concentrations, emission inventories, and a clear set of health-based standards. Despite the complexity, it is often adopted because it allows air chemistry to be incorporated into the regulation of air quality and provides some flexibility in the socioeconomic application of restrictions. Other tactics have emerged in recent years. The European Commission has been very keen on the notion of alerts that involve changing traffic flow and industrial activity at times of high potential air pollution. Controls, particularly on traffic, have included the odd-even number plate system, which allows vehicles access on alternate days, and London's congestion charge. The latter was not primarily aimed at air pollution control but has proved a useful adjunct to control measures and has shown how even unpopular regulations can ultimately be accepted by the public as beneficial. Indoor regulation is less evident, as it potentially involves further restrictions on individual freedom. However, it is already apparent in the regulation of materials used in furnishings (to lower formaldehyde emissions) and in higher ventilation rates in kitchens. South Korea has promulgated more definite regulations within its Indoor Air Quality Management Act, 2004, which addresses air quality in public buildings and newly built apartments; it was partly driven by concerns surrounding sick building syndrome. Although now in force, the success of this Act remains unclear.

Fig. 2 (A) The hourly nitrogen dioxide concentrations (μg m−3) at Cromwell Road, London, over the days of the year 2007, showing the high degree of variation (data from http://www.airquality.co.uk); note also the missing data in the early spring. (B) The hourly data plotted as a histogram in 10 μg m−3 bins, where the number denotes the upper bound of each bin; the data appear to have only a slightly skewed distribution, but this is because the very high values are not obvious. (C) The cumulative distribution, which shows that 50% of the data lie below 69 μg m−3, while 1% of the data (83 h) exceed 143 μg m−3.

Episodes have sparked new regulations, and although some of the episodes of the past are unlikely to reoccur, it is clear that we confront different types of pollutants in the 21st century that will undoubtedly generate novel air pollution episodes.

See also: Air Pollution From Solid Fuels; Air Quality Legislation; Antarctic: Persistent Organic Pollutants and Environmental Health in the Region; Complex air pollution in Chinese cities; Intercontinental Air Pollution Transport: Links to Environmental Health; Long-Range Transport and Deposition of Air Pollution; Mobile Source Related Air Pollution: Effects on Health and the Environment.


Further Reading

Anenberg, S.C., West, J.J., Yu, H., Chin, M., Schulz, M., Bergmann, D., Bey, I., Bian, H., Diehl, T., Fiore, A., Hess, P., Marmer, E., Montanaro, V., Park, R., Shindell, D., Takemura, T., Dentener, F., 2014. Impacts of intercontinental transport of anthropogenic fine particulate matter on human mortality. Air Quality, Atmosphere and Health 7 (3), 369–379.
Brimblecombe, P., 2006. The Clean Air Act after fifty years. Weather 61, 311–314.
Chan, L.Y., Wu, H.W.Y., 1993. Study of bus commuter and pedestrian exposure to traffic air pollution in Hong Kong. Environment International 19, 121–132.
Eaton, P., Radojevic, M., 2001. Forest Fires and Regional Haze in Southeast Asia. Nova Science Publishers, New York.
Guerova, G., Jones, N., 2007. A global model of ozone enhancement during the August 2003 heat wave in Europe. Environmental Chemistry 4, 285–292.
Gusev, A., MacLeod, M., Bartlett, P., 2012. Intercontinental transport of persistent organic pollutants: A review of key findings and recommendations of the task force on hemispheric transport of air pollutants and directions for future research. Atmospheric Pollution Research 3, 463–465.
Hemispheric Transport of Air Pollution, 2010. An assessment by the Task Force of the UNECE-LRTAP Convention for the four priority pollutants of global concern: ozone and its precursors, PM, POPs, and mercury. UNECE, Geneva. The four full reports can be found at http://www.htap.org.
Kwon, H.-J., Cho, S.H., Chun, Y., et al., 2002. Effects of the Asian dust events on daily mortality in Seoul, Korea. Environmental Research 90, 1–5.
Lee, E.H., Tingey, D.T., Hogsett, W.E., Laurence, J.A., 2003. History of tropospheric ozone for the San Bernardino Mountains of Southern California, 1963–1999. Atmospheric Environment 37, 2705–2717.
McLeod, W.R., 1975. Merphos poisoning or mass panic? Australian and New Zealand Journal of Psychiatry 9, 225–229.
Shi, J.P., Harrison, R.M., 1997. Rapid NO2 formation in diluted petrol-fuelled engine exhaust: A source of NO2 in winter smog episodes. Atmospheric Environment 31, 3857–3866.
Slaughter, J.C., Koenig, J.Q., Reinhardt, T.E., 2004. Association between lung function and exposure to smoke among firefighters at prescribed burns. Journal of Occupational and Environmental Hygiene 1, 45–49.
Snyder, L.P., 1994. The death-dealing smog over Donora, Pennsylvania: Industrial air pollution, public health policy and the politics of expertise, 1948–1949. Environmental History Review 18, 117–139.
Taylor, J.A., Jakeman, A.J., Simpson, R.W., 1986. Modeling distributions of air pollutant concentrations, I: Identification of statistical models. Atmospheric Environment 20, 1781–1789.

Air Pollution From Solid Fuels
Sukesh Narayan Sinha, National Institute of Nutrition (ICMR), Hyderabad, India
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
APTI Air pollution tolerance index
ARI Acute respiratory infections
COPD Chronic obstructive pulmonary disease
DALY Disability-adjusted life year
HT Hindustan Times
IARC International Agency for Research on Cancer
LPG Liquefied petroleum gas
PAHs Polycyclic aromatic hydrocarbons
PMs Particulate matters
VOCs Volatile organic compounds

Introduction

Any alteration in the physical, chemical, or biological attributes of air that causes adverse effects on humans and other organisms is termed air pollution. The substances contributing to air pollution are known as air pollutants; they can be natural or manmade. Pollutants can also be classified as either primary or secondary. Primary pollutants are emitted directly, for example by volcanic eruptions, carbon monoxide from automobiles, or sulfur dioxide from industries, while secondary pollutants are the result of the interaction of primary pollutants with the air. A major source of primary pollutants is the combustion of solid fuels, which include coal, charcoal, wood, agricultural wastes, animal dung, shrubs, grass, straw, and so forth.

More than 80% of people living in urban areas are exposed to poor-quality air that exceeds the World Health Organization (WHO) limits. Although all countries in the world are affected, the impact is greater in low-income developing countries. The latest urban air quality database shows that 98% of cities with more than 100,000 inhabitants in low- and middle-income countries (3000 cities in 103 countries) do not meet WHO air quality guidelines, whereas in high-income countries the figure is 56%. Declining urban air quality increases the risk of stroke, heart disease, lung cancer, and chronic and acute respiratory diseases such as asthma (Source: WHO Global Urban Ambient Air Pollution, 2016). Concerns have been raised about controlling air pollution in the developing as well as the developed world.

The combustion of solid fuels results in the emission of toxic pollutants, called solid fuel "smoke," which is the dominant source of indoor air pollution contributing to the burden of ill-health. The International Agency for Research on Cancer (IARC) has identified indoor air pollution from coal usage as a known human carcinogen, whereas that from biomass is identified as a probable human carcinogen. Most households in the developing world that use solid fuels belong to low socioeconomic groups and are largely located in rural and remote settlements, where primary health-care facilities are grossly limited. This contribution provides an overview of the extent of the environmental pollution and the health effects that arise from the combustion of solid fuels, with reference to the countries of the developing world.

According to WHO, more than half of the world's most polluted cities are in India, which indicates the choking effects of industrial and vehicular exhausts. Delhi, India's capital city, has dropped to 11th position, while the Iranian city of Zabol tops the list; Gwalior and Allahabad in India rank second and third, respectively (Fig. 1). The report took into account the annual average concentration of particulate matter (PM) 2.5 in 3000 cities from 103 countries all over the world.

Global Consumption Pattern of Solid Fuels

Data on the global patterns of household energy use are available from national census information and energy use statistics. Nearly 180 countries use solid fuels. The consumption pattern of solid fuels as the primary cooking fuel among households in different parts of the world is shown in Fig. 2, and the global population using solid fuels up to 2015 is shown in Fig. 3.

Change History: July 2018. Sukesh Narayan Sinha updated the text. New material was added on Interventions, Interventions on the Source of Pollution, Alternative Fuels, Improved Stoves, Interventions to the Living Environment, Interventions to User Behavior, and the Air Pollution Tolerance Index, and all other sections were updated with recent data. This is an update of S.N. Sinha, P.K. Nag, Air Pollution from Solid Fuels, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 46–52.



Fig. 1 Recent global pollution data, 2016. WHO; published in the HT e-paper.

Fig. 2 Households using solid fuels as the primary cooking fuel, by WHO region, 2010. WHO Global Health Observatory.

Fig. 3 Global population using solid fuels (millions) in 1990, 2003 (mid-point), and 2015.


Indoor Air Pollution From Solid Fuels

Products of incomplete combustion of solid fuel are a complex mixture of particulate and gaseous species. Nearly 60 hydrocarbons and 17 aldehydes and ketones are emitted in the gases of solid fuel-burning stoves. The gas-phase pollutants include compounds that are carcinogenic (benzene, formaldehyde, 1,3-butadiene, and styrene). Incomplete combustion of wood and coal forms aromatic compounds of two to four rings, which are emitted in gaseous form; the higher molecular weight polycyclic aromatic hydrocarbons (PAHs), PAH derivatives, methylated and alkylated PAHs, and nitrogen-containing heterocyclic aromatic compounds are emitted in the form of particles from the combustion of bituminous smoky coal.

The combustion of biomass fuels in a semiurban area of western India yielded organic and inorganic pollutants (Table 1), suggesting that benzene and toluene are also released by the combustion of wood, dry leaves, crop residues, and cow dung; the level of benzene exposure, however, is higher for dung fuel than for wood fuel. Dung cake contains biologic metals such as iron and copper (Fe and Cu). Coal contains intrinsic concentrations of sulfur, arsenic, silica, fluorine, lead, nickel, chromium, and mercury; on combustion, these elements are released as such or in the form of their oxides. Microfibrous quartz can also be found in some smoky coals and the resulting coal smoke. Oxides of nitrogen, SO2, CO2, and CO are emitted from wood, dung, and coal, and cooking stoves burning solid biomass fuels emit CO, fine particles, and hydrocarbons. Burning coal at high temperatures produces more pollutants than burning solid biomass fuels. Compared to kerosene and LPG, biofuel combustion generates several times more respirable particulates and gaseous species, PAHs, volatile organic compounds (VOCs), and other pollutants, owing to its low thermal and heat-transfer efficiencies.

The particulate matter (PM) generated from the combustion of solid fuel is fine and ultrafine in size. PM10, PM5, and PM2.5 are emitted from wood, dung, and coal; larger particles can result from the suspension of ash and solid fuel debris. Nanoparticles, with at least one dimension below 100 nm, are commonly produced by combustion processes; in coal combustion these originate from carbon black and fly ash.

Ambient Air Pollution From Solid Fuels

Coal, a major energy resource, is an aggregate of heterogeneous substances composed of organic and inorganic materials. The four major coal types, ranked in order of high heat value, are lignite, subbituminous, bituminous, and anthracite. The inorganic portion of coal is composed of phyllosilicates (kaolinite, etc.), quartz, carbonates, sulfides, sulfates, and rare earth minerals (e.g., Al and Fe); arsenic, nickel, zinc, cadmium, cobalt, and copper represent only a small fraction of the mineral matter. As reported, a typical (500 MW) coal-fired power plant burns 1.4 million tons of coal each year. Burning coal causes smog, soot, acid rain, toxic air emissions, and warming (Table 2), and the mining, transporting, and storing of coal pollute the environment. Ash, sludge, toxic chemicals, and waste heat create further environmental problems. In India alone, there are several hundred coal-fired power plants, and their stack emissions constantly pollute the ambient environment. Dust in a plant is produced by the handling of solid fuel, additives, and solid wastes (e.g., fly ash); the silica content of the dust is important, and the fly ash produced from the plant might contain approximately 60% SiO2.

There is a worldwide increase in vegetation fire. The smoke from vegetation fires travels vast distances and exposes populations to ambient air pollution, as vegetation and forest fires contribute toxic gaseous and particulate air pollutants to the atmosphere. As reported (http://www.who.int/mediacentre/factsheets/fs254/en/index.html), nearly 2 billion metric tons of plant mass are burned annually in the process of land clearing, and approximately 800–1200 million metric tons of agricultural residues are burned annually. In North America, 2 million hectares of land are burned annually, and 40–130 million hectares annually in Australia. Vegetation fire smoke consists of CO2, CO, NOx, SO2, NH3, and VOCs. Since the sulfur content of vegetation fuel is low, sulfur-containing particles are produced in small quantities; forest fires nevertheless produce large quantities of SO2 and H2S, and the whole range of trace elements can be contained in the particles produced. Emission of aliphatic and aromatic hydrocarbons (alkanes, alkenes, and alkynes) is predominant in vegetation fire smoke, including emission of oxygenated organic compounds and chloromethane from the burning of dead and living vegetation. Semivolatile organic compounds such as PAHs are also found in vegetation fire smoke (Fig. 4).

Table 1 Emission of air pollutants from combustion of solid fuels

Solid fuels | Pollutants
Wood | Oxides of nitrogen, SO2, CO2, CO, benzene, toluene, PAHs, benzo(a)pyrene, RPM, SPM, aldehyde
Dung | Oxides of nitrogen, SO2, CO2, CO, benzene, toluene, PAHs, benzo(a)pyrene, RPM, SPM, fine particles, Cu, Fe, aldehyde
Agricultural wastes and residues | Oxides of nitrogen, SO2, CO2, CO, benzene, toluene, PAHs, benzo(a)pyrene, RPM, SPM
Coal | Oxides of nitrogen, SO2, CO2, CO, benzene, toluene, PAHs, benzo(a)pyrene, aldehyde, ketone, RPM, SPM, fine particles, nanoparticles, Cu, Fe, sulfur, arsenic, silica, fluorine, Pb, Ni, Cr, Hg

Table 2 Generation of pollutants from a typical coal plant in a given year

Pollution | Quantity | Effects
Carbon dioxide | 3,700,000 tons | The primary human cause of global warming: as much carbon dioxide as cutting down 151 million trees
Sulfur dioxide | | Causes acid rain that damages forests, lakes, and buildings and forms small airborne particles that can penetrate deep into the lungs
Small airborne particles | 500 tons | Cause chronic bronchitis, aggravated asthma, and premature death, as well as haze obstructing visibility
Oxides of nitrogen | 10,200 tons | NOx leads to the formation of ozone (smog), which inflames lung tissue, making people more susceptible to respiratory illness
Carbon monoxide | 720 tons | Causes headaches and stress in people with heart disease
Hydrocarbons and volatile organic compounds | 220 tons | Form ozone and cause several diseases
Mercury | 170 lb | Highly carcinogenic
Arsenic | 225 lb | Carcinogenic
Lead | 114 lb | Highly carcinogenic and causes other health effects
Cadmium | 4 lb | Highly carcinogenic and causes other health effects

Source: http://www.ucsusa.org/clean_energy/coalvswind/co2c.html.

Fig. 4 Contribution of solid fuels to ambient air pollution. http://apps.who.int.

Pollution From Solid Waste Material

Generation of solid waste has been a part of human activity from time immemorial, and the felling of trees has accelerated ecodegradation. Natural events, such as volcanic eruptions, release huge quantities of gases that affect the temperature profile of a region. The treatment of solid wastes includes incineration, on-site thermal destruction, and thermal desorption; temperature-controlled incineration and the treatment of contaminated soils, sediments, and wastes at superfund sites are the preferred methods of remediation, and the containment of superfund sites prevents the spread of contaminants to the soil, air, and water.

The combustion and thermal processing of solid wastes are dominant sources of air pollution. Combustion of solid wastes emits pollutants in the gas phase and as fine and ultrafine particulate matter (PM2.5, with an aerodynamic diameter up to 2.5 μm; PM0.1, with an aerodynamic diameter up to 0.1 μm). Particulate matter is a complex mixture consisting of varying combinations of dry solid fragments, solid cores with liquid coatings, and small droplets of liquid. PM10, roughly one-seventh the diameter of a normal human hair, is found mainly in indoor air pollution and forest fires; it consists of sulfate, nitrates, ammonia, sodium chloride, and black carbon dust. PM2.5 results from biomass and fossil fuel combustion as well as natural sources such as windblown dust and volcanic activity (Fig. 5). Because PM2.5 is so fine, it can cause several diseases of the lungs and heart; these emissions impact human health through cardiopulmonary disease, cancer, and other life-threatening diseases.

Where particles deposit in the respiratory tract is directly related to the aerodynamic diameter of the particles. PM10 is deposited mainly in the upper respiratory tract and may be cleared by mucociliary action (Fig. 6), whereas PM2.5 and PM0.1 penetrate the alveolar regions of the lung, and ultrafine PM can rapidly penetrate the epithelium. The clearance of fine and ultrafine PMs is mediated mainly

Fig. 5 PM2.5 air pollution: mean annual exposure (micrograms per cubic meter), 2015.

Fig. 6 PM10 levels (μg/m3) for available megacities of more than 14 million inhabitants, for the last available year in the period 2011–2015. WHO urban ambient air pollution database.

by phagocytic activity and particle dissolution. The ability of PM0.1 to translocate to the pulmonary interstitium suggests that these particles might have a potential health impact on the organ systems. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) enters the body by multiple routes of exposure, such as inhalation, ingestion, or dermal absorption, and is deposited in tissues such as the liver and fat. Hydrocarbons (including brominated/chlorinated dioxins) and redox-active persistent free radicals are emitted from combustion and thermal processes. Ozone and other organic pollutants, such as benzene, polychlorinated dibenzo-p-dioxins and dibenzofurans, acrylonitrile, and methyl bromide, are products of incomplete combustion of solid waste, whereas greenhouse gases, CO2, and nitrogen oxides are produced by complete combustion.
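The size-to-deposition relationship described above can be summarized in a small lookup; this is a simplified sketch of the text's qualitative categories, not a dosimetry model:

```python
def deposition_region(aerodynamic_diameter_um: float) -> str:
    """Qualitative size-to-deposition lookup following the text above."""
    if aerodynamic_diameter_um <= 0.1:
        return "alveoli; ultrafine particles may penetrate the epithelium"
    if aerodynamic_diameter_um <= 2.5:
        return "alveolar regions of the lung"
    if aerodynamic_diameter_um <= 10.0:
        return "upper respiratory tract (mucociliary clearance)"
    return "mostly removed in the nose and throat"

for d_um in (0.05, 1.0, 8.0, 20.0):
    print(f"{d_um} um -> {deposition_region(d_um)}")
```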


Impact on Health

According to WHO, approximately 3 billion people all over the world still use solid fuels to meet their basic household requirements. The resulting air pollution contributes 4.5% of the global burden of disease. The disease burden from solid fuels remains most prominent in areas with inadequate access to clean fuels, mostly poor households in developing countries; globally, 2.6% of all ill-health is attributable to indoor smoke from solid fuels, nearly all of it in poor regions. Research questions remain about the exact mechanisms by which the combustion of solid fuels produces air pollution, including the chemical and biologic reactivity of the pollutants in the human system (Fig. 7). The bioavailability of toxicants in body tissues and the cellular and molecular mechanisms of their toxicity also need to be better understood.

The health effects associated with exposure to by-products from the combustion of hazardous solid wastes include acute respiratory infections (ARI), chronic obstructive pulmonary disease (COPD), lung cancer (from coal smoke), asthma, cancer of the nasopharynx and larynx, tuberculosis, cardiovascular disease, perinatal conditions and low birth weight, and cataract and blindness due to exposure to persistent pollutants. Reproductive effects include developmental changes in the male reproductive tract, such as testicular abnormalities. The O2-carrying capacity of blood may be reduced through exposure to CO during pregnancy, which can retard the growth of fetuses. Chronic cardiovascular inflammation, including both time- and dose-dependent myocardial injury, might result from long-term exposure to PM; the histopathology of cardiac tissues has revealed randomly distributed foci of inflammatory responses, neutrophils, lymphocytes, and macrophages.

Burning of biomass-based solid fuels might increase cytogenetic alterations in the blood lymphocytes of those exposed to noxious gases and toxic substances, and exposure to biomass fuel smoke leads to greater levels of DNA damage than exposure to LPG combustion products. Air pollutants can also modulate immune responses to certain respiratory viral infections: PMs can interfere with the replication of respiratory syncytial viruses, leading to decreased production of proinflammatory cytokines. Exposure to PM2.5 in indoor smoke from burning solid fuels is much higher than that from urban outdoor pollution and has been frequently associated with asthma. Strong associations have been observed between exposure to biomass-fuel pollutants and chronic bronchitis in women and respiratory infection in children; a meta-analysis concluded that exposure to unprocessed solid fuel might increase the risk of pneumonia in young children by a factor of 1.8.

In India, an increased risk of cataracts has been reported among people using solid biomass fuels. Evidence suggests that exposure to solid fuels is associated with cataracts, and animal studies show that cataracts can be caused by wood smoke. An adjusted excess risk of stillbirth of 50% has been reported among women using biomass fuels during pregnancy, and low birth weight has been associated with household exposure to biomass smoke in Guatemala. A statistically significant relationship has been reported between the use of biomass fuel and the incidence of tuberculosis in adults; exposure to outdoor air pollution has also been associated with tuberculosis.

Animal studies have shown that wood smoke causes immune suppression in the respiratory system. Interstitial lung disease and ischemic heart disease are associated with long-term exposure to particulate air pollution from solid biomass fuels, and wood smoke might increase the risk of developing nasopharyngeal and digestive tract cancers. Fig. 8 shows the death rate from air pollution per 100,000 individuals (differentiated by attribution to ozone, particles, or indoor fuel pollution). As Fig. 9 shows, even in Asia and Africa, the regions using solid fuels most extensively, the percentage of all DALYs due to household pollution has followed an encouraging trend: the period between 1990 and 2015 saw a 15.5% decrease in total deaths and a 37% decrease in DALYs related to household air pollution exposure. Fig. 10 shows that household air pollution ranks 10th among risk factors for mortality, contributing to approximately 2.85 million deaths in 2015 from cardiovascular diseases, COPD, lower respiratory infections, and lung cancers. For children below 5 years, it is the 7th leading risk factor for mortality, owing to respiratory infections such as pneumonia. In South Asian countries household air pollution is the 4th leading mortality risk factor, and in the countries considered to

Fig. 7 Abnormality in skin appearance and fingers due to exposure to solid fuels. http://apps.who.int/iris/bitstream/10665/43766/1/9789241505735_eng.pdf.

Fig. 8 Death rate from air pollution per 100,000, World. IHME, GBD 2016.

Fig. 9 Trends in percentage of total DALYs attributable to household air pollution by region (WHO-defined regions), 1990–2015. State of Global Air 2017: a special report on global exposure to air pollution and its disease burden.


Fig. 10 Percentage of total DALYs in each country attributable to household air pollution in 2015. State of Global Air 2017: a special report on global exposure to air pollution and its disease burden.

Fig. 11 The disability-adjusted life year (DALY). https://upload.wikimedia.org/wikipedia/commons/1/19/DALY_disability_affected_life_year_infographic.svg.

be underdeveloped by the World Bank, the risk is higher still: there it is the 2nd leading risk factor. Asia and Africa are the regions in which the burden of mortality due to household air pollution is highest. The DALY is a measure of disease burden expressed as the number of years lost due to ill-health, disability, or early death (Fig. 11).
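Since the DALY combines mortality and morbidity, a minimal sketch of the conventional DALY = YLL + YLD calculation may help (hypothetical numbers; age weighting and discounting, treated elsewhere in this encyclopedia, are omitted):

```python
def dalys(deaths: float, years_lost_per_death: float,
          cases: float, disability_weight: float, duration_years: float) -> float:
    """Simplified DALY = YLL + YLD. The disability weight runs from 0
    (full health) to 1 (equivalent to death); no discounting applied."""
    yll = deaths * years_lost_per_death               # years of life lost
    yld = cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# Hypothetical numbers for illustration only.
print(dalys(deaths=100, years_lost_per_death=30,
            cases=5000, disability_weight=0.2, duration_years=2))  # 5000.0
```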

Interventions

Solid fuels will remain the main source of household fuel in developing countries for the foreseeable future, so the pollution arising from solid fuel use demands multiple intervention measures for its control. Improving the design of cooking stoves is a necessary step, provided it is cost-effective for poorer people. Various interventions to reduce indoor air pollution are available, grouped below by the level at which they are effective.

Interventions on the Source of Pollution

Alternative fuels

A switch from solid fuels to cleaner and more efficient fuels and energy sources such as:

• liquefied petroleum gas (LPG)
• biogas and producer gas
• electricity
• solar power


Improved stoves

Biomass is a common cooking fuel in rural India, where about 700 million people use traditional stoves to meet their energy demand (Source: Science Direct, 2017). Selecting a stove is an uphill task for the user (Fig. 12), involving a variety of considerations: (1) the cost of the stove should be affordable for the poorer sections of society; (2) the physical structure, durability, and dependability of the stove should be good enough that it is easy to use; and (3) the fuel type should be readily available and cost-effective.

An improved stove should have the following features to reduce pollution. The cooking surface may be maintained at waist height to minimize the exposure of cooking women to pollutants. The stove should have an enclosed combustion chamber to prevent the emission of air pollutants into the kitchen and adjacent living areas. The outer wall of the stove may be made with layers of bricks, on top of which the firebrick combustion chamber is formed. Improved wood- or charcoal-burning stoves fitted with a chimney can vent emissions out of the kitchen, with a consequent reduction in pollution.

Studies from Nepal show that improved stoves with a chimney reduce pollution by approximately two-thirds, and in Latin America the plancha-type stove has been shown to reduce PMs by approximately two-thirds. Reports from India show a reduction in PM levels from 1500–2000 μg m−3 using wood and animal dung to 76–101 μg m−3 using LPG, a reduction of roughly 95%. Evidence indicates that improved biomass stoves were a cost-effective intervention in South Asia and sub-Saharan Africa (Fig. 13).

Interventions to the living environment

The cooking and living areas should be properly ventilated, which reduces exposure to smoke. Chimneys, smoke hoods, and enlarged or repositioned windows can be immensely helpful here.

Fig. 12 (A) Women cooking by conventional methods that expose them directly to smoke. (B) A collection of improved biomass stoves. Global Alliance for Clean Cookstoves.

Fig. 13 Percentage reduction in air pollution among populations using improved cooking stoves, for East Africa, Latin America, and India.


Interventions to user behavior

Some changes in the behavior and habits of solid fuel users, such as drying fuel wood before use, reduce smoke production. The most vulnerable groups, such as young children, should be kept away from smoke and other conditions that can be hazardous to their health. Fig. 14 compares the use of solid and nonsolid fuels: greater use of nonsolid fuels accompanies prosperity and development, along with increasing cleanliness, efficiency, cost, and convenience.

Air Pollution Tolerance Index

The air pollution tolerance index (APTI) is an evaluation of the tolerance and sensitivity of tree species to air pollution. Plant biochemical parameters such as ascorbic acid content, chlorophyll content, leaf extract pH, and relative water content are affected by air pollution. Plants cannot be categorized as sensitive or tolerant on the basis of these parameters alone, as plants respond differently to different pollutants (Table 3). The APTI is determined by the formula

APTI = [A(T + P) + R] / 10

where A is the ascorbic acid content (mg/g), T is the total chlorophyll (mg/g), P is the pH of the leaf extract, and R is the relative water content of the leaf (%).
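Applying the formula is straightforward; the sketch below uses hypothetical leaf measurements for illustration:

```python
def apti(ascorbic_acid: float, total_chlorophyll: float,
         leaf_ph: float, relative_water_content: float) -> float:
    """APTI = [A(T + P) + R] / 10, with A and T in mg/g, R in percent."""
    return (ascorbic_acid * (total_chlorophyll + leaf_ph)
            + relative_water_content) / 10.0

# Hypothetical leaf measurements: A = 2 mg/g, T = 8 mg/g, pH = 6, R = 80%.
print(apti(2.0, 8.0, 6.0, 80.0))  # (2*(8+6) + 80)/10 = 10.8
```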

Fig. 14 Comparison of the use of solid fuels (crop waste and dung, wood, charcoal and coal) and nonsolid fuels (kerosene; gas and liquefied petroleum gas; electricity). Moving from solid toward nonsolid fuels brings increasing cleanliness, efficiency, cost, and convenience; decreasing health impacts; and increasing prosperity and development. South Sudan Medical Journal.

Table 3 Vegetables tolerant to air pollutants

Pollutants | Vegetables
Hydrogen fluoride | Tomato (Lycopersicon esculentum), squash (Cucurbita maxima), pumpkin (Cucurbita pepo), beet (Beta vulgaris), lettuce (Lactuca sativa), mint (Mentha papyracea), turnip (Brassica rapa), carrot (Daucus carota), cabbage (B. oleracea capitata), cowpea (Vigna sinensis)
Ozone | Cabbage (B. oleracea capitata), cauliflower (Brassica oleracea var. botrytis), carrot (Daucus carota), onion (Allium cepa), cucumber (Cucumis sativus)
Peroxyacyl nitrate (PAN) | Tomato (Lycopersicon esculentum), onion (Allium cepa)
Sulfur dioxide | Cauliflower (Brassica oleracea var. botrytis), brinjal (Solanum melongena)

Source: https://www.slideshare.net/sjcc/air-pollution-evs-presentation-65063.


See also: Air Quality Legislation; Biomass Burning and Regional Air Quality; Community Outdoor Air Quality: Sources, Exposure Agents and Health Outcomes; Complex Air Pollution in Chinese cities; Dust Production Following Forest Disturbances: Health Risks; Indoor Air Pollution Attributed to Solid Fuel Use for Heating and Cooking and Cancer Risk; Residential and Non-Residential Biomass Combustion: Impacts on Air Quality; Solid Fuel Use: Health Effect; Solid Fuels: Health Effects; Solid Waste Incinerators: Health Impacts.

Further Reading

Abalak, R., Bruce, N., McCraken, J.P., Smith, K.R., de Gallardo, T., 2001. Indoor respirable particulate matter concentrations from an open fire, improved cook stoves and LPG/open fire combination in a rural Guatemalan community. Environmental Science & Technology 35, 2650–2655.
Agarwal, S., Yamamoto, S., et al., 2015. Effect of indoor air pollution from biomass and solid fuels combustion on symptoms of preeclampsia/eclampsia in Indian women. Indoor Air 25, 341–352.
Aggarwal, A.L., Raiyani, C.V., Patel, P.D., Shah, P.G., Chatterjee, S.K., 1982. Assessment of exposure to benzo(a)pyrene in air for various population groups in Ahmedabad. Atmospheric Environment 16, 867–870.
Al-Khulaifi, N.M., Al-Mudhaf, H.F., Abu-Shady, A.I., et al., 2018. A new method for simultaneous analysis of semi-volatile organic compounds in outdoor/indoor air of large office buildings. International Journal of Environmental Science and Technology. https://doi.org/10.1007/s13762-018-1828-2.
Amegah, K., Quansah, R., Jaakkola, K., et al., 2014. Household air pollution from solid fuels use and risk of adverse pregnancy outcomes: A systematic review and meta-analysis of the empirical evidence. PLoS One 9 (12), e113920. https://doi.org/10.1371/Journal.Pone.0113920.
Ashraf, A., Khanam, S., et al., 2013. Effects of indoor air pollution on human health: A micro level study of Aligarh city, India. Merit Research Journal of Education and Review 6, 139–146.
Asif, Z., Chen, Z., et al., 2018. A study of meteorological effects on PM2.5 concentration in mining area. Atmospheric Pollution Research 9 (4), 688–696.
Azhari, A., Latif, T.M., et al., 2018. Road traffic as an air pollutant contributor within an industrial park environment. Atmospheric Pollution Research 9 (4), 680–687.
Baker, R.J., Hertz-Picciotto, I., Dostal, M., et al., 2007. Coal home heating and environmental tobacco smoke in relation to lower respiratory illness in Czech children, from birth to 3 years of age. Environmental Health Perspectives 7, 1126–1132.
Balakrishnan, K., Parikh, J., Sankar, S., et al., 2002. Daily average exposures to respirable particulate matter from combustion of biomass fuels in rural households of Southern India. Environmental Health Perspectives 110, 1069–1075.
Balakrishnan, K., Ramaswamy, P., et al., 2011. Air pollution from household solid fuel combustion in India: An overview of exposure and health related information to inform health research priorities. Global Health Action 4, 5638–5647.
Boy, E., Bruce, N., Delgado, H., 2002. Birth weight and exposure to kitchen wood smoke during pregnancy in rural Guatemala. Environmental Health Perspectives 110, 109–114.
Brauer, M., 1999. Health impacts of biomass air pollution. In: Goh, K.-T., Schwela, D., Goldammer, J.G., Simpson, O. (Eds.), Health guidelines for vegetation fire events: Background papers, 6–9 October 1998. WHO/UNEP/WMO, Lima, Peru, pp. 186–257.
Bruce, G., Dherani, M., Das, J., Balakrishnan, K., et al., 2013. Control of household air pollution for child survival: Estimates for intervention impacts. BMC Public Health 13, 1471–1485.
Carrer, P., Wolkoff, P., 2018. Assessment of indoor air quality problems in office-like environments: Role of occupational health services. International Journal of Environmental Research and Public Health 15 (4), 741. https://doi.org/10.3390/ijerph15040741.
Chapman, R.S., He, X., Blair, A.E., Lan, Q., 2005. Improvement in household stoves and risk of chronic obstructive pulmonary disease in Xuanwei, China: Retrospective cohort study. British Medical Journal 331, 1050–1056.
Chen, Y., Du, W., Zhuo, S., Liu, W., Liu, Y., Shen, G., Wu, S., Li, J., Zhou, B., Wang, G., Zeng, E.Y., Cheng, H., Liu, W., Tao, S., 2017. Stack and fugitive emissions of major air pollutants from typical brick kilns in China. Environmental Pollution 224 (421).
Clark, M.L., Jennifer, P., Balakrishnan, K., et al., 2013. Health and household air pollution from solid fuel use: The need for improved exposure assessment. Environmental Health Perspectives 121, 1120–1128.
Cormier, S.A., Lomnicki, S., Backes, W., Dellinger, B., 2006. Origin and health impacts of emissions of toxic by-products and fine particles from combustion and thermal treatment of hazardous wastes and materials. Environmental Health Perspectives 114, 810–817.
Dappe, V., Uzu, G., et al., 2018. Single particle analysis of industrial emissions brings new insights for health risk assessment of PM. Atmospheric Pollution Research 9 (4), 697–704.
Dherani, M., Pope, D., Mascarenhas, M., Smith, K.R., Weber, M., Bruce, N., 2008. Indoor air pollution from unprocessed solid fuel use and pneumonia risk in children aged under five years: A systematic review and meta-analysis. Bulletin of the World Health Organization 86, 390–398.
Donaldson, K., Tran, L., Jimenez, L.A., et al., 2005. Combustion-derived nanoparticles: A review of their toxicology following inhalation exposure. Particle and Fibre Toxicology 2, 10. https://doi.org/10.1186/1743-1768.
Du, W., Shen, G., Chen, Y., Zhuo, S., Xu, Y., Li, X., Pan, X., Cheng, H., Wang, X., Tao, S., 2017. Wintertime pollution level, size distribution and personal daily exposure to particulate matters in the northern and southern rural Chinese homes and variation in different household fuels. Environmental Pollution 231 (497).
Emmelin, A., Wall, S., 2007. Indoor air pollution: A poverty-related cause of mortality among the children of the world. Chest 132, 1615–1623.
Ezzati, M., Kammen, D.M., 2002. The health impacts of exposure to indoor air pollution from solid fuels in developing countries: Knowledge, gaps, and data needs. Environmental Health Perspectives 110, 1057–1068.
Garcia-Marcos, L., Guillen, J.J., Diniddie, R., Guillen, A., Barbero, P., 1999. The relative importance of socio-economic status, parental smoking and air pollution (SO2) on asthma symptoms, spirometry and bronchodilator response in 11-year-old children. Pediatric Allergy and Immunology 10, 96–100.
Johnston, F.H., Bailie, R.S., Pilotto, L.S., Hanigan, I.C., 2007. Ambient biomass smoke and cardio-respiratory hospital admissions in Darwin, Australia. BMC Public Health 7, 240. Available online at http://www.biomedcentral.com/1471-2458/7/240.
Khoshoo, T.N., 1986. Environmental priorities in India and sustainable development. Indian Science Congress Association, New Delhi.
Kowalsa, M., 2016. Relationship between quality of ambient air and respiratory diseases in the Polish population. WIT Transactions on Ecology and the Environment 207, 195–202.
Lei, Z., et al., 2018. Effect of natural ventilation on indoor air quality and thermal comfort in dormitory during winter. Building and Environment 25 (15), 240–247.
Liao, J., et al., 2017. The impact of household cooking and heating with solid fuels on ambient PM2.5 in peri-urban Beijing. Atmospheric Environment 165, 62–72.
Magnavita, N., 2015. Work-related symptoms in indoor environments: A puzzling problem for the occupational physician. International Archives of Occupational and Environmental Health 88, 185–196. https://doi.org/10.1007/s00420-014-0952-7.
Masatoshi, T., 2018. Indoor air conditions of office room in winter. Occupational and Environmental Medicine 75, A198.
Mavalankar, D.V., Trivedi, C.R., Grah, R.H., 1991. Levels and risk factors for perinatal mortality in Ahmedabad, India. Bulletin of the World Health Organization 69, 435–442.
McCracken, J.P., Smith, K.R., Anaite, D., Murray, A., Mittleman, J.S., 2007. Chimney stove intervention to reduce long-term wood smoke exposure lowers blood pressure among Guatemalan women. Environmental Health Perspectives 115, 996–1001.
Statheropoulos, M., Goldammer, J.G., 2007. Vegetation fire and smoke: Nature, impact and policies to reduce negative consequences on humans and the environment. A contribution to the 4th International Wildland Fire Conference, Sevilla, Spain, 13–17 May 2007.
Mishra, V.K., Retherford, R.D., Smith, K.R., 1999. Biomass cooking fuels and prevalence of tuberculosis in India. International Journal of Infectious Diseases 3, 119–129.
Mohan, M., Sperduto, R.D., Angra, S.K., et al., 1989. India–US case-control study of age-related cataracts. India–US case-control study group. Archives of Ophthalmology 107, 670–676.
Mott, J.A., Meyer, P., Mannino, D., et al., 2002. Wild land forest fire smoke: Health effects and intervention evaluation, Hoopa, California, 1999. The Western Journal of Medicine 176, 157–162. http://www.ewjm.com (accessed 11 December 2009).
Mudway, I.S., Duggan, S.T., Venkataraman, C., Habib, G., Kelly, F.J., Grigg, J., 2005. Combustion of dried animal dung as biofuel results in the generation of highly redox active fine particulates. Particle and Fibre Toxicology 2, 6. https://doi.org/10.1186/1743-8977-2-6. Available online at http://www.particleandfibretoxicology.com/content/2/1/6.
Musthapa, M.S., Lohani, M., Tiwari, S., Mathur, N., Prasad, R., Rahman, Q., 2004. Cytogenetic biomonitoring of Indian women cooking with biofuels: Micronucleus and chromosomal aberration tests in peripheral blood lymphocytes. Environmental and Molecular Mutagenesis 52, 243–249.
Orakij, W., Chetiyanukornkul, T., Chuesaard, T., Kaganoi, Y., Uozaki, W., Homma, C., Boongla, Y., Tang, N., Hayakawa, K., Toriba, A., 2017. Personal inhalation exposure to polycyclic aromatic hydrocarbons and their nitro-derivatives in rural residents in northern Thailand. Environmental Monitoring and Assessment 189, 10.
Ovrebs, S., Fjeldstad, P.E., Grzybowska, E., Kure, E.H., Chorazy, M., Haugen, A., 1995. Biological monitoring of polycyclic aromatic hydrocarbon exposure in a highly polluted area of Poland. Environmental Health Perspectives 103 (9), 838–843.
Pandey, A.K., Bajpayee, B., Parmar, D., et al., 2005. DNA damage in lymphocytes of rural Indian women exposed to biomass fuel smoke as assessed by the comet assay. Environmental and Molecular Mutagenesis 45, 435–441.
Parajuli, I., Lee, H., et al., 2016. Indoor air quality and ventilation assessment of rural mountainous households of Nepal. International Journal of Sustainable Built Environment 5, 301–311.
Patel, D., Kumar, N., 2018. An evaluation of air pollution tolerance index and anticipated performance index of some tree species considered for Green Belt development: A case study of Nandesari industrial area, Vadodara, Gujarat, India. Open Journal of Air Pollution 7, 1–13.
Rao, C., Qin, C., Robison, W., Zigler, J., 1995. Effects of smoke condensate on the physiological integrity and morphology of organ cultured rat lenses. Current Eye Research 14, 295–301.
Rehfues, E., Mehta, S., Pruss-Ustun, A., 2006. Assessing household solid fuel use: Multiple implications for the millennium development goals. Environmental Health Perspectives 114, 373–378.
Robin, L.F., Lees, P.S.J., Winget, M., Steinhoff, M., Moulton, L.H., Santhosham, M., 1996. Wood burning stoves and lower respiratory illness in Navajo children. The Pediatric Infectious Disease Journal 15, 859–865.
Saha, A., Kulkarni, P.K., Shah, A., Patel, M., Saiyed, H.N., 2005. Ocular morbidity and fuel use: An experience from India. Occupational and Environmental Medicine 62, 66–69.
Shalini, V., Lothra, M., Srinivas, L., 1994. Oxidative damage to the eye lens caused by cigarette smoke and fuel smoke condensates. Indian Journal of Biochemistry & Biophysics 31, 261–266.
Sinha, S.N., Patel, T.S., Shah, S.H., et al., 2003. A correlation of secondary aerosol (nitrate and sulphate) with respirable particulate matter (RPM) in ambient air at different traffic junctions of Vadodara city. Journal of Environmental Biology 26 (2), 187–190.
Sinha, S.N., Kulkarni, P.K., Shah, S.H., et al., 2005. Gas chromatographic-mass spectroscopic determination of benzene in indoor air during the use of biomass fuels in cooking time. Journal of Chromatography A 1065 (2), 315–319.
Sinha, S.N., Kulkarni, P.K., Shah, S.H., et al., 2006. Environmental monitoring of benzene and toluene produced in indoor air due to combustion of biomass fuels. The Science of the Total Environment 357, 280–287.
Smielowska, M., Marc, M., et al., 2017. Indoor air quality in public utility environments: A review. Environmental Science and Pollution Research International 24, 11166–11176.
Smith, K.R., Liu, Y., 1994. Indoor air pollution in developing countries. In: Samet, J. (Ed.), Epidemiology of lung cancer. Dekker, New York.
Smith, K.R., Samet, J.M., Romieu, I., Bruce, N., 2000. Indoor air pollution in developing countries and acute lower respiratory infections in children. Thorax 55, 518–522.
Strachan, D.P., Cook, D.G., 1998. Health effects of passive smoking. 6. Parental smoking and childhood asthma: Longitudinal and case-control studies. Thorax 53, 204–212.
Thomas, P., Zelikoff, J., 1999. Air pollutants: Moderators of pulmonary host resistance against infection. In: Holgate, S.T. (Ed.), Air pollution and health. Academic Press, San Diego, CA.
West, S., 1992. Does smoke get in your eyes? Journal of the American Medical Association 268, 1025–1026.
Xi, H., Li, W., Micheel, D.A., Nadas, A., Frenkel, K., Finkelman, R.B., 2005. Mapping and prediction of coal workers' pneumoconiosis with bioavailable iron content in the bituminous coals. Environmental Health Perspectives 113, 964–968.
Xie, H., Zhao, S., Cao, G., 2014. Study on control standard and comparison of PM2.5 at home and abroad. Building Science 6 (30), 37–43.
Xu, X., Dring, H., Wang, X., 1995. Acute effects of total suspended particles and sulfur dioxides on preterm delivery: A community based cohort study. Archives of Environmental Health 50, 407–415.

Air Quality Legislation
A Fino, National Research Council, Institute of Atmospheric Pollution Research (CNR-IIA), Monterotondo, Roma, Italy
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
AQGs Air quality guidelines
BaP Benzo[a]pyrene
CAA Clean Air Act
Cd Cadmium
CLTRAP Convention on Long-range Transboundary Air Pollution
CO Carbon monoxide
CRF Concentration–response function
Hg Mercury
IARC International Agency for Research on Cancer
NH3 Ammonia
NMVOC Nonmethane volatile organic compounds
NO2 Nitrogen dioxide
NOx Nitrogen oxides
O3 Ozone
Pb Lead
PM Particulate matter
PM10 PM < 10 μm in aerodynamic diameter
PM2.5 PM < 2.5 μm in aerodynamic diameter
POPs Persistent organic pollutants
SO2 Sulfur dioxide
UNECE United Nations Economic Commission for Europe
UNFCCC United Nations Framework Convention on Climate Change
WHO World Health Organization

Introduction and Objectives

Air pollution has adverse effects both on human health and on the environment, and these effects have been studied intensively in recent years. In response, an extensive body of legislation, setting evidence-based standards and objectives, has been defined for a number of air pollutants. The air we breathe contains emissions from many natural sources, such as volcanoes, vegetation, and oceans, and from anthropogenic sources (i.e., those induced by human activities), including motor vehicles, industry, and the energy production sector, as well as household fuel burning. Air pollution harms human health, particularly populations vulnerable because of age or existing health problems, and fragile ecosystems. The aim of this work is to review the progress made in air quality legislation, analyzing the WHO Air Quality Guidelines, which are set on the basis of health-effect evidence; the environmental policies established in recent years at the international level; and the air quality standards adopted in the European Union, the United States, and China for the protection of human health.

Air Pollutants

Air pollutants differ in many features, such as their chemical composition, reactions, emissions, persistence in the environment, ability to be transported over long or short distances, and their eventual impacts on human health and/or the environment. However, they share some similarities and can be grouped into different categories:

1. Gaseous pollutants (e.g., SO2, NO2, CO, ozone, volatile organic compounds).
2. Persistent organic pollutants (e.g., dioxins).
3. Toxic heavy metals (e.g., lead, mercury).
4. Particulate matter (including PM10 and PM2.5, respectively known as coarse and fine particulate matter).


Gaseous pollutants contribute greatly to variations in the composition of the atmosphere and are mainly due to the combustion of fossil fuels. Nitrogen oxides are mainly emitted as NO, which rapidly reacts with ozone or radicals in the atmosphere to form NO2; the main anthropogenic sources are mobile and stationary combustion sources. Ground-level ozone (distinct from the ozone present in upper layers of the atmosphere) is not emitted directly into the air but is created by chemical reactions between nitrogen oxides (NOx) and volatile organic compounds (VOCs) in the presence of sunlight. CO, on the other hand, is a product of incomplete combustion; its major source is road transport. Anthropogenic SO2 results from the combustion of sulfur-containing fossil fuels (principally coal and heavy oils), while volcanoes and oceans are its major natural sources. Many of the so-called classical pollutants belong to this category: SO2, NO2, CO, and O3. These pollutants have been subject to in-depth investigation of their health effects, and many air quality guideline values and standards have been defined for them over time.

Persistent organic pollutants are a toxic group of chemicals. They persist in the environment for long periods of time, and their effects are magnified as they move up the food chain (biomagnification). Biomagnification, or bioaccumulation, is an increase in the concentration of a chemical in a biological organism over time, compared with the chemical's concentration in the environment. This group of pollutants includes pesticides as well as dioxins, furans, and polychlorinated biphenyls (PCBs).

Toxic heavy metals include basic metal elements such as lead, mercury, cadmium, nickel, vanadium, chromium, and manganese. They are natural components of the earth's crust; they cannot be degraded or destroyed, and they can be transported by air and enter water and the food chain. They enter the environment through a wide variety of sources, including combustion, wastewater discharges, and manufacturing facilities, and they enter human bodies, where at higher concentrations they can become toxic. Most heavy metals are dangerous because they tend to bioaccumulate in the human body and have adverse effects (this is the case for mercury and its compounds).

Particulate matter (PM) is the generic term for a complex mixture of extremely small particles and liquid droplets suspended in the air, which vary in size and composition and are produced by a wide variety of natural and anthropogenic sources. Major sources of particulate pollution are industries, power plants, incinerators, motor vehicles, construction activity, fires, and natural windblown dust. The size of the particles varies (PM2.5 and PM10 have aerodynamic diameters smaller than 2.5 and 10 μm, respectively), and different categories have been defined: ultrafine particles, smaller than 0.1 μm in aerodynamic diameter; fine particles, smaller than 1 μm; and coarse particles, larger than 1 μm. In common usage, PM10 and PM2.5 are referred to as coarse and fine PM, respectively. The size of the particles determines where in the respiratory tract they deposit: PM10 particles deposit mainly in the upper respiratory tract, while fine and ultrafine particles are able to travel deeper, reaching the lung alveoli.
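The ground-level ozone chemistry sketched above can be summarized by the classical daytime photostationary cycle (standard atmospheric chemistry, added here for clarity rather than taken from the original text):

NO2 + hν → NO + O
O + O2 + M → O3 + M
NO + O3 → NO2 + O2

where M is any third molecule (mainly N2 or O2) that carries away excess energy. VOCs perturb this cycle by converting NO back to NO2 without consuming ozone, which is why O3 accumulates in sunny, polluted air.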

WHO Air Quality Guidelines and Health Effects The WHO estimates that Ambient outdoor air pollution (the so-called ambient air pollution) has caused 4.2 million premature deaths worldwide in 2016; this mortality is due to exposure to PM2.5, which represents the cause of cardiovascular and respiratory disease. It is important to underline that humans are usually exposed to pollutant mixtures rather than to single air pollutants, and that the pollutant composition, the dose and time of exposure can lead to diverse impacts on human health. Human recognized health effects can range from nausea and difficulty in breathing or skin irritation, to cancer. Epidemiological data indicate that primarily affected systems are the cardiovascular and the respiratory system. However, the function of several other organs and systems (i.e., the nervous, urinary and digestive systems) can be also influenced. Also exposure during pregnancy is an issue of big concern. Health effects can be distinguished into acute, chronic not including cancer and cancerous. WHO estimated that in 2016 about 58% of ambient air pollution-related premature deaths were due to ischemic heart disease and strokes, while 18% of deaths were due to chronic obstructive pulmonary disease and acute lower respiratory infections, respectively, and 6% of deaths were due to lung cancer. In 2013 the International Agency for Research on Cancer (IARC) concluded that ambient air pollution is carcinogenic to humans, in particular the PM component of air pollution which is most closely associated with increased cancer incidence, especially lung cancer, and with an association also with increased cancer of the urinary system. In May 2015 the World Health Assembly, the decision-making body of WHO, adopted resolution WHA68.8 on health and the environment: highlighting and addressing the health impact of air pollution, requesting to all policy-makers to strengthen worldwide efforts to protect populations from the health risks posed by air pollution. The resolution recognized the relevant role of WHO air quality guidelines (AQGs) in providing guidance and recommendations for protecting human health from air pollution. Since the mid-1980s the WHO Regional Office for Europe has coordinated the development of a series of AQGs. WHO AQGs are the reference manuals that provide scientific evidence-based recommendations that help policy-makers, across the world, in setting standards and goals for air quality management and health protection. Three editions of ambient AQGs have been published respectively in 1987, 2000 and 2006. The WHO AQG authors clearly stated in the first edition in 1987 that “compliance with recommendations regarding guideline values does not guarantee the absolute exclusion of effects at levels below such values.” They recognized the limitations in protection especially in sensitive groups of the population and due to the uncertainties related to “combined exposure to various chemicals or exposure to the same chemical by multiple routes.” In that edition the definition of an adverse health effect proposed by the United States Environmental Protection Agency (US EPA) was adopted: “any effect resulting in functional impairment and/or

Air Quality Legislation

63

pathological lesions that may affect the performance of the whole organism or which contributes to a reduced ability to respond to an additional challenge.” Furthermore the first edition of the AQGs highlights a clear distinction between guidelines and standards, in the sense that the guideline values are not to be regarded as standards in themselves. Policy-makers and regulatory authorities need to consider economic, social and cultural factors when using the guidelines as a basis for setting standards. In certain circumstances pollutant standards can be set above or below the guideline values considering the wider context of environmental, social, economic and cultural conditions. The first AQG edition provided recommendations for 28 chemical air pollutants, including O3, NO2, CO, formaldehyde, lead and many others including formaldehyde, lead, manganese and mercury (Hg). Guidelines values were provided for combined exposure to SO2 and PM (black smoke). A second edition of the WHO AQGs was published in 2000 providing recommendations in the form of numerical values/ranges and unit risk factors or Concentration–Response Function (CRFs) for the pollutants included in the previous edition. In addition butadiene, polychlorinated biphenyls, dibenzodioxins and dibenzofurans, fluoride and platinum were added. A separate section for indoor air pollutants was also provided within the second edition. No new guideline values were provided for acrylonitrile, carbon disulfide, 1,2-dichloroethane, vinyl chloride, asbestos, hydrogen sulfide and vanadium compared to the previous edition; for these pollutants the recommendations from the 1987 AQGs were maintained. Furthermore for the first time guideline values were provided separately for SO2 (500–125d50 mg/m3 for averaging time of respectively 10 mind24 hd1 year) and PM compared to the previous edition. For PM (and also for O3) guidelines were provided as the relative risks of the estimated concentration–response functions (CRFs) developed for several outcomes for both long- and shortterm exposure. This allowed policy-makers to set their own standards (by selecting a level of acceptable exposure and associated health risk) by taking into account their local circumstances. Furthermore a numerical guideline was proposed for O3 (120 mg/m3 for an averaging time on 8 h) and for NO2 (200 and 40 mg/m3 for averaging time of respectively 1 hd1 year). Furthermore guideline values for sulfur dioxide, nitrogen oxides and ozone based on effects on terrestrial vegetation were provided in this WHO AQG second edition. Air quality guidelines: global update 2005, published in 2006, was a substantially different report from the 1987 and 2000 AQGs, as it focused only on four classical air pollutants: PM, ozone, NO2 and SO2. This was, to date, the last WHO publication that provided numerical ambient AQGs for PM, ozone, NO2 and SO2. The WHO Air quality guidelines are currently under revision with an expected publication date in 2020. The same guideline values were retained from the second edition of the WHO AQGs for NO2 (200 and 40 mg/m3 for averaging time of respectively 1 hd1 year). Concentration–response estimates (relative risks) were presented for PM in addition to the guideline values. For the first time interim targets were proposed for PM, ozone and SO2, as pollutant concentrations associated with a specified increase of mortality risk over that expected at the guidelines level, intended to address especially Member States with high levels of air pollution. 
For PM10, annual mean, the following interim targets (IT) and AQG value were provided: 70 µg/m3 as IT-1, 50 µg/m3 as IT-2, 30 µg/m3 as IT-3 and 20 µg/m3 as the AQG value. For PM2.5, annual mean: 35 µg/m3 as IT-1, 25 µg/m3 as IT-2, 15 µg/m3 as IT-3 and 10 µg/m3 as the AQG value. For PM10, 24-h mean: 150 µg/m3 as IT-1, 100 µg/m3 as IT-2, 75 µg/m3 as IT-3 and 50 µg/m3 as the AQG value. For PM2.5, 24-h mean: 75 µg/m3 as IT-1, 50 µg/m3 as IT-2, 37.5 µg/m3 as IT-3 and 25 µg/m3 as the AQG value.
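Read as an algorithm, the tiered scheme simply assigns a measured concentration to the strictest band it satisfies. A minimal Python sketch follows; the function name and data layout are illustrative, and the thresholds are the annual-mean PM2.5 values quoted above.

# Classify an annual-mean PM2.5 concentration (ug/m3) against the WHO 2005
# interim targets and AQG value for the annual mean quoted in the text.
PM25_ANNUAL_BANDS = [
    (10.0, "meets AQG value"),
    (15.0, "meets IT-3"),
    (25.0, "meets IT-2"),
    (35.0, "meets IT-1"),
]

def classify_pm25_annual(mean_ugm3):
    """Return the strictest WHO 2005 band that an annual-mean value meets."""
    for threshold, label in PM25_ANNUAL_BANDS:
        if mean_ugm3 <= threshold:
            return label
    return "exceeds IT-1"

print(classify_pm25_annual(28.0))  # prints "meets IT-1"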

Table 1 summarizes the guideline values set by WHO in the AQG global update 2005 (published in 2006) for PM, ozone, NO2 and SO2.

Air Quality Legislation

Background

Air quality guidelines and standards play an important role in the management of air quality. It is generally accepted that an air quality standard is a description of a certain level of air pollution which is adopted by a regulatory authority as enforceable. At its simplest, an air quality standard is defined in terms of one or more concentrations and associated averaging times to be attained, together with the exact definition of the type of standard established by the legislation (e.g., EU legislation provides different types of air quality standards for health protection: limit values and target values, the former being more legally binding than the latter). In addition, a number of elements have to be specified in the formulation of a standard. These elements include (see the sketch after this list):

• the measurement and monitoring strategy
• the data handling procedures
• the statistics used to derive the value to be compared with the standard, including quality assurance and quality control requirements.
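As an illustration of the last element, the statistic compared with a standard is often an exceedance count rather than a simple mean. The following Python sketch is a minimal example, assuming the EU PM10 24-h limit value of 50 µg/m3 with its allowance of 35 exceedance days per year (see Table 2); the function name and defaults are illustrative.

# Compliance with the EU PM10 24-h limit value is judged by counting the
# days whose mean exceeds 50 ug/m3: up to 35 such days per year are allowed.
def pm10_daily_compliant(daily_means_ugm3, limit=50.0, allowed_exceedances=35):
    """True if the number of daily means above the limit stays within the allowance."""
    exceedances = sum(1 for value in daily_means_ugm3 if value > limit)
    return exceedances <= allowed_exceedances

# Example: a year with 40 days above 50 ug/m3 fails the limit value.
year = [45.0] * 325 + [60.0] * 40
print(pm10_daily_compliant(year))  # prints False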

Table 1    WHO Air Quality Guidelines 2006

Pollutant           Time-weighted average   Averaging time
Nitrogen dioxide    200 µg/m3               1 h
                    40 µg/m3                1 year
Ozone               100 µg/m3               8 h
PM10                50 µg/m3                24 h
                    20 µg/m3                1 year
PM2.5               25 µg/m3                24 h
                    10 µg/m3                1 year
Sulfur dioxide      500 µg/m3               10 min
                    20 µg/m3                24 h

Source: WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (accessed May 2018).

International Air Quality Legislation

Pollutants emitted in one country can be transported over long distances, especially in the case of persistent compounds, causing adverse effects in other countries. In the international context, the cross-border nature of air pollution has been pushing the development of legislation forward. The Convention on Long-range Transboundary Air Pollution (CLRTAP) was the first international legally binding instrument to deal with problems of air pollution on a regional basis, aiming to solve the problem of the “acid rain” that was destroying forests, causing fish loss in lakes and putting entire ecosystems at risk in the Northern Hemisphere. The Convention was adopted on 13 November 1979 within the framework of the United Nations Economic Commission for Europe (UNECE, one of the five regional commissions of the United Nations, bringing together 56 countries including Canada and the United States) and was opened for signature in the same year. The Convention entered into force on 16 March 1983 and has since been extended by eight protocols that identify specific measures to be taken by Parties to gradually reduce emissions of air pollutants. Over the years, the number of substances covered by the Convention and its protocols has been gradually extended to include pollutants such as sulfur dioxide (SO2), nitrogen oxides (NOx), ground-level ozone (O3), persistent organic pollutants (POPs), heavy metals (lead, Pb; cadmium, Cd; mercury, Hg), ammonia (NH3) and particulate matter (PM). The CLRTAP has created, over the years, the essential international cooperation framework for controlling and, as far as possible, reducing the damage to human health and the environment caused by transboundary air pollution. Fifty-one pan-European States are so far Parties to the Convention, together with Canada and the United States. In 1985 the Vienna Convention for the Protection of the Ozone Layer was adopted; it entered into force on 22 September 1988 and served as a framework to protect the earth's ozone layer. On 16 September 1987 the Montreal Protocol on Substances that Deplete the Ozone Layer was agreed; it entered into force on 1 January 1989. The Protocol was designed to reduce the production and consumption of ozone-depleting substances in order to reduce their abundance in the atmosphere. On 16 September 2009, the Vienna Convention and the Montreal Protocol became the first treaties in the history of the United Nations to achieve universal ratification. This universal participation, maintained even when a new member joined the United Nations 3 years later, showed that the world was not merely observing the effects of environmental pollution but was also taking action to limit them, by ensuring that the ozone protection treaties achieved global support and implementation. On 9 May 1992 the United Nations Framework Convention on Climate Change (UNFCCC) was adopted; it was opened for signature at the “Earth Summit” in Rio de Janeiro in June 1992. Its sister Rio Conventions, the UN Convention on Biological Diversity and the Convention to Combat Desertification, are intrinsically linked to it. The UNFCCC entered into force on 21 March 1994, after the 50th instrument of ratification, acceptance, approval or accession had been deposited.
Currently, there are 197 Parties to the UNFCCC, so the Convention has achieved universal ratification among members of the United Nations: all United Nations member states have signed and ratified it. The ultimate objective of the UNFCCC is to stabilize greenhouse gas concentrations “at a level that would prevent dangerous anthropogenic (human induced) interference with the climate system.” It states that “such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened, and to enable economic development to proceed in a sustainable manner.” However, the resolutions adopted at the 1992 Climate Convention proved insufficient to arrest climate change. As a consequence, a legal instrument imposing the obligation to protect air quality, the Kyoto Protocol, was adopted in Kyoto, Japan, on 11 December 1997. The Kyoto Protocol, which commits its Parties by setting internationally binding emission reduction targets, entered into force on 16 February 2005, on the ninetieth day after no fewer than 55 Parties to the Convention, including Annex I Parties accounting in total for at least 55% of the total 1990 carbon dioxide emissions of the Annex I Parties, had deposited their instruments of ratification, acceptance, approval or accession (the so-called principle of “2 times 55”).

The Protocol entered into force after long negotiations between the EU and Russia, with Russia agreeing to ratify it, while for China, now considered one of the biggest emitters of greenhouse gases in the world, it was established in 2013 that it may continue to emit excessive amounts of CO2. So far, there are 192 Parties to the Kyoto Protocol. Under the Protocol, Parties must meet their emission targets primarily through national measures. However, the Protocol also offers them additional, cost-effective means of meeting their targets by way of three mechanisms: International Emissions Trading, the Clean Development Mechanism (CDM) and Joint Implementation (JI). On 19 January 2013 the Minamata Convention on Mercury was approved by delegates representing about 140 countries in Geneva. The Convention was adopted and signed later, on 10 October 2013, at a Diplomatic Conference held in Kumamoto, Japan. The Minamata Convention is an international treaty aimed at protecting human health and the environment from the adverse effects of anthropogenic emissions and releases of mercury and its compounds. It entered into force on 16 August 2017, after the deposit of the 50th instrument of ratification, acceptance, approval or accession. Ninety-two member States are so far Parties to the Minamata Convention. The Convention focuses on mercury (Hg), one of the hazardous chemicals of greatest global concern due to its persistence in the environment, its long-range atmospheric transport, its ability to bioaccumulate in ecosystems and its significant negative effects on human health and the environment. Mercury is released to the atmosphere, soil and water from a variety of sources. Controlling the anthropogenic releases of mercury throughout its entire lifecycle, for example through the phase-out and phase-down of mercury use in a number of products and processes, has been a key aspect of the obligations defined under the Convention. Global concerns related to air pollution have thus been addressed by a variety of actions taken by many countries at the international level, and the international strategy for air protection is under continuous development.

Air Quality Legislation in European Union

The National Emission Ceilings Directives, together with the Ambient Air Quality Directives and the source and derived legislation linked to them, provide the legal framework for the EU's air policy. In 2011–13 the European Commission conducted a review of the EU air policy which led, on 18 December 2013, to the adoption of the Clean Air Policy Package. The package has a number of components:

• A new Clean Air Programme for Europe, updating the 2005 Thematic Strategy on Air Pollution, with new air quality objectives for 2020 and 2030. The package also includes support measures to reduce air pollution, with a focus on improving air quality in cities, supporting research and innovation, and promoting international cooperation.
• A revised National Emission Ceilings Directive with stricter national emission ceilings for the six main pollutants. The main legislative instrument is Directive 2016/2284/EU on the reduction of national atmospheric emissions, which entered into force on 31 December 2016. This Directive sets national reduction commitments for five pollutants (sulfur dioxide, SO2; nitrogen oxides, NOx; nonmethane volatile organic compounds, NMVOC; ammonia, NH3; and fine particulate matter, PM2.5) responsible for acidification, eutrophication and ground-level ozone pollution. The new Directive repeals and replaces Directive 2001/81/EC, the National Emission Ceilings (NEC) Directive, from 30 June 2018; it must be transposed by EU Member States by 30 June 2018, while ensuring that the emission ceilings for 2010 set in the previous Directive continue to apply until 2020. Directive 2016/2284 also transposes the reduction commitments for 2020 taken by the EU and its Member States under the amended Gothenburg Protocol to Abate Acidification, Eutrophication and Ground-level Ozone (one of the eight Protocols under the CLRTAP umbrella) and sets ambitious and specific reduction commitments for 2020 and beyond (the EU emission reduction commitments for 2020, relative to the 2005 base year, are 59% for sulfur dioxide, 42% for nitrogen oxides, 6% for ammonia, 28% for volatile organic compounds and 22% for PM2.5).
• A proposal for a new Directive to reduce pollution from medium-sized combustion installations, such as energy plants for street blocks or large buildings, and small industry installations (between 1 and 50 MWth).

On the other hand, the EU Ambient Air Quality Directives set local air quality limits which may not be exceeded anywhere in the EU territory; every part of the territory is covered by air quality standards. The Directive of the European Parliament and of the Council on ambient air quality and cleaner air for Europe, adopted in 2008 (Directive 2008/50/EC), is so far the basic legal instrument regulating air quality assessment and management. When this Directive took full effect on 11 June 2010, it replaced Directives 96/62/EC, 1999/30/EC, 2000/69/EC and 2002/3/EC and Council Decision 97/101/EC on Exchange of Information. The Directive requires EU Member States to guarantee that the permissible levels of the substances it lays down are not exceeded. Furthermore, Directives 2008/50/EC and 2004/107/EC require Member States to divide their territory into a number of zones and agglomerations, defined on the basis of their regime of concentrations of a specific set of air pollutants, and to report on them. Health-based standards and objectives are set within these Directives for the following air pollutants: sulfur dioxide (SO2), nitrogen dioxide and nitrogen oxides (NO2 and NOx), particulate matter and fine particulate matter (PM10 and PM2.5), lead (Pb), carbon monoxide (CO), benzene, ozone (O3), arsenic (As), cadmium (Cd), nickel (Ni) and polycyclic aromatic hydrocarbons (PAH). Where levels are elevated above limit or target values, Member States shall prepare an air quality plan or program to ensure compliance with the limit value before the date when the limit value formally enters into force.

In addition, information on air quality should be disseminated to the public, while data and information on zones designated under the Ambient Air Quality Directives and other relevant monitoring information have to be reported to the European Commission.

Air Quality Legislation in United States

The United States has a long history of legislation on air protection. Congress designed the Clean Air Act (CAA) to protect public health from different types of air pollution caused by many pollution sources. Congress established the law's basic structure in the Clean Air Act Amendments of 1970, and made major revisions in 1977 and 1990. In 1970, the CAA set standards for six pollutants: SO2, NO2, carbon monoxide, O3, particulate matter and lead. In addition, the Act laid down requirements regarding the implementation of air quality programs. In the same year, Congress established the Environmental Protection Agency (EPA), whose task was to oversee the implementation of the standards set out in the Clean Air Act of 1970. Since many states failed to meet mandatory air quality standards, amendments were subsequently introduced to the CAA. In June 1989 the President proposed sweeping revisions to the Clean Air Act, and November 15, 1990 marks a milestone in Clean Air Act history: the signing of the 1990 Amendments. These amendments set the stage for protecting the ozone layer, reducing acid rain and toxic pollutants, and improving air quality and visibility. They also addressed the reduction of sulfur dioxide emissions from power plants. Great progress has been made in achieving national air quality standards, which EPA originally established in 1970 and updates periodically based on the latest science. One sign of this progress is that visible air pollution is less frequent and widespread than it was in the 1970s. The Clean Air Act, last amended in 1990, requires EPA to set National Ambient Air Quality Standards for pollutants considered harmful to public health and the environment. Being a federal act, it applies throughout the country. The EPA has set National Ambient Air Quality Standards for six principal pollutants, called “criteria” air pollutants. The Clean Air Act identifies two types of national ambient air quality standards. Primary standards provide public health protection, including protecting the health of “sensitive” populations such as asthmatics, children, and the elderly. Secondary standards provide public welfare protection, including protection against decreased visibility and damage to animals, crops, vegetation, and buildings. To reflect new scientific studies, EPA revised the national air quality standards for fine particles (2006, 2012), ground-level ozone (2008, 2015), sulfur dioxide (2010), nitrogen dioxide (2010), and lead (2008). After its scientific review, EPA decided to retain the existing standards for carbon monoxide. The United States has made great progress since 1970 in cleaning the air, but other initiatives have yet to be implemented.

Air Quality Legislation in China

China faces serious challenges in ambient air quality due to particularly high levels of air pollution, and maximum measured values in China greatly exceed current air quality standards according to recent studies. Of the roughly 500 air quality monitoring stations in China, more than half (> 250) indicate increased levels of air pollution. This issue is closely linked to China's dependence on coal for much of its electricity generation, since coal-fired power plants emit large quantities of particulate matter (PM) and SO2. In 2012 the China State Council passed the roadmap for ambient air quality standards with the aim of improving the environment and protecting human health. The Ambient Air Quality Standards, GB 3095–2012, are comparable to the interim targets (IT) set by the WHO. In China, Grade I standards apply to nature reserves, scenic spots and other areas in need of special protection, while Grade II standards apply to function areas and residential areas. The new standards have been in force for all of China since January 1, 2016. A detailed comparison of GB 3095–2012 (the Chinese standards), the European and US standards and the WHO AQGs is provided in the following paragraphs and tables.

Air Quality Standards

PM

Particulate matter (PM), according to WHO studies, affects more people than any other pollutant. The major components of PM are sulfate, nitrates, ammonia, sodium chloride, black carbon, mineral dust and water. It consists of a complex mixture of solid and liquid particles of organic and inorganic substances suspended in the air. Chronic exposure to particles contributes to the risk of developing cardiovascular and respiratory diseases, as well as of lung cancer. A comparison of the PM standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 2. For PM10, the US 24-h standard is much higher than the one established by European legislation and recommended by WHO. For PM2.5, the US annual standards (primary and secondary) are much lower than that set by the EU and similar to that recommended by WHO.
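The footnotes to Table 2 show that the US standards are evaluated as multiyear “design values.” A hedged Python sketch of the PM2.5 24-h case (98th percentile of daily means, averaged over 3 years) follows; np.percentile's default interpolation stands in for the rank-based procedure in the regulations, and data-completeness requirements are omitted.

import numpy as np

# US PM2.5 24-h design value: the 98th percentile of each year's daily
# means, averaged over 3 years, compared with the 35 ug/m3 standard.
def pm25_24h_design_value(years_of_daily_means):
    annual_p98 = [np.percentile(year, 98) for year in years_of_daily_means]
    return float(np.mean(annual_p98))

rng = np.random.default_rng(0)                       # synthetic example data
three_years = [rng.gamma(4.0, 3.0, size=365) for _ in range(3)]
dv = pm25_24h_design_value(three_years)
print(f"design value: {dv:.1f} ug/m3; meets 35 ug/m3 standard: {dv <= 35.0}")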

Table 2    Air quality standards and guideline values for PM

PM10, 24 h: EU 50 µg/m3 (limit value a); United States 150 µg/m3 (primary and secondary standard b); China 50 µg/m3 (Grade I), 150 µg/m3 (Grade II); WHO AQG 50 µg/m3.
PM10, annual: EU 40 µg/m3 (limit value); China 40 µg/m3 (Grade I), 70 µg/m3 (Grade II); WHO AQG 20 µg/m3.
PM2.5, 24 h: United States 35 µg/m3 (primary and secondary standard c); China 35 µg/m3 (Grade I), 75 µg/m3 (Grade II); WHO AQG 25 µg/m3.
PM2.5, annual: EU 25 µg/m3 (limit value); United States 12.0 µg/m3 (primary standard d), 15.0 µg/m3 (secondary standard d); China 15 µg/m3 (Grade I), 35 µg/m3 (Grade II); WHO AQG 10 µg/m3.
PM2.5, annual: EU 20 µg/m3 (exposure concentration obligation e) and an exposure reduction target of 0%–20% in exposure e.

a Not to be exceeded on more than 35 days per year.
b Not to be exceeded more than once per year on average over 3 years.
c 98th percentile, averaged over 3 years.
d Annual mean, averaged over 3 years.
e Based on 3-year average.
Source: European Commission. (2008). EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union, L 152, 1–44; WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (accessed May 2018); US Environmental Protection Agency. (2013). National Ambient Air Quality Standards for Particulate Matter. Federal Register, Vol. 78, No. 10. https://www.gpo.gov/fdsys/pkg/FR-2013-01-15/pdf/2012-30946.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85.

Ozone

Emissions from industrial facilities and electric utilities, motor vehicle exhaust, gasoline vapors, and chemical solvents are some of the major anthropogenic sources of NOx and VOCs, the precursors of O3; the production of biogenic VOCs (e.g., from vegetation) is also a relevant natural source. Excessive ozone in the air can have a marked effect on human health. It can cause breathing problems, trigger asthma, reduce lung function and cause lung diseases, particularly in children, the elderly, and people of all ages who already have lung disease. Ground-level ozone can also have harmful effects on sensitive vegetation and ecosystems. A comparison of the O3 standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 3.

Nitrogen Dioxide

Nitrogen dioxide (NO2) is one of a group of highly reactive gases known as nitrogen oxides (NOx). NO2 is the main source of nitrate aerosols, which form an important fraction of PM2.5 and, in the presence of ultraviolet light, of ozone. The major sources of anthropogenic emissions of NO2 are combustion processes. Breathing air with a high concentration of NO2 can irritate airways in the human respiratory system. WHO epidemiological studies have shown that symptoms of bronchitis in asthmatic children increase in association with long-term exposure to NO2. Reduced lung function growth is also linked to NO2 at concentrations currently measured in cities of Europe and North America. A comparison of the NO2 standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 4.

Sulfur Dioxide

Sulfur dioxide (SO2) is one of a group of gases called sulfur oxides (SOx). The other gases in the group are much less common in the atmosphere. The largest source of SO2 in the atmosphere is the burning of fossil fuels by power plants and other industrial facilities. SO2 can affect the respiratory system and the functions of the lungs, and causes irritation of the eyes. Inflammation of the respiratory tract causes coughing, mucus secretion, aggravation of asthma and chronic bronchitis, and makes people more prone to infections of the respiratory tract. A comparison of the SO2 standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 5.

Table 3    Air quality standards and guideline values for O3

O3, maximum daily 8-h mean b: EU 120 µg/m3 (human health target value a), 120 µg/m3 (human health long-term objective); United States 0.070 ppm (140 µg/m3) (primary and secondary standard c); China 100 µg/m3 (Grade I), 160 µg/m3 (Grade II); WHO AQG 100 µg/m3.
O3, 1 h: EU 180 µg/m3 (information threshold), 240 µg/m3 (alert threshold); China 160 µg/m3 (Grade I), 200 µg/m3 (Grade II).

a Not to be exceeded on more than 25 days per year, averaged over 3 years.
b Maximum daily 8-h mean.
c Annual fourth-highest daily maximum 8-h average concentration, averaged over 3 years.
Source: European Commission. (2008). EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union, L 152, 1–44; US Environmental Protection Agency. (2015). National Ambient Air Quality Standards for Ozone. Federal Register, Vol. 80, No. 206. https://www.gpo.gov/fdsys/pkg/FR-2015-10-26/pdf/2015-26594.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85; WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe.

Table 4    Air quality standards and guideline values for NO2

NO2, 1 h: EU 200 µg/m3 (human health limit value a), 400 µg/m3 (alert threshold c); United States 100 ppb (190 µg/m3) (primary standard b); China 200 µg/m3 (Grades I and II); WHO AQG 200 µg/m3.
NO2, 24 h: China 80 µg/m3 (Grades I and II).
NO2, annual: EU 40 µg/m3 (human health limit value); United States 53 ppb (100 µg/m3) (primary and secondary standard); China 40 µg/m3 (Grades I and II); WHO AQG 40 µg/m3.

a Not to be exceeded on more than 18 h per year.
b 98th percentile of 1-h daily maximum concentrations, averaged over 3 years.
c To be measured over three consecutive hours at locations representative of air quality over at least 100 km2 or an entire zone or agglomeration.
Source: European Commission. (2008). EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union, L 152, 1–44; WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (accessed May 2018); US Environmental Protection Agency. (2012). National Ambient Air Quality Standards for Oxides of Nitrogen and Sulfur. Federal Register, Vol. 77, No. 64. https://www.gpo.gov/fdsys/pkg/FR-2012-04-03/pdf/2012-7679.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85.

Carbon Monoxide

CO is a colorless, odorless gas that can be harmful when inhaled in large amounts. Breathing air with a high concentration of CO reduces the amount of oxygen that can be transported in the bloodstream to critical organs like the heart and brain. At very high levels, which are possible indoors or in other enclosed environments, CO can cause confusion, unconsciousness and death. Very high levels of CO are not likely to occur outdoors. However, when CO levels are elevated outdoors, they can be of particular concern for people with some types of heart disease. A comparison of the CO standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 6.
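The US standards in Tables 3–6 are expressed in ppm or ppb, while the EU, Chinese and WHO values are mass concentrations. The paired figures in the tables (e.g., 9 ppm and 10 mg/m3 for the CO 8-h standard) follow from the usual conversion at 25°C and 1 atm, sketched below in Python as a check.

# Convert a ppm (volume) mixing ratio to mg/m3 at 25 degC and 1 atm,
# where one mole of gas occupies about 24.45 L.
MOLAR_VOLUME_L = 24.45

def ppm_to_mg_m3(ppm, molar_mass_g):
    return ppm * molar_mass_g / MOLAR_VOLUME_L

print(round(ppm_to_mg_m3(9.0, 28.01)))           # CO 8-h: 9 ppm, about 10 mg/m3
print(round(ppm_to_mg_m3(0.053, 46.01) * 1000))  # NO2 annual: 53 ppb, about 100 ug/m3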

Toxic Metals

Human exposure to arsenic, cadmium, lead, nickel and mercury at ambient air concentrations above the limit or target values or the recommended WHO guideline values is usually a local problem, restricted to a few areas, especially in Europe, and is typically caused by specific industrial plants. However, atmospheric deposition contributes to the exposure of ecosystems and organisms to toxic metals and to bioaccumulation in the food chain, thus affecting human health.

Table 5    Air quality standards and guideline values for SO2

SO2, 10 min: WHO AQG 500 µg/m3.
SO2, 1 h: EU 350 µg/m3 (human health limit value a), 500 µg/m3 (alert threshold c); United States 75 ppb (200 µg/m3) (primary standard b); China 150 µg/m3 (Grade I), 500 µg/m3 (Grade II).
SO2, 3 h: United States 500 ppb (1300 µg/m3) (secondary standard d).
SO2, 24 h: EU 125 µg/m3 (human health limit value e); China 50 µg/m3 (Grade I), 150 µg/m3 (Grade II); WHO AQG 20 µg/m3.
SO2, annual: China 20 µg/m3 (Grade I), 60 µg/m3 (Grade II).

a Not to be exceeded on more than 24 h per year.
b 99th percentile of 1-h daily maximum concentrations, averaged over 3 years.
c To be measured over three consecutive hours at locations representative of air quality over at least 100 km2 or an entire zone or agglomeration.
d Not to be exceeded more than once per year.
e Not to be exceeded on more than 3 days per year.
Source: European Commission. (2008). EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union, L 152, 1–44; WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (accessed May 2018); US Environmental Protection Agency. (2012). National Ambient Air Quality Standards for Oxides of Nitrogen and Sulfur. Federal Register, Vol. 77, No. 64. https://www.gpo.gov/fdsys/pkg/FR-2012-04-03/pdf/2012-7679.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85.

Table 6    Air quality standards and guideline values for CO

CO, 1 h: United States 35 ppm (40 mg/m3) (primary standard b); China 10 mg/m3 (Grades I and II); WHO AQG 30 mg/m3.
CO, 8 h: EU 10 mg/m3 (human health limit value a); United States 9 ppm (10 mg/m3) (primary standard b); WHO AQG 10 mg/m3.
CO, 24 h: China 4 mg/m3 (Grades I and II).

a Maximum daily 8-h mean.
b Not to be exceeded more than once per year.
Source: European Commission. (2008). EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union, L 152, 1–44; WHO Regional Office for Europe. (2006). Air quality guidelines: global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. Copenhagen: WHO Regional Office for Europe. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (accessed May 2018); US Environmental Protection Agency. (2011). National Ambient Air Quality Standards for Carbon Monoxide. Federal Register, Vol. 76, No. 169. https://www.gpo.gov/fdsys/pkg/FR-2011-08-31/pdf/2011-21359.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85.

A comparison of the toxic metal standards adopted in China, the European Union and the United States and the guideline values recommended by WHO is provided in Table 7.

Concluding Remarks

The WHO AQGs have found wide application in environmental decision-making, particularly in setting standards at a global level, despite the inclusion of the words “for Europe” on the cover of the first two editions. The AQGs have provided, and will continue to provide, a basis for “protecting public health from adverse effects of air pollutants and for eliminating, or reducing to a minimum, those contaminants of the air that are known or likely to be hazardous to human health and well-being.” The United States and the European Union have made great progress in recent decades in cleaning the air, while in China increased levels of air pollution are still measured. Further initiatives have to be strengthened and coordinated at both global and local scales, especially for transboundary air pollution.

Table 7    Air quality standards and guideline values for toxic heavy metals

Arsenic, annual: EU 6 ng/m3 (human health target value).
Cadmium, annual: EU 5 ng/m3 (human health target value); WHO AQG 5 ng/m3.
Lead, annual a: EU 0.5 µg/m3 (human health limit value); United States 0.15 µg/m3 (primary and secondary standard b); WHO AQG 0.5 µg/m3.
Nickel, annual: EU 20 ng/m3 (human health target value).
Mercury, annual: WHO AQG 1 µg/m3.

a For the US standard, calculated not as an annual average but as a rolling 3-month average.
b Not to be exceeded.
Source: European Commission. (2004). EU Directive 2004/107/EC of the European Parliament and of the Council of 15 December 2004 relating to arsenic, cadmium, mercury, nickel and polycyclic aromatic hydrocarbons in ambient air. Official Journal of the European Union, L 23, 3–16. https://eur-lex.europa.eu/eli/dir/2004/107/oj (accessed May 2018); WHO Regional Office for Europe. (2000). Air quality guidelines for Europe, 2nd edn. Copenhagen: WHO Regional Office for Europe (WHO Regional Publications, European Series, No. 91). http://www.euro.who.int/en/publications/abstracts/air-quality-guidelines-for-europe (accessed May 2018); US Environmental Protection Agency. (2016). National Ambient Air Quality Standards for Lead. Federal Register, Vol. 81, No. 201. https://www.gpo.gov/fdsys/pkg/FR-2016-10-18/pdf/2016-23153.pdf (accessed September 2018); United States Congress. (1990). US Clean Air Act. United States Code. Title 42, Chapter 85.

See also: Air Pollution Episodes; Air Pollution From Solid Fuels; Air Pollution and Lung Cancer Risks; Air Transportation and Human Health; Community Outdoor Air Quality: Sources, Exposure Agents and Health Outcomes; Environmental Carcinogens and Regulation; Exposure Guidelines and Radon Policy; Indoor Radon Prevention and Mitigation.

Further Reading

CLRTAP (Convention on Long-Range Transboundary Air Pollution), 1979. https://treaties.un.org/Pages/ViewDetails.aspx?src=IND&mtdsg_no=XXVII-1&chapter=27&clang=_en (Accessed May 2018).
European Commission, 2004. EU Directive 2004/107/EC of the European Parliament and of the Council of 15 December 2004 relating to arsenic, cadmium, mercury, nickel and polycyclic aromatic hydrocarbons in ambient air. Official Journal of the European Union L 23, 3–16. https://eur-lex.europa.eu/eli/dir/2004/107/oj (Accessed May 2018).
European Commission, 2008. EU Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Official Journal of the European Union L 152, 1–44. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32008L0050 (Accessed May 2018).
European Commission, 2013. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. A Clean Air Programme for Europe. COM(2013) 0918 final. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52013DC0918&from=EN (Accessed May 2018).
European Commission, 2016. EU Directive 2016/2284 of the European Parliament and of the Council of 14 December 2016 on the reduction of national emissions of certain atmospheric pollutants, amending Directive 2003/35/EC and repealing Directive 2001/81/EC. Official Journal of the European Union L 344, 1–31. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.344.01.0001.01.ENG&toc=OJ:L:2016:344:TOC (Accessed May 2018).
Kampa, M., Castanas, E., 2008. Human health effects of air pollution. Environmental Pollution 151 (2), 362–367.
Kuklinska, K., Wolska, L., Namiesnik, J., 2015. Air quality policy in the U.S. and the EU: A review. Atmospheric Pollution Research 6 (1), 129–137.
Muller, C.O., Yu, H., Zhu, B., 2015. Ambient air quality in China: The impact of particulate and gaseous pollutants on IAQ. Procedia Engineering 121, 582–589.
UNFCCC (United Nations Framework Convention on Climate Change), 1992. https://unfccc.int/process#:d8f74df9-0dbd-4932-bf3c-d8a37f8de70e (Accessed May 2018).
US EPA, 1990. Clean Air Act. https://www.epa.gov/clean-air-act-overview/clean-air-act-text#what (Accessed May 2018).
Vienna Convention for the Protection of the Ozone Layer, 1985. http://ozone.unep.org/en/treaties-and-decisions/vienna-convention-protection-ozone-layer (Accessed May 2018).
WHO Regional Office for Europe, 1987. Air quality guidelines for Europe. WHO Regional Office for Europe, Copenhagen (WHO Regional Publications, European Series, No. 3).
WHO Regional Office for Europe, 2000. Air quality guidelines for Europe, 2nd edn. WHO Regional Office for Europe, Copenhagen (WHO Regional Publications, European Series, No. 91). http://www.euro.who.int/en/publications/abstracts/air-quality-guidelines-for-europe (Accessed May 2018).
WHO Regional Office for Europe, 2006. Air quality guidelines: Global update 2005. Particulate matter, ozone, nitrogen dioxide and sulfur dioxide. WHO Regional Office for Europe, Copenhagen. http://www.euro.who.int/en/health-topics/environment-and-health/air-quality/publications/pre2009/air-quality-guidelines.-global-update-2005.-particulate-matter,-ozone,-nitrogen-dioxide-and-sulfur-dioxide (Accessed May 2018).
WHO (World Health Organization), 2017. Evolution of WHO air quality guidelines: Past, present and future. WHO Regional Office for Europe, Copenhagen. http://www.euro.who.int/__data/assets/pdf_file/0019/331660/Evolution-air-quality.pdf?ua=1 (Accessed May 2018).

Air Transportation and Human Health
BS Cohen, New York University School of Medicine, New York, NY, United States
AL Bronzaft, Professor Emerita, City University of New York, New York, NY, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
EPA Environmental Protection Agency
FAA Federal Aviation Administration
FICAN The American Federal Interagency Committee on Aviation Noise
GAO General Accounting Office
HACAN Heathrow Association for the Control of Aircraft Noise
LAX Los Angeles International Airport
LTO Landing and takeoff
NESCAUM Northeast States for Coordinated Air Use Management
NJCAAN New Jersey Citizens Against Aircraft Noise
ONAC Office of Noise Abatement and Control
PAH Polycyclic aromatic hydrocarbon
PM Particulate matter
SCAQMD South Coast Air Quality Management District
SEA-TAC Seattle-Tacoma International Airport
VOC Volatile organic compound

Introduction

As air traffic and airports continue to grow, public awareness of the health and environmental impact of air transportation has increased. Issues of concern include air pollution, noise, cabin air quality, infectious disease transmission, soil and water pollution, accidents, the appearance of the environment, and occupational health risks at the airport and in flight. The overall population is subjected to air traffic's contribution to air pollution and general environmental degradation. Especially impacted are residents of the communities surrounding airports, who are affected not only by aircraft noise but also by the traffic to and from the airports. Directly impacted constituencies include the passengers and the workforce associated with air travel, such as pilots, flight attendants, and the ground workforce of aircraft service and maintenance workers, as well as a large variety of airport personnel. A series of governmental and private studies have been completed in the United States and abroad to assess some of these issues with respect to airports, and more are under way. Because of the extremely high cost of a definitive study, most have focused on a particular airport and on one or two of the issues; a few have addressed air traffic in general.

The release of combustion products into the environment by aircraft and by the associated vehicular traffic, and the noise and annoyance experienced by humans, are clear. They all affect health, well-being, and the quality of life. Among proven effects of chronic exposure to air pollution, the most severe is premature death. The risks to human health associated with increased environmental ozone and particulate pollution, as well as the effect of carbon dioxide on climate change, suggest that air traffic has a substantial impact. Although quantitative evidence of the specific air transport-related impact is difficult to obtain, the accumulated evidence of health risks associated with air pollution and noise makes it clear that action to protect human health should not be deferred.

The Noise Control Act passed by Congress in 1972 gave American citizens the right to an environment free from noises that could jeopardize their health and well-being. Under an amended section of this Act, the Federal Aviation Administration (FAA) was to consult with the Environmental Protection Agency (EPA) on aircraft noise, even though the FAA had the authority to regulate aircraft noise emissions. Whereas the EPA could make proposals concerning aircraft noise, it was within the powers of the FAA to act on these proposals. Even if the FAA chose not to accept EPA's recommendations, the EPA through its publications could educate the public about the dangers of aircraft noise, and the public, in turn, could demand appropriate responses from the FAA. This article will examine the adverse impacts of airport-related noise and explore whether the Noise Control Act has indeed protected residents exposed to airport-related noises.

Change History: September 2018. B.S. Cohen and A.L. Bronzaft prepared the update. This is an update of B.S. Cohen, A.L. Bronzaft, Air Transportation and Human Health, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 53–63.


Airliner cabin air quality issues related to passengers and crew have been addressed in a National Research Council report, and further research into some specific issues, such as disease transmission, is still under way. In general, aircraft occupants are subject to most of the same chemical, biological, and physical agents as in any indoor environment, but unexpected events can result in the intake of additional chemical contaminants into the cabin. Ergonomic issues resulting from an ongoing decrease in passenger space allotment may also become significant as the industry provides smaller seats and less leg room. Cabin noise affects crew and passengers alike. Aircraft manufacturers note that newer aircraft will be quieter, acknowledging the problem of cabin noise, and advertisements suggest that passengers use headphones to lessen internal aircraft noise, indicating an awareness that such noise can be disturbing.

Waste production in association with air transport can also impact public health and the environment. In particular, water runoff from airports results in contamination caused by the need for ground and aircraft deicing. The solid waste stream is likely similar to that produced in any mode of transportation. An issue unique to early aircraft results from the illumination of instrument panels by radium-based dial paints: when the aluminum from these aircraft is reclaimed by an electrolytic process, the radium remains with the aluminum. The result is that most reclaimed aluminum is slightly contaminated with this radioactive element, although any impact on health is minute.

Public Health Impact of Large Airports

The public health impact of large airports was evaluated by the Health Council of the Netherlands (1999), which considered generalized problems of aviation and particular problems that could affect the expansion of Amsterdam Schiphol Airport. The role of public health in airport development was assessed at three European airports as case studies. The report concluded that the effects on public health of air pollution, noise, accidents, soil and water pollution, infectious diseases, the appearance of the environment, and occupational health risks at the airport are cumulative, but the integrated result of all of these factors cannot be determined from current data. The most severe acute responses to air pollution are premature death and aggravation of respiratory and cardiovascular disorders resulting in hospital admissions. Among effects known to be caused by noise, ischemic heart disease is reported as the most severe, occurring at outdoor levels above 70 dB(A). Annoyance and sleep disturbance are noted as affecting most people at an outdoor day–night level above 42 dB(A) and an indoor sound exposure level of 35–50 dB(A), respectively. The specific impact of airport-generated air pollution cannot be evaluated because air pollution from all airport-associated sources, especially traffic to and from the airport and other destinations, is inextricably mixed. A notable conclusion, which has also become evident from other studies, is that “air pollutant levels around large airports are similar to those in urbanized areas, and are to a large extent determined by road traffic emissions.”

The Netherlands reports recognize that noise is one of the most “noticeable environmental factors of airport operations.” Airport workers are exposed to both aircraft and ground traffic noise, and such continued exposure may lead to hearing loss. The report also states that noise may make it more difficult for the cabin crew to communicate with passengers, leading to passenger unhappiness and added job stress for crew members. It further notes that annoyance, hypertension, sleep disturbance, and poor performance at school have been linked to noise, and lists these as potential adverse effects on residents exposed to airport-related noise.

The following year (2000) saw two significant reports, one prepared for the US Congress (GAO) and another for the UK Department of the Environment, Transport and the Regions (Arthur D. Little). GAO obtained information from the 50 busiest US commercial service airports. Noise generated by aircraft operations was identified as the major challenge currently facing airports, with air quality issues becoming a greater concern and challenge in the future. Noise-related issues, in addition to aircraft noise, include limited control over nearby land use and growing residential populations near airports. In order to receive funds to assist with noise mitigation, airports have to participate in the FAA's noise compatibility program. With regard to air quality in the United States, airports are reported to have difficulty understanding their responsibilities under the Clean Air Act; they are grouped with such entities as ski resorts and coal mines, and are in need of guidance and technical assistance. Another important concern for airports is the difficulty of coordinating with local government zoning and planning boards to prevent residential development of properties near airports.
The fractionation of responsibility for both airport functions (e.g., ground service vehicles, aircraft service gates, and fleet composition) and management is also an issue. The major source of air pollution is identified as fossil fuel-operated vehicles used to access, and operate, the facility. The UK report focused on the potential impact of changes in technology, including environmental impact and environmental mitigation technologies, concluding that technology advances are expected to produce material improvement globally on carbon dioxide (CO2) and oxides of nitrogen (NOx) emissions, and locally on air quality and noise impacts, but that they “cannot offset the additional environmental impact associated with forecast growth.” In sum, the overall environmental impact of aviation is predicted to increase in spite of reductions due to technological improvements. This was supplemented in 2002 by a report of the Royal Commission on Environmental Pollution, which concluded that technological and operational improvements will not offset the adverse effects of the growth of air transport on the atmosphere (e.g., ozone, ultraviolet changes, and climate); it also emphasized the disproportionate adverse impact caused by short-haul passenger flights. Similarly, Flying Off Course (1996), an older survey of America's airports, found a growth in environmental impacts associated with growth in air travel and concluded that “. . . the regulatory framework concurrently in place to address these impacts is inadequate.” Individual states have challenged the adequacy of the FAA's environmental impact statements regarding Airspace Redesign programs in the courts (Wyatt, 2011).

Ambient Air Pollution

Estimates Using Mathematical Modeling

Ground-level aircraft idling and taxiing at airports emit volatile organic compounds (VOCs), nitrogen oxides (NOx), particulate matter (PM), and sulfur dioxide (SO2) into the atmosphere. These compounds are known to have significant health and environmental impact (Table 1). Several efforts have been made to calculate the contribution of airports to the atmospheric mix. The resulting estimates are helpful, but only measurements can ultimately provide the true values, and this will require a large effort and significant resources.

In 1999, the US EPA detailed the impact of aircraft emissions in 10 US cities, including New York. A straightforward modeling approach was used to estimate the quantity of material emitted by aircraft. Such modeling is the only method currently available for estimating emissions from airports. Input to the model requires an estimate of the mix of aircraft and engines in the airport fleet. This information is combined with the number and pattern of landings and takeoffs (LTOs). An LTO cycle is specified, in which different modes of engine operation are assumed, and the time in each mode is assigned for each aircraft. A value is selected for the altitude below which the emissions are considered to mix into the ground layer of the atmosphere. Emissions in the mixing layer may then be calculated for the time the aircraft operates in each mode, based on independent measurements of specific aircraft engine emissions. The resulting contribution to ambient concentrations can then be calculated.

In 2003, the Northeast States for Coordinated Air Use Management (NESCAUM) quantified airport-related emissions for 1999 for three northeast airports and reported their aggregate: Logan (Boston, MA), Manchester Airport (Manchester, NH), and Bradley International (Windsor Locks, CT). Their inventory included nonmilitary aircraft, auxiliary power units, and ground service equipment. The calculated emission inventories for relatively large airports such as LaGuardia Airport (New York, NY) were reported by EPA. Nearly 800 metric tons of VOCs, roughly 1500 tons of oxides of nitrogen, and 80 tons of sulfur dioxide were estimated to have been emitted at LaGuardia Airport in 1990. For VOCs and NOx, these emissions amounted to approximately 0.1% and 0.26%, respectively, of total emissions in the New York City area. For the northeast airports, NESCAUM reports aggregated emissions of 3538 tons of NOx, 4461 tons of carbon monoxide (CO), and 700 tons of hydrocarbons in 1999. These contributions are expected to rise because of anticipated growth in airport activity and also because emissions from other mobile sources will decrease as a result of legislative restrictions. The estimates of the contribution to local air pollution from Amsterdam Schiphol Airport in 1990 are higher: the estimated contribution to the ambient mix of individual pollutants ranged from 3% to 9% depending on the component. Pollutants considered were NOx, CO, VOCs, SO2, and black smoke, the latter an indicator of PM. Based on EPA's study of 10 cities, aircraft are responsible for approximately 1% of the total US ground-level emissions of NOx from mobile sources. In addition, the proportion of total urban emissions attributable to aircraft was projected to increase for all 10 cities in 2010.
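As a minimal illustration of the LTO methodology described above, the sketch below multiplies time-in-mode by fuel flow and an emission index for each operating mode, then sums over modes and engines. The mode times are those of the standard ICAO LTO cycle; the fuel-flow and NOx emission-index figures are illustrative placeholders, not databank values for any real engine, and the assumed annual LTO count is likewise hypothetical.

# Per-LTO NOx below the mixing height: sum over modes of
# (time in mode) x (fuel flow) x (emission index) x (number of engines).
LTO_MODES = {          # mode: time in mode, minutes (standard ICAO LTO cycle)
    "taxi/idle": 26.0,
    "takeoff": 0.7,
    "climb-out": 2.2,
    "approach": 4.0,
}
ENGINE = {             # mode: (fuel flow kg/s, NOx emission index g/kg fuel)
    "taxi/idle": (0.10, 4.0),   # illustrative values only
    "takeoff": (1.00, 30.0),
    "climb-out": (0.85, 25.0),
    "approach": (0.30, 10.0),
}

def nox_per_lto_kg(n_engines=2):
    """NOx (kg) emitted in the mixing layer for one LTO cycle."""
    grams = 0.0
    for mode, minutes in LTO_MODES.items():
        fuel_flow_kg_s, ei_nox_g_kg = ENGINE[mode]
        fuel_burned_kg = fuel_flow_kg_s * minutes * 60.0
        grams += fuel_burned_kg * ei_nox_g_kg * n_engines
    return grams / 1000.0

annual_ltos = 100_000  # assumed airport activity
print(f"{nox_per_lto_kg() * annual_ltos / 1000:.0f} t NOx per year")

Scaling the per-LTO figure by the fleet mix and by the observed pattern of LTOs, as the EPA approach does, turns this single-aircraft calculation into an airport inventory.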
The EPA study confirmed “that commercial aircraft emissions have the potential to significantly contribute to air pollution in the ten study areas.”

Table 1    Representative health effects of air pollutants

Carbon monoxide: Cardiovascular effects, especially in persons with heart conditions (e.g., decreased time to onset of exercise-induced angina).
Nitrogen oxides: Lung irritation and lower resistance to respiratory infections.
Ozone: Lung function impairment, effects on exercise performance, increased airway responsiveness, increased susceptibility to respiratory infection, increased hospital admissions and emergency room visits, pulmonary inflammation, lung structure damage, and increased risk of premature death.
Particulate matter: Premature mortality, aggravation of respiratory and cardiovascular disease, changes in lung function, increased respiratory symptoms, changes to lung tissues, altered respiratory defense mechanisms, and increased risk of heart attacks, strokes, and emergency room visits for asthma and cardiovascular disease.
Sulfur dioxide: Airway irritation of respiratory tissue. Irritation is also caused by reaction products such as sulfuric acid.
Volatile organic compounds: Eye and respiratory tract irritation, headaches, dizziness, visual disorders, and memory impairment. Possible cancer risk at high levels.

Other estimates indicate that, at airports nationally, vehicles used for accessing the airport release 39% of the emitted NOx, while 14% is generated at the airport by ground service vehicles. Also, the ground access vehicles are responsible for 56% of VOCs, whereas 33% of the VOCs are from aircraft LTOs; ground service equipment is responsible for approximately 11% of airport-generated VOCs. Thus, a significant amount of each pollutant is contributed by airport-related “nonaircraft” sources.

During a 6-day closing of airports in 2010 due to the volcanic eruption of Eyjafjallajökull in Iceland, a reduction in aircraft emissions would be expected. Based on a model developed by Carslaw et al. to explain NOx concentrations at monitoring sites near Heathrow airport (UK), it was possible to estimate the effect of this short-term intervention. Input data were obtained on prevailing wind conditions and other relevant parameters during the flight ban to project what the NOx contribution would have been had air traffic been normal. These results, labeled the “business as usual” (BAU) case, were compared with the values measured during the flight ban. For the most affected site, they report that prior to the shutdown the model predicted a BAU concentration of 85 µg/m3 compared with a measured value of 80 µg/m3; during the ban the respective numbers were 79 and 50 µg/m3. Reductions were seen at three other measuring sites as well, but one site showed a small increase. In spite of the difficulty of establishing a proper baseline, such as accounting for altered ground traffic, it seems clear that the aircraft contribution to the overall NOx concentration was significantly reduced.

Concern about emission of toxic air pollutants has generally focused on large commercial airports, where the use of kerosene-based jet fuel predominates. Small piston-engine aircraft at general aviation airports use a high-octane leaded fuel referred to as aviation gas, or “avgas.” Lead is known to cause cognitive and behavioral deficits in children, and exposure at any level is detrimental. Lead levels have been shown to be elevated near a number of airports. As reported by Kessler, avgas-fueled planes are “the chief source of lead emissions in the United States,” emitting 57% of the 964 tons of lead put into the air in 2008, according to the US EPA. She also reports that less than half of this lead lingers near airports, with the remainder widely dispersed and able to enter the food chain. In a study of lead levels in children living near an airport at which planes use avgas, “blood lead levels exhibited a monotonically decreasing dose-response pattern, with the largest impact on children living within 500 m,” and there is evidence of an effect on children living within 1000 m of the airport. The US Federal Aviation Administration is working with engine manufacturers, fuel producers and the US EPA to develop an unleaded avgas and to make a smooth transition.

Experimental Studies Around Airports

A number of experimental studies have tried to measure the contribution of airports to the atmospheric mix at United States airports. Reports are available for measurements made in the vicinity of Los Angeles (LAX), Chicago's O'Hare, and Seattle-Tacoma (SEA-TAC) International Airports, and around Teterboro, a smaller local airport in New Jersey. Each of these studies was undertaken primarily because of local residents' concern regarding the health effects of aircraft emissions. Health effects of air pollution have been extensively documented as part of the regular review of standards required of the US EPA, and the documentation is readily available.

Seattle-Tacoma International Airport: Benzene and other VOCs were the principal focus of a preliminary survey of VOC, carbonyl, and CO levels in the vicinity of Seattle-Tacoma International Airport. Benzene was detected in all VOC samples, but for most samples (more than 50%), the volatile hydrocarbon mixture profiles were indicative of automobile exhaust and did not resemble aircraft exhaust profiles. Thus, the VOCs were attributed primarily to motor vehicle emissions. A few samples of formaldehyde raised the possibility that the concentrations resulted directly, or indirectly, from airport sources.

Los Angeles International Airport: Evaluation of the impact of Los Angeles International Airport on air quality in surrounding neighborhoods was undertaken by the South Coast Air Quality Management District (SCAQMD) in response to community residents' concern about the emissions impact of aircraft during landing and takeoff and of ground operations. They also hoped to gain insights into the impact of a proposed airport expansion. Eleven residential sites were sampled for short periods, with three to four samples collected at each site over a period of 10 days. All of the VOCs measured in the study, except for chlorinated hydrocarbons, are emitted by mobile sources. A follow-up study was conducted 2 months later at nine sites. Key toxic compounds detected were benzene, 1,3-butadiene, and elemental carbon, all of which are associated with mobile sources. Fallout was also collected to address a concern of residents that large particles were descending from airborne craft as a result of LTOs. For the most part, nothing could be concluded regarding sources because of heavy airport ground traffic-related air emissions.

Chicago O'Hare International Airport: Several sets of screening measurements in the fall and winter of 1999–2000 to document the contribution of emissions from Chicago's O'Hare Airport targeted PM, VOCs, and semi-VOCs, with emphasis on those air contaminants that might cause adverse health effects. Risk analysis for both cancer and noncancer health effects, based on data from a single sample collected downwind of the airport, indicated elevated risk as compared with US federal guidelines. In the following year, a larger air-monitoring program was carried out over 6 months in the vicinity of O'Hare by the Illinois EPA “to assess the relative impact of airport related emissions and levels of airborne contaminants characteristic of large urban areas.” Air toxics were monitored at two sites near O'Hare and at three sites in the Chicago metropolitan area. The monitoring results indicated that average concentrations measured at O'Hare for many of the compounds were comparable with the concentrations at other Chicago sites.
When compared with other large United States cities, concentrations of several of the principal urban air toxics were similar to those in the metropolitan areas of Atlanta, Detroit, Houston, and Milwaukee. On 5 of the 16 sampling days, higher concentrations were found downwind for 20%–85% of the target compounds. These included acetaldehyde, benzene, formaldehyde, polycyclic organics, toluene, and lead, all of which are associated with airport operations, but the concentrations were considered typical for an urban area.

Teterboro Airport, NJ: A 2-day (48-h) environmental sampling program was carried out to screen for fuel-related air toxics, VOCs, aldehydes, and polycyclic aromatic hydrocarbons (PAHs). Measured levels of benzene, toluene, and ethylbenzene were elevated with respect to normal annual levels reported elsewhere in New Jersey. Benzene was elevated as compared with annual averages measured in Camden and Elizabeth, New Jersey; 1,3-butadiene was detected at one of the four sampling locations. A preliminary risk evaluation assumed long-term exposure to the air concentrations measured at the fence line, a common method used to estimate an upper bound of risk and to determine whether further investigation is warranted. Both cancer and noncancer risks were found to be elevated with respect to regulatory benchmarks. Cancer risk was driven by benzene and 1,3-butadiene, both fuel-related compounds; noncancer risk was elevated primarily as a result of benzene and toluene levels. As with most of these studies, because of the proximity of roadways, investigators were unable to differentiate between vehicular road traffic and airport activity as the source of the contaminants.

LaGuardia Airport, New York City: A study around LaGuardia Airport in New York City examined both air pollution and noise in the surrounding community. Airborne PM was measured to determine whether concentration differences could be detected between homes upwind and downwind of the airport. In addition, 24-h noise measurements were made in 12 homes near the airport, and the impact of noise was assessed by a Community Wellness and Health Promotion Survey. PM concentrations were higher during active airport operating hours than during nonoperating hours, and the percentage increase varied inversely with distance from the airport. However, hourly differences between paired upwind and downwind sites were not remarkable. The noise studies demonstrated that noise was a significant issue in the surrounding community. Residents living near the airport were exposed to noise levels as much as four times greater than those experienced by residents of a comparable quiet home. Impulse noise events were detected from both aircraft and vehicular traffic. Over 55% of the people living within the flight path were bothered by aircraft noise, and 63% by highway noise; these percentages were significantly higher than for residents in the nonflight area. The change in PM concentrations with distance during operating hours compared with nonoperating hours, the traffic-related impulse noise events, and the elevated annoyance with highway as well as aircraft noise among residents in the flight path area show airport-related motor vehicle traffic to be a major contributor to the negative impact of airports on people in the surrounding communities.

Mitigation of Air Pollution

An evaluation of the impact of air pollution on the population of the United States by the American Lung Association (2009) reports that approximately 175.4 million Americans live in counties where ozone monitors recorded too many days with unhealthy ozone levels. Reduction in ozone levels depends on reduction of precursor gases such as NOx and VOCs emitted by aircraft, so all possible measures to reduce levels of these pollutants are essential. Additionally, substantial quantities of air toxics are emitted as a result of air transportation. Elevated levels of compounds known to be carcinogenic (e.g., benzene) have been measured in the vicinity of commercial passenger airports in several studies. All of the studies identified elevated levels of fuel-related compounds in the vicinity of, and sometimes downwind of, airports. Efforts such as encouraging the use of compressed natural gas in access vehicles and increased use of electric-powered vehicles are under way at some airports. No studies provide evidence that could separate airport-generated contamination from area vehicular sources, but a significant fraction of nearby vehicular traffic is generally attributable to the airports. Furthermore, health-risk estimates for some of the measured compounds are above US federal screening levels, so further investigation is clearly warranted to evaluate average concentrations over time, seasons, and weather conditions. Where risk evaluation has been done, the risks are above US federal benchmarks, but the estimates have been based on very conservative exposure assumptions (e.g., 70 years of exposure to the maximum concentration measured in any sample taken near a fence line) and limited short-term measurement data, such as data from one unusually high sample.

Significant efforts are under way by producers of aircraft engines, and by independent businesses, to develop higher-efficiency aircraft engines, as well as advanced engine control technology, in an effort to reduce fuel consumption. Increased fuel efficiency will be accompanied by a reduction in VOCs. Measures other than technological improvements, such as air traffic management systems, can also be used to reduce the quantity of pollutant gases. The Boeing Company, in association with several airlines, has been testing an air traffic management procedure called "tailored descent" to reduce fuel consumption. The procedure allows landing aircraft to use a continuous descent, rather than a series of level segments, when approaching an airport. Such systems are based on advanced technology for communication between the aircraft and the ground. Dramatic drops in fuel consumption (up to 39%) and an accompanying reduction in CO2 emissions have been reported.
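To make the fuel-to-CO2 link concrete, a minimal sketch follows. The 3.16 kg-CO2-per-kg-fuel factor is the standard stoichiometric value for kerosene jet fuel; the approach fuel burn is a hypothetical placeholder, not a figure from the studies cited here.

```python
# Why fuel savings translate directly into CO2 savings: burning kerosene
# jet fuel emits roughly 3.16 kg of CO2 per kg of fuel.
CO2_PER_KG_FUEL = 3.16

approach_fuel_kg = 1000.0   # hypothetical fuel burned in a conventional stepped descent
savings_fraction = 0.39     # upper-bound saving reported for tailored descents

fuel_saved = approach_fuel_kg * savings_fraction
print(f"Fuel saved: {fuel_saved:.0f} kg -> "
      f"CO2 avoided: {fuel_saved * CO2_PER_KG_FUEL:.0f} kg per approach")
```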

Predicting the Impact on Public Health

In a study of air quality and the public health effects of UK airports, Yim et al. examined PM2.5 to estimate the number of early deaths resulting from aircraft activity at the top 20 existing UK airports, based on 2005 data. They further projected estimates to 2030 for either expansion of Heathrow or its replacement by a new airport in the Thames estuary. Emissions were estimated for aircraft landing and takeoff, auxiliary power units, and ground support equipment for a variety of scenarios.


Complex chemical transport and dispersion models were used to calculate PM2.5 concentrations for different regions, which were then combined with population density data to estimate the population exposure attributable to aviation. They estimate 110 (90% CI 72–160) early deaths per year due to 2005 airport emissions, roughly 45 times fewer than attributed to UK road transport. The authors project that "even if capacity is constrained, … the health impacts of UK airports increases by 170% in 2030 due to an increasing and aging population, increasing emissions, and a changing atmosphere." They further estimate that applying the maximum of mitigation measures, including desulfurization of fuel, would reduce attributable early deaths by up to 65%.

Levy et al. also attempted to project the change in the health impact of aviation, in this case from 2005 to 2025 in the United States. Their modeling exercise projected simple changes to input parameters, resulting in what they refer to as a 'what if' projection. They focused on PM2.5 and particle precursors from aircraft, plus changes in background pollution and in the size and health of the population. No allowance was made for technology changes or mitigation efforts, and meteorological inputs were based on 2005 conditions. Several scenarios varied baseline mortality rates and risk coefficients. Emissions modeling was based on results from 99 US airports that account for over 94% of passenger enplanements and 82%–95% of total continental commercial aircraft emissions. Emissions were evaluated for one day in 2004 with relatively heavy traffic and mild weather and were assumed to remain the same in 2005. Activity was scaled up to 2025 using growth projections for the individual airports. Nonaviation emissions, used to estimate background pollutant concentrations, were also scaled from 2005 but included projected growth and decreases due to controls already "on the books." The estimated increases for 2025 varied among airports, mainly depending on projected growth, while projected nonaviation emissions declined. The resulting estimates are 460 premature deaths per year in 2025 as compared with 75 in 2005, a factor of 6.1 increase. They conclude that any goal of not increasing the adverse health impact of aviation will require emission reductions to offset other trends.
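The multiplicative 'what if' logic of such projections can be sketched as follows; the individual growth factors below are hypothetical placeholders chosen only to reproduce the reported 6.1-fold increase, not values from Levy et al.

```python
# A minimal sketch of a multiplicative 'what if' projection: hold technology
# and meteorology fixed, scale activity and population, and see how
# attributable deaths move. Factor values are illustrative, not the study's.

def project_deaths(base_deaths: float, emissions_growth: float,
                   population_growth: float, background_change: float) -> float:
    # Each factor multiplies the baseline; no mitigation or new technology assumed.
    return base_deaths * emissions_growth * population_growth * background_change

deaths_2005 = 75  # reported baseline
deaths_2025 = project_deaths(deaths_2005, emissions_growth=3.4,
                             population_growth=1.2, background_change=1.5)
print(f"{deaths_2025:.0f} deaths, a {deaths_2025 / deaths_2005:.1f}x increase")
```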

Airport-Related Noise

The Mandate of the Noise Control Act of 1972

With the passage of the Noise Control Act of 1972, the Office of Noise Abatement and Control (ONAC) was established in the EPA to carry out the mandate of this act. Whereas the Occupational Safety and Health Administration regulates noise as it affects the safety of workers and the Department of Housing and Urban Development sets some noise standard regulations, ONAC was designed to oversee the noises that affect the quality of life of American citizens as they go about their daily activities. To this end, ONAC established noise emission standards for several categories of transportation and construction equipment, required noise emission limits on certain household products, and offered technical assistance to states regarding the establishment of statewide noise regulations. ONAC and EPA were highly regarded for their excellent educational publications on the hazardous effects of noise on human health and well-being and on ways to abate noise. EPA's Public Education and Information Manual for Noise (June 1980) included The Quiet School Program, aimed at educating school children on the dangers of noise as well as teaching them to "… modify noisy behavior and begin to develop a noise ethic for teens and preteens as a means to promote self-initiated, individual and group actions to reduce noise."

EPA was especially cognizant of the adverse impacts of aircraft noise, and its special report entitled Noise: A Challenge to Cities (May 1978) opened with an illustration of a large airplane flying low over the rooftops of homes in a residential community, underscoring the fact that aircraft noise intrudes on the peace and quiet of community residents. This publication contains a discussion of aircraft and airport noise and bemoans the fact that the FAA had chosen not to implement EPA's recommendations regarding aviation noise standards. However, it goes on to state optimistically that "… excessive noise caused by airplanes and airports can be reduced" and urges the public and public officials to work for quieter planes and for changes in operations that would lessen the noise from nearby airports. This publication also stressed that in US parks and forests, quiet is often shattered by airplane noise. It was not the only one in which aircraft noise was identified as a source of noise: in the EPA and ONAC publication Is Quiet Possible at the Dudley Home (December 1978), an airplane is seen flying over a house. More to the point with respect to abating aviation noise was EPA's publication of Aviation Noise: Let's Get on with the Job (April 1976), a reprint of remarks made by Russell E. Train, then the US EPA administrator, at a 1976 conference on noise. In his remarks, Mr. Train acknowledged the physical, mental, and emotional distress that residents living near airports must endure because of the intrusive noise. He also spoke of the "utter hopelessness and helplessness that overwhelms them" when their pleas to lessen the noise were ignored. He stated: "We need a national air transportation system which is healthy as well as safe. The evidence is overwhelming that, unless we make that system quieter, both human health and the financial health of the industry will continue to suffer.
We need no miracles to achieve that kind of system." He concluded that this country knows how to quiet its aviation noise but lacks the will to do so, and then urged: "Let's get on with the job." The air transportation industry grew by leaps and bounds during the next 40 years, despite the temporary slowdown after September 11, 2001, but, unfortunately, residents on the ground exposed to aviation noise continued to complain about it.

The Adverse Impacts of Noise on Health and Well-Being

EPA and ONAC published Noise: A Health Problem (August 1978), which called noise "America's most widespread nuisance" but, more than that, identified noise as a health hazard.


Although this publication noted that additional studies were necessary to clarify noise's role as a health hazard, its authors believed there was already enough evidence to indicate that the danger of noise is real. The booklet went on to describe the ways that noise endangers mental and physical health, including its effects on hearing loss, heart disease, sleep disruption, and mental and social well-being. The booklet's concluding statement by former Surgeon General Dr. William H. Stewart summed up the EPA's position on noise effects: "Calling noise a nuisance is like calling smog an inconvenience. Noise must be considered a hazard to the health of people everywhere."

In understanding the effects of noise, one must differentiate noise from sound. Sound begins with a vibrating object that causes the movement of air molecules, which in turn creates alternating bands of compression and expansion of the air. When these vibrations strike an individual's eardrum, they move to the mechanisms of the middle ear, which then carry the vibrations to the hair cells of the inner ear. Hair cells respond to the pattern of these vibrations, converting them to a code that is transmitted to the brain. The brain decodes the messages conveyed by these transmissions, giving information about the sound, essentially its frequency and intensity, but the brain also makes an emotional evaluation of the sounds heard. The brain decides whether a sound is pleasant or unpleasant, wanted or unwanted; sounds the brain deems unpleasant, unwanted, and intrusive are generally identified as noise.

It is the intensity of a sound that gives one the sense of loudness, recognizing that frequency also contributes, with higher-frequency sounds perceived as louder. The intensity, volume, or loudness of sound is measured on a modified decibel scale, known as the dBA scale, that allows for the effect of frequency. The dBA scale is not linear but logarithmic, with an increase of 10 dB indicating a sound that is perceived as approximately twice as loud. The scale ranges from a low point of zero to over 170 dBA. Whispers measure approximately 20 dBA, a quiet home between 20 and 30 dBA, conversation approximately 50–60 dBA, household appliances approximately 60–86 dBA, subway station sounds can exceed 90 dBA, disco music is approximately 120 dBA, and jet takeoffs exceed 140 dBA.

Sounds need not be loud to be identified as noise: whispers in a movie theater can disturb nearby attendees, and a dripping faucet or continuous aircraft takeoffs may prevent someone from falling asleep. Conversely, loud sounds such as disco music and blasting car stereos may be judged pleasant by listeners, but they can still damage the listeners' ears. Hearing loss is generally accepted as resulting from repeated exposure to very loud sounds over time. Individuals who work in noisy environments such as night clubs, bars, discos, and video arcades are at risk for hearing loss and should be warned to protect their ears, as should entertainers who play in loud pop/rock bands. Yet a single exposure to a very loud sound, such as an explosion or a military weapon, can also damage hearing. It should also be noted that hearing loss interferes with communication, can impede performance in the workplace, and can make people reluctant to enter into new social relationships.
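As a quick numerical illustration of the logarithmic dBA scale described above, using the rule of thumb that each 10 dB increase is perceived as roughly a doubling of loudness:

```python
# Perceived loudness under the "10 dB = twice as loud" rule of thumb.
def loudness_ratio(db_increase: float) -> float:
    # Each 10 dB step doubles perceived loudness, so the ratio is 2^(dB/10).
    return 2 ** (db_increase / 10.0)

# A 140 dBA jet takeoff versus a 60 dBA conversation: an 80 dB difference.
print(f"Perceived as ~{loudness_ratio(140 - 60):.0f}x louder")  # ~256x
```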
Noisy urban environments may reduce one's hearing acuity over time, but in the short term such environments can interfere with conversations and social interactions and prevent people from hearing warnings or danger signals. For residents exposed to loud overhead jets or excessive traffic because of their proximity to airports, hearing loss has not been identified as the major physical hazard. Rather, the focus has been on the indirect effects of noise on physical and mental well-being. The sounds emanating from overhead aircraft or from highway traffic to and from an airport can be identified as noise because they are intrusive, bothersome, and unpleasant. Residents react to intrusive noises psychologically by becoming angry, unhappy, and disturbed, and, in turn, these feelings can bring about a complex set of physiological reactions such as an increase in blood pressure, a change in heart rhythm, or excessive secretion of certain hormones. This complex set of physiological reactions is commonly called stress. Additionally, noise intrudes on sleep patterns, which can lead to ongoing fatigue. Should the stress continue as the noise intrudes on one's activities and sleep, the individual may then experience ill effects such as cardiovascular and circulatory problems.

The World Health Organization has stated that noise affects quality of life: even if there are no obvious physical symptoms from noise exposure, an individual living near a noisy source experiences a poorer quality of life. In other words, if noise from an airport intrudes on one's daily activities, for example, conversations with others in the home or on the telephone, resting in the backyard or on the patio, watching television, or reading, then that individual is not "living a healthy lifestyle." Being healthy is not merely the absence of identifiable symptoms. Furthermore, failure to enjoy one's living quarters affects one's mental state, as people grow increasingly annoyed, angry, frustrated, and unhappy with the continuous noise to which they are exposed.

There is another reaction to noise that worsens the physical and mental health of those living with it, namely "learned helplessness," the feeling that no one is listening to complaints about the noise or cares about the resulting unhappiness and stress. Community groups in both the United States and Europe, for example, New Jersey Citizens Against Aircraft Noise (NJCAAN) in the United States and the Heathrow Association for the Control of Aircraft Noise (HACAN) in the United Kingdom, have reported that their complaints about airport-related noise too often "fall on deaf ears." Fighting back can alleviate feelings of helplessness, and the vigorous battle of the groups opposed to Heathrow's expansion is one such example. The anti-Heathrow expansionists may not win their battle, but at least they have taken greater control of their lives.

The American Federal Interagency Committee on Aviation Noise (FICAN, 2000), after reviewing more than 20 studies, including those examining the impacts of airport-related noise, concluded that noise negatively affects children's reading, language, and memory skills. More recently, European studies focusing on the effects of aircraft noise on children found that aircraft noise interfered with reading comprehension and cognitive development.
Too many children around the globe attend schools or live in homes exposed to airport-related noise, and such exposure has been linked to deficits in children's learning and achievement.


More than 40 years ago, the US Federal Government recognized the dangers of noise and supported research to define more clearly the relationship between noise and mental and physical health. Today, studies have examined the health effects of noise on residents living near railroads, highways, and airports, and the growing body of evidence appears to support the former Surgeon General of the United States, who in 1978 declared noise a health hazard. The World Health Organization's recently released Environmental Noise Guidelines (2018) recognize that noise has a negative impact on health and make recommendations regarding sound levels from rail, road, and overhead aircraft. One can only hope that noise policies will follow the overwhelming research linking noise to adverse health impacts (Bronzaft, 2017).

Air Travel Progressed But Noise Abatement Lagged Behind

Despite the stronger data supporting the adverse effects of airport-related noise, noise abatement did not accompany the advances in air travel over the past 40 years. Administrator Train's plea to get on with the job was largely ignored in the ensuing years. In fact, Professor Sidney A. Shapiro, a law professor, in his 1991 report labeled the EPA and Congress irresponsible in carrying out the goals of the Noise Control Act. More specifically with respect to aircraft and airport noise, Professor Shapiro believed that the FAA would not relinquish any control over aircraft to the EPA and that the FAA would be supported in its position by the airlines and the air travel industry. Yet the major reason noise abatement lagged at the federal level was President Ronald Reagan's "defunding" of ONAC with the approval of Congress. In keeping with Reagan's philosophy that noise pollution is a local issue best dealt with at the local level, the EPA and ONAC got out of the noise business in 1982. The FAA was now largely in control of aviation noise and noise abatement, and many citizens and residents believed that noise was not a primary concern of the FAA; expansion of air travel was its primary goal. Efforts have been, and still are, under way by members of Congress from New York to get ONAC and the EPA "back in business."

Rapid growth in air travel in the 1980s and 1990s led to the expansion of airports and an increase in flights. Thus, residents living in communities adjacent to airports received increased noise from overhead jets as well as from traffic to and from the airport. It is better to speak of airport-related noise rather than airport or aircraft noise, because this includes the noise from cars and trucks traveling to the airports as well as the car alarms that go off when planes fly low over parking lots. With the increased noise came a concomitant increase in complaints from residents. Residents protested to the airports as well as to their public officials, and community meetings among interested parties often became loud and cantankerous. Community residents in the United States formed organizations to try to deal with the noise from nearby airports, and with the growth of the Internet, communities from around the world discovered that the concerns of citizens on the ground were often not responded to appropriately (www.queensquietskies.org; www.hacan.org).

The airlines have argued that aircraft noise has decreased in the past 40 years, especially with the removal of the noisiest jets and the introduction of quieter Stage 3 aircraft. However, the increase in the total number of flights has more than made up for the removal of the noisier jets. Furthermore, the FAA tends to measure average noise levels in neighborhoods near airports, but residents in these neighborhoods are exposed to single-event noises, and the increase in air traffic has brought louder single-event noises. Rising fuel prices and demands to emit fewer harmful gases have in recent years sped up the design of quieter aircraft.
Pratt & Whitney has designed an engine that will lower both fuel consumption and noise. Emirates Airlines, when it introduced its new A380, noted that the aircraft would use fuel more economically, be quieter within, and generate far less noise on takeoff compared with other planes. Boeing's plans for its newer airplanes likewise indicated that they would consume less fuel, emit fewer harmful gases, and be quieter. When American Airlines announced in 2009 its effort to modernize its fleet by adding Boeing 737-800s, it added that it would take approximately 10 years to phase out its McDonnell Douglas MD-80s, "a reliable but noisy aircraft that gulps 35 percent more fuel than the 737-800."

Another way to "quiet the skies" above residents living near airports is to redirect the flow of air traffic so that planes are not routed over heavily congested communities. The proposal initiated by the FAA, Overview of NY/NJ/PHL Airspace Redesign Project (1999), to redesign the airspace in this northeastern region to improve traffic flow included community noise concerns when the plans were first issued. However, in the Airspace Redesign Project document issued years later, the noise consideration was dropped, and the goals of the project centered on more efficient airspace, safety, and on-time performance. The community outcry was vehement when the plan began to be implemented in 2008 and communities experienced increases in overhead noise. These communities then challenged the FAA changes in court. Senators Christopher Dodd and Arlen Specter, similarly outraged by the FAA's actions, supported the court challenges, arguing that the FAA had not made a serious effort to alleviate the effects of the increased noise pollution. The parties to the lawsuit, whose case was argued in the US Court of Appeals for the District of Columbia on 11 May 2009, as well as the Senators, did not disagree on the need to redesign the airspace; rather, they argued that the FAA should not have dropped consideration of noise impacts from its redesign plans and did not appropriately examine the environmental impacts. The US Government Accountability Office was asked to investigate the manner in which the FAA conducted the Airspace Redesign Project and concluded in its 2008 report that the FAA had not listed noise abatement as a goal. The report notes, however, that in the future the FAA should employ noise assessment techniques that can better measure the impacts of noise on residents.


Several years ago, when Honeywell announced that it was installing its SmartPath Precision Landing System, which supports precision approaches and landings, at Bremen Airport in Germany, it forecast a system that should allow airports to reduce noise in surrounding communities, as well as save fuel and lower emissions. The system has since been installed at other airports in Europe and the United States.

Beyond the Major Airlines

The growth of air transportation over the past 40 years also saw the introduction of smaller private jets and the expansion of smaller airports near major cities. The corporate leaders, entertainers, and public officials who use private aircraft often then fly into nearby cities via helicopter, another source of noise for community residents. Community residents near these smaller airports are now battling the expansion of these facilities, and noise is a major source of their complaints (http://www.aviationwatch.org). Air travel is a source not just of noise but of air pollution as well, and many users of private aircraft speak to the importance of a greener, healthier community yet fail to recognize the adverse environmental impacts of their own private aircraft.

Aircraft Cabin Air Quality

In 2000, a committee of the National Research Council, after evaluating the passenger cabin environment in commercial aircraft with regard to systems, exposures, and health considerations, concluded that the cabin environment is currently regulated fairly adequately. Contaminant exposures do occur, including odors and gases emitted by passengers, ozone, organic compounds, allergens, irritants, and toxicants, but the concentrations under routine conditions have not been well characterized. The committee also concluded that the health complaints of passengers and crew are so broad and nonspecific that it is difficult to define precise illnesses, and questions remain about the causes of the symptoms. The report contains 10 specific recommendations, including improved methods to ensure compliance with current US FAA standards for specific air contaminants, increased efforts to provide information to passengers, crew, and health professionals, removal of passengers from aircraft within 30 min after a ventilation failure, and the establishment of a research program to answer six high-priority questions concerning the status and control of cabin air quality.

In flight, fresh cabin ventilation air is taken in from the engines, and automatic systems adjust humidity, temperature, and pressure. A recent news report cited certain "low probability" events involving engine oil fumes that contaminate intake air; although a very small fraction of flights are reportedly affected, the total number could be significant. Fresh air intake is typically mixed with an equal amount of cabin air that is recirculated through a high-efficiency filter. The airflow is relatively laminar, with intake overhead and outflow near the floor. Air reportedly does not mix along the length of the cabin, and there is little mixing between adjacent rows. When the aircraft is on the ground, auxiliary units provide fresh air. Thus, under normal operating conditions, airborne infectious agents shed by a passenger will not spread throughout the cabin but may spread to passengers seated in the same, or an adjacent, row. In the case of a nonroutine event that disrupts the air circulation system, transmission of an infectious agent may be possible. An extensive evaluation of investigations into disease transmission on aircraft by Mangili and Gendreau in 2005 concluded that no valid peer-reviewed report has linked cabin ventilation rates and air quality to health risks beyond those observed in other modes of transportation or in office environments. Unfortunately, a concerned individual cannot remove him- or herself from an aircraft cabin as readily as one can leave an ordinary indoor environment of concern. Transmission by means other than the flow of contaminated air is also possible: direct spread of droplets from a cough or sneeze to nearby passengers, food or water contamination, and spread of disease by insects or other vermin. The magnitude of the risk is unclear, but very few incidents have been observed. A few cases of transmission of tuberculosis have been reported, as was one incident in which influenza spread through a cabin with an inoperative ventilation system.

Other Issues

Physical Agents

Other physical agents such as temperature, barometric pressure, and relative humidity are normally controlled in airline cabins and should not result in adverse health effects, except perhaps in the event of an accident or serious malfunction. In flight, exposure to ionizing radiation is somewhat increased because cosmic radiation increases with altitude. Depending on the altitude and duration, a flight can add a fraction of a percent of the average annual radiation dose people receive from natural background. Since over 4 billion air passengers per year were reported for 2017, the overall world population dose is large, but the increased risk to any individual is very small. For pilots and flight attendants who fly often, radiation doses are considered on the basis of occupational exposure limits.
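A back-of-envelope illustration of the in-flight dose claim, assuming rounded literature values (a cruise-altitude dose rate of roughly 5 µSv per hour and a worldwide average natural background of roughly 2.4 mSv per year) that do not come from this article:

```python
# Illustrative in-flight cosmic radiation dose for a short-haul flight,
# using assumed round-number literature values (not figures from this article).
dose_rate_usv_per_h = 5.0       # assumed dose rate at cruise altitude
flight_hours = 2.0              # a short-haul flight
annual_background_usv = 2400.0  # assumed worldwide average natural background

flight_dose = dose_rate_usv_per_h * flight_hours
print(f"{flight_dose:.0f} uSv per flight = "
      f"{100 * flight_dose / annual_background_usv:.1f}% of annual background")
# ~10 uSv, about 0.4% of annual background: a fraction of a percent, as stated.
```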

Airport Runoff

As winter air traffic has increased, so has concern over contamination of groundwater by runoff from airport surfaces of the chemicals used to deice aircraft and to assist with snow and ice removal on runways and other surfaces. The chemicals used are frequently freezing-point depressants such as potassium acetate and sodium formate. Potassium acetate became the most frequently used freezing-point depressant during the 1990s, replacing the formerly used urea.


These materials can increase the biological oxygen demand in aquatic systems. The water-based solutions of these chemicals also contain additives such as rust inhibitors. Deicer runoff management systems are in place at some airports, and fairly rapid chemical degradation of the deicers helps, but total containment of runoff from all surfaces is extremely difficult. Researchers have reported occasional levels of contaminants from deicers that exceed benchmark levels of concern for aquatic toxicity. These materials undoubtedly contribute to some level of environmental degradation and must be considered when evaluating the impact of airports on human and environmental health. In addition, road salt runoff from airports adds to the general urban road salt runoff problem, increasing the salt content of water supplies.

Summary and Conclusions

A consensus has emerged that although air pollution emissions from most sources are decreasing, airport emissions are increasing, and some regulation, such as a cap on airport emissions, will be needed to control their contribution to atmospheric pollution. Growth forecasts and modeling studies suggest that air emissions will continue to increase. Estimates of the contribution of individual pollutants from airports to the local ambient mix range from approximately 0.1% to as high as 9%. The literature from several European airports (Netherlands, United Kingdom, Germany) indicates that road traffic is generally considered the dominant contributor to air pollution in the vicinity of airports. Government intervention can help reduce the number of flights and the need for airport expansion by promoting transport by rail, especially as a replacement for short-haul flights.

Airport-related noise, and noise in general, has not become part of the larger worldwide environmental movement, and the major burden of educating the public and legislators about the dangers of noise has been taken on by antinoise groups. Legislators have been responding to citizen calls for quieter environments by introducing antinoise legislation in a number of states. In combating the FAA Airspace Redesign Project for the northeast, communities and legislators have joined forces. The fight against the expansion of Heathrow Airport, which would bring increased noise to neighboring communities, has similarly brought together anti-airport-noise groups and public officials. Hopefully, these battles forecast a future in which proponents of airport expansion and airspace redesign will acknowledge the potential harm of proposed changes to nearby residents and factor noise reduction into their plans. Additionally, calls for alternative modes of transportation, such as rail, especially for short trips, will lessen the need to expand airports. Kevin C. Coates, in his 2006 article Shrinking America's Energy Attitude, referenced a Japanese source stating that Japan's high-speed railway carries 281 million passengers and that carrying the same number of people by air would require 1900 flights per day and four large airports. Had ONAC been functioning and distributing materials on noise, the FAA might have been more cautious in assessing the impacts of noise in its plans and redesigns. Sadly, the Presidents following Ronald Reagan have not seen fit to reinstate this office, and it is doubtful, despite the efforts of several members of Congress, that we will see it functioning in the near future.

In conclusion, airports are sources of noise and air contaminants, and both the noise and the compounds emitted can adversely affect human health. People, especially those living in the vicinity of airports, are not able to avoid exposure to these stressors. Mitigation efforts and technological advances can reduce the impact, and governmental and international action can encourage adoption of new technologies that will improve the situation.

See also: Air Pollution and Lung Cancer Risks; Air Quality Legislation; Combined Transportation Noise Exposure in Residential Areas; Global Climate Changes and International Trade and Travel: Effects on Human Health Outcomes; Intercontinental Air Pollution Transport: Links to Environmental Health; Long-Range Transport and Deposition of Air Pollution; Mental Health Effects of Noise; Noise and Health: Annoyance and Interference.

Further Reading

Arthur D. Little Limited, 2000. Study into the potential impact of changes in technology on the development of air transport in the UK. Final report to the Department of the Environment, Transport and the Regions, November 2000. DETR Contract No. PPAD 9/91/14. Arthur D. Little Limited, Science Park, Milton Road, Cambridge CB4 0DW. Reference 71861. C6237-FR-001, Issue 1.0.
Bat-Chava, Y., Schur, K., 2000. Longitudinal trends in hearing loss: Nineteen years of public screenings. Paper presented at the 128th annual meeting of the American Public Health Association, Boston.
Bronzaft, A.L., 1998. A voice to end the government's silence on noise. Hearing Rehabilitation Quarterly 23, 6–12, 29.
Bronzaft, A.L., 2017. Impact of noise on health: The divide between policy and science. Open Journal of Social Sciences 5, 108–120. https://doi.org/10.4236/jss.2017.55008.
Bronzaft, A.L., Ahern, K.D., McGinn, R., O'Connor, J., Savino, B., 1998. Aircraft noise: A potential health hazard. Environment and Behavior 30, 101–113.
Carslaw, D.C., Beevers, S.D., Ropkins, K., Bell, M.C., 2006. Detecting and quantifying aircraft and other on-airport contributions to ambient nitrogen oxides in the vicinity of a large international airport. Atmospheric Environment 40 (28), 5424–5434.
Carslaw, D.C., Williams, M.L., Barratt, B., 2012. A short-term intervention study – Impact of airport closure due to the eruption of Eyjafjallajökull on near-field air quality. Atmospheric Environment 54, 328–336.
Chen, Y.C., Borken-Kleefeld, J., 2016. NOx emissions from diesel passenger cars worsen with age. Environmental Science & Technology 50, 3327–3332.
Coates, K.C., 2006. Shrinking America's Energy Attitude. http://www.areco.org (accessed December 2009).
Cohen, B.S., Bronzaft, A.L., Heikkinen, M., Goodman, J., Nadas, A., 2008. Airport-related air pollution and noise. Journal of Occupational and Environmental Hygiene 5, 119–129.


Corsi, S.R., Geis, S.W., Bowman, G., Failey, G.G., Rutter, T.D., 2009. Aquatic toxicity of airfield-pavement deicer materials and implications for airport runoff. Environmental Science and Technology 43 (1), 40–46.
ENVIRON, 2000. City of Park Ridge, Illinois: Preliminary study and analysis of toxic air pollutant emissions from O'Hare International Airport and the resulting health risks created by these toxic emissions in surrounding residential communities. Volume IV: Preliminary risk evaluation of Mostardi-Platt Park Ridge project data monitoring adjacent to O'Hare airport. ENVIRON International Corp., Arlington, VA/Princeton, NJ, August 2000. Project # 02-8733A.
ENVIRON, 2001. Screening air quality evaluation of Teterboro Airport, Teterboro, New Jersey. Prepared for the Coalition for Public Health and Safety, Moonachie, NJ. ENVIRON International Corp., Groton, MA/Princeton, NJ, October 12, 2001.
Federal Aviation Administration, 1999. Overview of NY/NJ/PHL Airspace Redesign Project. Briefing to the Manhattan Borough President's Helicopter Task Force. Washington, DC.
Frey, R., Carlson, J., Clayton, J., 2017. Effects of noise pollution from F-35A aircraft at Gowen Field Air Base. 2017 Undergraduate Research and Scholarship Conference. https://scholarworks.boisestate.edu/sps_17/12.
GAO, 2000. Aviation and the environment: Airport operations and future growth present environmental challenges. Report to the ranking Democratic member, Committee on Transportation and Infrastructure, House of Representatives. GAO/RCED-00-153, August 2000.
Haines, M.M., Stansfeld, S.A., Job, R.F.S., Berglund, B., Head, J., 2001. Chronic aircraft noise exposure, stress responses, mental health and cognitive performance in school children. Psychological Medicine 31, 265–277.
HCN, 1999. Public health impact of large airports. Health Council of the Netherlands. Report to the Ministers of Health, Welfare and Sport; of Housing, Spatial Planning and the Environment; and of Transport, Public Works and Water Management. No. 1999/14E, The Hague, 2 September 1999.
Hilkevitch, J., 2009. American Airlines adds 737-800 in effort to modernize fleet; move will also lead to eventual retirement of American's McDonnell Douglas MD-80s. Chicago Tribune.
Holzman, D., 1997. Plane pollution. Environmental Health Perspectives 105, 105–112.
IEPA, 2002. Chicago O'Hare Airport air toxic monitoring program, June–December 2000. Final report, Illinois Environmental Protection Agency, Bureau of Air, May 2002.
Kessler, R., 2013. Sunset for leaded aviation gasoline? Environmental Health Perspectives 121 (2), A54–A57.
Lawyer, G., 2016. Measuring the potential of individual airports for pandemic spread over the world airline network. BMC Infectious Diseases 16, 70.
Levy, J.I., Woody, M., Baek, B.H., Shankar, U., Arunachalam, S., 2012. Current and future particulate-matter-related mortality risks in the United States from aviation emissions during landing and takeoff. Risk Analysis 32 (2), 237–249.
Mangili, A., Gendreau, M.A., 2005. Transmission of infectious diseases during commercial air travel. Lancet 365, 989–996.
Mangili, A., Vindenes, T., Gendreau, M., 2015. Infectious risks of air travel. Microbiology Spectrum 3 (5), IOL5-0009-2015.
McCulley, Frick, and Gilman, 1995. Air quality survey of Seattle-Tacoma International Airport. Prepared for the Port of Seattle Aviation Planning Department. Atmospheric Sciences Group, McCulley, Frick, and Gilman, Inc., 3400 188th Street SW, Suite 400, Lynnwood, WA 98037, January 1995.
Miranda, M.L., Anthopolos, R., Hastings, D., 2011. A geospatial analysis of the effects of aviation gasoline on childhood blood lead levels. Environmental Health Perspectives 119 (10), 1513–1516.
Møller, K.L., Brauer, C., Mikkelsen, S., et al., 2017. Copenhagen Airport cohort: Air pollution, manual baggage handling and health. BMJ Open 7, e012651. https://doi.org/10.1136/bmjopen-2016-012651.
Mostardi-Platt, 2000. City of Park Ridge, Illinois: Preliminary study and analysis of toxic air pollutant emissions from O'Hare International Airport and the resulting health risks created by these toxic emissions in surrounding residential communities. Volume III: Preliminary downwind site sampling investigation for air toxic emissions from O'Hare International Airport. Mostardi-Platt Associates, Inc., 945 Oaklawn Avenue, Elmhurst, IL 60126, August 2000.
National Research Council, 2002. The Airliner Cabin Environment and the Health of Passengers and Crew. National Academy Press, Washington, DC.
NESCAUM, 2003. Controlling airport-related air pollution. Northeast States for Coordinated Air Use Management and the Center for Clean Air Policy.
PANYNJ, 2001. Ensuring a Cleaner Environment (brochure). Port Authority of New York and New Jersey.
Royal Commission on Environmental Pollution, 2002. The Environmental Effects of Civil Aircraft in Flight. Royal Commission on Environmental Pollution, Westminster, London.
SCAQMD, 2000. South Coast Air Quality Management District air monitoring study in the area of Los Angeles International Airport, Part I. Written by Rudy Eden, reviewed by Melvin D. Zeldin, April 2000.
Sen, O., 1997. The effect of aircraft engine exhaust gases on the environment. International Journal of Environment and Pollution 8, 148–157.
Shirmohammadi, F., Sowlat, M.H., Hasheminassab, S., Saffari, A., Ban-Weiss, G., Sioutas, C., 2017. Emission rates of particle number, mass and black carbon by the Los Angeles International Airport (LAX) and its impact on air quality in Los Angeles. Atmospheric Environment 151, 82–93.
Stansfeld, S.A., Berglund, B., Clark, C., et al., 2005. Aircraft and road traffic noise and children's cognition and health: A cross-national study. Lancet 365, 1942–1949.
Stenzel, J., 1996. Flying off Course: Environmental Impacts of America's Airports. Natural Resources Defense Council, New York.
Stewart, J., Bronzaft, A.L., McManus, F., Rodgers, R., Weedon, V., 2011. Why Noise Matters. Earthscan, London/New York.
Suppan, P., Graf, J., 2000. The impact of an airport on regional air quality at Munich, Germany. International Journal of Environment and Pollution 14, 375–381.
US DOT, 2001. U.S. Department of Transportation, Report to the U.S. Congress on Environmental Review of Airport Improvement Projects, May 2001.
US EPA, 1996. Air Quality Criteria for Ozone and Related Photochemical Oxidants. EPA/600/P-93/004a-cF, July 1996.
US EPA, 1999. Evaluation of air pollutant emissions from subsonic commercial jet aircraft. U.S. EPA, Air and Radiation, EPA420-R-99-013.
US Government Accountability Office, 2008. The FAA Airspace Redesign: An Analysis of the New York/New Jersey/Philadelphia Project. Report GAO-08-786.
World Health Organization, 2018. Environmental noise guidelines for the European region. http://www.euro.who.int/en/publications/abstracts/environmental-noise-guidelines-for-the-european-region-2018.
Wyatt, T.R., 2011. Balancing airport capacity requirements with environmental concerns: Legal challenges to airport expansion. https://scholar.smu.edu.
Yang, X., et al., 2018. Characterization of aircraft emissions and air quality impacts of an international airport. Journal of Environmental Sciences, in press.
Yim, S.H.L., Stettler, M.E.J., Barrett, S.R.H., 2013. Air quality and public health impacts of UK airports. Part II: Impacts and policy assessment. Atmospheric Environment 67, 184–192.

Relevant Websites
http://www.njcaan.org – NJCAAN, a volunteer citizens' group working to address airport and airplane noise pollution.
http://www.queensquietskies.org
http://www.nonoise.org
https://hacan.org.uk

Ambient Concentrations of Acrolein: Health Risks☆
TJ Woodruff, University of California, San Francisco, San Francisco, CA, United States
DA Axelrad, US Environmental Protection Agency, Washington, DC, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
GSH Glutathione
HAP Hazardous air pollutant
RfC Reference concentration
TAC Toxic air contaminant

Introduction

Acrolein, also known as acrylic aldehyde, acrylaldehyde, allyl aldehyde, ethylene aldehyde, 2-propenal, and prop-2-en-1-al, is a colorless or yellow liquid at room temperature with a disagreeable, choking odor. It vaporizes rapidly when heated. Acrolein is an aldehyde; other aldehydes include formaldehyde and acetaldehyde. The following text discusses the health risks from ambient air concentrations of acrolein. Acrolein is listed as a hazardous air pollutant (HAP) under the Clean Air Act and is designated as one of the 33 air toxics that present the greatest threat to public health in urban areas under the US EPA's urban air toxics strategy. It has also been identified by the State of California as one of five toxic air contaminants (TACs) that may cause infants and children to be especially susceptible to illness.

Air Concentrations and Sources of Exposure

Ambient Concentrations and Emissions

Acrolein is highly reactive in air, with a half-life of less than 1 day. The presence of acrolein in ambient air is attributable both to direct emissions of acrolein and to secondary formation resulting from the breakdown of other ambient pollutants, particularly 1,3-butadiene. Until recently, measurement of acrolein in air was considered difficult and was not conducted on a routine basis. New methods have been developed, and the US EPA reported summary data for over 1300 ambient acrolein measurements taken in 2006 at 59 sites in 38 urban and rural areas across the country (Table 1). The median level of these measurements was 0.62 µg m⁻³. No California sites are included in the US EPA data, but a separate summary of over 500 measurements taken at 17 California sites in 2006 found a median level of 1.15 µg m⁻³.

Table 1  Summary of 2006 US ambient air monitoring for acrolein (µg m⁻³)

                   Urban air toxics            California air toxics
                   monitoring program(a)       monitoring program(b)
Mean               0.94                        1.36
25th percentile    0.41                        –(c)
50th percentile    0.62                        1.15
75th percentile    1.08                        –
90th percentile    –                           2.30
Maximum            12.35                       13.80

(a) Based on 1329 samples taken at 59 sites in 38 urban and rural areas across the United States. No California sites were included in this data set. In all, 1048 samples had detectable levels of acrolein (method detection limit = 0.25 µg m⁻³).
(b) Based on 500 measurements taken at 17 California sites (method detection limit = 0.7 µg m⁻³).
(c) "–" indicates the statistic was not reported in the published data summary.

☆ Change History: September 2018. TJ Woodruff and DA Axelrad updated the Further Reading section. This is an update of T.J. Woodruff, D.A. Axelrad, Ambient Concentrations of Acrolein: Health Risks, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 803–809.


Table 2  Summary of 1999 US ambient acrolein concentrations (µg m⁻³) modeled at the census tract level(a)

                   National    California
Mean               0.11        0.22
25th percentile    0.03        0.09
50th percentile    0.08        0.16
75th percentile    0.14        0.33
90th percentile    0.26        0.49
95th percentile    0.41        0.56

(a) The US EPA dispersion model estimates annual average concentrations of hazardous air pollutants for each census tract in the United States. There are approximately 60,000 census tracts in the United States, typically with 4000–5000 residents each.

Acrolein was measured in ambient air at levels in excess of 10 µg m⁻³ at selected locations in 2006. More extensive estimates of ambient acrolein concentrations in the United States come from dispersion modeling conducted by the US EPA. The most current estimates available are for the year 1999 and put the median concentration in the United States at 0.08 µg m⁻³ (Table 2). Comparisons of the dispersion model estimates with monitoring data suggest that the model underestimates ambient acrolein levels; for example, in California the median modeled concentration is 0.16 µg m⁻³, whereas the median monitored concentration is 1.15 µg m⁻³. However, the dispersion model estimates remain important because of their broader coverage (all census tracts in the United States, compared with the limited number of monitoring sites), their indication that acrolein is ubiquitous in the ambient air of the United States in both urban (median = 0.094 µg m⁻³) and rural areas (median = 0.021 µg m⁻³), and the information they provide on sources of ambient acrolein. Dispersion modeling also projects that ambient acrolein levels will decline by approximately 40% from 1999 to 2030.

The US EPA's national inventory of acrolein emissions estimates that 58.4 million pounds of acrolein were emitted to the atmosphere in 1999: 72% from small stationary sources and fires (i.e., area sources), 25% from mobile sources, both onroad and nonroad, and 3% from major stationary sources. This profile of emission sources contrasts with the contributions to estimated ambient concentrations from the US EPA's dispersion model, which attributes approximately 72% of ambient acrolein to mobile sources (52% onroad, 20% nonroad), 26% to area sources, and the remainder to major sources. The mobile source contribution to ambient concentrations is much greater than the mobile source contribution to emissions because of the important role of secondary formation: a substantial portion of ambient acrolein is attributable to atmospheric breakdown of 1,3-butadiene, a hazardous air pollutant that largely originates from mobile source emissions. Given that the dispersion model concentration estimates appear to be underestimates, it is possible that emissions of acrolein and its precursor 1,3-butadiene are also underestimated; thus, the relative contributions of different source types may be over- or underestimated.
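The underestimation noted above can be quantified with a simple ratio of the medians reported in the text, noting that the modeled values are for 1999 and the monitored values for 2006:

```python
# Modeled versus monitored median acrolein concentrations for California,
# using the values reported in the text (ug/m3). Years differ (1999 model,
# 2006 monitoring), so this is only a rough indication of model bias.
modeled_median = 0.16
monitored_median = 1.15
print(f"Monitored median exceeds modeled median by ~{monitored_median / modeled_median:.0f}x")
```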

Indoor Air Sources and Concentrations

Acrolein may occur in indoor air due to penetration of acrolein from outdoor air, indoor combustion sources including tobacco smoke and wood stoves, and cooking of food. Lumber and other building materials may also emit acrolein into indoor air. The available information on levels of acrolein in homes is somewhat conflicting. A study of 234 homes with no smoking residents in three cities found that indoor sources made minimal contributions to indoor concentrations; the median indoor concentration in this study was 0.59 µg m⁻³, and the median outdoor concentration at the same locations was 0.46 µg m⁻³. Another study, conducted in nine homes with no smoking residents in three California counties, found that indoor acrolein concentrations were much greater than outdoor concentrations, with building materials and cooking identified as the main indoor sources. Indoor concentrations in that study ranged from 2.1 to 12.2 µg m⁻³ and were generally approximately 10 times the outdoor concentrations at the same locations.

Adverse Health Effects

Acute exposure to acrolein in studies with human subjects has resulted in eye, nose, and throat irritation. Eye and respiratory system irritation increased with increasing acrolein concentration; human subjects exposed to levels as low as 0.09 ppm (approximately 0.21 mg m⁻³) for 5 min complained of eye irritation. Acute exposure in animal studies has also produced respiratory effects, with higher doses leading to death; necropsies of rats that died found the deaths attributable to lung damage. Evaluation of rabbits surviving acute exposures has found damage to the lungs, including edema, damage to the bronchial lining of the large airways, and necrosis of the lung parenchyma. Studies of acute exposure in animals have also found alterations in respiratory function, such as decreased respiratory rate and increased expiratory flow resistance.


In addition, a study of acute exposures in guinea pigs found bronchial hyperresponsiveness following acrolein exposure, suggesting that asthmatics may be more predisposed to an asthma attack following acrolein exposure. Few animal studies evaluated other effects of acute exposure, such as eye irritation, although one study of baboons did note such effects.

A number of studies of subchronic exposures in animals (from a few days to 90 days, generally between 0.1 and 4 ppm) have likewise found increasing acrolein exposure associated with increasing severity of damage to the respiratory system. Dose-dependent relationships between subchronic acrolein exposure and metrics of reduced lung capacity have been found in rodents, including increases in measures of lung volume (residual volume, total lung capacity, and functional residual capacity), decreases in lung compliance, obstructive lesions in small and large airways, bronchopneumonia, epithelial necrosis of the peribronchiolar and bronchiolar regions, bronchiolar edema with macrophages, focal pulmonary edema, alveolitis, hyperplasia and metaplasia of the airway epithelium, and inflammatory alterations. Similar respiratory effects have been reported in mice, monkeys, guinea pigs, dogs, hamsters, and rabbits.

Lung function impairments are closely related to the manifestation of chronic respiratory disease. Studies showing increases in measures of residual volume and total lung capacity indicate increases in lung volumes with increasing acrolein exposure, which suggests that a reduced volume of air is expired. These findings are consistent with the lung function changes seen in obstructive lung diseases such as asthma, chronic bronchitis, and emphysema. Increased lung volume measures are also consistent with decreased lung compliance. One study reports an increase in lung collagen. Increased collagen deposition is part of the airway remodeling process found in chronic asthma; it leads to narrowing of the airways and potentially reduced airway compliance. It is plausible that acrolein exposure could affect collagen deposition, as exposure to other highly reactive airborne toxicants, specifically ozone, alters the expression of fibroblast growth factors.

Data from in vitro human and in vivo animal studies indicate that exposure to acrolein may exacerbate asthma. Asthma is a complex respiratory disease characterized by chronic inflammation of the bronchial tubes, causing swelling and narrowing of the airways. This can be caused by several major processes acting on the bronchi: airway inflammation, altered epithelial function, bronchospasm, and hyperreactivity (overreaction of the bronchi to factors that can precipitate asthma). In an in vitro human study, lung tissues from nonatopic, nonasthmatic patients undergoing lung cancer surgery were sensitized by incubation in sera from asthmatic patients. Sensitized lung tissue preexposed to acrolein exhibited an increased maximal contractile response to the antigen Dermatophagoides pteronyssinus, suggesting that acrolein exposure sensitizes the lung to antigens. A separate study found that acrolein increased mucus glycoprotein gene expression, which can result in mucus hypersecretion and lead to inflammatory airway disorders. Animal studies also find that acrolein exposure may exacerbate asthma. Studies in guinea pigs found an association between increasing acrolein exposure and increased bronchial responsiveness and increased sulfidopeptide leukotrienes (bronchoconstrictive lipid mediators thought to have an important role in asthma).
Studies in mice and rats have also found that acrolein exposure can increase mucous cell hyperplasia and metaplasia in the airway surface epithelium and airway lumen, along with increased mucin mRNA and mucin glycoproteins (as noted earlier, this can result in mucus hypersecretion, leading to inflammation). In general, acrolein has been shown to interact with glutathione (GSH), both by depleting GSH and by forming acrolein-GSH adducts (GSH protects cells by removing reactive metabolites that can damage them). Acrolein has also been shown to suppress host defense mechanisms and to elicit proinflammatory processes. These three mechanisms have been hypothesized to contribute to acrolein toxicity.
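Since the exposure levels above are quoted both in ppm and in mg m−3, it may help to make the standard conversion explicit: for an ideal gas at 25°C and 1 atm, the concentration in mg m−3 equals ppm times molecular weight divided by the molar volume of 24.45 L mol−1. A minimal sketch in Python (the molecular weight of acrolein, 56.06 g mol−1, and the molar volume are standard values; the function name and the worked example are ours):

# Convert a gas-phase concentration from ppm (v/v) to mg per cubic meter.
MOLAR_VOLUME_L = 24.45   # L/mol for an ideal gas at 25 C and 1 atm
MW_ACROLEIN = 56.06      # g/mol

def ppm_to_mg_m3(ppm: float, mw: float = MW_ACROLEIN) -> float:
    """ppm (v/v) -> mg/m3, assuming ideal-gas behavior at 25 C, 1 atm."""
    return ppm * mw / MOLAR_VOLUME_L

# The 5-min eye-irritation level cited above:
print(f"0.09 ppm acrolein = {ppm_to_mg_m3(0.09):.2f} mg/m3")  # ~0.21 mg/m3

The same relation, run in reverse, converts reported mass concentrations back to mixing ratios for comparison across studies.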

Limitations of the Toxicological and Human Studies

The available studies of acrolein have focused on mature adults or animals. Children can be more susceptible to air pollution because their lungs are still developing. They can also receive greater exposures than adults because of higher breathing rates per unit body weight, and because children tend to breathe more through their mouths, limiting nasal filtration. Studies of pollutants such as ozone suggest that lung development can be impaired by early-life exposures, which can make children more susceptible to the effects of air pollution later in life. Although there are no data on the effects of acrolein exposure during lung development, the developmental effects found for ozone, a similarly reactive air pollutant, together with acrolein's demonstrated ability to affect mature lungs, suggest concern for developmental exposure to acrolein.

A further limitation of the animal and human studies is that only exposures of 90 days or less have been evaluated, whereas exposure data for acrolein suggest that exposure is ubiquitous and chronic. Finally, exposure to acrolein does not occur in isolation. Assessments of air pollutants show ubiquitous exposure to multiple pollutants, including known respiratory irritants such as particulate matter, ozone, and other aldehydes such as formaldehyde and acetaldehyde. There have been a limited number of studies of the effects of air pollutant mixtures on respiratory function. A study in mice of coexposure to acrolein and carbon black (a component of particulate matter) found an impaired ability to clear subsequent infectious agents from the lung. Another study of exposure to a mixture of aldehydes, including acrolein, found a greater decrease in breathing frequency from the combined exposure than from the individual pollutants, although the response was less than dose-additive. These studies suggest that exposure to acrolein along with other air pollutants can enhance the risks observed in studies of acrolein alone.

Risks to the Public

Several studies have compared ambient concentrations of acrolein to the US EPA reference concentration (RfC). The RfC is defined as an estimate of continuous inhalation exposure to the human population, including sensitive subgroups, that is likely to be without appreciable risk of deleterious effects over a lifetime. The RfC for acrolein is 0.02 µg m−3. These analyses have found that estimated ambient acrolein concentrations exceeded the RfC in more than 90% of the 60,000 continental US census tracts in 1990 and 1996; the RfC was exceeded in more than 90% of urban census tracts in 1999. In addition, estimated acrolein concentrations in 1996 and 1999 exceeded 10 times the RfC in more than 10% of US census tracts. As noted earlier, the estimated ambient concentrations underestimate monitored levels. The median monitored concentration of acrolein in 2006 was approximately 30 times the RfC nationally and approximately 60 times the RfC in California. In the absence of further regulatory controls, the average ambient concentration of acrolein in the United States in 2030 is projected to be more than three times the RfC.

One study has developed estimates of the increased risk to the US population from ambient acrolein exposure using dose-response data from animals and statistical modeling techniques. The analysis is based on animal experiments showing subchronic acrolein exposure associated with metrics of decreased lung function; the dose-response data from the animal experiments were extrapolated to lower exposures and used to estimate risks to the human population at estimated ambient acrolein concentrations for 1999. The metrics used were specific compliance, which decreases as acrolein exposure increases, and the ratio of residual volume to total lung capacity, which increases as acrolein exposure increases, suggesting that a reduced volume of air is expired. These two measures were used as markers of lung function. The estimated population risks were higher for specific compliance, with a median estimated risk of 2.5 additional adverse outcomes per 1000 exposed people across the United States; the 5th percentile of risk was 0.28 per 1000 and the 95th percentile was 14 per 1000. The estimated risks for the other metric, the ratio of residual volume to total lung capacity, were in general about two orders of magnitude lower and approached zero around the median estimated ambient concentration of acrolein. The study found risks in urban areas to be about five times greater than in rural areas, owing to differences in ambient concentrations.

The study assumes that animals and humans respond similarly to equivalent acrolein exposures, that chronic and subchronic exposures have similar effects, and that risks at lower exposures are proportional to those at higher exposures. It does not account for the potentially increased susceptibility of children to acrolein or for concurrent exposure to other air pollutants that can damage the lung. In addition, ambient concentrations were underestimated, and no contribution from indoor sources was included. Although there are uncertainties in the method, this information, combined with previous studies showing ambient acrolein concentrations above the US EPA RfC, suggests that acrolein is likely to be a risk factor for respiratory disease in the public.
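The RfC comparisons above are, in effect, hazard-quotient screens: the ambient concentration divided by the RfC, with values above 1 flagging exposure beyond the level considered to be without appreciable risk. A minimal Python sketch (the RfC is the US EPA value cited above; the tract names and concentrations are purely illustrative, not data from any of the assessments discussed):

# Hazard-quotient screening: HQ = ambient concentration / reference concentration.
RFC_ACROLEIN_UG_M3 = 0.02   # US EPA reference concentration for acrolein, ug/m3

def hazard_quotient(ambient_ug_m3: float, rfc: float = RFC_ACROLEIN_UG_M3) -> float:
    return ambient_ug_m3 / rfc

# Hypothetical annual-average concentrations (ug/m3) for three census tracts:
tracts = {"tract A": 0.05, "tract B": 0.12, "tract C": 0.60}
for name, conc in tracts.items():
    hq = hazard_quotient(conc)
    print(f"{name}: HQ = {hq:.1f} ({'exceeds' if hq > 1 else 'below'} RfC)")

Note that a hazard quotient is a screening ratio, not a probability of harm; the population-risk modeling described above goes further by extrapolating animal dose-response curves to ambient exposure levels.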

See also: Air Quality Legislation; Biomass Burning and Regional Air Quality; Persistent Organohalogen Pollutants and Phthalates: Effects on Male Reproductive Function.

Further Reading

Agency for Toxic Substances and Disease Registry (ATSDR), 2007. Toxicological profile for acrolein. US Department of Health and Human Services, Public Health Service, Atlanta, GA.
Bateson, T.F., Schwartz, J., 2008. Children's response to air pollutants. Journal of Toxicology and Environmental Health, Part A 71 (3), 238–243.
California Air Resources Board, 2008. Annual statewide toxics summary: Acrolein. http://www.arb.ca.gov/adam/toxics/statepages/acrostate.html (accessed 15 July 2009).
California Environmental Protection Agency, 2001. Prioritization of toxic air contaminants-children's environmental protection act: Acrolein. http://www.oehha.ca.gov/air/toxic_contaminants/pdf_zip/acrolein_final.pdf (accessed 15 July 2009).
Cook, R., Strum, M., Touma, J.S., et al., 2007. Inhalation exposure and risk from mobile source air toxics in future years. Journal of Exposure Science & Environmental Epidemiology 17 (1), 95–105.
Costa, D.L., Kutzman, R.S., Lehmann, J.R., Drew, R.T., 1986. Altered lung function and structure in the rat after subchronic exposure to acrolein. The American Review of Respiratory Disease 133 (2), 286–291.
Dwivedi, M., et al., 2018. Inflammatory effects of acrolein, crotonaldehyde and hexanal vapors on human primary bronchial epithelial cells cultured at air-liquid interface. Toxicology in Vitro 46, 219–228.
Glasser, A.M., Katz, L., Pearson, J.L., et al., 2017. Overview of electronic nicotine delivery systems: A systematic review. American Journal of Preventive Medicine 52 (2), e33–e66.
Henning, R.J., Johnson, G.T., Coyle, J.P., et al., 2017. Cardiovascular Toxicology 17, 227. https://doi.org/10.1007/s12012-016-9396-5.
Kutzman, R.S., Wehner, R.W., Haber, S.B., 1984. Selected responses of hypertension-sensitive and resistant rats to inhaled acrolein. Toxicology 31 (1), 53–65.
Kutzman, R.S., Popenoe, E.A., Schmaeler, M., Drew, R.T., 1985. Changes in rat lung structure and composition as a result of subchronic exposure to acrolein. Toxicology 34 (2), 139–151.
Laugesen, M., 2008. Safety report on the Ruyan® e-cigarette cartridge and inhaled aerosol. http://www.healthnz.co.nz/RuyanCartridgeReport30-Oct-08.pdf (accessed 18 November 2017).
Liu, W., Zhang, J., Zhang, L., et al., 2006. Estimating contributions of indoor and outdoor sources to indoor carbonyl concentrations in three urban areas of the United States. Atmospheric Environment 40, 2202–2214.
Ratajczak, A., Feleszko, W., Smith, D.M., Goniewicz, M., 2018. How close are we to definitively identifying the respiratory health effects of e-cigarettes? Expert Review of Respiratory Medicine 12 (7), 549–556. https://doi.org/10.1080/17476348.2018.1483724.
Roux, E., Hyvelin, J.M., Savineau, J.P., Marthan, R., 1999. Human isolated airway contraction: Interaction between air pollutants and passive sensitization. American Journal of Respiratory and Critical Care Medicine 160 (2), 439–445.
Seaman, V.Y., Bennett, D.H., Cahill, T.M., 2007. Origin, occurrence, and source emission rate of acrolein in residential indoor air. Environmental Science & Technology 41 (20), 6940–6946.
US EPA, 2003. Toxicological review of acrolein. US Environmental Protection Agency, Washington, DC.
US EPA, 2007. 2006 Urban Air Toxics Monitoring Program (UATMP) final report. Office of Air Quality Planning and Standards, Research Triangle Park, NC.
US EPA, 2008. 1999 National-scale air toxics assessment. http://www.epa.gov/ttn/atw/nata1999 (accessed 15 July 2009).
Woodruff, T.J., Wells, E.M., Holt, E.W., Burgin, D.E., Axelrad, D.A., 2007. Estimating risk from ambient concentrations of acrolein across the United States. Environmental Health Perspectives 115 (3), 410–415.
Xu, Y., Wu, L., Chen, A., Xu, C., Feng, Q., 2018. Protective effects of olive leaf extract on acrolein-exacerbated myocardial infarction via an endoplasmic reticulum stress pathway. International Journal of Molecular Sciences 19, 493.
Zhang, S., Chen, H., Wang, A., et al., 2018. Environmental Science and Pollution Research 25, 25306. https://doi.org/10.1007/s11356-018-2584-z.

An Ecological Disaster Zone with Impact on Human Health: Aral Sea
L Erdinger, University of Heidelberg, Heidelberg, Germany
H Hollert, RWTH Aachen University, Aachen, Germany
P Eckl, University of Salzburg, Salzburg, Austria
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

DDT  Dichlorodiphenyltrichloroethane
EU/TACIS  European Union/Technical Assistance to the Commonwealth of Independent States
NGO  Nongovernmental organization
PCB  Polychlorinated biphenyl
POP  Persistent organic pollutant
TCDD  2,3,7,8-Tetrachlorodibenzodioxin
UN  United Nations
UNDP  United Nations Development Program
UNESCO  United Nations Educational, Scientific and Cultural Organization
USAID  United States Agency for International Development
USSR  Union of Soviet Socialist Republics

Background: History of the Aral Sea Story

Water has been a major driving force for human development and civilization throughout history. Nearly all ancient cultures developed in river basins that could provide enough freshwater for drinking and irrigation. Archeological findings indicate early colonization of the Aral Sea basin, especially along the Amu Darya and Syr Darya rivers and in the oases located between the two rivers. The ancient city of Samarkand, for example, was founded in the 14th century BC and is one of the oldest cities in the world.

The Aral Sea is a salt or brackish lake with the rivers Amu Darya and Syr Darya as its main inflows and no outflow. Owing to the evaporation of lake water, the salt concentration increases over time. Generally, salt lakes develop when the outflow of inland drainage (endorheic) basins is restricted. Besides the Aral Sea, lakes such as the Great Salt Lake in Utah and Walker Lake in Nevada (both in the United States), or the Dead Sea in the Near East, are other examples of endorheic lakes with increased salt concentrations. Interestingly, most of them are shrinking, although to different extents.

In prehistoric times, the Aral Sea and the Caspian Sea formed one big unit, splitting up approximately 14,000 years ago. Since then, the size and shape of the Aral Sea have changed depending on the amount of water arriving at the lake. Although its northern tributary (the Syr Darya), flowing through the Kyzylkum desert, was always heading toward the Aral Sea, its southern tributary (the Amu Darya) changed its course several times and flowed toward the Caspian Sea for centuries. Both rivers have their origin in the Pamir and Tien Shan mountains, 1500 km southeast of the Aral Sea. Their water levels depend on the amount of rainfall in this area, as well as on meltwater contributions from glaciers, and have an immediate influence on the level of the Aral Sea. Before finally entering the sea, both rivers split into wide deltas sheltering ecologically valuable ecosystems with rich biodiversity.

In classical antiquity, the Amu Darya was known as the Oxus, whereas the northern river, the Syr Darya, was called the Jaxartes. Over the centuries the Aral Sea basin was a hard-fought area where warlords such as Alexander of Macedonia in the west, or Genghis Khan and Timur in the east, afflicted the local population. Large and famous ancient metropolises such as Bukhara and Samarkand, located in oases between the two rivers, were important emporiums with international significance and reputation, situated on the Silk Road between China and Europe. Farther west of Bukhara, the city of Khiva, now inscribed on UNESCO's World Heritage List (like Bukhara and Samarkand), was another famous trading center. This long-lasting trading tradition came to a premature end when the whole area was incorporated into the Soviet Union.

Because the Aral Sea basin lies in a desert-like dry climate with very high summer temperatures, intensive irrigation is necessary to grow crops there, mainly in the two river valleys. Water therefore was, and still is, diverted from the rivers by thousands of open trenches. In most cases the irrigation trenches are unsealed, leading to a massive loss of water by evaporation and seepage.


Change History: October 2018. Jerome Nriagu updated the references and recommended the slight change in title to give a different (more relevant) emphasis on the article. This is an update of L. Erdinger, H. Hollert, P. Eckl, Aral Sea: An Ecological Disaster Zone with Impact on Human Health, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 136–144.

Although water is a very precious resource in the area, drainage of the irrigated fields was neglected. As a consequence of irrigation without drainage, the salt concentration of the soil increases, leading to a continuous decline in soil productivity. In ancient times, shifting cultivation was practiced, and farmland that had become too salty was simply abandoned until its fertility was restored by natural processes. Ancient documents report that the irrigated area was probably even larger than in modern times; however, only part of it was in use at any one time, and irrigation methods were probably far more conservative with water than modern ones, now that electric pumps make it easy to distribute much larger quantities of water. Because of large and ambitious irrigation projects intended to increase crop production in the river valleys, more and more water was diverted from the tributaries, and starting in the 1960s the Aral Sea began to shrink.

Water Level of the Aral Sea

The water level of the Aral Sea and, in general, of all endorheic lakes is a function of the amount of water arriving through the tributaries; the direct contribution of rainfall or runoff water is not significant. The climate in the Aral Sea area is typically continental, with very cold winters of down to −40°C and very hot summers of up to +45°C. During the summer, evaporation of the lake water is driven by very low air humidity, whereas evaporation during winter is less significant. The Aral Sea system is in equilibrium when the amount of water flowing in equals the amount of water evaporating over the year. If the inflow is reduced for any reason, the surface area and the volume of the lake shrink to a level at which evaporation again equals the inflow. As long as any water reaches the lake, it cannot and will not dry out completely. However, the ecological effects of a reduced inflow are dramatic for several reasons. With an average depth of approximately 16 m (before the diversion of the feeder rivers), the lake was very shallow; especially on its western shoreline, even a small decrease in the water level exposes large areas of former sea bottom.

As mentioned earlier, the southern Amu Darya once changed its flow direction and headed toward the Caspian Sea for a long time, leading to a significant shrinkage of the Aral Sea. Analysis of sediments from the area indicates that the water later returned and the size of the Aral Sea increased again. The same kind of analysis shows increases in the Aral Sea level when the irrigation systems and dams along the Syr Darya and the Amu Darya were destroyed during wars or conflicts.

Although the river valleys of the tributaries were farmland dependent on artificial irrigation, residents of the villages along the coastline of the Aral Sea were mainly fishermen exporting their products to the hinterland. A considerable fishing industry, with canneries in several cities located directly on the seaside, was built up during the 20th century. At least for experts, it was foreseeable that the equilibrium necessary for a stable water level would be upset when the Soviet centrally planned economy decided, during the 1950s, to greatly expand the irrigated farmland along the Syr Darya and the Amu Darya. The production of cotton and other crops was greatly increased to meet growing domestic demand as well as for export. The decision that cotton was more valuable than fish was nothing but short-sighted politics, taking no account of the ecological consequences for the area and its inhabitants. The Syr Darya and the Amu Darya together brought approximately 50 km³ of water annually to the lake before 1960; this inflow decreased continuously after the irrigation projects began and fell to almost nothing by the mid-1980s. At the same time, the economy and the population along the rivers steadily grew. The effects of the reduced inflow on the level of the Aral Sea, on its ecology, and on the economic situation of the people living in cities and villages along its shores were dramatic (Table 1).
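The equilibrium argument can be made concrete with a toy water-balance model: volume changes as inflow minus evaporation, and evaporation scales with surface area, so a reduced inflow drives the lake toward a smaller equilibrium rather than drying it out at once. A rough Python sketch (the area-volume relation and all coefficients are illustrative assumptions, deliberately tuned so that a 50 km³/yr inflow roughly balances the 1960 volume; none of this is calibrated Aral Sea bathymetry):

# Toy endorheic-lake water balance: dV/dt = Q_in - E * A(V),
# with evaporation depth E (m/yr) and area A assumed to scale as V**(2/3).

def long_run_volume(q_in_km3_yr: float, years: int = 2000) -> float:
    evap_m_yr = 1.0      # assumed evaporation depth, m/yr
    k = 472.0            # assumed shape coefficient: A (km2) = k * V**(2/3)
    v, dt = 1090.0, 0.1  # start at the 1960 volume (km3); 0.1-yr Euler steps
    for _ in range(int(years / dt)):
        area_km2 = k * v ** (2.0 / 3.0)
        evap_km3_yr = evap_m_yr * area_km2 * 1e-3   # m * km2 = 1e-3 km3
        v = max(v + dt * (q_in_km3_yr - evap_km3_yr), 0.0)
    return v

print(f"inflow 50 km3/yr -> volume ~{long_run_volume(50):,.0f} km3")  # ~1090
print(f"inflow  5 km3/yr -> volume ~{long_run_volume(5):,.0f} km3")   # ~34

Cutting the inflow tenfold shrinks the equilibrium volume by roughly 97% in this toy model, yet it never reaches zero, matching the qualitative behavior described above.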

Table 1  Development of the Aral Sea profile

Year   Part         Average level (m)   Average area (km²)   Average volume (km³)   Average salinity (g L−1)
1960   Total Aral   53.4                66,900               1090                   10
1971   Total Aral   51.1                60,200               925                    11
1976   Total Aral   48.3                55,700               763                    14
1992   Total Aral   36.6                33,600               n.a.                   34.4
1994   Total Aral   n.a.                31,938               298                    n.a.
1994   Large Sea    36.8                28,856               273                    >35
1994   Small Sea    40.8                3082                 25                     21–25
2000   Total Aral   n.a.                25,217               212                    n.a.
2000   Large Sea    33.4                21,776               186                    >60
2000   Small Sea    41.6                3441                 26                     18–20
2003   Total Aral   31.0                18,240               118.8                  n.a.
2006   Total Aral   30.4                27,000               112.2                  n.a.
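The salinity figures in Table 1 behave roughly as conservation of dissolved salt mass predicts: as water evaporates, the product of salinity and volume should stay approximately constant. A quick Python check against the pre-split "Total Aral" rows (numbers taken directly from Table 1):

# Dissolved salt inventory ~ salinity (g/L) * volume (km3); since 1 km3 = 1e12 L,
# the product in (g/L)*km3 is proportional to the total salt mass.
rows = {1960: (10.0, 1090.0), 1971: (11.0, 925.0), 1976: (14.0, 763.0)}
for year, (salinity_g_l, volume_km3) in rows.items():
    print(f"{year}: salinity * volume = {salinity_g_l * volume_km3:,.0f} (g/L)*km3")
# Output: 10,900 (1960), 10,175 (1971), 10,682 (1976) -- roughly constant,
# as expected when evaporation concentrates a fixed salt inventory.

The approximation holds only while the lake remains a single basin; after the split, fresh inflow to the Small Aral and continued evaporation from the Large Aral drive their salinities apart, as the later rows of the table show.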

Environmental Effects of the Falling Level of the Aral Sea

The Aral Sea fills a depression confined in the west and north by (low) mountain ranges and open on the other sides. For the reasons mentioned earlier, the natural salt concentration in the lake was always significantly higher than in its tributaries. Before 1960, when the water level began to fall, salinity was around 10 g L−1 and the ecosystem of the lake was adapted to this salt concentration. Because the high salt concentration made the water brackish, the Aral Sea had low biodiversity before the recent changes: there were few if any endemic species, and there appeared to be no major limnological features of distinction. In contrast, other sources call the Aral Sea "a paradise in the desert," exceptionally rich in both biodiversity and biomass. However, there is no doubt that the lake had great importance as a natural ecosystem within a regional context. Its islands, in particular, were important feeding, refuge, and breeding sites for numerous species of migratory and resident birds.

During the first half of the 20th century, the average depth of the Aral Sea was approximately 16 m, with very shallow littorals in the eastern and southern parts. The reduction in the amount of water arriving at the lake therefore soon led to a significant retreat of its shorelines in the shallow areas. In former days, the cities of Aralsk on the northeastern corner of the lake and Muynak at its southern end were typical seaside towns, connected by ferry lines and attracting tourists; both of them, and many other villages along the shallow lakeside, soon found themselves far from the water. At first, authorities tried to support the fishing industry with channels excavated between the harbors and the open sea, but this had to be abandoned when the distance between the cities and the lake became too great. Pictures of former fishing trawlers now lying abandoned in the desert sand are often used to illustrate this situation.

As the lake lost more of its volume, the deeper zones became more and more separated, depending on the surface structure of the sea bottom. At first, the lake broke into a smaller but deeper northern part, named the "Small Aral Sea," and a southern part, called the "Large Aral Sea," separated by a natural barrier. Both parts were connected via a natural channel called the "Berg Strait," allowing water to flow from the northern part of the lake to the Large Aral Sea. This connection was later closed by a dam, completely separating the Small and the Large Aral Seas and allowing at least the northern part of the former lake to recover. Plans for this strategy had already emerged in 1994 at an Aral Sea conference held in Paris; however, for political and possibly financial reasons, the project was delayed for a long time.

The construction of the dam across the Berg Strait led to a substantial increase in the water level of the Small Aral Sea and subsequently to a significant change in salinity. While the level of the lake was falling, the salt concentration of the water steadily increased, but to different degrees in different parts of the lake: the salinity of the Large Aral Sea rose above concentrations normally found in the oceans, whereas the salt concentration in the northern, smaller part did not increase as dramatically. The change in salinity had a significant impact on the fish populations in the lakes.
Falling salinity in the Small Aral Sea should lead to a return of diverse species, improving the prospects for the fishing industry. As of 2008, however, the waters had not yet reached the old harbor of Aralsk, although there are ambitious plans to reconnect the city to open water. The possibly surviving Small Aral Sea, located mainly in Kazakhstan, is still a large body of water, and the economic situation of the people living in this area should improve significantly; the major part of the old Aral Sea, in Uzbekistan, has an uncertain future, and increasing salt concentrations will most likely kill all remaining life there.

Economic Situation of the Riparian States

Kazakhstan and Uzbekistan are the only riparian states of the Aral Sea. Compared with Kazakhstan, the economy of Uzbekistan is weaker and the average income of the population is lower. Kazakhstan's economy is larger than those of all the other Central Asian states combined, largely owing to the country's vast natural resources and a recent history of relative political stability.

In the western part of Uzbekistan, where the Aral Sea is located, approximately 1.2 million people live in the Autonomous Republic of Karakalpakstan, a part of Uzbekistan since 1936 and considered its poorest region. The Autonomous Republic suffers not only from the drying out of the Aral Sea; climate change over the centuries has also contributed to the drying up of former reed marshes, forests, and oases in the former Aral Sea area and the river delta. The capital of Karakalpakstan is Nukus. Moynaq (Muynak), its former Aral Sea port, now lies approximately 200 km inland.

Approximately 43,000 km² of Uzbekistan's arable land, 11% of its total area, is intensively cultivated and irrigated. Prolonged drought and cotton monoculture in Uzbekistan and Turkmenistan create water-sharing difficulties for the Amu Darya river states. However, the political situation seems stable and far from the often-predicted water-related armed conflicts. The median age of the country's inhabitants was 23.8 years in 2008. Per capita income in Uzbekistan rose from $460 in 2005 to $500 in 2007; one-third of the population lives below the poverty line, and 44% of the available labor force works in agriculture. In Kazakhstan, 13.8% of the population lives below the poverty line and only 32% of the population works in agriculture. The infant mortality rate in Uzbekistan is much higher (67.78 deaths per 1000 live births) than in Kazakhstan (26.56 deaths per 1000 live births). Agriculture is Uzbekistan's main economic sector, with a share of 30% of GDP. Uzbekistan is the world's second-largest cotton exporter and fifth-largest producer (2007); its economy relies on cotton production as the major source of export earnings. Other export earners include gold, natural gas, and oil. At present, however, Uzbekistan's oil imports exceed its oil exports.

In Kazakhstan, cotton production does not play a significant economic role. Owing to its lower economic dependence on agricultural products grown in the river valleys of the Aral Sea tributaries, Kazakhstan can afford much more effective measures to save water for the Aral Sea than Uzbekistan can. Additionally, Kazakhstan is rich in oil and gas and profits from rising prices for fossil energy. Aided by strong growth and foreign exchange earnings, Kazakhstan aspires to become a regional financial center and has created a banking system comparable to those of Central Europe. Compared with the other countries of Central Asia, however, Kazakhstan suffers most from the environmental legacy of the Soviet era: Soviet atomic bombs were tested in Kazakhstan (at Semipalatinsk), and biological weapons were developed in the Aral Sea area. Many international organizations, such as UNDP, USAID, EU/TACIS, and the World Bank, are active in Kazakhstan, supporting the country's environmental activities.

Effects on Income and Social Structures of Inhabitants

After the Second World War, local wheat production was greatly increased and was essential to sustaining the Soviet Union. Mainly for climatic reasons, cotton production was placed in Uzbekistan, whereas wheat, barley, millet, and rice were mainly grown along the Syr Darya in Kazakhstan. During the 1960s, it became increasingly obvious that environmental conditions, and with them the economic situation of the population living in the cities and villages along the coastline of the Aral Sea, were profoundly deteriorating. The immediate impact on the local population included the loss of jobs, followed by a breakdown of major parts of the social system. However, there was no political will, and consequently no effort was undertaken, to save water in the river valleys for the Aral Sea and to stop the falling water level. The reactions of the authorities were helpless and failed to address the origin of the problem. Instead, the area was declared a "zone of ecological disaster" and (small) subsidies were offered to the people most affected.

Fishing activities in the Aral Sea declined, and from 1975 onward there was only some activity during spring and autumn. Nevertheless, many families depended on this very small fish catch. Selling smoked fish and other goods to travelers on the trains passing through Aralsk was the only way for the women of Aralsk to earn money of their own. In addition, the effects of the increasing irrigation and agricultural production along the tributaries were fatal for the native Kazakh nomadic culture. It is estimated that 1 million Kazakhs died or left the area to move to countries south of Kazakhstan, such as Uzbekistan and Tajikistan. Meanwhile, farmworkers from all over the Soviet Union were brought to the new farmland along the river valleys. The sociocultural character of the area changed completely.

From the beginning of the 1990s, the former sources of income of the local inhabitants were more or less completely lost. During this time, many former Aral Sea fishermen moved away to other lakes, some of them to Lake Balkhash in the eastern part of Kazakhstan. However, Lake Balkhash is also shrinking because of the diversion of water from the rivers that feed it, and it suffers from severe overfishing. Financed by subsidies from the USSR, other fishermen were moved to the Kapchagay, Alakol, and Zaysan lakes situated close to or in the Tien Shan mountains, 1000–1500 km east of the Aral; others were moved west to the Caspian Sea, and most of the rest were settled in state collective farms along the Syr Darya. Because the area obtained the aforementioned status of a "zone of ecological disaster," subsidies are still paid to the inhabitants as an "ecological disaster addition" to salaries or pensions. Today, the northern area of the Aral Sea is considered the poorest region of Kazakhstan.

Effects on Fishery

To maintain employment in the fishing industry of the cities along the shoreline of the Aral Sea, frozen fish was imported over thousands of kilometers from other parts of the USSR, for example, from the Baltic Sea and the Pacific Ocean. This practice was abandoned with Kazakh independence in 1991, and the remaining cannery workers were dismissed. During the 1970s, several salt-tolerant fish species had been introduced into the Aral Sea, including the Black Sea flounder. However, up to the 1990s no flounder fishery was initiated, mainly because local politicians did not approve the necessary investments and because the economic potential of the area was so low that no economic effect was expected. Another major problem was that fishermen were not familiar with the new and "unusual" fish species; there was concern among them whether the local population would accept a fish that looks very different from the formerly caught species. However, thanks to an initiative of a Danish nongovernmental organization (NGO), flounder fisheries finally started up and the fish catch increased again. In the southern part of the Aral Sea, the catch later declined again because of the growing salinity. By 2003, the Black Sea flounder had disappeared from the Large Aral Sea because salinity there had risen to over 70 g L−1, whereas the salinity in the northern part of the lake is too low for flounders. Meanwhile, two completely different ecosystems have formed, and only the northern part of the old Aral Sea has a reasonable prospect of economic recovery.

Effects on Climate

Local residents frequently report that the incidence of sandstorms has significantly increased and that the health of the population has declined. However, no data are available to document these changes.

Without any doubt, the climate of the former coastal cities, once moderated by the waters of the lake, has changed to the very different and much harsher conditions of the desert that now surrounds these settlements. Because the climate is changing in many areas of the world, the specific impact of the shrinking Aral Sea on the local climate is hard to estimate, and there is no information on climatic developments in other remote areas of Kazakhstan. Up to now, no state-of-the-art measurements of dust deposition rates or particle size distributions have been made. Measurements of deposition rates using dust traps indicate, at least in places, very high dust concentrations. From a medical point of view, these data are very difficult to interpret, because studies linking particle sizes and concentrations to human health have employed different sampling methods.

Effects on Health of Local Population

One of the first reactions of the former Soviet authorities to the falling water level of the Aral Sea was to declare the area a "Zone of Ecological Disaster." To the authors' knowledge, this status is still valid, giving the inhabitants of the area a basis to claim a basic level of state support. A central argument for the establishment of this "Zone" is the extent of environmental pollution and its influence on the ecological system as well as on the health of the people living in this environment. Besides the high amounts of salt remaining after the retreat of the Aral Sea, there is speculation about large amounts of pesticides arriving in the area. Some authors even compare the local situation in the Aral Sea area to environmental catastrophes like Minamata or Kyushu in Japan. However, there is a certain lack of scientific and plausible data to support this comparison. It seems as if, at least in some cases, the opinion of researchers was biased by understandable pity for the local population. People living in the area are poor, even by Central Asian standards; they suffer from unemployment and accompanying social problems as well as from malnutrition and anemia.

Unfortunately, there are only few data comparing the results of medical examinations and human biomonitoring from the Aral Sea area to equivalent data from other remote areas of Central Asia, where people live under comparable conditions except for the influence of the drying Aral Sea. A study of contaminants in the breast milk of mothers from various sites in Kazakhstan shows that, in general, the burden of halogenated pesticides is within the range measured elsewhere in the world, with the exception of dibenzodioxins and dibenzofurans in certain places, but not in cities located on the former shoreline of the Aral Sea. Inhabitants of the cotton-growing areas carry a much higher burden of certain environmental pollutants than people affected directly by the retreat of the Aral Sea. The localized high concentrations of 2,3,7,8-tetrachlorodibenzodioxin (TCDD), the most toxic of all dioxin congeners, are the highest documented in the world in a population currently of reproductive age, comparable only to breast milk samples collected earlier in South Vietnam when Agent Orange was being sprayed. It can be concluded that people living in the Syr Darya and, most probably, the Amu Darya valleys suffer much more from pesticide pollution than people living in the former shoreline area of the Aral Sea. However, although people on the former Aral Sea shoreline suffer from desperate economic conditions, people in the cotton-growing areas are basically better off from an economic point of view.

Many chlorinated pesticide residues commonly seen in Europe cannot be detected in samples of human breast milk, probably because these pesticides are not in use in Kazakhstan. The median concentrations of detectable organic pesticides observed at each site were within the range of median concentrations in other countries. Because of the excessive use of pesticides in the river valleys, the body burden of these compounds is of major interest. However, local authorities like the "Kazakh Sanitary Station," with offices and laboratories in all major Kazakh cities, are not able to collect these data on their own, mainly for lack of modern equipment and trained specialists.
Although equipment is present in the main office and laboratory in Almaty, there is a lack of adequate know-how and of political support in this area. Given the other massive environmental problems present in Kazakhstan, improving capabilities in this field seems very desirable. Most laboratories in the countryside are equipped with very simple instrumentation allowing, for example, microbiological water examinations. However, these analyses are not performed according to international standards, and the results are therefore difficult to compare. Consequently, most studies of body burdens of environmental pollutants have been performed at least in cooperation with Western universities or hospitals. Some preliminary analyses using pooled blood samples of local inhabitants seemed to show that the pesticide load of the population, especially of local children, is high compared with the body burden measured in European children. These superficial analyses were not scientifically reliable, but other evidence-based information on health effects in the population inhabiting the Kazakh and Karakalpak sections of the Aral Sea is hard to obtain. Unfortunately, much of the information on the health status of local residents published so far does not meet even basic scientific standards but seems biased by political considerations and by (understandable) pity for the local population.

There is no doubt that the health of the inhabitants of the area is severely affected by the environmental conditions. From a scientific point of view, however, proof of a direct and measurable impact of specific environmental parameters on people's health, as opposed to indirect effects caused by socioeconomic changes, is very hard to obtain. Factors of significant health influence discussed so far include deterioration of drinking water quality, lung problems induced by airborne dust and fine particles, and an increase in health problems related to the high concentrations of diverse pesticides used in the irrigation areas. It has long been known that it is very difficult to attribute individual health problems to single environmental parameters. Opinions have repeatedly been published that the local population suffers from high levels of airborne dust and related lung dysfunction. At least in Karakalpakstan, where dust storms occur frequently, this assumption could not be proven, and there was no correlation between the dust deposition rate, measured by simple dust traps, and lung function in children.

Studies indicate that the prevalence of asthma is low in the Aral Sea area and that asthma appears to be unrelated to dust exposure. The dust in the Aral Sea area differs in composition from urban dust, especially from the fine particles with aerodynamic diameters of less than 10 µm found in city air, so health effects cannot simply be compared. In urban surroundings, a major part of the fine particles in the air is emitted by road traffic, and these particles act as carriers for various residues of incomplete combustion; particles in the Aral Sea area are likely of completely different composition, although no detailed analysis has been made so far. From information obtained from local health authorities, it seems plausible that a certain part of the dust consists of water-soluble salts that will not penetrate into the deeper airways. Other components, like silicates, are known triggers of certain occupational diseases, but only at very high ambient concentrations unlikely to be reached in the area. It must be emphasized that no detailed investigations of the significance and health relevance of the dust have been made up to now, and no comparison has been made with urban particulate matter as it is measured on a regular basis in Western countries.

Other scientific data point to an increased level of renal dysfunction in children living close to the Aral Sea compared with children living in other parts of the country. The reasons for these findings remain unclear, because no elevated concentrations of heavy metals could be shown in biomaterials like hair, nails, or urine. The authors speculated that the elevated values of certain bioindicators could be caused by an increased uptake of uranium, but there are no data supporting this assumption. Additionally, in contrast to cadmium and lead, uranium has a much lower tendency to bioaccumulate in the human body, and its toxicity is lower than that of other heavy metals. In Western countries, groundwater can be polluted by uranium contained in phosphate fertilizers, and given the intensive agriculture along the river valleys, uranium contamination from this source could be present here too. However, as mentioned earlier, there are as yet no data on this problem.

For the reasons mentioned earlier, the remaining local population living around the Aral Sea consists mainly of the native population, with only a few people with Caucasian roots. Most of the people still living in the former seaside villages simply could not afford to move away from the area. The monthly income of families is very low, even by Central Asian standards. In consequence, in many families the daily diet is monotonous and unhealthy, and fresh vegetables, for example, are usually lacking. As in many other underdeveloped areas, anemia prevails in children and mothers. In this respect, the situation in Aralsk and other townships around the former Aral Sea ("Priaraliye") is comparable to that of other settlements in the Central Asian deserts. People have no reliable and constant access to safe freshwater, sanitation is underdeveloped, and in consequence waterborne diseases like diarrhea appear frequently. Malnutrition and diarrheal diseases have especially affected children. Physical and cognitive growth depends on adequate and balanced nutrition, both prenatally and postnatally, and malnourished children are much more susceptible to infectious diseases.
Children's psychological development depends on security and a positive family atmosphere at home, factors that are highly influenced by the family's social security. Recently, however, at least some data describing the health of children in Aralsk have been published. A comparison of data describing the growth of children from Aralsk and Akchi, a small town in the steppe near Almaty, indicates that the development of children in Aralsk is slightly delayed. As mentioned earlier, environmental factors may play an indirect role in this result. In general, environmental pollution seems to contribute about as much to the total burden of disease as it does in other remote areas of Kazakhstan or Uzbekistan. Comparison of several parameters describing the body loads of certain chemical contaminants in children of Akchi and Aralsk indicated only small differences. Concentrations of dichlorodiphenyltrichloroethane (DDT) metabolites were elevated in blood samples of children from both locations, indicating the presence of this compound in the environment even though Kazakhstan banned the use of DDT long ago. It seems plausible that DDT is still in use for specific applications, perhaps for treating camels or other animals against parasites. However, data on the contamination of camel milk, or of other foodstuffs made from it, are not available.

Although several authors report high concentrations of persistent organic pollutants (POPs), like polychlorinated biphenyls (PCBs) and other halogenated compounds, in environmental samples from around the Aral Sea, there seems to be no evidence of elevated concentrations of these compounds in blood samples of children from Aralsk compared with children from other parts of Kazakhstan. Indeed, there is no obvious reason why the concentrations of POPs should be higher on the former shoreline of the Aral Sea than in other remote areas of Central Asia. PCB concentrations in the blood samples were comparable to those of children in Western Europe, indicating a ubiquitous distribution of these compounds rather than a local effect. As mentioned earlier, there is not much scientific information on food contamination in the area. During the fish catch trial of the Danish NGO mentioned earlier, a number of fish samples were sent to a Danish laboratory to investigate heavy metal and pesticide residues in flounders. There were "no signs of contamination," which probably means that the load was compatible with Danish food standards.

Political Aspects

A synthesis of the UN's Millennium Ecosystem Assessment states that "over the past 50 years, humans have changed ecosystems more rapidly and extensively than in any other comparable period of time in human history, largely to meet rapidly growing demands for food, freshwater, timber, fiber, and fuel. The changes that have been made to ecosystems have contributed to substantial net gains in human well-being and economic development, but have been achieved at growing costs in the form of the degradation
of many ecosystem services, and the exacerbation of poverty for some groups of people. The degradation of ecosystem services could grow significantly worse during the first half of this century and is a barrier to achieving the Millennium Development Goals." At first glance, it seems as if these words were written to describe the situation in the Aral Sea basin. During the era of state-directed economic leadership, priorities were set that neglected the environmental impact of measures intended to improve the Soviet Union's economic situation. In Western countries, projects of this size and impact would not have been practicable, not for environmental reasons but simply because the political power and will for agricultural projects of this scale did not exist. The 1950s were a time when the use of pesticides was increasing worldwide without any concern about their environmental impact; the environment seemed, simply by virtue of its size, to be an invulnerable resource. Many major environmental problems of today have their roots in those years of environmental unawareness.

It is hard to believe that Soviet experts could not foresee the outcome of the massive increase in irrigated farmland. However, the government of the Soviet Union administered the country's economy and society, and there was no hesitation in achieving aims set by the politburo, even at the price of the drying out of the Aral Sea, mainly because sustainable economy and development and environmental conservation were not of particular importance to any politician. The Soviet Union was under massive economic pressure from the Western countries, and economic decisions were therefore little affected by consequences that might materialize only decades later. Decisions made during those times are nearly irrevocable: too many people live in the cotton-growing areas, and rapid changes to the economic basis of the population would have unforeseeable consequences for the political situation of Central Asia as a whole.

Water is a valuable resource and probably the most important economic factor in the countries of Central Asia; it is the basis of all agricultural activity and economic success. Although the distribution of available water between riparian states is a major political issue, the predicted scenarios of Central Asian water wars have so far not become a reality. Several countries are located in the Aral Sea basin: Kazakhstan, Uzbekistan, Turkmenistan, Kyrgyzstan, and Tajikistan. During Soviet times, the political influence of the individual Soviet republics was limited, and all major decisions were prepared and initiated by the central government. After the decline of the Soviet Union, the now independent states tried to tackle the problems through diverse initiatives. Additionally, Western countries and organizations tried to initiate a policy of active engagement through economic, political, and environmental assistance programs. The main targets were the disengagement of the independent republics from Russia's sphere of influence and the prevention of closer connections to Iran. The environmental issue provided a safe area for Western intervention, since all regional stakeholders recognized the need for help in cleaning up the environmental consequences of the Aral Sea shrinkage. The focus was mainly on practical, real problems of water quality and public health with high visibility. However, the drying out of the Aral Sea played only a minor role in the overall political plans.
The problems of the Aral Sea cannot be solved in the short term. They are the result of economic decisions made in the 1950s, within a different political context, on the basis of a Soviet-Union-wide organized economy. Now that small countries are struggling for the survival of their local economies, there is, at least in the non-oil-producing countries, not enough economic power for rapid change. Small steps, like the construction of the dam separating the Small Aral Sea from its southern part and projects improving the use of irrigation water with an overall water-saving effect, are helpful and feasible. The area of the Aral Sea will grow with whatever water is left over in the Syr Darya and the Amu Darya, even though the Aral Sea will probably not regain its former size within the next generation.

See also: Bhopal Gas Catastrophe 1984: Causes and Consequences; Kuwait: Before and After the Gulf War; Lebanon: Health Valuation of Water Pollution at the Upper Litani River Basin; Oil Industry and the Health of Communities in the Niger Delta of Nigeria; Tunisia: Salinization and Sustainability of Agriculture.

Further Reading

Aladin, N., Crétaux, J.F., Plotnikov, I.S., et al., 2005. Modern hydro-biological state of Small Aral Sea. Environmetrics 16, 375–392.
Anand, R.K., 2015. The Aral Sea disaster and health crisis. IOSR Journal of Humanities and Social Science 20 (5), 32–37.
Beeton, A.M., 2002. Large freshwater lakes: Present state, trends, and future. Environmental Conservation 29, 21–38.
Bennion, P., et al., 2007. On behalf of the Médecins sans Frontières/Aral Sea respiratory dust and disease project team: The impact of airborne dust on respiratory health in children living in the Aral Sea region. International Journal of Epidemiology 36, 1103–1110.
Boomer, I., Aladin, N., Plotnikov, I., Whatley, R., 2000. The palaeolimnology of the Aral Sea: A review. Quaternary Science Reviews 19, 1259–1278.
Carpenter, D.O., et al., 2006. Children's environmental health in Central Asia and the Middle East. International Journal of Occupational and Environmental Health 12, 362–368.
Glantz, M.H., 1999. Creeping environmental problems and sustainable development in the Aral Sea Basin. Cambridge University Press, Cambridge.
Hashizume, M., et al., 2005. Anaemia, iron deficiency and vitamin A status among school-aged children in rural Kazakhstan. Public Health Nutrition 8, 564–571.
Kaneko, K., et al., 2003. Renal tubular dysfunction in children living in the Aral Sea region. Archives of Disease in Childhood 88, 966–968.
Kobori, I., Glantz, M.H. (Eds.), 1998. Central Eurasian Water Crisis: Caspian, Aral, and Dead Seas. United Nations University Press, Tokyo.
McDermid, S.S., Winter, J., 2017. Anthropogenic forcings on the climate of the Aral Sea: A regional modeling perspective. Anthropocene 20, 48–60.
Micklin, P., 2007. The Aral Sea disaster. Annual Review of Earth and Planetary Sciences 35, 47–72.
Micklin, P.P., Williams, W.W., 1996. The Aral Sea basin. In: NATO ASI Series, 2. Environment, vol. 12. Springer, New York.
Pala, C., 2006. Once a terminal case, the North Aral Sea shows new signs of life. Science 312, 183.
Reinhardt, C., Wünnemann, B., Krivonogov, S.K., 2008. Geomorphological evidence for the Late Holocene evolution and the Holocene lake level maximum of the Aral Sea. Geomorphology 93, 302–315.
Saiko, T.A., Zonn, I.S., 2000. Irrigation expansion and dynamics of desertification in the Circum-Aral region of Central Asia. Applied Geography 20, 349–367.
Schrad, M.L., 2006. Threat level green: Conceding ecology for security in Eastern Europe and the former Soviet Union. Global Environmental Change 16, 400–422.
Shibuo, Y., Jarsjo, J., Destouni, G., 2007. Hydrological responses to climate change and irrigation in the Aral Sea drainage basin. Geophysical Research Letters 34, L21406.
The Danish Society for a Living Sea, 1998. The Aral Sea and its fishery. A project report. From Kattegat to Aral Sea, a fishery project. http://www.levendehav.dk/uk/from-kattegatto-aral.htm (accessed June 2010).
Waehler, T.A., Dietrichs, E.S., 2017. The vanishing Aral Sea: Health consequences of an environmental disaster. Tidsskriftet den Norske Legeforening. https://tidsskriftet.no/en/2017/10/global-helse/vanishing-aral-sea-health-consequences-environmental-disaster (accessed 20 September 2018).
Weinthal, E., 2005. Central Asia: Aral Sea problem. Foreign Policy in Focus, Washington, DC. http://www.fpif.org/reports/central_asia_aral_sea_problem.
White, K.D., 2013. Nature–society linkages in the Aral Sea region. Journal of Eurasian Studies 4, 18–33.

Relevant Websites

http://www.aralsea.net/en/index.htm (Aral Tenizi Society)
http://www.unu.edu/unupress/unupbooks/uu18ce/uu18ce00.htm (Central Eurasian Water Crisis)
http://www.esa.int/esaEO/SEMGVT6CTWF_index_0.html (ESA)
http://www.worldlakes.org/lakedetails.asp?lakeid=9219 (LakeNet)
http://www.nytimes.com/2006/04/06/world/asia/06aral.html (New York Times)
http://na.unep.net/atlas/webatlas.php?id=11 (UNEP, Atlas of our changing environment)
http://unesdoc.unesco.org/images/0012/001262/126259mo.pdf (UNESCO: Water-related vision for the Aral Sea basin for the year 2025)

Animal and Human Waste as Components of Urban Dust Pollution: Health Implications
I Rosas, Universidad Nacional Autónoma de México, México D.F., Mexico
CF Amábile-Cuevas, Fundación Lusara, México D.F., Mexico
E Calva, Universidad Nacional Autónoma de México, Cuernavaca, Mexico
AR Osornio-Vargas, Department of Pediatrics, University of Alberta, Edmonton, AB, Canada
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

CD14  Cluster of differentiation 14
EAEC  Enteroaggregative E. coli
EC-5  E. coli-5
EPEC  Enteropathogenic E. coli
ESBL  Extended spectrum beta-lactamase
EU  Endotoxin units
IL  Interleukin
LBP  LPS-binding protein
LPS  Lipopolysaccharide
MD-2  Lymphocyte antigen 96
MLST  Multilocus sequence typing
PCR  Polymerase chain reaction
PDGF-Rα  Platelet-derived growth factor receptor alpha
PM  Particulate matter
PM10  PM smaller than 10 µm
PM2.5  PM smaller than 2.5 µm
STEC  Shiga toxin-producing E. coli
TLR  Toll-like receptor
TNFα  Tumor necrosis factor alpha
UPEC  Uropathogenic E. coli

Introduction

Dust is a mixture of organic and inorganic particles deposited on ground surfaces that move continuously from or into air, water, and soil. Inadequate management of human and animal waste causes the dust to include fecal material, in both rural and urban areas. In a natural setting, dust is incorporated into the soil as mineral nutrients or organic matter. In urban zones, however, soil is covered and sealed with pavement, causing dust to accumulate on the surface. This dust can transfer more particulate matter than a natural cycle would into the air, by eolic (wind) action, and into water, through rainwater runoff. Thus, the various agents present in urban dust can pollute air, water, and urban ground.

In addition to direct exposure through the ingestion and inhalation of dust, indirect human exposure occurs through aquatic bodies and indoor environments: dust material is transported either by rainwater into marine and freshwater systems or by eolic and mechanical factors into indoor environments, and both of these transport routes have been reported as important sources of fecal pollution. However, few studies focus on the impact of dust on air and ground surface pollution, which are the main sources of direct human exposure through inhalation and ingestion of fecal microorganisms and microbial debris (Fig. 1). Epidemiological studies have related the ingestion of urban dust to enteric diseases, and its inhalation to respiratory tract illnesses. Fecally polluted urban dust can thus be considered a substantial threat to public health. This pollution is mainly caused by the following factors: (1) overpopulation of urban areas, defined as a population larger than 400 inhabitants per square kilometer; (2) lacking or insufficient sanitation infrastructure and technology; and (3) poverty and poor education. Of course, these conditions often go hand in hand.


Change History: October 2018. I. Rosas, C.F. Amabile-Cuevas, E. Calva, A.R. Osornio-Vargas have updated the text throughout the article. This is an update of I. Rosas, C.F. Amábile-Cuevas, E. Calva, A.R. Osornio-Vargas, Animal and Human Waste as Components of Urban Dust Pollution: Health Implications, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 75–82.


https://doi.org/10.1016/B978-0-12-409548-9.11673-2


Fig. 1 Dust and fecal pollution. Human and animal waste get into dust in both rural and urban settings; however, although in rural settings dust tends to incorporate into the soil (within certain limits), minimizing human exposure, in urban settings, where soil is mostly sealed (and dust and waste are in higher concentrations due to overpopulation), humans are directly or indirectly exposed to fecal pollution in dust, and suffer diverse health effects.

In recent decades, the world's urban population has grown at an unprecedented pace. Approximately 3.3 billion people, or roughly 47% of the world's population, live in urban areas. Urban dwellers represent 76% of the total population of developed countries and 40% of that of developing countries, and by 2030, 60% of the world population is expected to be urban. These figures underscore the need to consider urban dust in future urbanization planning.

This article presents recent evidence of the health risks posed by urban dust containing fecal microorganisms. It describes how dust becomes contaminated with microbes from fecal waste and how it is distributed in the environment, including exposure pathways. It also explains how fecal bacteria associated with urban dust show great variability in pathogenicity and antibiotic-resistance patterns, imposing additional health threats. Finally, it considers the presence of bacterial endotoxins, another risk linked to fecal pollution of urban dust.

Pathway of Exposure to Urban Dust Fecal Pollution

In 2004, 59% of people worldwide had access to sanitation facilities; of these, 83% had access to a water supply, with 74% connected to a piped water system. Despite this infrastructure, four billion episodes of diarrhea occurred annually, 50% of them related to poor hygiene practices at home. Children commonly ingest dust and soil contaminated with fecal material from humans and domestic animals; although this behavior has been argued to be an instinctive way to provide early exposure to pathogens, inducing immune responses, such exposure might be excessive in modern cities. Indoors, nonconcrete floors and carpets act as a sink for dust containing dust mites, bacteria, and allergens, including street dust tracked in on shoes. Little attention has been paid to the inhalation or ingestion of these materials in urban areas; hence they become a threat to human health (Fig. 1).

The most important sources of fecal pollution affecting urban ground surfaces are nonpoint (diffuse) sources of human and animal (domestic and wild) waste, related to absent sewage systems, sewer leaks, failing septic tanks, improper disposal of waste and garbage, and open-air defecation by humans, stray dogs, pets, and other animals; all of these are common in developing countries. These sources produce a random dispersion of fecal material in urban areas. Analysis of fecal pollution in watersheds shows that waste-polluted urban dust, which reaches watersheds through rainwater runoff, can affect the proportion of human and animal fecal particles in water systems. Thus, sources of fecal pollution must be identified to develop effective control and restoration processes.

During the dry season, dust is suspended and resuspended by mechanical and eolic factors, increasing the exchange between indoor and outdoor environments. Indoor environments then suffer an increase in fecal pollution, already present owing to indoor activities, domestic animals, building materials, furniture, nonconcrete floors and rugs, and the level of ventilation. During the rainy season, dust that has accumulated on various surfaces is washed out by rainfall; water that does not infiltrate the ground runs over impervious surfaces from residential, commercial, and industrial areas, roads, highways, and bridges, collecting contaminants and garbage, which are then discharged into coastal or inland waters. This affects the quality of recreational, irrigation, and fishing water. As ground surface imperviousness increases, more rainfall is converted into runoff.

Human and animal waste deposited on urban ground contains high numbers of microorganisms, mainly bacteria. In developing countries, domestic waste, including diapers and fecal material, often lies on the street. But this is not exclusive to poor countries: each day, dogs leave 82,000 kg of waste on the ground in the United States alone, and a single gram of dog feces can contain up to 23 million fecal coliform bacteria. Of course, coliforms are only a well-known minority among gut microbiota, as shown in Table 1.

Table 1  Viable bacteria per gram of feces from adult animals

Animal     Escherichia coli   Clostridium perfringens   Streptococci   Bacteroides   Lactobacilli
Cattle          4.3                  2.3                     5.3            0             2.4
Sheep           6.5                  4.3                     6.1            0             3.9
Horse           4.1                  0                       6.8            0             7.0
Pig             6.5                  3.6                     6.4            5.7           8.4
Chicken         6.6                  2.4                     7.5            0             8.5
Rabbit          2.7                  0                       4.3            8.6           0
Dog             7.5                  8.4                     7.6            8.7           4.6
Cat             7.6                  7.4                     8.3            8.9           8.8
Mouse           6.8                  0                       7.9            8.9           9.1
Human           6.7                  3.2                     5.2            9.7           8.8

Note: Logarithmic (log10) median values of 10 animals. Todar K (2002) The bacterial flora of humans. Department of Bacteriology, University of Wisconsin-Madison. http://www.bact.wisc.edu/Bact303/Bact303normalflora (accessed January 2010; with permission from University of Wisconsin-Madison).
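Because Table 1 reports log10 median counts, recovering absolute numbers is a single back-transformation (count = 10^value). Below is a minimal sketch in Python, using values transcribed from Table 1; it also checks the order of magnitude of the dog-feces coliform figure quoted above.

```python
# Back-transform the log10 median counts of Table 1 into absolute counts.
# Values transcribed from Table 1 (0 = below detection / absent).
log10_e_coli_per_g = {"Cattle": 4.3, "Dog": 7.5, "Cat": 7.6, "Human": 6.7}

for animal, log_count in log10_e_coli_per_g.items():
    cfu_per_g = 10 ** log_count  # undo the log10 transformation
    print(f"{animal}: ~{cfu_per_g:.1e} viable E. coli per gram of feces")

# Dog: 10**7.5 is roughly 3.2e7 CFU/g, the same order of magnitude as the
# "up to 23 million fecal coliform bacteria per gram" cited in the text.
```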

Table 2  Total exposure to bacteria via dust ingestion and to endotoxins via inhalation of dust/airborne particulate matter ≤10 μm (PM10), assessed in Mexico City (2004)

              Fecal coliforms (bacteria per day)     Endotoxins in PM (EU per day)
              Child        Adult                     Child        Adult
Outdoor
  Median       60           120                       45           91
  Minimum       0             0                       31           63
  Maximum     200           400                       75          149
Indoor
  Median       60           120                       32           64
  Minimum      30            60                       15           30
  Maximum     310           620                      250          500

Note: Calculated from the following data: number of coliforms per gram of dust, outdoor mean 6 (range 0–20), indoor mean 6 (range 3–31); endotoxin levels, outdoor mean 4.5 EU m−3 (range 3.1–7.5 EU m−3), indoor mean 3.2 EU m−3 (range 1.5–25 EU m−3); inhalation, child 10 m3 per day, adult 20 m3 per day; dust ingestion, child 0.016 g per day, adult 0.010 g per day.

The high mobility of environmental dust and associated pollutants multiplies the routes of exposure. Pathogenic intestinal bacteria, their debris (containing proinflammatory endotoxins, discussed below), or both become attached to urban dust. Hence, illness can originate from direct exposure to air or through the hand-to-mouth pathway, as well as from indirect exposure to polluted water (as in recreational activities) or indoor dust. The ingestion, through the hand-to-mouth route, of different toxic compounds associated with urban dust has been calculated by risk assessment models; however, little information exists on the ingestion of fecal bacteria contained in dust. The potential levels of exposure for children and adults, using known bacterial concentrations per gram of outdoor and indoor dust in Mexico City, have been calculated (Table 2).
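The exposure estimates in Table 2 reduce to simple products of an environmental concentration and a daily intake rate. The sketch below, a minimal illustration in Python using the inhalation rates stated in the note to Table 2, reproduces the outdoor median endotoxin doses.

```python
# Daily inhaled dose = airborne concentration (EU/m3) x air intake (m3/day).
INHALATION_M3_PER_DAY = {"child": 10.0, "adult": 20.0}  # from the Table 2 note

def inhaled_endotoxin_dose(eu_per_m3: float, person: str) -> float:
    """Daily inhaled endotoxin dose in endotoxin units (EU) per day."""
    return eu_per_m3 * INHALATION_M3_PER_DAY[person]

outdoor_mean = 4.5  # EU per m3 of air, outdoor mean from the Table 2 note
print(inhaled_endotoxin_dose(outdoor_mean, "child"))  # 45 EU/day, as in Table 2
print(inhaled_endotoxin_dose(outdoor_mean, "adult"))  # 90 EU/day (91 in Table 2, rounded)
```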

Human Health Effects Associated With Polluted Urban Dust

Microorganisms mixed into urban dust have been associated with different health problems, such as enteric diseases (diarrhea), presence or transfer of antibiotic resistance from dust bacteria to pathogenic and nonpathogenic bacteria elsewhere, and immune responses (inflammation) to bacterial endotoxins.

Enteric Pathogens in Urban Dust

According to the World Health Organization, there is no tolerable lower limit for pathogens; hence, water intended for preparing food and drink or for personal hygiene should not contain human pathogenic agents. The same should hold for air, dust, and soil. Although such a goal might sound unreachable, sanitary efforts should move in that direction. Although the presence of pathogens, especially those from feces, is particularly dangerous in foodstuffs and water, it is no less of a health threat in urban dust, which is continuously ingested, directly or indirectly, as mentioned earlier in the text.


In a recent study, urban dust from Mexico City showed a high content of Escherichia coli isolates harboring genes associated with different pathotypes, such as EAEC (enteroaggregative E. coli), EPEC (enteropathogenic E. coli), and STEC (Shiga toxin-producing E. coli), classifications that refer to the pathogenic mechanism or the mode of adherence of the bacterium to tissue-cultured human epithelial cells. This finding could be related to the results of several epidemiological studies in Mexico City, where a high prevalence of these pathotypes was also found in human and animal feces. However, the isolates from urban dust yielded PCR (polymerase chain reaction) amplicons for only a small fraction of the complete array of virulence genes described in reference strains obtained from patients. This could mean that the nonamplified genes are not present, or that the alleles are present yet divergent enough in their nucleotide sequence to prevent PCR amplification; in any case, the "true" virulence of these isolates could not be confirmed. Isolates from various environments, not only dust, do not necessarily show the genotype and phenotype of known reference strains, as has been noted previously. Adding to this variability, uropathogenic E. coli (UPEC) strains have been described that contain EAEC virulence genes or that present the typical EAEC cell-adherence pattern plus some EAEC genetic markers. This indicates potential horizontal gene transfer in the environment, resulting in strains with both diarrheagenic and uropathogenic potential.

The proposal that a specialized subset of bacterial strains can reproduce and persist in the environment, outside animal hosts, has been put forward and illustrated in a recent study comprising 190 environmental E. coli isolates. They were isolated from sand cores in the wave-wash zone of six lake beaches, representing a variety of fecal input sources, and were studied by multilocus sequence typing (MLST). The study revealed the presence of persistent genotypes or genetic lineages comprising the autochthonous members of the microbial community. This is certainly a distinct possibility that could well be occurring in strains from dust, given their variability, as discussed earlier in the text; the issue warrants further investigation.

The genetic diversity of environmental isolates of enteric bacteria, whose implications are not understood, prompts several questions: What is the role of the "incomplete" sets of enterovirulence genes in environmental bacteria? How much horizontal gene transfer occurs in the environment, inside and outside animal hosts? Which environmental selective forces determine the assortment of bacterial genotypes? And are comprehensive diagnostic assays for bacterial enteropathogens feasible, in view of their genetic heterogeneity?

Health risks other than the obvious one, that enteropathogenic microorganisms can be ingested after contaminating food or water, must also be considered. Bacteria harboring only a few virulence genes can contribute them to the "gene pool" available through horizontal gene transfer, and new pathotypes can emerge from the assembly of genes from different origins. Bacterial abilities related only to open-environment survival, such as adherence and biofilm formation, can, in new cellular backgrounds, enable previously innocuous bacteria to colonize or even infect animal hosts. This complex panorama scales up when adding another set of determinants to this gene flux: antibiotic resistance genes.

Antimicrobial Resistance in Environmental Fecal Bacteria

There is little information on the amount and diversity of bacterial species originating from fecal pollution in the air and dust of cities; most of it deals with E. coli and related fecal bacteria. Even less is known about antibiotic resistance among these microorganisms. For instance, recent reports on the prevalence of resistance phenotypes and of mobile genetic elements linked to resistance in E. coli isolates from outdoor urban dust indicate a close resemblance to strains found in clinical settings. Although these observations could be extended to other Enterobacteriaceae, this bacterial family actually represents a minimal fraction of the fecal microbiota; therefore, little is known about the fate of the most prevalent gut bacteria once they enter open environments. In urban settings, it is assumed that most fecal contamination comes from humans and domestic animals, mostly pets. Both are often treated with antibiotics; hence they are common reservoirs of resistant strains (Fig. 2). The prevalence of resistance might reflect antibiotic usage at each location, although this is not always the case for nonpathogenic bacteria. Fecal contamination could also provide a means for resistance genes considered to be restricted to hospitals, such as those encoding extended spectrum beta-lactamases (ESBLs) or glycopeptide resistance, to escape into the community.

Three relevant questions arise: Is antibiotic resistance maintained in open environments once bacteria are released? What are the underlying maintenance mechanisms, if any? And do these traits pose a health threat, an environmental threat, or both? There is copious evidence that antibiotic-resistance determinants can be found in environments where antibiotics are not present, indicating both that resistant bacteria selected by antibiotics can be dispersed widely and that resistance is itself resistant to elimination. In this respect, there is the interesting proposal that, in nature, antibiotics function at subinhibitory concentrations as modulators of gene expression; accordingly, it is easy to envision the appearance of antibiotic-resistant strains even in the absence of therapeutic concentrations of such antibiotics. There is no reason to assume a different behavior of resistance traits in urban settings, unless some of them pose an adaptive disadvantage or are lost quickly under nonselective conditions. Early notions about a supposed fitness cost of carrying resistance genes, or genetic elements containing resistance genes, in the absence of selective pressure were mostly proven wrong. Perhaps some resistance traits, such as nitrofurantoin resistance (which depends on the suppression of protective enzymes), could be an actual disadvantage in the absence of selective pressure, and also a reason for its low prevalence even in clinical settings. But for most resistance mechanisms no disadvantage, other than useless energy consumption, has been detected or can be foreseen; there is thus no reason why such traits could not be maintained in the bacterial population once it is released from an animal host into the urban environment.

Fig. 2 Potential pathways for antibiotic-resistant bacteria and their resistance genes. Environmental isolates of E. coli (center) have been found in different surveillance studies; these could come directly from people or animals treated with antibiotics (top left), releasing feces into the environment, or from other unknown sources, and become enriched by a variety of selective pressures that favor resistant bacteria; they could also result from the transfer of resistance genes from bacteria other than E. coli in the open environment (bottom center). In any case, these resistant E. coli cells can contaminate food or water, infecting people (top right), or can transfer resistance genes to other bacterial species that can, in turn, infect people through ingestion or other paths.

In addition to the natural stability of resistance determinants, some factors may further increase the prevalence of some of them in an urban setting. Antibiotics are present in active form, and sometimes in copious amounts, in the urine and feces of treated people and animals; hence, if the environment can be contaminated with fecal bacteria, the same can happen with antibiotics. Some drugs, such as the beta-lactams, have an ephemeral presence, but others, such as the aminoglycosides, are quite stable. Genes that encode resistance to other contaminants, such as heavy metals, are often linked to antibiotic resistance genes; such contaminant-resistance loci allow the coselection of antibiotic resistance even in the absence of antibiotics, and these contaminants are often found in cities. Also, some "protective" genes that have a role in antibiotic resistance shield bacteria from environmental stresses, some particular to urban environments, such as ozone or disinfectants. Overall, far from being a disadvantage, at least some resistance determinants could actually be useful for bacteria in an urban setting, so that they are actively maintained within the bacterial population.

Having resistance genes in free-living bacteria poses at least two possible threats: (1) such bacteria could directly cause infections, which would therefore be more difficult to treat, and (2) such bacteria could act as reservoirs of resistance genes. The first option has the obvious public health consequence of making contaminant fecal bacteria a much more complicated issue than their mere presence, as it would make infections more debilitating and expensive. The second option is only beginning to be assessed, as the role of these resistance reservoirs has been mostly overlooked. Many resistance genes reside on mobile genetic elements, and horizontal gene transfer has been recognized as a main component of resistance dispersal. Such transfer could occur in the soil, involving, for instance, nonvirulent but antibiotic-resistant enteric bacteria and susceptible free-living opportunistic pathogens. The transfer could also occur within a person or an animal that has been colonized by a resistant fecal contaminant, whose resistance determinants can then be mobilized to bacteria of the host's own microbiota. Although the probability of such events is rather low, the overabundance of fecal contaminants and of resistance genes raises it.

So far, the risks posed by living bacteria have been considered, either as "complete" pathogens of the intestinal tract or as donors and recipients of virulence and resistance genes that might assemble into new combinations whose health and environmental effects are difficult to predict. However, even when dead, Gram-negative bacteria can pose a further risk when they contaminate urban dust: endotoxins from these bacteria and their debris exert a potent effect on the immune system, leading to chronic airway inflammation and other problems.


Bacterial Endotoxins and Respiratory Tract Inflammation

The proportion of Gram-negative bacteria increases when ground surfaces become contaminated with fecal material. Endotoxin is a highly thermoresistant, biologically active molecule from Gram-negative bacteria that can be present as a contaminant in both urban dust and air, representing a health risk. Endotoxin is a structural component of the lipopolysaccharide (LPS) of the outer membrane, which can be liberated from the bacterial wall during cell growth, division, and death. LPS is composed of hydrophilic polysaccharides covalently linked to a hydrophobic lipid moiety named lipid A. LPS varies among strains, resulting in variations in biological potency; hence, there are various types of endotoxins. Lipid A is a semiconserved moiety in which the induction of most of the biological and toxic effects resides. It has been proposed that LPS represents a common signal that allows vertebrates to sense the presence of Gram-negative bacteria in their tissues and that it helps bacteria persist in host epithelia; the lipid A structure is the fundamental player in all these processes. Since Gram-negative bacteria can grow under minimal nutrient conditions, endotoxins are continuously liberated into the environment, thereby becoming a conspicuous pollutant.

Endotoxin-triggered biological and toxic effects are related to inflammation, in which monocytes and macrophages play a major role. LPS recognition depends on a highly complex mechanism involving several molecules of the innate immune system: LPS-binding protein (LBP), cluster of differentiation 14 (CD14), lymphocyte antigen 96 (MD-2), and toll-like receptor 4 (TLR4). LBP is an endotoxin carrier that requires binding to CD14 for recognition by the cell's TLR4 receptors, which, in turn, require the participation of MD-2 for optimal cell activation. Activated cells produce inflammatory mediators (e.g., cytokines such as tumor necrosis factor alpha (TNFα), interleukin-1 (IL-1), IL-6, and IL-8) that should act locally to prevent systemic infection and inflammation. However, if endotoxins reach the circulation, fever, tissue injury, and even shock can occur.

Larger endotoxin levels have been described in occupational settings where conditions favor the growth of Gram-negative bacteria. Activities in which endotoxin exposure can occur include the handling of vegetal fibers (flax, hemp, jute), flour in the bakery industry, aerosols generated by sewage treatment, and shrimp shells in the fishery industry. Endotoxin levels in occupational settings can reach 1.5 mg m−3. In general, occupational exposure to endotoxin-containing materials is related to "asthma-like" conditions, chronic obstructive pulmonary disease, or even pulmonary fibrosis; the highest exposures occur in workplaces involving animal handling. Since the endotoxin levels found in various occupational settings are related to deleterious health effects, the current occupational thresholds (6 mg m−3 in the breathable particle fraction) need revision. Member states of the European Union have adopted lower levels for wood dust (0.5 mg m−3) and are working toward a new general threshold of 50 endotoxin units (EU) per m3 of air (5 ng m−3, if values are referenced to the standard endotoxin from E. coli-5 (EC-5)). More recently, endotoxins have been identified as components of the particulate matter (PM) that pollutes the air (indoors and outdoors), resulting from the presence of bacteria in resuspended dust.
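The equivalence quoted above, 50 EU m−3 against 5 ng m−3 for the EC-5 reference, implies a potency of about 10 EU per ng. A small helper is sketched below on that assumption (actual potency varies with the endotoxin preparation and assay, so this is illustrative, not a standardized conversion).

```python
EU_PER_NG = 10.0  # implied by 50 EU/m3 = 5 ng/m3 for the EC-5 reference standard

def activity_to_mass(eu_per_m3: float, eu_per_ng: float = EU_PER_NG) -> float:
    """Convert airborne endotoxin activity (EU/m3) to mass concentration (ng/m3)."""
    return eu_per_m3 / eu_per_ng

# The proposed EU-wide general threshold of 50 EU per m3 of air:
print(activity_to_mass(50.0))  # -> 5.0 ng/m3, matching the figure in the text
```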
PM, which contains an important fraction of settled dust, is a commonly monitored air pollutant in urban settings, together with ozone, sulfur and nitrogen dioxides, lead, and carbon monoxide. PM shows the strongest associations with most of the adverse health effects related to air pollution; asthma and cardiovascular mortality are examples. Although there is no clear indication of the mechanisms involved, proinflammatory and prooxidative processes play a central role. Adverse health effects are linked to both the concentration and the size of PM. Higher concentrations have higher impacts, and no safe thresholds have been described. Size determines penetration into the body; two main fractions are monitored in most cities: PM10 and PM2.5 (particulate matter with mean aerodynamic diameter equal to or smaller than 10 and 2.5 μm, respectively). However, little is known about the relative contribution of individual PM components to the overall effects.

The study of endotoxins present in PM sheds some light on the role of specific components in PM-related asthma occurrence. Endotoxins are present in both PM fractions, though larger amounts are found in PM10 than in PM2.5. The endotoxin content of urban PM varies considerably by region and season (0.7–96.8 EU mg−1 of PM10), as does its concentration in air (0.03–5.44 EU m−3). Rural settings tend to have higher levels than urban ones, and both are much lower than occupational levels. Some studies have reported adverse health effects when nonoccupational indoor settled-dust concentrations are in the range observed for outdoor air. Indoor endotoxin levels also vary by region, particle size, and the influence of outdoor air, and have been positively associated with respiratory symptoms. Active research on gene–environment interactions has identified susceptible populations prone to asthma in which endotoxins and genetic polymorphisms participate; single nucleotide polymorphisms in CD14 are among them.

Additional in vivo and in vitro experimentation has provided evidence that endotoxins present in PM potentiate its proinflammatory effects, by cell receptor induction or cell activation. In vitro experimentation indicates the participation of endotoxins and metals (e.g., vanadium) present in PM as important mediators in the secretion of IL-1β by exposed macrophages. This response, in turn, stimulates myofibroblasts to upregulate PDGF-Rα, a molecule that enables myofibroblast migration and proliferation, a central aspect of asthma pathogenesis. LPS present in PM also upregulated platelet-derived growth factor receptor alpha (PDGF-Rα) in myofibroblasts, suggesting the existence of additive effects among PM components. Another set of experimental evidence indicates that endotoxins present in PM are important inducers of cell activation when monocytic and endothelial cells are exposed in vitro to PM. Since these effects have been observed in cells relevant to the respiratory tract and the cardiovascular system, they provide biological plausibility for the cardiopulmonary diseases linked to air pollution, such as asthma and procoagulant activity, occurring in exposed human populations.

Additional epidemiological evidence indicates positive associations between early-life exposure to microbes and their products, such as LPS, and the development of tolerance. Hence, microbes and endotoxins play an important immunoregulatory role that could prevent the later development of allergies or asthma. This has been named the "Hygiene Hypothesis."

Fig. 3 Diverse health consequences of fecal pollution of urban dust. Microorganisms mixed into urban dust can be grouped as clear pathogens (e.g., EPEC and Salmonella) and possible or nonpathogens (such as E. coli harboring none or only some virulence genes). On exposure to pathogens, humans can develop enteric infections. However, bacterial debris from fecal microorganisms can elicit immune responses, mainly inflammation. Potentially, genetic exchange of virulence or antibiotic-resistance genes can transform nonpathogens and even nonfecal, environmental bacteria into pathogens capable of causing diverse infections. Should any of these infectious agents gain resistance traits, antibiotic treatments are likely to fail.

Final Considerations

The presence of fecal material in the dust people ingest and inhale in cities, especially in developing countries, poses detected and potential risks to human health (Fig. 3). Enteric pathogens causing diarrheal diseases, and bacterial debris eliciting inflammatory responses, are clear causes of concern. But the presence of "incomplete" sets of virulence genes, as well as of antibiotic resistance genes, which can be exchanged between bacteria in the open environment, can pose infectious threats just as important. Governments and societies must recognize these risks and take measures to curb the fecal pollution of urban dust. "Ill cities," so characteristic of poor countries, cluster and synergize many or all of the problems mentioned above: lacking or deficient sanitary infrastructure, overcrowding, deficient regulation or enforcement of health and sanitary issues, increased prevalence of infectious diseases, abuse of antimicrobial drugs, and so on. But developed cities also fail to prevent many of the issues that result in fecal contamination of urban dust.

See also: Management and Export of Wastes: Human Health Implications; Microbial Risks Associated with Biogas and Biodigestor Sludge.

Further Reading

Abe, C.M., Salvador, F.A., Falsetti, I.N., et al., 2008. Uropathogenic Escherichia coli (UPEC) strains may carry virulence properties of diarrheagenic E. coli. FEMS Immunology and Medical Microbiology 52, 397–406.
Alfaro-Moreno, E., López-Marure, R., Montiel-Dávalos, A., et al., 2007. E-selectin expression in human endothelial cells exposed to PM10: The role of endotoxins and insoluble fraction. Environmental Research 103, 221–228.
Amábile-Cuevas, C.F., 2003. New antibiotics and new resistance. American Scientist 91, 138–149.
Araujo, J.M., Tabarelli, G.F., Aranda, K.R.S., et al., 2007. Typical enteroaggregative and atypical enteropathogenic types of Escherichia coli are the most prevalent diarrhea-associated pathotypes among Brazilian children. Journal of Clinical Microbiology 45, 3396–3399.
Arnone, R., Walling, J., 2007. Wastewater pathogens in urban watersheds. Journal of Water and Health 5, 149–162.
Bonner, J.C., Rice, A.B., Lindroos, P.M., et al., 1998. Induction of the lung myofibroblast PDGF receptor system by urban ambient particles from Mexico City. American Journal of Respiratory Cell and Molecular Biology 19, 672–680.
Braun-Fahrlander, C., Riedler, J., Herz, U., et al., 2002. Environmental exposure to endotoxins and its relation to asthma in school-age children. New England Journal of Medicine 347, 869–877.
Davies, J., Spiegelman, G.B., Yin, G., 2006. The world of subinhibitory antibiotic concentrations. Current Opinion in Microbiology 9, 445–453.
Delahoy, M.J., Wodnik, B., McAliley, L., Penakalapati, G., Swarthout, J., Freeman, M.C., Levy, K., 2018. International Journal of Hygiene and Environmental Health 221 (4), 661–676.
Ercumen, A., Pickering, A.J., Kwong, L.H., Arnold, B.F., Parvez, S.M., Alam, M., Sen, D., Islam, S., Kullmann, C., Chase, C., et al., 2017. Environmental Science & Technology 51 (15), 8725–8734.


Headey, D. Newsflash: Chickens don't use toilets: Why global WASH efforts should start focusing on animal feces. http://www.ifpri.org/blog/newsflash-chickens-dont-use-toilets.
Headey, D., Nguyen, P., Kim, S., Rawat, R., Ruel, M., Menon, P., 2017. Is exposure to animal feces harmful to child nutrition and health outcomes? A multicountry observational analysis. The American Journal of Tropical Medicine and Hygiene 96 (4), 961–969. https://doi.org/10.4269/ajtmh.16-0270.
Huang, D.B., Mohanty, A., DuPont, H.L., Okhuysen, P.C., Chiang, T., 2006. A review of an emerging enteric pathogen: Enteroaggregative Escherichia coli. Journal of Medical Microbiology 55, 1303–1311.
Kaur, M., Graham, J., Eisenberg, J.N.S., 2017. Livestock ownership among rural households and child morbidity and mortality: An analysis of demographic health survey data from 30 sub-Saharan African countries (2005–2015). The American Journal of Tropical Medicine and Hygiene 96 (3), 741–748. https://doi.org/10.4269/ajtmh.16-0664.
Kemper, N., 2008. Veterinary antibiotics in aquatic and terrestrial environment. Ecological Indicators 8, 1–13.
Lacher, D.W., Steinsland, H., Blank, T.E., Donnenberg, M.S., Whittam, T.S., 2007. Molecular evolution of typical enteropathogenic Escherichia coli: Clonal analysis by multilocus sequence typing and virulence gene allelic profiling. Journal of Bacteriology 189, 342–350.
Mentula, S., 2006. Relatedness of Escherichia coli strains with different susceptibility patterns isolated from beagle dogs during ampicillin treatment. International Journal of Antimicrobial Agents 27, 46–50.
Mohamed, J.A., Huang, D.B., Jiang, Z.-D., et al., 2007. Association of putative enteroaggregative Escherichia coli virulence genes and biofilm production in isolates from travelers to developing countries. Journal of Clinical Microbiology 45, 121–126.
Moreira, F.C., Vieira, M.A.M., Ferreira, A.J.P., et al., 2008. Escherichia coli strains of serotype O51:H40 comprise typical and atypical enteropathogenic E. coli (EPEC) strains and are potentially diarrheagenic. Journal of Clinical Microbiology 46, 1462–1465.
Mueller-Anneling, L., Avol, E., Peters, J.M., Thorne, P.S., 2004. Ambient endotoxin concentrations in PM10 from Southern California. Environmental Health Perspectives 112, 583–588.
Munford, R.S., 2008. Sensing Gram-negative bacterial lipopolysaccharides: An important determinant of human disease? Infection and Immunity 76, 454–465.
Odagiri, M., Schriewer, A., Daniels, M.E., Wuertz, S., Smith, W.A., Clasen, T., Schmidt, W.-P., Jin, Y., Torondel, B., Misra, P.R., Panigrahi, P., Jenkins, M.W., 2016. Human fecal and pathogen exposure pathways in rural Indian villages and the effect of increased latrine coverage. Water Research 100, 232–244. https://doi.org/10.1016/j.watres.2016.05.015.
Oosterom, J., 1998. The importance of hygiene in modern society. International Biodeterioration & Biodegradation 41, 185–189.
Osornio-Vargas, A.R., Bonner, J.C., Alfaro-Moreno, E., et al., 2003. Proinflammatory and cytotoxic effects of Mexico City air pollution particulate matter in vitro are dependent on particle size and composition. Environmental Health Perspectives 111, 1289–1293.
Park, J.H., Spiegelman, D.L., Burge, J.A., Gold, D.R., Chew, G.L., Milton, D.K., 2000. Longitudinal study of dust and airborne endotoxins in the home. Environmental Health Perspectives 108, 1023–1028.
Penakalapati, G., et al., 2017. Exposure to animal feces and human health: A systematic review and proposed research priorities. Environmental Science & Technology 51 (20), 11537–11552.
Rosas, I., Salinas, E., Martinez, L., et al., 2006. Urban dust fecal pollution in Mexico City: Antibiotic resistance and virulence factors of Escherichia coli. International Journal of Hygiene and Environmental Health 209, 461–470.
USEPA (2002) National menu of best management practices for storm water phase II. http://www.epa.gov/npdes/menuofbmps/menu.htm (accessed January 2010).
Walk, S.T., Alm, E.W., Calhoun, L.M., Mladonicky, J.M., Whittam, T.S., 2007. Genetic diversity and population structure of Escherichia coli isolated from freshwater beaches. Environmental Microbiology 9, 2274–2288.
William, G., O'Neill, C., Wellington, E., Hawkey, P., 2008. Antibiotic resistance in the environment, with particular reference to MRSA. Advances in Applied Microbiology 63, 249–280.

Antarctic: Persistent Organic Pollutants and Environmental Health in the Region
Simonetta Corsolini, University of Siena, Siena, Italy
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
AhR  Aryl hydrocarbon receptor
CFC  Chlorofluorocarbon
CHL  Chlordane
DDT  Dichlorodiphenyltrichloroethane
p,p′-DDE  p,p′-Dichlorodiphenyldichloroethylene
ED  Endocrine disruptor
HCB  Hexachlorobenzene
HCH  Hexachlorocyclohexane
IARC  International Agency for Research on Cancer
n-FR  Novel flame retardant
OCP  Organochlorine pesticide
PBDE  Polybrominated diphenyl ether
PCB  Polychlorinated biphenyl
PCDF  Polychlorinated dibenzofuran
PCDD  Polychlorinated dibenzodioxin
POP  Persistent organic pollutant
TEF  Toxic equivalency factor
TEQ  Toxic equivalent
WHO  World Health Organization

List of species: Adèlie penguin: Pygoscelis adeliae; Algae; Antarctic krill: Euphausia superba; Antarctic naked dragonfish: Gymnodraco acuticeps; Antarctic scallop: Adamussium colbecki; Antarctic skua: Catharacta antarctica; Antarctic toothfish: Dissostichus mawsoni; Antarctic whelk: Neobuccinum eatoni; Blackfin icefish: Chaenocephalus aceratus; Centropages hamatus; Elephant seal: Mirounga leonina; Emerald rockcod: Trematomus bernacchii; Emperor penguin: Aptenodytes forsteri; Humped rockcod: Gobionotothen gibberifrons; Leopard seal: Hydrurga leptonyx; Mackerel icefish: Champsocephalus gunnari; Naked dragonfish: Gymnodraco acuticeps; Plankton; Silverfish: Pleuragramma antarctica; Snow petrel: Pagodroma nivea; South Polar skua: Catharacta maccormicki; Weddell seal: Leptonychotes weddellii.

Introduction

Ecosystems and Contamination in Antarctica

Antarctica is a snow-covered continent surrounded by the Southern Ocean, which isolates it from other land masses (Fig. 1). The Antarctic circumpolar current (ACC) is a physicochemical boundary that isolates the Southern Ocean from the other oceans. The geographic isolation and extreme climate of Antarctica and the Southern Ocean are responsible both for their late discovery by humans and for the absence of any anthropic impact (towns, industry, and mining), except for the scientific stations. Unfortunately, many studies have demonstrated that even this remote continent and ocean have been reached by contaminants such as persistent organic pollutants (POPs).

Recently, microplastics have also been detected in Antarctic surface seawaters. The presence of plastic debris is an emerging problem affecting all oceans and seas; the debris can consist of large pieces of plastic or of nano- and microplastics of diverse origins (e.g., large plastic debris can degrade into smaller and smaller pieces). Although the Southern Ocean is isolated from the other oceans, plastics can be found there owing to local sources (accidental release from scientific stations and tourist vessels) or to long-range transport by sea currents. Tourism in Antarctica is another emerging and worrisome issue,


Change History: December 2017. Simonetta Corsolini has made changes throughout the text. This is an update of S. Corsolini, Antarctic: Persistent Organic Pollutants and Environmental Health in the Region, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 83–96.


https://doi.org/10.1016/B978-0-12-409548-9.11016-4


Fig. 1 Map of Antarctica and the Southern Ocean. (Modified from Map 13351: Base map of Antarctica and the Southern Ocean, courtesy of the Australian Antarctic Division, downloaded at https://data.aad.gov.au/aadc/mapcat/display_map.cfm?map_id=13351, 9 November 2017; Scale 1:60, Projection: Polar Stereographic.)

because the impact of thousands of people on the marine and terrestrial ecosystems may be very high, given their fragility. The International Association of Antarctica Tour Operators (IAATO) reported that approximately 44,000 tourists reached Antarctic seawaters in the 2016/2017 season and estimated that 12,400 of them landed. Chemical contamination of Antarctic ecosystems was first reported in 1966, and since then there has been increasing interest in studying and monitoring the presence of pollutants in this pristine region of the world. This awareness has grown in recent years, after the Arctic was reported to be a final sink for POPs.

Fig. 2 The long-range transport of POPs (schematic).

The ACC isolates the Southern Ocean from oceanic inputs, which are consequently evaluated to be very low. The North Atlantic Deep Water (NADW) lies at approximately 2 km depth and flows southward; its path can ultimately be traced into the Southern Ocean, where it mixes with the ACC. It brings waters from the Northern Hemisphere, where POPs are largely used; some researchers think that the NADW carries contaminants from the boreal hemisphere that were used at least a couple of years earlier. Migratory animals (South Polar skuas Catharacta maccormicki, other seabirds, and whales) may be a minor source of pollutants in polar regions through their excrement and carcasses. The presence of synthetic and toxic chemicals in Antarctic ecosystems is partially associated with the activities of the scientific stations and, in recent years, with tourism; nevertheless, the main source of pollutants for this remote continent is atmospheric transport. Volatile or semivolatile contaminants may be transported to the remote Antarctic continent mainly by air masses. Cold condensation and global fractionation have been proposed as mechanisms whereby POPs can reach polar regions; both POP condensation and fallout depend on the physicochemical properties of the molecules and on air temperature (Fig. 2). Owing to the extremely cold climate and winter darkness, the degradation of deposited POPs is very slow in the polar regions, and they may become entrapped in the ice. Through ice melting, POPs are released again into the ocean, where they enter the food webs (Fig. 3), bioaccumulate in the tissues of organisms, and biomagnify.

Antarctic trophic webs are relatively simple and short: animals at the top of the food webs depend on a few key species, such as silverfish (Pleuragramma antarctica) and krill (Euphausia superba), which are the prime food source for several bird species and marine mammals, depending on them either directly or indirectly (Fig. 4). Hence, a decline in the stocks of key species can have devastating impacts on the marine ecosystem. Owing to differences in geographical features and ecosystem characteristics, organisms inhabiting Antarctica are exposed to different levels and patterns of organochlorine compounds; the evaluation of contaminant concentrations in their tissues provides information on the extent of contamination in these remote areas of the globe. Furthermore, because marine species living in the polar regions have a greater lipid content than temperate or tropical species, they are prone to accumulating high concentrations of persistent, toxic, and lipophilic contaminants. An interesting example is given by krill and zooplankton: being very rich in lipids, even on their body surface, they show unexpectedly high POP concentrations relative to their low trophic level.

Pollutants

POPs include several groups of chemicals (Fig. 5) with similar structures and physicochemical properties that elicit similar toxic effects. They have been used extensively worldwide in agricultural (pesticides, e.g., aldrin, chlordanes, dichlorodiphenyltrichloroethane (DDT), dieldrin, endrin, hexachlorobenzene (HCB), mirex, heptachlor, toxaphenes), industrial (PCBs, PBDEs), and health (DDT) applications; moreover, polychlorinated dibenzodioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) are by-products. Among these chemicals, the PBDEs, for instance, are used worldwide as flame retardants (Fig. 5). They pose recognized health risks (liver and thyroid toxicity, neurodevelopmental toxicity), and their use has been restricted since 2003 (initially in some states of the U.S.A. and in the European Community, with EU Directives 2002/95/CE and 2011/65/CE). A list of the new POPs is available at http://chm.pops.int/TheConvention/ThePOPs/TheNewPOPs/tabid/2511/Default.aspx. All these chemicals are synthetic, ubiquitous, and hydrophobic, and show long-range transport potential. They are persistent in soils and sediments, with environmental half-lives ranging from years to several decades or more. They are not very volatile and show high chemical and thermal stability


Fig. 3 The relationship between the abiotic and biotic distribution of POPs (A: 1, evaporation from soil and water bodies; 2, wet and dry deposition; 3, abiotic–biotic exchange), and the release of POPs into seawater during summer ice melting (B).

Fig. 4 Outline of an Antarctic trophic web (schematic).

and low biodegradability. Because of their resistance to biodegradation, they are also called xenobiotics. These chemicals bioaccumulate in the lipid components of the tissues of organisms and accumulate through food webs; consequently, the principal route of chronic exposure for both animals and humans is the diet. Mounting evidence suggests that populations of various animal species are, or have been, adversely affected by exposure to POPs. Some xenobiotics mimic natural hormones and are defined as xenoestrogens, namely, environmental chemicals that act as estrogens. Effects on the functioning of the endocrine system are the first damages to be detected. In fact, some POPs are known as endocrine disruptor compounds (EDCs), meaning that they are able to interfere with functions of the endocrine system (Fig. 6), although not all POPs are EDCs; it has been suggested that only those that cause adverse effects on individual organisms through primary effects on endocrine systems, potentially leading to population- and community-level impacts, are EDCs. They include the most widespread and well-known classes of contaminants: PCBs, PCDDs, PCDFs, PBDEs, polybrominated biphenyls

Fig. 5 Selection of persistent organic pollutants.

Fig. 6 Schematic mechanism of action of endocrine disruption. It shows the competition between the endogenous hormone (EH) and the endocrine disruptor chemical (ED) for the receptor (R), which is the first step of the mechanism. The resulting action in normal conditions (the EH–R complex) can be reduced or amplified in relation to the agonistic or antagonistic role of the ED.


(PBBs), perfluorinated compounds (PFCs), and other halogenated hydrocarbons, often used as pesticides.

The most toxic POPs are the PCDDs and the PCDFs. They are structurally similar chlorinated hydrocarbons, produced as by-products in many technical mixtures of halogenated compounds, including pesticides, and during paper and pulp bleaching. They also arise from urban and industrial waste incineration, metal production, and fossil fuel and wood combustion, and they are still present in PCB-filled electrical transformers. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD or TCDD) is known to be the most toxic compound for organisms. The International Agency for Research on Cancer (IARC) announced in 1997 that the most potent dioxin, 2,3,7,8-TCDD, is considered a Group 1 carcinogen, meaning a known human carcinogen, and this has been confirmed by the Environmental Protection Agency (EPA). TCDD binds the cytosolic aryl hydrocarbon receptor (AhR) to form a ligand–receptor complex that can enter the cell nucleus and interfere with the expression of some genes. This interaction contributes to an increase in, or induction of, 2,3,7,8-TCDD-inducible genes such as CYP1A1. Receptor binding affinities decrease as the lateral substitutions decrease.

Toxic effects due to POPs include cancer; reproductive and developmental problems (e.g., low birth weight, hormone alterations, lower IQ, and emotional problems); alterations of the immune system, such as a decreased ability to fight cancer and infections; endocrine disruption (affecting the thyroid and sex hormones); central nervous system defects; liver damage; skin and eye disease; and death. Apart from TCDD, many other chemicals elicit the same toxic effects owing to their isostereoisomerism with TCDD. Many POPs act in the same way as TCDD and are known as dioxin-like compounds; these include all the PCDDs and PCDFs that have chlorine atoms in the 2,3,7,8 positions on the molecule, plus certain specific PCBs and other compounds that can be isostereoisomers of TCDD (Fig. 7) and show AhR-mediated responses in cells.

Toxic equivalency factors (TEFs) express the toxic potency of a chemical relative to that of TCDD and can be used to calculate 2,3,7,8-TCDD toxic equivalents (TEQs), an important tool for estimating risk for organisms. The method rests on the facts that dioxins cause AhR-mediated effects and that exposure is typically to mixtures of dioxins. TEF values are based on the in vitro and in vivo induction potency via the AhR; TCDD was assigned a TEF of 1, and consequently all the other chemicals have a TEF lower than 1. The total toxicity can be calculated as follows:

TEQ = Σn1 ([PCDDi] × TEFi) + Σn2 ([PCDFi] × TEFi) + Σn3 ([PCBi] × TEFi)

where [PCDDi], [PCDFi], and [PCBi] are the concentrations of each congener, TEFi is the specific TEF value of each congener, and the sums Σn1, Σn2, and Σn3 run over the congeners of each class of contaminants.
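In code, the TEQ calculation above is simply a TEF-weighted sum over the measured congeners. The Python sketch below illustrates it; the sample concentrations are invented for illustration, while the TEF values shown correspond to the WHO scheme (TEF = 1 for 2,3,7,8-TCDD by definition, lower values for the other dioxin-like congeners).

```python
# TEQ = sum over congeners of (concentration x TEF); here in pg TEQ per g.

def total_teq(concentrations: dict, tefs: dict) -> float:
    """TEF-weighted sum of dioxin-like congener concentrations."""
    return sum(conc * tefs[congener] for congener, conc in concentrations.items())

# TEFs: 1 for 2,3,7,8-TCDD by definition; 0.1 and 0.03 are the WHO (2005)
# values for PCB126 and PCB169. Sample concentrations are hypothetical.
tefs = {"2,3,7,8-TCDD": 1.0, "PCB126": 0.1, "PCB169": 0.03}
sample_pg_per_g = {"2,3,7,8-TCDD": 0.2, "PCB126": 3.0, "PCB169": 5.0}

print(total_teq(sample_pg_per_g, tefs))  # 0.2*1 + 3*0.1 + 5*0.03 = 0.65 pg TEQ/g
```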

Fig. 7 Chemical structures of 2,3,7,8-TCDD and PCB169 (3,3′,4,4′,5,5′-hexachlorobiphenyl), and rotation of the C–C bond of the biphenyl that allows the planar configuration of a dioxin-like PCB.


Because polar organisms have developed few means of dealing with xenobiotic substances, evaluating levels and potential toxicity in Antarctic species is an important way of understanding the biological impacts in organisms whose detoxifying enzyme systems are not yet fully understood. Hundreds of thousands of different industrial chemicals are or have been produced worldwide, but only a few of them have been studied in Antarctic organisms; many of these chemicals are listed in, or are under consideration by, the Stockholm Convention. They are pesticides (aldrin, CHLs, DDT, dieldrin, endrin, heptachlor, mirex, and toxaphene), industrial chemicals (PCBs, HCB), and unintentional by-products (PCDDs and PCDFs) (Fig. 5); in addition, the study of the so-called emergent POPs in Antarctic ecosystems is crucial because of their increasing contamination at the high latitudes of the Northern Hemisphere. Most volatile chemicals, such as HCB and low-halogenated PCBs and PBDEs, are expected to reach the polar regions faster than less volatile ones, such as the highly halogenated organic pesticides (e.g., OCPs), PCBs, or PBDEs.

Unlike migratory species that reach Antarctica in summer for feeding (e.g., cetaceans) or reproduction (e.g., seabirds), the endemic species of Antarctica can give exact indications of the contamination levels in this remote and still pristine continent. The area characterized by polar or subpolar conditions is delimited by the Antarctic Convergence (the northern boundary of the ACC) and includes the sub-Antarctic islands (Fig. 1). Among POPs, the most studied in Antarctic organisms are PCBs > DDTs > HCB. It is important to clarify that Antarctic biota refers mostly to marine organisms; the terrestrial species of flora and fauna are very few (as expected in a cold desert): lichens, mosses, and small invertebrates; no large plants or animals are found on the Antarctic continent. Two species of plants (Deschampsia antarctica and Colobanthus quitensis) grow on the Antarctic Peninsula. The terrestrial vegetation and fauna, as well as most of the small and large marine animals, grow very slowly and are often very long-lived, in keeping with the extreme environment. The feeding period of marine biota is often concentrated in the summer months, when the climate allows the plankton to bloom and all the species of the trophic webs have food and better conditions to breed.

Health of Antarctic Organisms

Environmental health is usually correlated with human health, and it calls to mind human well-being. Following the World Health Organization definition, it addresses all the physical, chemical, and biological factors external to a person and all the related factors impacting behaviors; it encompasses the assessment and control of those environmental factors that can potentially affect health, and it is targeted toward preventing disease and creating health-supportive environments. Essentially, human health cannot be considered apart from environmental health. Humans are only occasional inhabitants of Antarctica, present for scientific purposes, and their presence and activities are regulated by the Antarctic Treaty; the same treaty also controls tourism, and tourists cannot disembark without permission and the guidance of authorized guides (see https://iaato.org/ for further information). In the light of these considerations, it is very important to study environmental health in Antarctica: this continent is considered crucial to the ecological equilibrium of the planet, with regard, for instance, to the global climate, the freshwater mass balance, the global ecosystem equilibrium, and health. The Antarctic ecosystems are very fragile, and even a very small alteration may cause dramatic, unpredictable, and often irreversible consequences; their resilience capacity is very low. Knowledge of the environmental levels and patterns of toxic persistent contaminants improves our understanding of the related risks for organisms, including humans, and is very important both for evaluating global environmental health and for anticipating other possible consequences at a global scale.

Levels and Patterns of Contaminants

Among xenobiotics, PCBs may be considered the best known, most widely distributed, and most studied. The accumulation of PCB congeners and isomers, their patterns, and their relative abundance can give important information on global transport, bioaccumulation paths, and distribution in organisms. The profile of PCB contamination in Antarctic organisms often differs from that in other parts of the world: lower-chlorinated congeners often show high concentrations with respect to higher-chlorinated ones. Tri- to penta-CBs made up 35%–65% of the total PCB residue in a benthic trophic chain of the Ross Sea (Fig. 8). The same pattern was observed in euphausiids, cryopelagic fish, and bird eggs from the coasts of Adèlie Land; in the same area, penta-CBs predominated in benthic organisms. Fingerprints confirm different patterns between species, likely depending on specific metabolisms, sex, age, and breeding activity. Fish and invertebrates show low detoxifying activity toward PCB138, 153, and 180; their concentrations are lower in prey than in their predators (sea urchin, Antarctic scallop Adamussium colbecki, Antarctic whelk Neobuccinum eatoni < emerald rockcod Trematomus bernacchii; Antarctic scallop < sea star; Antarctic scallop < sea urchin; algae < sea urchin; sea urchin, Antarctic scallop < sea star) (Fig. 9). Comparing the fingerprints of Antarctic seawater and organisms, some important differences emerge: lower-chlorinated PCBs show remarkable concentrations in seawater, whereas higher-chlorinated ones are very low or absent; penta- to octa-chlorobiphenyls are detectable only in organisms, where they make up most of the residue.

Fig. 8 Class-of-isomer composition of PCBs in some species of Antarctic benthic organisms (Rockcod_m = rockcod muscle, Rockcod_l = rockcod liver).

Fig. 9 PCB concentrations (ng g⁻¹ wet wt) in relation to the supposed position of organisms in the trophic web (schematic, not based on analysis of carbon and nitrogen stable isotopes): T. bernacchii (benthic feeder) 14.5; O. validus (omnivorous, filter feeding) 5.82; N. eatoni (necrophagous) 0.793; Y. eightsi (deposit feeder on mud, filter feeding) 0.704; A. colbecki (filter feeding, detritivorous) 0.989; holothurians (filter feeding or detritivorous) 8.179; S. neumayeri (herbivorous) 6.478; I. cordata 1.351.

Fish show remarkable levels of low-chlorinated congeners, which they take up from water and food; in fact, low-chlorinated PCBs, being only moderately hydrophobic, can reach a rapid equilibrium between seawater and fish. In general, the fingerprints of the emerald rockcod and the icefish Chionodraco hamatus, compared with those of the Adèlie penguin (Pygoscelis adeliae) and Weddell seal (Leptonychotes weddellii), point to a possible excretion of lower-chlorinated congeners in fish. Penguins and seals accumulate contaminants mainly from food and therefore show different congener compositions, with most of the residue made up of hexa- and hepta-CBs. A few scientific articles have reported data on the presence of POPs in terrestrial organisms. An interesting study published in 1991 reported the detection of HCB, HCHs, DDTs, and PCBs in moss and lichens from Kay Island, Ross Sea; levels were lower than those detected in similar species from Northern Europe at that time. Interestingly, the alpha-HCH level was higher than the gamma-HCH level (0.17 and 0.04 ng g⁻¹ dry wt, respectively), indicating the arrival of aged air masses; this pattern is typical of remote regions and currently reflects past usage of alpha-HCH-enriched technical mixtures, whereas gamma-HCH is at present found in areas under higher anthropogenic impact, owing to its presence in mixtures of current use.


The trend in moss and lichen samples from the Antarctic Peninsula was gamma-HCH > alpha-HCH (0.71 and 0.32 ng g⁻¹ dry wt, respectively). This pattern is due to the transport of HCHs from areas where they were still used at the time of sampling, and it is typical of most anthropized regions. HCB was detected in lichen and moss from both the Antarctic Peninsula (0.49 ng g⁻¹ dry wt) and Kay Island (Ross Sea, 0.3 ng g⁻¹ dry wt), as were p,p′-DDE and p,p′-DDT. The DDE/DDT ratio was 0.7 in both locations, indicating distance from the application sites but a likely continuing input from distant sources at the time of sampling. PCBs were below detection limits in Kay Island samples (<5 ng g⁻¹ dry wt) and were 9.9 ng g⁻¹ dry wt in the Antarctic Peninsula samples.

The POP concentrations in organisms at the lower levels of the trophic webs (plankton, krill, and invertebrates) are inhomogeneous and span different orders of magnitude. Concentrations do not increase from one trophic level to the next or from smaller to larger organisms, as would be expected. This may be due to various reasons. First, the time of sampling can play an important role in bioaccumulation. Ice melting is reported as one of the major causes of contamination in polar regions, because contaminants trapped in the ice can be released into the seawater during summer (see Fig. 3B). Thus, organisms living under the pack ice (krill, small larvae, and other planktonic organisms) are the first to accumulate POPs, and the POP transfer from seawater to organisms may depend on metabolic rate, temperature, and the physicochemical and accumulation properties of each POP. Because ice melting occurs at different times in different sites, levels detected in planktonic organisms may vary considerably depending on the time of collection. Second, the surface of small planktonic organisms can adsorb particulate organic material containing contaminants derived from ice melting; this could explain the variability of concentrations, which depends on their different surface-to-volume ratios. It seems that no biomagnification occurred between plankton and krill. It is likely that plankton accumulates contaminants mainly through bioconcentration and adsorption, whereas diet may be a minor intake path. The assessment of the bioconcentration factor (BCF) in a pelagic trophic web (phyto- and zooplankton, krill, silverfish, and Adèlie penguin) of the Ross Sea showed that the largest increments in PCB concentration were from water to phytoplankton and from fish to seabirds. The progressive amplification of the quantity of contaminants in organisms is due to bioconcentration and biomagnification; bioconcentration can prevail at the lower levels of the trophic webs, whereas biomagnification can become the main route of contamination at the higher ones, where the feeding habits of a predator play a crucial role in POP uptake (see the sketch below). The presence of HCH isomers was investigated in seawater and krill samples collected in the Ross Sea, at the margin of the pack ice, where melting occurs. Concentrations of HCHs ranged from 0.049 to 0.322 ng g⁻¹ wet weight (wet wt) in krill and from 0.65 to 1.53 pg L⁻¹ in seawater, with a predominance of gamma-HCH in all krill and seawater samples. Heptachlor, heptachlor epoxide, dieldrin, and aldrin have also been detected in Antarctic organisms at low trophic levels.
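The bioconcentration and biomagnification factors discussed here are simple concentration ratios. The short Python sketch below computes them from the tissue PCB values of Fig. 9; the function names and the dissolved-water concentration are illustrative assumptions, not values from the studies summarized above.

```python
import math

# Tissue PCB values (ng g-1 wet wt) taken from Fig. 9 of this article.
pcb_wet_wt = {
    "T. bernacchii (rockcod)": 14.5,
    "O. validus (sea star)": 5.82,
    "A. colbecki (scallop)": 0.989,
    "N. eatoni (whelk)": 0.793,
}

# Hypothetical dissolved PCB concentration in seawater (ng g-1, i.e.
# roughly ng mL-1), for illustration only; not a measured value.
C_WATER = 1e-5

def bcf(c_organism, c_water):
    """Bioconcentration factor: organism concentration over water concentration."""
    return c_organism / c_water

def bmf(c_predator, c_prey):
    """Biomagnification factor: predator concentration over prey concentration."""
    return c_predator / c_prey

for name, c in pcb_wet_wt.items():
    print(f"{name}: log BCF = {math.log10(bcf(c, C_WATER)):.2f}")

# Whelk (predator) vs. scallop (prey): a BMF below 1 matches the article's
# observation that the whelk carries slightly less PCB than its prey.
print(f"BMF whelk/scallop = "
      f"{bmf(pcb_wet_wt['N. eatoni (whelk)'], pcb_wet_wt['A. colbecki (scallop)']):.2f}")
```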
Heptachlor, heptachlor epoxide, dieldrin, and aldrin are thus present in the Antarctic environments, but the paucity of data does not allow any speculation on their bioaccumulation mechanisms and patterns. Among the Antarctic mollusks and benthic invertebrates, the Antarctic whelk showed slightly lower concentrations than its prey, the Antarctic scallop. The scallop is a filter feeder, and feeding habits may be responsible for bioaccumulation; in fact, a filter-feeding organism accumulates more POPs than would be expected from its trophic position in the food web. Moreover, many Antarctic organisms accumulate lipids during the breeding season and to overwinter; as a consequence, they may run a greater risk of accumulating POPs than organisms that store glycogen to overwinter. A decreasing trend of PCB levels might be hypothesized between samples collected at the end of the 1980s and the beginning of the 1990s and those collected in 2000. CHLs have been largely used as insecticides in the Southern Hemisphere. As they are persistent and undergo long-range transport, they can reach Antarctica and enter the trophic webs. All CHL compounds were below the detection limits in krill from the Ross Sea, whereas nonachlor and trans-CHL were detectable in samples from the Weddell seal, in which mirex was also detected. A possible difference in CHL contamination between the Ross Sea and the Weddell Sea may be due to the proximity of the Weddell Sea to the South American lands. Pentachlorobenzene (QCB), PBDEs, polychloronaphthalenes (PCNs), non-ortho PCB congeners, PCDDs, and PCDFs were also detected in krill samples collected in the Ross Sea, confirming the presence of a wide variety of POPs in the Southern Ocean and their accumulation even at the lower levels of the pelagic and benthic trophic webs. Only a few species of Antarctic fish are caught commercially; krill and the Antarctic toothfish (D. mawsoni) are caught by trawls and used both for human consumption and for other purposes (animal diets). Fishing in Antarctica is regulated by the Antarctic Treaty. The demand for fishing in Antarctic seawaters has been increasing recently because fish stocks in other oceans are decreasing; hence, it is important to know the contaminant levels and the health status of stocks of these species. The Antarctic toothfish (a long-lived, large fish heavily fished for human consumption) showed higher concentrations of PCBs, DDTs, and HCB than other species, such as the Antarctic naked dragonfish (Gymnodraco acuticeps) and the emerald rockcod, sampled during the 1987–90 seasons in the Ross Sea (74°S, 164°E), likely in relation to its large size, which allows bioaccumulation with age. The relative abundance of HCB, DDT, and PCBs in some fish species from the Ross Sea and the Weddell Sea differed: it was HCB > DDTs > PCBs in humped rockcod (Gobionotothen gibberifrons), mackerel icefish (Champsocephalus gunnari), and blackfin icefish (Chaenocephalus aceratus) from the Weddell Sea, and PCBs > DDTs > HCB in those species from the Ross Sea. Many samples showed the pattern HCB > p,p′-DDE, which differs from organisms at other locations of the world. DDE has a higher bioconcentration potential (log BCF = 4.7 in fish) than HCB (log BCF = 3.1–4.5 in fish), but HCB is more volatile and more easily transported by air masses (HCB and p,p′-DDE vapor pressures are 1.8 × 10⁻⁶ and 1.7 × 10⁻⁸ atm, respectively).


Therefore, fish-eating predators may accumulate a greater amount of HCB than of p,p′-DDE, which might be less available to organisms in the cold polar regions. A similar phenomenon has already been described for HCHs, with higher concentrations in the Northern Hemisphere. Global transport is responsible for this pattern: owing to global fractionation, highly volatile POPs (e.g., HCB) reach the polar regions quite rapidly compared with heavier molecules. Ice can be a trap for chemicals such as HCHs and HCB, as well as PCBs, and can release them during melting. Fish from the Antarctic Peninsula and from the Ross Sea, seabirds, and krill showed an HCB > p,p′-DDE pattern. On the contrary, seals seem to accumulate more DDE than HCB, as reported for the Weddell seal from the Ross Sea and for the Weddell seal and elephant seal (Mirounga leonina) from the Antarctic Peninsula. DDT and PCB concentrations showed a decreasing time trend in silverfish collected in the Ross Sea during the 1994/95 and 1999/2000 seasons; DDT decreased from 0.3 to 0.06 ± 0.15 ng g⁻¹ wet wt and PCBs from 9.39–138 to 3.51 ± 3.03 ng g⁻¹ wet wt. HCB levels were similar in the two periods, being 4.4 and 4.85 ± 5.49 ng g⁻¹ wet wt, respectively, confirming its worldwide distribution and tendency to accumulate in the polar regions. A dissimilar trend can be observed in the emerald rockcod: samples collected in the same area (Terra Nova Bay, Ross Sea) over a 20-year time span showed a probable decreasing trend in PCB concentrations from the 1987–90 seasons, whereas HCB and DDT both decreased from the late 1980s to 1995 and then increased again in 2000–2002. Higher PCB, p,p′-DDE, and PBDE concentrations were found in this species in 2001 and 2005, and this pattern might be ascribed to a huge iceberg, named B15, which calved from the Ross Ice Shelf in 2000 and broke up into pieces in 2000, 2002, and 2003. The contaminants accumulated in the ice could be released into the environment again as the iceberg melted, and could be responsible for the POP peaks detected in T. bernacchii collected in the area. These trends agreed with those reported for fish from the Weddell Sea (humped rockcod, mackerel icefish, and blackfin icefish). It is interesting to note that chlorinated pesticides (HCB, DDTs, and CHLs) showed similar concentrations and patterns in the 1980s–90s in organisms from Western and Eastern Antarctica, whereas a clear time trend is not easy to determine in either area for PCBs. The initial POP decrease observed in many ecosystems around the world was due to their reduced or discontinued production and use, first in the industrial countries of North America and Europe. The growing population and industrialization of many countries have likely increased the use of many low-cost pesticides and industrial mixtures such as DDT and PCBs. The continuing legal or illegal use of many POPs and their release from stocks of unused chemicals may contribute to the continuing emission and distribution of contaminants in the global environment. Many of these compounds show physicochemical properties that allow their global transport; for many of them (e.g., some pesticides and PBDEs), a clear movement toward the polar regions, which act as a final sink, has already been reported, as well as their consequent bioaccumulation in polar organisms. On the contrary, PCB and PCDD/F concentrations seem to have declined or remained unchanged in the Arctic since 1980.
CHLs, endrin, dieldrin, heptachlor and heptachlor epoxide, mirex, PBDEs, PCNs, PCDDs, and PCDFs have all been detected in various species of Antarctic fish. POP and TEQ concentrations in Antarctic organisms are low compared with those reported for marine species from lower latitudes, and they are among the lowest in the world. For instance, TEQs varied between 0.11 pg g⁻¹ wet wt in the muscle of the Antarctic toothfish and 13.76 pg g⁻¹ wet wt in the muscle of the mackerel icefish, whereas values may reach 100 pg g⁻¹ in organisms from temperate and tropical regions. The presence of most industrial persistent contaminants in Antarctic fish confirms that Antarctic ecosystems are no longer pristine. Their monitoring will become more and more important because of the fragility of the trophic webs of such an extreme environment; moreover, the ice caps are a sink for these contaminants, which can be released with melting even after the cessation of their use worldwide. Owing to global warming and the consequent reduction of the ice caps, an increasing amount of contaminants can be released into the seawater and then enter the trophic webs. The extreme weather conditions largely affect the physiology and ecology of organisms; feeding habits, lipid accumulation (strongly linked to food availability during the summer months), and a long life span may be considered risk factors. Unusually high concentrations in invertebrates and fish may be of concern not only for the organisms themselves but also for top predators, such as marine mammals and seabirds (due to biomagnification). The evaluation of POP presence in the tissues of seabirds and marine mammals should be done keeping in mind the migratory habits of the organisms. Some species of penguins and seals spend their entire biological cycles in the Southern Ocean and on the Antarctic coasts, whereas other species of seabirds and marine mammals (including cetaceans) forage or breed in Antarctic seawaters in the summer months and then move northward to overwinter. Thus, their ecology can strongly affect the contaminant body burden: organisms that forage or breed in Antarctica during summer and then migrate to northern ranges may accumulate a greater amount of contaminants if they overwinter in polluted areas. Among seabirds, penguins breed on the Antarctic continent or islands and overwinter in the Southern Ocean; they are therefore very interesting in ecotoxicology because they can serve as valid biomonitors of contaminants in the southern polar region. Petrels, fulmars, and other seabird species that breed on the Antarctic continent or islands can migrate very far from the Southern Ocean. Studies of xenobiotic concentrations in the tissues of penguins and flying seabirds have been published since the 1960s, and these articles report data on different families of contaminants (PCBs, DDTs, HCB, HCHs, CHLs, dieldrin, PBDEs, PCNs, PCDDs/Fs, etc.). The data on the presence of PCBs, HCB, and p,p′-DDE in various tissues of penguins and flying seabirds are very useful for comparing results and speculating on the time trend of contamination and the health status of Antarctic seabirds. A sample of emperor penguin (Aptenodytes forsteri) fat collected in 1911 and left in an igloo on Ross Island did not contain any DDT residue. This result has historic meaning because the sample was collected more than 30 years before the beginning of mass DDT use worldwide. PCBs were detected for the first time in penguin eggs from the Ross Sea region at the end of the 1960s.
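The TEQ values quoted above are toxic equivalents: the concentration of each dioxin-like congener is weighted by its toxic equivalency factor (TEF) and the weighted values are summed. A minimal sketch of this arithmetic follows; the TEFs shown are WHO 2005 values, but the congener concentrations are hypothetical numbers invented for illustration, not measurements from Antarctic samples.

```python
# TEQ = sum(concentration_i * TEF_i) over dioxin-like congeners.
# TEFs below are WHO 2005 values; the pg/g concentrations are
# hypothetical, chosen only to illustrate the arithmetic.

WHO_2005_TEF = {
    "2,3,7,8-TCDD": 1.0,
    "2,3,7,8-TCDF": 0.1,
    "PCB-126": 0.1,
    "PCB-169": 0.03,
}

sample_pg_per_g = {          # hypothetical fish-muscle data, pg g-1 wet wt
    "2,3,7,8-TCDD": 0.02,
    "2,3,7,8-TCDF": 0.10,
    "PCB-126": 0.80,
    "PCB-169": 1.50,
}

teq = sum(sample_pg_per_g[c] * WHO_2005_TEF[c] for c in sample_pg_per_g)
print(f"TEQ = {teq:.3f} pg TEQ g-1 wet wt")   # 0.02 + 0.01 + 0.08 + 0.045 = 0.155
```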
Penguin egg samples from the Ross Sea showed decreasing concentrations in the order PCBs > DDTs > HCB, whereas in penguins from the Antarctic Peninsula the order was HCB > DDTs > PCBs. The abundance of these POPs in organisms from Syowa Station (Indian Ocean sector of the Antarctic coasts) was similar to that found in organisms from the Antarctic Peninsula, whereas the profiles in penguins from the Davis and Casey stations were similar to those from the Ross Sea.


These patterns agreed with those observed in fish and invertebrates from the same regions. PCB concentrations appear to increase, and DDT concentrations to decrease, following a clockwise trend around the continent. This is likely due to a mix of factors: the different global transport paths of POPs, the use of different chemicals in various countries, and the movements of air masses within the Southern Hemisphere, as well as on a global scale, may produce differential transport and accumulation in Antarctica. Levels were lower than those detected in other areas of the world, with few exceptions. It is interesting to note that flying seabird species that overwinter north of the Antarctic Convergence showed higher levels than penguins and snow petrels (Pagodroma nivea) that overwinter in Antarctic seawaters. Moreover, skua eggs collected in the Ross Sea showed the typical pattern observed in this area (PCBs > DDE); in skua and petrel eggs collected on the Antarctic Peninsula or in the Indian Ocean sector, the order of abundance was not homogeneous. This may mean that skuas breeding in the Ross Sea overwinter in the same sector of the Southern Ocean, migrating to northern sub-Antarctic islands or to the Australian and New Zealand coasts; this migrating behavior was observed in adults, whereas young specimens can reach the Northern Hemisphere. Migrating seabirds that breed in the other regions of Antarctica may have wider overwintering ranges and reach northern, anthropized lands more easily and rapidly. Interestingly, PCB, DDE, and HCB levels in blood samples of penguins and skuas from both the Antarctic Peninsula and other regions show a profile different from that measured in egg samples: PCBs made up most of the residue, followed by DDE or HCB (Fig. 10). Adèlie penguins show a low capacity to metabolize POPs compared with the south polar skua and humans. Differences in accumulation burden and pattern may be due to the species-specific capacity to metabolize POPs and to different diets. Knowledge of the detoxifying activity of the other penguin species is poor. It has been preliminarily reported that DDT and PCBs may affect hematological and immunological blood parameters in chinstrap penguins, suggesting a potential toxic effect in penguins. It has also been described that contaminant levels in Adèlie penguins vary with diet, being higher when specimens feed on krill, a fatty food resource; at the same time, it seems that contaminant distribution and concentrations in the body also vary with starvation, with muscle and bone accumulating higher levels of POP residues. POPs other than HCB, DDTs, and PCBs have been detected in other tissues of penguins and flying seabirds nesting in Antarctica, and concentrations vary depending on the tissue analyzed and the species. Studies published since 1966 report data on POP levels in fat, heart, kidney, liver, lungs, muscle, pancreas, oviduct and testes, stomach content, guano, and preen gland oil. Concentrations were higher in organisms collected in Antarctic Peninsula seawaters, followed by those from the Indian Ocean sector and, finally, those from the Ross Sea. The concentrations in all these species were in the same range as those reported in many flying seabirds that overwinter in non-Antarctic regions, and values were lower than in the Antarctic skua (Catharacta antarctica).
Other POPs reported in penguins and flying migrating seabirds nesting in Antarctica include CHLs, PCDDs/Fs, PCNs, PFCs, mirex, PBDEs, and novel flame retardants (n-FRs). Recently, the presence of n-FRs has been recorded in T. bernacchii, Pygoscelis papua (n = 1), and brown skua (n = 1) from the Antarctic Peninsula islands; these n-FRs include 2,3-dibromopropyl-2,4,6-tribromophenyl ether (DPTE), bis(2-ethylhexyl)tetrabromophthalate (TBPH), 2-ethylhexyl-2,3,4,5-tetrabromobenzoate (TBB), 1,2-bis(2,4,6-tribromophenoxy)ethane (BTBPE), decabromodiphenyl ethane (DBDPE), pentabromoethylbenzene (PBEB), hexabromobenzene (HBBz), and dechlorane plus, of which only the last was detected in penguin tissues.

Arsenic: Occurrence in Groundwater

Table 1 Groundwater arsenic contamination in selected countries

Country | | Population exposed to arsenic >10 µg L⁻¹ (in millions) | Maximum arsenic concentration (µg L⁻¹) | Source of arsenic and environmental conditions
Vietnam | | 10 | 3,050 (total number of samples = 180; 72% of samples >10 µg L⁻¹; 48% >50 µg L⁻¹) | Natural; Pleistocene and Holocene sediments, strongly reducing condition
Cambodia | 3,700 | 1 | 1,340 (total number of samples = 5,000; 50% of samples >10 µg L⁻¹; 20% >50 µg L⁻¹) | Natural; alluvial/deltaic sediments
Pakistan | 972 | 50–60 | | Natural

Only a few articles report drinking-water arsenic contamination in major cities worldwide. For example, arsenic in the range <0.0003–180 µg L⁻¹ was reported in 992 drinking water sources of randomly selected households in New Hampshire, United States, in 1999; in this study, domestic drilled bedrock wells showed significantly higher arsenic concentrations than municipal water sources. In 2001, an average of 159 µg L⁻¹ of arsenic, ranging between 1 and 3,050 µg L⁻¹, was reported in groundwater samples collected from private small-scale tube wells in the city of Hanoi and the neighboring rural districts of the Red River alluvial tract. In a 2016 study, arsenic levels in private water supplies exceeded the WHO and United Kingdom guideline value of 10 µg L⁻¹ in 5% of 497 properties surveyed in Cornwall, South West England; none of these sampling sites were major municipalities. In a recent study in a major Indian city, Kolkata, arsenic was measured in 262 water samples collected from the municipal supplies of the Kolkata Municipal Corporation (KMC) and from private wells across all 144 wards. Sixty-nine percent of the wards (100 of 144) had an alarming level of arsenic: 35.4% (51 of 144) exceeded the Indian standard of 50 µg L⁻¹, 49 wards had arsenic levels between 11 and 50 µg L⁻¹, and only 30% (44 of 144) had arsenic below 10 µg L⁻¹ in the tested samples. The sample size in this study was insufficient (fewer than 2 samples per ward in some cases), so the preparation of a comprehensive ward-wise map of arsenic in KMC groundwater from these data seems questionable. In another recent survey, 4,210 groundwater samples from 141 KMC wards were analyzed: 14.2% of samples had arsenic >10 µg L⁻¹ (in 77 wards), 5.2% had arsenic >50 µg L⁻¹ (in 37 wards), and arsenic-contaminated samples were more prevalent in the southern part of the KMC than in other parts of the city. The surveyed communities in the KMC consume 0.95 µg kg⁻¹ bw of arsenic daily, which amounts to an estimated cancer risk of 1.425 × 10⁻³. The tested biological samples also had elevated levels of arsenic, indicating subclinical arsenic poisoning; the likelihood of an enhanced lifetime cancer risk among the surveyed individuals is therefore high in the southern KMC.
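The risk figure quoted above is consistent with the standard linear model used in quantitative risk assessment, risk = average daily dose (ADD) × oral cancer slope factor (SF). Assuming the USEPA oral slope factor for inorganic arsenic of 1.5 (mg kg⁻¹ day⁻¹)⁻¹ (an assumption here; the survey's exact factor is not stated in this article), the reported intake reproduces the reported risk:

$$\mathrm{risk} = 0.95\ \mu\mathrm{g\ kg^{-1}\ day^{-1}} \times 10^{-3}\ \mathrm{mg\ \mu g^{-1}} \times 1.5\ (\mathrm{mg\ kg^{-1}\ day^{-1}})^{-1} = 1.425 \times 10^{-3}.$$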

Arsenic in the Food Chain

In most Asian countries, including India, Bangladesh, Nepal, and Pakistan, groundwater is used for irrigation. Because of lax regulations, irrigation wells are installed at shallow depth and tap arsenic-contaminated water; farmers therefore irrigate their fields with arsenic-contaminated water, and arsenic has in fact entered the food chain of humans and livestock. Based on the analysis of 597 irrigation tube wells, all located in the Deganga block of North 24-Parganas, West Bengal, India, nearly 6.4 tons of arsenic were estimated to be dumped onto the crop fields. Almost 19% of the samples exceeded the Food and Agriculture Organization (FAO) of the United Nations standard for irrigation water (100 µg L⁻¹). Over 76% of the arsenic present in the crops irrigated with arsenic-laced water in this area is inorganic. Domestic animals, including cows and goats, are vulnerable to arsenic because they consume water and straw laced with arsenic. In a recent study in Pakistan, high levels of arsenic in groundwater were detected in areas under heavier irrigation. Table 2 presents food safety standards for inorganic arsenic in various food products. Elevated levels of arsenic have been detected in food materials in India and Bangladesh. In India, concentrations in luffa, brinjal, cucumber, lady finger, gourd, green gram, rice, rice husk, wheat, maize, and lentils range between 13 and 800 µg kg⁻¹, lowest in maize and highest in luffa. In Bangladesh, the arsenic concentration in food materials ranges between 11 and 464 µg kg⁻¹, with the highest level in gourd leaves and the lowest in beans.

Table 2 Food safety standards for inorganic arsenic in various food products

Food product | Concentration of inorganic arsenic (mg/kg)
Algae | 1.50
Beans/pulses | 0.10
Egg | 0.05
Fish | 0.10
Flour | 0.10
Fresh milk | 0.05
Fruit | 0.05
Milk powder | 0.25
Other cereals | 0.20
Poultry | 0.05
Rice | 0.15
Shellfish | 0.50
Vegetables | 0.05

Source: Adapted from Heikens, A. (2006). Arsenic contamination of irrigation water, soil, and crops in Bangladesh: Risk implications for sustainable agriculture and food safety in Asia. RAP Publication (FAO).

Other arsenic-contaminated vegetables include arum stem, arum, arum tuber, beans, coriander, eggplant, gourd leaves, green chili, papaya, pumpkin, red amaranth, radish, spinach, and Indian spinach. The Global Environment Monitoring System (GEMS) of the WHO has created 13 clusters for monitoring food consumption and contamination. Table 3 summarizes total and inorganic arsenic exposure from food products in these clusters. The range of inorganic arsenic exposure via rice and rice products was highest in cluster G, which comprises all the arsenic hotspot countries, including India, Bangladesh, Nepal, China, Pakistan, and Vietnam; the other countries in this cluster are Afghanistan, Cambodia, Indonesia, Lao People's Democratic Republic, Malaysia, Mongolia, Myanmar, Sri Lanka, and Thailand. In a recent study of rice-based diets obtained from supermarkets in South Australia, 53% (31 of 59) of the samples exceeded the European Union recommended value of inorganic arsenic for young children (100 µg kg⁻¹) and 22% (12 of 59) exceeded the maximum level of 200 µg kg⁻¹ recommended for adults. The highest inorganic arsenic content (126 µg kg⁻¹) was detected in rice crackers, followed by rice cakes (105 µg kg⁻¹), other rice-based snacks (88 µg kg⁻¹), baby rice (73 µg kg⁻¹), puffed rice (45 µg kg⁻¹), and ready-to-eat rice (45 µg kg⁻¹). All these food products originated from Australia.

Table 3 Foodborne total and inorganic arsenic exposure at 50%–100% bioavailability

GEMS cluster | Lower boundary of total arsenic (µg kg⁻¹ bw per day)(a) | Upper boundary of total arsenic (µg kg⁻¹ bw per day) | Lowest boundary of inorganic arsenic(b) (50% bioavailable) (µg per day) | Upper boundary of inorganic arsenic(c) (100% bioavailable) (µg per day) | Range of inorganic arsenic exposure via rice and rice products (µg per day) | Population mid-2012 (millions)
A | 0.91 | 1.26 | 4.8 | 53.4 | 0.92–6.95 | 302.5
B | 2.87 | 3.47 | 10.37 | 108.35 | 0.32–2.41 | 224.9
C | 1.38 | 1.79 | 9.09 | 85.46 | 0.95–7.22 | 263.7
D | 1.32 | 1.72 | 6.71 | 66.95 | 0.33–2.53 | 408
E | 1.41 | 1.83 | 5.75 | 63.45 | 0.13–0.97 | 339.2
F | 1.84 | 2.19 | 5.25 | 57.27 | 0.13–0.97 | 26.7
G | 2.08 | 2.42 | 7.82 | 75.14 | 3.79–28.78 | 3,544.5
H | 1.15 | 1.55 | 6.44 | 66.54 | 0.65–4.9 | 213.5
I | 0.87 | 1.18 | 5.02 | 52.2 | 0.38–2.9 | 256.8
J | 0.97 | 1.28 | 5.01 | 51.88 | 0.75–5.67 | 357
K | 1.04 | 1.48 | 6.6 | 66.13 | 2.39–18.19 | 335.7
L | 2.69 | 3.05 | 7.88 | 79.1 | 3.84–29.1 | 307.4
M | 1.35 | 1.83 | 6.44 | 70.56 | 0.35–2.64 | 436.8

(a) Assuming 60 kg body weight per individual.
(b) Lower bound for inorganic arsenic content assumes nondetect equals zero.
(c) Upper bound for inorganic arsenic content assumes nondetect equals the limit of detection.
Source: Oberoi, S., Barchowsky, A., and Wu, F. (2014). The global burden of disease for skin, lung and bladder cancer caused by arsenic in food. Cancer Epidemiology and Prevention Biomarkers 23(7), 1187–1194.
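Reading Table 3: footnote (a) means the per-body-weight intakes can be converted to per-person intakes by multiplying by the assumed 60 kg body weight. For example, the lower-bound total arsenic intake for cluster G converts as

$$2.08\ \mu\mathrm{g\ kg^{-1}\ bw\ day^{-1}} \times 60\ \mathrm{kg} = 124.8\ \mu\mathrm{g\ day^{-1}}.$$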


In a market basket analysis of 22 seaweeds and seaweed products collected from local markets in the United States, the brown algae products exhibited the highest arsenic concentrations, the most elevated being 83.7 µg g⁻¹ in hijiki, with 87% of the extractable arsenic inorganic. The red algae products nori and red seaweed, and the seaweed-extract products agar agar and kelp noodles, had relatively low arsenic concentrations. Commercial products made from whole seaweed had substantial levels of arsenic (12–84 µg g⁻¹), dominated by arsenosugars. Further, a total of 11 volunteers were fed 10 g per day of the brown algae kombu and wakame and the red alga nori for 3 days each, while abstaining from rice and seafood, following a 3-day washout period. Intact arsenosugars, along with DMA, thio-DMAA, and thio-DMAE, all increased in urine after ingesting each type of seaweed and varied between seaweed types and between individuals; only trace levels of the known toxic metabolite thio-DMA were observed across individuals. Arsenic is a concern not only in the terrestrial food chain but also in the aquatic food chain, as reported in several studies: for example, 5.8 mg kg⁻¹ of inorganic arsenic in blue mussels (Mytilus edulis) collected from certain areas of Norway, and 2.6 mg kg⁻¹ of inorganic arsenic in freshwater fish from Thailand. Recently, a group of scientists studied Alviniconcha hessleri, Ifremeria nautilei, and Bathymodiolus manusensis collected from hydrothermal vent fields in the eastern Manus Basin in the Bismarck Sea of the Western Pacific Ocean. Alviniconcha hessleri accumulated 5,580, 721, and 43 mg kg⁻¹ of arsenic in its gills, digestive gland, and muscle, respectively; Ifremeria nautilei accumulated 118, 108, and 22 mg kg⁻¹ in its gills, digestive gland, and muscle, respectively; and Bathymodiolus manusensis accumulated comparatively less arsenic in its gills (9.8 mg kg⁻¹), digestive gland (15.7 mg kg⁻¹), and muscle (4.5 mg kg⁻¹). In all cases, As(III) was the dominant species accumulated in the various tissues of the tested organisms.

Arsenic-Induced Human Health Effects

The Agency for Toxic Substances and Disease Registry (ATSDR) gives 1–3 mg kg⁻¹ of inorganic arsenic as the minimal lethal dose, with 600 µg kg⁻¹ day⁻¹ reported as fatal for human beings. The appearance of Mee's lines (transverse white lines across the nails) in the fingernails indicates acute arsenic poisoning, and the victim may die of cardiovascular collapse and hypovolemic shock. Persistent exposure to elevated levels of inorganic arsenic disrupts the function of enzymes, of some essential anions and cations, and of transcriptional events in cells, and causes several direct or indirect effects, including dermal, gastrointestinal, cardiovascular, respiratory, endocrinological, neurological, reproductive and developmental, cancer, and other effects. Table 4 summarizes the signs and symptoms of acute and subacute arsenic poisoning. A detailed description of the listed arsenic-induced health effects is provided in the previous edition of this encyclopedia; this article covers developments since then and highlights the carcinogenic and genetic effects of arsenic on human beings.

Dermal Effects

The effects of arsenic toxicity in humans are many, but the most visible symptoms are arsenical skin manifestations. Numerous epidemiological studies from various regions, including Bangladesh, India, Taiwan, China, Chile, Argentina, and Mexico, have reported the various types of arsenical symptoms and the dose–response relationship between arsenic exposure and arsenical skin lesions. The association between arsenic exposure and skin lesions was evaluated using data on 10,182 adults without skin lesions at baseline, through biennial follow-up of the cohort (2000–2009). Multivariate-adjusted hazard ratios (HRs) for incident skin lesions, comparing well-water arsenic exposure of 10.1–50.0, 50.1–100.0, 100.1–200.0, and ≥200.1 µg L⁻¹ with ≤10.0 µg L⁻¹, were 1.17 (95% confidence interval, CI: 0.92, 1.49), 1.69 (95% CI: 1.33, 2.14), 1.97 (95% CI: 1.58, 2.46), and 2.98 (95% CI: 2.40, 3.71), respectively. The study indicated that the dose-dependent associations were more pronounced in females, but the incidence of skin lesions was greater in males and older individuals; it concluded that chronic arsenic exposure from drinking water was associated with an increased incidence of skin lesions, even at low levels of exposure (<100 µg L⁻¹). In a recent study, 240 people from 20 villages in Yatenga province, Burkina Faso, were examined for arsenical skin lesions. The range of arsenic in tube-well water was 1–124 µg L⁻¹, and more than half of the tube wells had arsenic levels above the WHO guideline value (10 µg L⁻¹). Clinical examinations identified melanosis in 29.3% and keratosis in 46.3% of the examined population, and the frequency of skin lesions was positively associated with arsenic levels in tube-well water. A study conducted in Bangladesh evaluated whether reducing arsenic in drinking water improves skin lesions over time. This follow-up study was conducted in 2009–11 and involved 550 individuals from a baseline population of 900 skin-lesion cases registered in 2001–2003. The arsenic concentration in water was reduced by 41% during the study period, and skin lesions were no longer visible in 65 individuals at follow-up. The study suggested that individuals with previous skin lesions may recover or show fewer lesions within 10 years of reducing arsenic exposure.
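Hazard ratios like those quoted above are typically estimated with Cox proportional hazards models of time to first skin lesion across exposure categories. A minimal sketch of such a fit with the lifelines library is shown below; the column names, category coding, and simulated data are illustrative assumptions, not the study's actual variables.

```python
# Sketch of a Cox proportional hazards fit with exposure-category
# indicators, as used for the skin-lesion hazard ratios above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
exposure = rng.integers(0, 5, n)             # 0 = <=10 ug/L ... 4 = >=200.1 ug/L
hazard = 0.05 * np.exp(0.25 * exposure)      # simulated true hazard per year
time = rng.exponential(1 / hazard)           # years to skin lesion
observed = time < 9                          # censored at end of follow-up

df = pd.DataFrame({
    "years": np.minimum(time, 9),
    "lesion": observed.astype(int),
    "age": rng.normal(40, 12, n),
    "male": rng.integers(0, 2, n),
})
# One indicator per exposure category; the lowest category is the referent.
for k in range(1, 5):
    df[f"exp_cat_{k}"] = (exposure == k).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="lesion")
# exp(coef) for each exp_cat_k is the adjusted HR vs. the referent category.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```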

Arsenic and Prevalence of Diabetes

It is evident from the literature of various regions that prolonged inorganic arsenic exposure is linked to an increased risk of diabetes. An association between chronic arsenic exposure through drinking water and diabetes was found in a cross-sectional study conducted among 1,004 men and women (age ≥30 years, exposure duration at least 6 months) from Bangladesh.

Table 4 A summary of the signs and symptoms of acute and subacute arsenic poisoning

Dermal
  Acute: delayed appearance of Mee's lines in nail beds; dermatitis; melanosis; vesiculation
  Chronic: hyperpigmentation; pigment changes on the face, neck, and back ("raindrop" appearance); skin lesions; skin hyperpigmentation and hyperkeratosis; desquamation
Gastrointestinal
  Acute: garlic odor on the breath; severe abdominal pain; nausea and vomiting; thirst; dehydration; anorexia; heartburn; bloody or rice-water diarrhea; dysphagia
  Chronic: esophagitis; gastritis; colitis; abdominal discomfort; anorexia; malabsorption; weight loss
Cardiovascular
  Acute: hypotension; shock; ventricular arrhythmia; congestive heart failure; irregular pulse; T-wave inversion; persistent prolongation of the QT interval
  Chronic: arrhythmias; pericarditis; blackfoot disease (gangrene with spontaneous amputation); Raynaud's syndrome; acrocyanosis (intermittent); ischemic heart disease; cerebral infarction; carotid atherosclerosis; hypertension; microcirculation abnormalities
Respiratory
  Acute: irritation of the nasal mucosa, pharynx, larynx, and bronchi; pulmonary edema; tracheobronchitis; bronchial pneumonia; nasal septum perforation
  Chronic: rhino-pharyngo-laryngitis; tracheobronchitis; pulmonary insufficiency (emphysematous lesions); chronic restrictive/obstructive diseases
Neurological
  Acute: sensorimotor peripheral axonal neuropathy (paresthesia, hyperesthesia, neuralgia); neuritis; autonomic neuropathy with unstable blood pressure, anhidrosis, sweating, and flushing; leg/muscular cramps; lightheadedness; headache; weakness; lethargy; delirium; encephalopathy; hyperpyrexia; tremor; disorientation; seizure; stupor; paralysis; coma
  Chronic: neuropathy; polyneuritis and motor paralysis; hearing loss; mental retardation; encephalopathy; symmetrical peripheral polyneuropathy (sensorimotor, resembling Landry–Guillain–Barré syndrome); electromyography abnormalities; both sensory and motor peripheral neuropathy
Hepatic
  Acute: elevated liver enzymes; fatty infiltration; congestion; central necrosis; cholangitis; cholecystitis
  Chronic: enlarged and tender liver; increased hepatic enzymes; cirrhosis; portal hypertension without cirrhosis; fatty degeneration
Renal
  Acute: hematuria; oliguria; proteinuria; leukocyturia; glycosuria; uremia; acute tubular necrosis; renal cortical necrosis
Hematological
  Acute: anemia; leukopenia; thrombocytopenia; bone marrow suppression; disseminated intravascular coagulation
  Chronic: bone marrow hypoplasia; aplastic anemia; anemia; leukopenia; thrombocytopenia; impaired folate metabolism; karyorrhexis
Endocrinological
  Chronic: diabetes mellitus
Other
  Acute: rhabdomyolysis; conjunctivitis
  Chronic: lens opacity; cancer

The prevalence of diabetes was 9%. After adjustment for diabetes risk factors, an increased risk of diabetes was observed for arsenic exposure >50 µg L⁻¹; the authors reported that diabetes risk was higher with longer duration of arsenic exposure and in subjects exposed to the highest concentrations of arsenic for more than 10 years. Another study examined the relationship between chronic low-level arsenic exposure and risk of diabetes among 141 cases of diabetes diagnosed between 1984 and 1998 in the San Luis Valley, compared with 488 participants randomly sampled from 936 eligible participants who were disease free at baseline. The results showed a significant association between inorganic arsenic exposure and diabetes risk (HR = 1.27, 95% CI: 1.01, 1.59 per 15 µg L⁻¹) after adjusting for ethnicity and time-varying covariates such as age, body mass index, and physical activity level; the study concluded that exposure to low-level inorganic arsenic in drinking water is associated with an increased risk of diabetes. A cross-sectional study from Inner Mongolia, China, investigated the potential association between prolonged arsenic exposure through drinking water and the prevalence of diabetes among 669 adult males and females. This study did not find any significant difference in blood glucose among groups with different arsenic exposure levels (10–50 and >50 µg L⁻¹), and no statistical association was found between arsenic exposure and diabetes. The association between arsenic exposure and the occurrence of diabetes in the Middle Banat region, Serbia, was investigated among exposed (arsenic concentration: 56 µg L⁻¹) and unexposed populations (arsenic concentration: <2 µg L⁻¹); the numbers of diabetes cases reported in 2006, 2007, and 2009 were used to calculate standardized incidence rates for both populations. Standardized incidence rates of diabetes and odds ratios (ORs) were higher in the exposed population, both men and women, in the period from 2006 to 2009 compared with the unexposed population. Finally, a large prospective cohort from Denmark, which recruited 57,053 persons during 1993–1997, examined whether long-term exposure to low levels of arsenic in drinking water is linked to an increased risk of diabetes; based on its findings, it was concluded that long-term exposure to low-level arsenic in drinking water might contribute to the development of diabetes.
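For a study like the Serbian one, an odds ratio is computed from a 2 × 2 exposure-by-disease table, with a confidence interval from the standard Woolf (log) method. A minimal sketch follows; the counts are hypothetical, since the actual study counts are not given in this article.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, for illustration only.
or_, lo, hi = odds_ratio_ci(a=90, b=910, c=60, d=940)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```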

Pregnancy Outcome

Chronic human exposure to arsenic can adversely affect reproductive performance, apart from other health hazards. Several epidemiological studies have revealed an association between chronic arsenic exposure and adverse pregnancy outcomes. The associations of prenatal arsenic exposure with obstetric outcome (self-reported) and child mortality were investigated among 498 women in a large population-based study from Bangladesh. Pregnancy outcomes such as live birth, stillbirth, and spontaneous/elective abortion were considered while assessing the relationship with adverse pregnancy outcomes, and Cox proportional hazards models were used to estimate HRs and 95% confidence intervals in relation to child mortality. A significant association between prenatal arsenic exposure and the risk of stillbirth (adjusted OR = 2.50; 95% CI: 1.04, 6.01) was observed, and an elevated risk of child mortality (adjusted OR = 1.92; 95% CI: 0.78, 4.68) in relation to prenatal arsenic exposure was also noted. The impact of low to moderate levels of inorganic arsenic in drinking water on the risk of spontaneous abortion was investigated among 150 women in a hospital-based case–control study from Timis County, Romania.


Arsenic concentrations in drinking water ranged from below detection to 175 µg L⁻¹, and the research suggested no increased risk of spontaneous pregnancy loss in association with low to moderate levels of arsenic in drinking water. In a later prospective cohort study among 122 women, there was no confounder-adjusted effect of arsenic exposure on birth outcomes. A cross-sectional study of 217 Romanian women also examined low to moderate drinking-water arsenic exposure (<10 µg L⁻¹) and anemia: the adjusted prevalences of "any" anemia (prevalence proportion ratio, PPR = 1.71, 95% CI: 0.75–3.88) and pregnancy anemia (PPR = 2.87, 95% CI: 0.62–13.26) were higher among arsenic-exposed women than among unexposed women.

Cardiovascular Effect

The cardiovascular diseases manifested include arteriosclerosis, atherosclerosis, ischemic heart disease, hypertension, heart block, cardiac arrest, stroke, and infarction and, in the periphery, arteriosclerosis, blackfoot disease, and gangrene. Epidemiological studies from different parts of the world have associated high arsenic exposure (>300 µg L⁻¹) with the manifestation of both peripheral and cardiovascular disease. A study from Araihazar, Bangladesh, examined the relationship between socioeconomic status factors, such as occupation type, land ownership, educational attainment, and television ownership, and the risk of cardiovascular disease measured as carotid intima-media thickness. The study showed that factory workers had lower levels of carotid intima-media thickness than those owning more than one acre of land, those owning a television, laborers, and business owners. It also reported that business-sector employment was positively associated with subclinical atherosclerosis after adjustment for confounders, and the association was strongest in older men (≥50 years old) compared with younger men (<50 years). A prospective cohort study evaluated the association between chronic low to moderate arsenic exposure and incident cardiovascular disease among 3,575 American Indian men and women aged 45–74 years from Arizona, Oklahoma, and North and South Dakota, with urinary arsenic species measured as a biomarker of arsenic exposure. In this study, 1,184 participants developed fatal or nonfatal cardiovascular disease and 439 participants developed fatal cardiovascular disease. After adjusting for sociodemographic factors, smoking, body mass index, and lipids, the HRs for cardiovascular disease, coronary heart disease, and stroke mortality were 1.65, 1.71, and 3.03, respectively, comparing the highest with the lowest quartile of arsenic concentrations (>15.7 vs. <5.8 µg g⁻¹ creatinine). The study concluded that low to moderate chronic arsenic exposure was associated with cardiovascular disease and mortality. Another study investigated 499 subjects (156 men, 343 women; 40–96 years of age, with a mean of 61 years) from three rural counties (Cochran, Palmer, and Bailey) of Texas, United States, to determine whether coronary heart disease, hypertension, and hyperlipidemia were associated with low-level arsenic exposure (range 2.2–15.3 µg L⁻¹). The authors found that hypertension and coronary heart disease were associated with higher arsenic exposure after adjustment for age, ethnicity, gender, education, smoking status, alcoholism, and antihyperlipidemia medication, whereas no association was found between arsenic exposure and hyperlipidemia. In a community-based case–control study among 863 subjects from northeastern Taiwan, a significant dose–response trend of carotid atherosclerosis risk was found with increasing arsenic concentration (>50 µg L⁻¹) compared with the referent (<10 µg L⁻¹). The study also found a significant interaction effect on carotid atherosclerosis risk in subjects exposed to arsenic in drinking water at >50 µg L⁻¹ and polymorphisms in arsenic metabolic genes such as PNP, As3MT, and GSTO.
In a case–cohort study of 369 incident fatal and nonfatal cardiovascular disease events, including 211 heart disease and 148 stroke cases, from Araihazar, Bangladesh, the influence of arsenic methylation capacity on the risk of cardiovascular disease was evaluated. The study concluded that arsenic exposure from drinking water and incomplete methylation capacity of arsenic were adversely associated with the risk of heart disease.

Respiratory Effect

Inorganic arsenic has major adverse effects on the human respiratory system; the longer the exposure, the graver the problem. Respiratory effects were investigated among 112 subjects exposed to different levels of arsenic (<50, 50–150, and >150 µg L⁻¹), with the respiratory effect evaluated by pulmonary function testing. Respiratory function impairment among the male subjects was restrictive (26.41%), obstructive (3.77%), or combined (7.54%), whereas in females only restrictive impairment (10.16%) was found; restrictive impairment increased as the concentration of arsenic in drinking water increased. A positive association was found between low-level arsenic in drinking water and the prevalence of respiratory symptoms in 446 subjects exposed to arsenic in drinking water in the range 11–50 µg L⁻¹, compared with 388 control subjects (arsenic in drinking water <10 µg L⁻¹); the exposed subjects showed a higher prevalence of respiratory symptoms, dyspnea, asthma, eye irritation, and headache. Another study evaluated the effect of arsenic exposure from drinking water on respiratory symptoms using data from the Health Effects of Arsenic Longitudinal Study (HEALS), a large prospective cohort study established in Araihazar, Bangladesh, during 2000–2002. A total of 7.31%, 9.95%, and 2.03% of the 11,746 participants completing 4 years of active follow-up reported having a chronic cough, breathing problems, or blood in their sputum, respectively.


The authors found significant positive associations between arsenic exposure and respiratory symptoms. Compared with the lowest quintile of water arsenic (≤7 µg L⁻¹), the HRs for having respiratory symptoms were 1.27 (95% CI: 1.09–1.48), 1.39 (95% CI: 1.19–1.63), 1.43 (95% CI: 1.23–1.68), and 1.43 (95% CI: 1.22–1.68) for the second to fifth quintiles of baseline water arsenic concentrations (7–40, 40–90, 90–178, and >178 µg L⁻¹), respectively. Another study, which included 650 children aged 7–17 years from Matlab, Bangladesh, observed that children with in utero and early-life arsenic exposure (>500 µg L⁻¹ of arsenic) had eight times more wheezing when not having a cold than children exposed to arsenic at <10 µg L⁻¹; the authors also reported three times more shortness of breath in exposed children when walking on level ground and when walking fast or climbing, compared with children exposed to <10 µg L⁻¹. A pilot study on early-life arsenic exposure and long-term lung function and respiratory symptoms was conducted among 32 adults exposed to >800 µg L⁻¹ of arsenic before the age of 10 years, compared with 65 adults without high early-life exposure, from Antofagasta and Arica, Chile, respectively. The study concluded that in utero and childhood exposure to arsenic in drinking water is associated with long-term lung function deficits and shortness of breath in humans, although the authors suggested further research because of the small sample size.

Neurological Effect

Neurological involvement due to chronic exposure to arsenic has been reported in many studies from Bangladesh, several states of India, North America, and other regions worldwide. It is now well recognized that elevated levels of arsenic exposure cause neurological disorders and a reduction of the intelligence quotient (IQ) in children. The association between arsenic in drinking water and intelligence was examined in 2008 among 272 children in grades 3–5 in three Maine school districts, United States. Arsenic concentrations in well water averaged 9.88 µg L⁻¹, and 31.2% of samples had arsenic ≥10 µg L⁻¹. The study found that arsenic in household well water was associated with decreased IQ scores without adjustment; with adjustment for maternal IQ and education, home environment, school district, and number of siblings, water arsenic remained significantly negatively associated with full-scale IQ and with perceptual reasoning, working memory, and verbal comprehension scores. Compared with children whose water arsenic was <5 µg L⁻¹, exposure to water arsenic ≥5 µg L⁻¹ was associated with reductions of approximately 5–6 points in both full-scale IQ and most index scores. Another study investigated IQ, estimated using Raven's Standard Progressive Matrices and the Kaufman Brief Intelligence Test, among 408 children exposed to high levels of arsenic in groundwater from Sonargaon thana, Bangladesh; the study included two age groups (9 and 10 years; 4 and 5 years). The results indicated that arsenic exposure was responsible for a lower IQ, and the concentration of urinary arsenic was associated with reduced intellectual function in a dose–response manner. A stronger association was found between reduced intellectual function and urinary arsenic than with the level of arsenic in drinking water, and there was no association between verbal IQ scores and urinary arsenic in early childhood (ages 4 and 5 years). In another study, from Purbasthali, Burdwan district, West Bengal, India, the arsenic concentration in drinking water was significantly associated with IQ scores, determined by Raven's Progressive Matrices, as well as with the memory power of children; the study was conducted among 114 school-going children (9–11 years old, 3rd and 4th grades) exposed to arsenic via drinking water (range 50–84 µg L⁻¹).
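Covariate-adjusted associations like the Maine IQ results are typically estimated with multivariable linear regression. The sketch below shows such a fit with statsmodels; the variable names and the simulated data are illustrative assumptions, not the study's data.

```python
# Sketch of a covariate-adjusted regression of IQ on water arsenic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 272
df = pd.DataFrame({
    "water_as": rng.lognormal(mean=1.5, sigma=1.0, size=n),  # ug/L
    "maternal_iq": rng.normal(100, 15, n),
    "home_score": rng.normal(50, 10, n),
})
# Simulated outcome: a small negative arsenic effect plus confounders.
df["fsiq"] = (105 - 0.15 * df["water_as"]
              + 0.3 * (df["maternal_iq"] - 100)
              + 0.2 * (df["home_score"] - 50)
              + rng.normal(0, 10, n))

model = smf.ols("fsiq ~ water_as + maternal_iq + home_score", data=df).fit()
# Adjusted IQ change per ug/L of water arsenic, with its 95% CI.
print(model.params["water_as"], model.conf_int().loc["water_as"].tolist())
```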

Arsenic and Cancer Effects

At present, the International Agency for Research on Cancer (IARC), the WHO, the US Environmental Protection Agency (USEPA), and other health protection authorities consider arsenic to be a cause of skin, lung, liver, urinary bladder, and kidney cancer. It has been reported that lifetime consumption of arsenic-contaminated water at 1 L per day, with arsenic at a concentration of 50 µg L⁻¹, could cause cancer in 13 people out of a population of 1,000. Previously, we reported that, India being a tropical country, an adult drinks about 6 L of water per day; hence, the risk of cancer is much higher. In the cohort study reported here, 1,194 registered arsenic patients with skin lesions were reexamined between January 2009 and January 2010, out of the 2,384 villagers who had been screened earlier (1995–2000) from 33 villages and 16 blocks/thanas in districts of West Bengal (India) and Bangladesh. Only these patients were resurveyed because the study's longer-term database contained the concentrations of arsenic in the hand-driven tube wells they had used for drinking, along with arsenic data on their biological samples and details of their skin lesions. The findings of this cohort study indicate that 14% of the patients examined earlier (who had arsenical skin lesions) had died with nonhealing ulcers and 48% were suffering from Bowen's disease and arsenic-related cancers. On the basis of this study, it was asked: "Are millions in the Ganga-Meghna-Brahmaputra Plain already exposed to arsenic-contaminated water potentially at risk from cancer?" Skin cancer is the most common and hence the most widely studied cancer in countries where water with an elevated amount of arsenic is used for drinking and cooking. Severe keratosis, including Bowen's disease, can be an indication of future skin cancer. The characteristic arsenic-induced skin carcinomas are squamous cell carcinoma and multiple basal cell carcinoma. Mortality rates are low and may not pose serious danger, but skin cancer can be an indication of more dangerous internal cancers. Various Bowen's disease symptoms in Indian patients are presented in Fig. 2.
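The 13-in-1,000 estimate refers to 1 L of water per day at 50 µg L⁻¹. If lifetime risk is assumed to scale roughly linearly with daily intake at a fixed concentration (a simplifying assumption made here for illustration, not a claim of the cited studies), the 6 L per day consumption mentioned above would imply a risk on the order of

$$\frac{6\ \mathrm{L\ day^{-1}}}{1\ \mathrm{L\ day^{-1}}} \times \frac{13}{1000} \approx \frac{78}{1000}.$$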

Arsenic: Occurrence in Groundwater

Fig. 2

163

Various Symptoms of Bowens registered during 1995 to 2000 in the Ganga Meghna Brahmaputra Plain in India.

There is a clear relationship between the concentration of arsenic in drinking water and the prevalence of skin cancer. Skin cancers have been reported from most countries where people are exposed to elevated amounts of groundwater arsenic, such as Taiwan, Argentina, Mexico, Chile, Bangladesh, and four states of India (Uttar Pradesh, Bihar, Jharkhand, and West Bengal). In the nine-member family of Md. Faizuddin Malitha of Murshidabad, we found Bowen's disease symptoms in all the family members (Fig. 3). They had all stopped drinking arsenic-contaminated water for 2–3 years, but the symptoms were still visible. In another example, all 11 individuals of a family exposed to 921 µg L⁻¹ of arsenic showed Bowen's disease and carcinoma symptoms after only 3 years of consumption (Fig. 4). Fig. 5 shows various arsenicosis symptoms. The association between arsenic exposure through drinking water and cancers of various organs, such as the lung, bladder, kidney, and urinary tract, has been well reviewed in the literature. A population-based case–control study was performed from October 2007 to December 2010 in northern Chile (Antofagasta), where >250,000 people were exposed to elevated levels of arsenic via drinking water from 1958 until 1970; it involved 232 lung and 306 bladder cancer cases and 640 age- and gender-matched controls, along with detailed information on past exposure and other potential confounders such as smoking and occupation. The study provided evidence of fourfold increases in lung cancer and almost sevenfold increases in bladder cancer 35–40 years after high arsenic exposures ended, and the authors suggested that prevention, treatment, and other mortality-reduction efforts in arsenic-affected regions will be required for decades after exposure cessation. The association between baseline arsenic exposure and cancer mortality was evaluated in 3,932 American Indians aged 45–74 years from three US regions (Arizona, Oklahoma, and North/South Dakota) who participated in the Strong Heart Study in 1989–91 and were followed through 2008. In this study, cancer deaths (386 overall; 78 lung, 34 liver, 18 prostate, 26 kidney, 24 esophagus/stomach, 25 pancreas, 32 colon/rectum, 26 breast, 40 lymphatic/hematopoietic) were assessed by mortality surveillance reviews. The results showed that the adjusted HRs (95% CI) comparing the 80th versus the 20th percentile of arsenic were 1.14 (0.92–1.41) for overall cancer, 1.56 (1.02–2.39) for lung cancer, 1.34 (0.66–2.72) for liver cancer, 3.30 (1.28–8.48) for prostate cancer, and 0.44 (0.14–1.14) for kidney cancer. The study did not find an association between arsenic exposure and cancers of the esophagus and stomach, colon and rectum, or breast.

Fig. 3 Skin lesions and suspected Bowen's disease among nine members of a family in Murshidabad.

Fig. 4 Arsenicosis symptoms among 11 members of a family exposed to 921 µg L⁻¹ of arsenic.

stomach, colon and rectum, and breast. The study concluded that low to moderate exposure to inorganic arsenic was prospectively associated with increased mortality for lung, prostate, and pancreas cancers. A case–control study performed the multiple chemical exposures in 538 lung and bladder cancer cases and 640 controls in northern Chile, where people were exposed to high level of arsenic previously. The study results showed very high lung and bladder cancer ORs, and evidence of greater than additive effects, were seen in people exposed to arsenic concentrations > 335 mg L 1 and who were tobacco smokers (OR ¼ 16, 95% CI: 6.5–40 for lung cancer; and OR ¼ 23 [8.2–66] for bladder cancer). The study findings suggested that people coexposed to arsenic and other known or suspected carcinogens have very high risks of lung or bladder cancer. The dose–response relationship between arsenic in drinking water and mortality from liver cancer from 138 villages in the southwest coast of Taiwan was evaluated. Authors assessed arsenic levels in drinking water using data from a survey conducted by the government and reviewed death certificates from 1971 to 1990 to identify liver cancer cases involving 802 male and 301 female mortality cases of liver cancer during the 20-year period. The study results showed that arsenic levels > 640 mg L 1 were associated with an increase in the liver cancer mortality in both genders, although no significant effect was observed for lower exposure categories of arsenic.
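For readers unfamiliar with the case–control statistics quoted throughout this section, the sketch below shows how an odds ratio and its 95% confidence interval can be derived from a 2×2 table of exposure by disease status using Woolf's method. The counts are hypothetical, not taken from any of the studies above, and published ORs such as those from Chile are additionally adjusted for confounders by regression.

# How an OR and 95% CI (e.g., "OR = 16, 95% CI: 6.5-40") arise from
# case-control counts. All counts below are made up for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
       c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

print(odds_ratio_ci(40, 20, 30, 240))  # OR = 16.0, 95% CI ~ (8.3, 30.9)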


Fig. 5  Various arsenicosis symptoms. 1: Village Chok-Khorgachi, Block Baduria, District North 24-Parganas. A medical group from the School of Environmental Studies (SOES), Jadavpur University, first surveyed this village in 1996; many people had arsenical skin lesions. In our October 2010 study, villagers reported that 35 people from this village had died between 1996 and 2010, all with severe skin lesions; many died of liver cancer. 2: Mr. Susanta Roy of Jadavpur, Kolkata, died in 1997. He was suffering from cancer, one of his fingers was amputated, and he had severe arsenical skin lesions. 3: Bishnu Gour from Balia, Uttar Pradesh, died of cancer in 2004. He had severe arsenical skin lesions. 4: A patient from Basirhat, North 24-Parganas, with severe arsenical keratosis on palms and soles. 5: An arsenic patient from Sahibganj village, Jharkhand state, who died in 2005. He had several arsenical skin lesions and was suffering from squamous cell carcinoma. 6: Parsuram Sarma from Lachutola, Bihar, with several arsenical keratoses on the palms, soles, and body. 7 and 8: Cancer patients from Rajnandgaon, India. 9, 10, and 11: Promila Shah from Lachutola, Bihar. She had all types of arsenical skin lesions; her husband divorced her. 12 and 13: Dulali Biswas from Nakashipara, Nadia. In her family, six persons died of cancer. She had squamous cell carcinoma on the palm, and finally the whole hand was amputated. Her only son, Kartik Biswas, was working with me in the SOES laboratory (see the article in Science, 315, 23 March 2007).

Table 5    Global burden of cancers caused by foodborne arsenic

Cancer                 Male           Female         Total global burden by foodborne arsenic
Bladder                4527–46,420    7096–72,756    9129–119,176
Lung                   4913–50,373    6931–71,069    11,844–121,442
Skin (nonmelanoma)     5365–55,007    5365–55,007    10,730–110,014

Source: Oberoi, S., Barchowsky, A., and Wu, F. (2014). The global burden of disease for skin, lung and bladder cancer caused by arsenic in food. Cancer Epidemiology, Biomarkers & Prevention, 23(7), 1187–1194.

A study measured urinary arsenic metabolites in 94 lung and 117 bladder cancer cases and 347 population-based controls from areas in northern Chile with a wide range of drinking-water arsenic concentrations (<10 to >200 µg L−1). Lung cancer ORs adjusted for age, sex, and smoking by increasing tertiles of %MMA were 1.00, 1.91 (95% CI: 0.99–3.67), and 3.26 (1.76–6.04). Corresponding ORs for bladder cancer were 1.00, 1.81 (1.06–3.11), and 2.02 (1.15–3.54). In analyses confined to subjects with arsenic water concentrations <200 µg L−1 (median = 60 µg L−1), lung and bladder cancer ORs for subjects in the upper tertile of %MMA compared with subjects in the lower two tertiles were 2.48 (1.08–5.68) and 2.37 (1.01–5.57), respectively.

The dose–response relationship between arsenic and the incidence of urinary cancer was investigated in a cohort of 8086 residents of northeastern Taiwan who were followed for 12 years. The study found 45 incident cases of urinary cancer and a monotonically increasing risk of urinary cancer with increasing arsenic concentration (P < 0.001). The relative risks were greater than fivefold for arsenic exposure >100 µg L−1, whereas the risk was elevated but not significant for lower exposure (<100 µg L−1).

In a study from southern Pakistan, blood and hair samples were analyzed from 175 arsenic-exposed (30–150 µg L−1) and nonexposed (<10 µg L−1) male patients with bladder and lung cancer who were admitted to the Nuclear Institute of Medicine and Radiotherapy, Jamshoro, during 2007–09. The exposed cancer patients had two- to threefold higher levels of arsenic in both biological samples compared with nonexposed case-matched cancerous male subjects.

A population-based case–control study was conducted in southeastern Michigan, United States, where approximately 230,000 people were exposed to arsenic concentrations between 10 and 100 µg L−1. This study included 411 bladder cancer cases diagnosed between 2000 and 2004 and 566 controls recruited during the same period. Overall, no increase in bladder cancer risk was found for time-weighted average lifetime arsenic exposure >10 µg L−1 compared with a reference group exposed to <1 µg L−1 (OR = 1.10; 95% CI: 0.65–1.86). Among ever-smokers, risks from arsenic exposure >10 µg L−1 were similarly not elevated compared with the reference group (OR = 0.94; 95% CI: 0.50–1.78).

A case–control study was performed in 2007–10 on 122 kidney cancer cases and 640 population-based controls from northern Chile, with individual data on exposure and potential confounders. The study included 76 renal cell, 24 transitional cell renal pelvis and ureter, and 22 other kidney cancers. For renal pelvis and ureter cancers, the adjusted ORs by average arsenic intakes of <400, 400–1000, and >1000 µg per day were 1.00, 5.71, and 11.09, respectively. The study provided new evidence of a dose–response relationship between arsenic in drinking water and renal pelvis and ureter cancer.

The association between cancer incidence patterns and arsenic in aquifers from three areas of Córdoba province, Argentina, was assessed. Age-standardized incidence rates were obtained from the Córdoba Cancer Registry, and arsenic data were taken from official reports of monitoring wells. Total age-standardized incidence rates by aquifer for males/females were 191.01/249.22 (Rioja plain), 215.03/225.37 (Pampa hills), and 239.42/188.93 (Chaco-Pampa plain). The study found an association between arsenic and increased risk of colon cancer in women, and of lung and bladder cancers in both sexes. In a recent study of exposure to arsenic through food products across the world, the authors concluded that the total global burden of bladder and lung cancers was higher in females than in males (Table 5).

Arsenic and Genotoxic Effects

Arsenic also interferes with gene expression through epigenetic processes, for example, DNA methylation and posttranslational histone modifications. Recent in vivo and in vitro studies of the effects of arsenic on histone residues in blood cells of arsenic-exposed individuals from the Argentinean Andes reported a significant decrease in global H3K9me3 in CD4+ cells with increasing arsenic exposure. At very low concentrations, arsenic alters H3K9Ac and the levels of the histone-deacetylating enzyme HDAC2, as well as the protein levels of PCNA, DNMT1, and MAML1, which form part of a gene-expression silencing complex. In a recent study, the authors reported that although arsenic at 0–50 µg L−1 accumulated in human bronchial epithelial (HBE) cells, it did not produce cytotoxic effects. However, at the same concentrations, arsenic significantly changed the expression of genes and proteins in innate host defense pathways, thereby decreasing the ability of the lung epithelium to fight bacterial infection. Inorganic arsenic affects DNA methylation, miRNA expression, and histones. It modifies the enzymatic activity of DNA methyltransferases, histone deacetylase (HDAC), and histone acetyltransferase (HAT). On the basis of in vitro and in vivo studies, inorganic arsenic was found to be an epigenetic modifier of genes that play a vital role in critical cellular functions such as cellular growth and immune response. In another study, of groups of four women each in the Argentinean Andes, the authors investigated the impact of inorganic arsenic on gene expression. The median urinary arsenic concentrations among the groups varied between 65 and 276 µg L−1 and were associated with genomewide alterations of gene expression and genomewide DNA methylation. Immune-system-regulating genes, including tumor necrosis factor alpha and interferon gamma, as well as NF-kappa-B complex genes, were significantly downregulated in the high-arsenic group (276 µg L−1). A similar trend was found among the high-arsenic group for


genomewide full methylation (>80% methylation). The authors concluded that chronic arsenic exposure from drinking water triggers changes in the transcriptome and methylome of CD4-positive T cells, both genomewide and in specific genes, and thereby causes immunotoxicity by interfering with gene expression and regulation. In another study, in rural Bangladesh, the scientists evaluated the association between arsenic exposure and long interspersed nuclear element-1 (LINE-1) methylation among 175 individuals from arsenic-endemic areas and 61 recruits from a nonarsenic-endemic area. Individuals from the arsenic-endemic areas had significantly lower average LINE-1 methylation levels than people from the nonarsenic-endemic area. Females showed a significant inverse association between arsenic concentration and LINE-1 methylation in blood leukocyte DNA, which may relate to arsenic-induced elevation of blood pressure. In another study, of 2171 subjects in Bangladesh, the authors genotyped whole-blood DNA samples and derived DNA copy-number data. None of the subjects had arsenic-induced skin lesions at baseline; they were followed up every 2 years, for a total of 8 years, for any signs or symptoms of skin lesions. DNA deletions in various genes, comprising OR5J2, GOLGA6L7P, APBA2, GALNTL5, VN1R31P, PHKG1P2, SGCZ, and ZNF658, and in lincRNA genes including RP11-76I14.1, CTC-535M15.2, and RP11-73B2.2, placed individuals at higher risk of developing skin lesions.

See also: Arsenic Exposure From Seafood Consumption; Arsenic Pollution of Groundwater in Bangladesh; Arsenic: Occurrence in Groundwater; Effects of Cooking on Dietary Exposure to Arsenic From Rice and Vegetables: Human Health Risks; Environmental Carcinogens and Regulation; Drinking Water Nitrate and Health.

Further Reading

Abernathy, C.O., Calderon, R.L., Chappell, W.R. (Eds.), 1997. Arsenic exposure and health effects. Chapman and Hall.
Aposhian, H.V., Gurzau, E.S., Le, X.C., 2000. Occurrence of monomethylarsonous acid in urine of humans exposed to inorganic arsenic. Chemical Research in Toxicology 13, 693–697.
Bundschuh, J., Armienta, M.A., Birkle, P., Bhattacharya, P., Matschullat, J., Mukherjee, A.B. (Eds.), 2008. Natural arsenic in groundwaters of Latin America: Occurrence, health impact and remediation. CRC Press/Taylor and Francis. ISBN 978-0-415-40771-7.
Chappell, W.R., Abernathy, C.O., Calderon, R.L. (Eds.), 1999. Arsenic exposure and health effects. Elsevier.
Chappell, W.R., Abernathy, C.O., Calderon, R.L. (Eds.), 2001. Arsenic exposure and health effects, IV. Elsevier.
Chappell, W.R., Abernathy, C.O., Calderon, R.L., Thomas, D.J. (Eds.), 2003. Arsenic exposure and health effects. Elsevier.
Chappell, W.R., Abernathy, C.O., Cothern, C.R. (Eds.), 1994. Arsenic exposure and health, science and technology letters. Elsevier.
Environmental Health Criteria 224: Arsenic and arsenic compounds, 2nd edn., 2001. IPCS, Geneva.
National Research Council, 1977. Arsenic: Medical and biological effects of environmental pollutants. National Academy of Sciences, Washington, DC.
National Research Council, 1999. Arsenic in drinking water. National Academy Press, Washington, DC.
National Research Council, 2001. Arsenic in drinking water (2001 update). National Academy Press, Washington, DC.
Nriagu, J.O. (Ed.), 1994. Arsenic in the environment (Part I): Cycling and characterization. Wiley Interscience.
Nriagu, J.O. (Ed.), 1994. Arsenic in the environment (Part II): Human health and ecosystem effects. Wiley Interscience.
Ravenscroft, P., Brammer, H., Richards, K., 2009. Arsenic pollution: A global synthesis. Wiley-Blackwell. ISBN 978-1-405-18602-5.
Saha, K.C. (Ed.), 2002. Arsenicosis in West Bengal (environmental problems and solutions). Sadananda Prakashani, Kolkata, India.
Smith, A.H., Marshall, G., Yuan, Y., et al., 2006. Increased mortality from lung cancer and bronchiectasis in young adults after exposure to arsenic in utero and in early childhood. Environmental Health Perspectives 114 (8), 1293–1296.
Some drinking water disinfectants and contaminants, including arsenic. IARC Monographs on the Evaluation of Carcinogenic Risk to Humans, Vol. 84. WHO, Lyon, France, 2004.
Special issue on arsenic in the environment: Biology and chemistry. Science of the Total Environment 379 (2–3), 2007, 106–266.
Special issue on arsenic: Environmental and health aspects with special reference to groundwater in South Asia. Journal of Environmental Science and Health (Part A), Toxic/Hazardous Substances & Environmental Engineering A38 (1), 2003. Marcel Dekker.
Special issue on groundwater arsenic contamination and its effects in South East Asia. Journal of Environmental Science and Health (Part A), Toxic/Hazardous Substances & Environmental Engineering 42 (12), 2007. Taylor & Francis.
Special issue on arsenic contamination in developing countries: Health effects. Journal of Health, Population and Nutrition 24 (2), 2006. ICDDR,B.
Special issue on arsenic analytical chemistry and beyond. Talanta 58 (1), 2002. Elsevier.
Special issue on arsenic geochemistry, transport mechanism in the soil-plant system, human and animal health issues. Environment International 35 (3), 2009, 453–454.
Aballay, L.R., Díaz, M.P., Francisca, F.M., Muñoz, S.E., 2012. Cancer incidence and pattern of arsenic concentration in drinking water wells in Córdoba, Argentina. International Journal of Environmental Health Research 22, 220–231.
Argos, M., Kalra, T., Pierce, B.L., Chen, Y., Parvez, F., Islam, T., Ahmed, A., Hasan, R., Hasan, K., Sarwar, G., 2011. A prospective study of arsenic exposure from drinking water and incidence of skin lesions in Bangladesh. American Journal of Epidemiology 174, 185–194.
ATSDR, 2007. Toxicological profile for arsenic. Agency for Toxic Substances and Disease Registry, Division of Toxicology, Atlanta, GA.
Bloom, M.S., Neamtiu, I.A., Surdu, S., Pop, C., Anastasiu, D., Appleton, A.A., Fitzgerald, E.F., Gurzau, E.S., 2016. Low level arsenic contaminated water consumption and birth outcomes in Romania: An exploratory study. Reproductive Toxicology 59, 8–16.
Bloom, M.S., Neamtiu, I.A., Surdu, S., Pop, C., Lupsa, I.R., Anastasiu, D., Fitzgerald, E.F., Gurzau, E.S., 2014. Consumption of low-moderate level arsenic contaminated water does not increase spontaneous pregnancy loss: A case control study. Environmental Health 13, 81.
Bräuner, E.V., Nordsborg, R.B., Andersen, Z.J., Tjønneland, A., Loft, S., Raaschou-Nielsen, O., 2014. Long-term exposure to low-level arsenic in drinking water and diabetes incidence: A prospective study of the diet, cancer and health cohort. Environmental Health Perspectives 122, 1059–1065.
Chakraborti, D., Mukherjee, S.C., Pati, S., Sengupta, M.K., Rahman, M.M., Chowdhury, U.K., Lodh, D., Chanda, C.R., Chakraborti, A.K., Basu, G.K., 2003. Arsenic groundwater contamination in Middle Ganga Plain, Bihar, India: A future danger? Environmental Health Perspectives 111, 1194.
Chakraborti, D., Rahman, M.M., Ahamed, S., Dutta, R.N., Pati, S., Mukherjee, S.C., 2016. Arsenic groundwater contamination and its health effects in Patna district (capital of Bihar) in the Middle Ganga Plain, India. Chemosphere 152, 520–529.
Chattopadhyay, B., Mukherjee, A., Gangopadhyay, P., Alam, J., Roychowdhury, A., 2010. Respiratory effect related to exposure of different concentrations of arsenic in drinking water in West Bengal, India. Journal of Environmental Science & Engineering 52, 147–154.
Chen, C.-L., Chiou, H.-Y., Hsu, L.-I., Hsueh, Y.-M., Wu, M.-M., Wang, Y.-H., Chen, C.-J., 2010. Arsenic in drinking water and risk of urinary tract cancer: A follow-up study from northeastern Taiwan. Cancer Epidemiology, Biomarkers & Prevention 19, 101–110.


Chen, Y., Wu, F., Liu, M., Parvez, F., Slavkovich, V., Eunus, M., Ahmed, A., Argos, M., Islam, T., Rakibuz-Zaman, M., 2013. A prospective study of arsenic exposure, arsenic methylation capacity, and risk of cardiovascular disease in Bangladesh. Environmental Health Perspectives 121, 832.
Das, D., Bindhani, B., Mukherjee, B., Saha, H., Biswas, P., Dutta, K., Prasad, P., Sinha, D., Ray, M.R., 2014. Chronic low-level arsenic exposure reduces lung function in male population without skin lesions. International Journal of Public Health 59, 655–663.
Dauphiné, D.C., Ferreccio, C., Guntur, S., Yuan, Y., Hammond, S.K., Balmes, J., Smith, A.H., Steinmaus, C., 2011. Lung function in adults following in utero and childhood exposure to arsenic in drinking water: Preliminary findings. International Archives of Occupational and Environmental Health 84, 591–600.
Ferreccio, C., Smith, A.H., Durán, V., Barlaro, T., Benítez, H., Valdés, R., Aguirre, J.J., Moore, L.E., Acevedo, J., Vásquez, M.I., 2013. Case-control study of arsenic in drinking water and kidney cancer in uniquely exposed northern Chile. American Journal of Epidemiology 178, 813–818.
Ferreccio, C., Yuan, Y., Calle, J., Benítez, H., Parra, R.L., Acevedo, J., Smith, A.H., Liaw, J., Steinmaus, C., 2013. Arsenic, tobacco smoke, and occupation: Associations of multiple agents with lung and bladder cancer. Epidemiology 24, 898–905.
García-Esquinas, E., Pollán, M., Umans, J.G., Francesconi, K.A., Goessler, W., Guallar, E., Howard, B., Farley, J., Best, L.G., Navas-Acien, A., 2013. Arsenic exposure and cancer mortality in a US-based prospective cohort: The Strong Heart Study. Cancer Epidemiology, Biomarkers & Prevention 22, 1944–1953.
Garshick, M., Wu, F., Demmer, R., Parvez, F., Ahmed, A., Eunus, M., Hasan, R., Nahar, J., Shaheen, I., Sarwar, G., Desvarieux, M., Ahsan, H., Chen, Y., 2017. The association between socioeconomic status and subclinical atherosclerosis in a rural Bangladesh population. Preventive Medicine 102, 6–11.
Gong, G., O'Bryant, S.E., 2012. Low-level arsenic exposure, AS3MT gene polymorphism and cardiovascular diseases in rural Texas counties. Environmental Research 113, 52–57.
Hamadani, J.D., Grantham-McGregor, S.M., Tofail, F., Nermell, B., Fängström, B., Huda, S.N., Yesmin, S., Rahman, M., Vera-Hernández, M., Arifeen, S.E., 2010. Pre- and postnatal arsenic exposure and child development at 18 months of age: A cohort study in rural Bangladesh. International Journal of Epidemiology 39, 1206–1216.
Hsieh, Y.-C., Lien, L.-M., Chung, W.-T., Hsieh, F.-I., Hsieh, P.-F., Wu, M.-M., Tseng, H.-P., Chiou, H.-Y., Chen, C.-J., 2011. Significantly increased risk of carotid atherosclerosis with arsenic exposure and polymorphisms in arsenic metabolism genes. Environmental Research 111, 804–810.
IARC, 2004. Some drinking-water disinfectants and contaminants, including arsenic. Working Group on the Evaluation of Carcinogenic Risks to Humans. World Health Organization, International Agency for Research on Cancer (IARC).
Islam, M.R., Khan, I., Hassan, S.M.N., McEvoy, M., D'Este, C., Attia, J., Peel, R., Sultana, M., Akter, S., Milton, A.H., 2012. Association between type 2 diabetes and chronic arsenic exposure in drinking water: A cross sectional study in Bangladesh. Environmental Health 11, 38.
James, K.A., Marshall, J.A., Hokanson, J.E., Meliker, J.R., Zerbe, G.O., Byers, T.E., 2013. A case-cohort study examining lifetime exposure to inorganic arsenic in drinking water and diabetes mellitus. Environmental Research 123.
Jovanovic, D., Rasic-Milutinovic, Z., Paunovic, K., Jakovljevic, B., Plavsic, S., Milosevic, J., 2013. Low levels of arsenic in drinking water and type 2 diabetes in middle Banat region, Serbia. International Journal of Hygiene and Environmental Health 216, 50–55.
Li, X., Li, B., Xi, S., Zheng, Q., Lv, X., Sun, G., 2013. Prolonged environmental exposure of arsenic through drinking water on the risk of hypertension and type 2 diabetes. Environmental Science and Pollution Research 20, 8151–8161.
Lin, H.-J., Sung, T.-I., Chen, C.-Y., Guo, H.-R., 2013. Arsenic levels in drinking water and mortality of liver cancer in Taiwan. Journal of Hazardous Materials 262, 1132–1138.
Melak, D., Ferreccio, C., Kalman, D., Parra, R., Acevedo, J., Pérez, L., Cortés, S., Smith, A.H., Yuan, Y., Liaw, J., 2014. Arsenic methylation and lung and bladder cancer in a case–control study in northern Chile. Toxicology and Applied Pharmacology 274, 225–231.
Meliker, J.R., Slotnick, M.J., AvRuskin, G.A., Schottenfeld, D., Jacquez, G.M., Wilson, M.L., Goovaerts, P., Franzblau, A., Nriagu, J.O., 2010. Lifetime exposure to arsenic in drinking water and bladder cancer: A population-based case–control study in Michigan, USA. Cancer Causes & Control 21, 745–757.
Moon, K.A., Guallar, E., Umans, J.G., Devereux, R.B., Best, L.G., Francesconi, K.A., Goessler, W., Pollak, J., Silbergeld, E.K., Howard, B.V., 2013. Association between exposure to low to moderate arsenic levels and incident cardiovascular disease: A prospective cohort study. Annals of Internal Medicine 159, 649–659.
Nahar, M.N., Inaoka, T., Fujimura, M., 2014. A consecutive study on arsenic exposure and intelligence quotient (IQ) of children in Bangladesh. Environmental Health and Preventive Medicine 19, 194–199.
Parvez, F., Chen, Y., Brandt-Rauf, P.W., Slavkovich, V., Islam, T., Ahmed, A., Argos, M., Hassan, R., Yunus, M., Haque, S.E., 2010. A prospective study of respiratory symptoms associated with chronic arsenic exposure in Bangladesh: Findings from the Health Effects of Arsenic Longitudinal Study (HEALS). Thorax 65, 528–533.
Seow, W.J., Pan, W.-C., Kile, M.L., Baccarelli, A.A., Quamruzzaman, Q., Rahman, M., Mahiuddin, G., Mostofa, G., Lin, X., Christiani, D.C., 2012. Arsenic reduction in drinking water and improvement in skin lesions: A follow-up study in Bangladesh. Environmental Health Perspectives 120, 1733.
Shih, Y.H., Islam, T., Hore, S.K., Sarwar, G., Shahriar, M.H., Yunus, M., Graziano, J.H., Harjes, J., Baron, J.A., Parvez, F., Ahsan, H., Argos, M., 2017. Associations between prenatal arsenic exposure with adverse pregnancy outcome and child mortality. Environmental Research 158, 456–461.
Smith, A.H., Yunus, M., Khan, A.F., Ercumen, A., Yuan, Y., Smith, M.H., Liaw, J., Balmes, J., von Ehrenstein, O., Raqib, R., 2013. Chronic respiratory symptoms in children following in utero and early life exposure to arsenic in drinking water in Bangladesh. International Journal of Epidemiology 42, 1077–1086.
Somé, I., Sakira, A., Ouédraogo, M., Ouédraogo, T., Traoré, A., Sondo, B., Guissou, P., 2012. Arsenic levels in tube-wells water, food, residents' urine and the prevalence of skin lesions in Yatenga province, Burkina Faso. Interdisciplinary Toxicology 5, 38–41.
Somé, T.I., Sakira, A.K., Kaboré, A., Traoré, A., 2014. A survey of arsenic level in tube-wells in Bam province (Burkina Faso). Journal of Environmental Protection 5 (14), 1406.
Srivastava, S., Chen, Y., Barchowsky, A., 2009. Arsenic and cardiovascular disease. Toxicological Sciences 107, 312–323.
Steinmaus, C.M., Ferreccio, C., Romo, J.A., Yuan, Y., Cortes, S., Marshall, G., Moore, L.E., Balmes, J.R., Liaw, J., Golden, T., Smith, A.H., 2013. Drinking water arsenic in northern Chile: High cancer risks 40 years after exposure cessation. Cancer Epidemiology, Biomarkers & Prevention 22 (4), 623–630.
Surdu, S., Bloom, M.S., Neamtiu, I.A., Pop, C., Anastasiu, D., Fitzgerald, E.F., Gurzau, E.S., 2015. Consumption of arsenic-contaminated drinking water and anemia among pregnant and non-pregnant women in northwestern Romania. Environmental Research 140, 657–660.
Vaughan, D.J. (Ed.), 2006. Special issue on arsenic. Elements 2 (2).
Wadhwa, S.K., Kazi, T.G., Kolachi, N.F., Afridi, H.I., Khan, S., Chandio, A.A., Shah, A.Q., Kandhro, G.A., Nasreen, S., 2011. Case–control study of male cancer patients exposed to arsenic-contaminated drinking water and tobacco smoke with relation to non-exposed cancer patients. Human & Experimental Toxicology 30, 2013–2022.
Wasserman, G.A., Liu, X., LoIacono, N.J., Kline, J., Factor-Litvak, P., van Geen, A., Mey, J.L., Levy, D., Abramson, R., Schwartz, A., 2014. A cross-sectional study of well water arsenic and child IQ in Maine schoolchildren. Environmental Health 13, 23.
Wasserman, G.A., Liu, X., Parvez, F., Ahsan, H., Factor-Litvak, P., van Geen, A., Slavkovich, V., Lolacono, N.J., Cheng, Z., Hussain, I., 2004. Water arsenic exposure and children's intellectual function in Araihazar, Bangladesh. Environmental Health Perspectives 112, 1329–1333.

Relevant Websites

http://www.physics.harvard.edu/wilson/ - Arsenic Foundation Inc.
www.dchtrust.org - Dhaka Community Hospital, 190/1, Baro Moghbazar, Wireless Railgate, Dhaka-1217, Bangladesh.
www.soesju.org - School of Environmental Studies, Jadavpur University, Kolkata 700 032, West Bengal, India.
https://www.atsdr.cdc.gov/csem/arsenic/docs/arsenic.pdf

Arsenic Pollution of Groundwater in Bangladesh
P Ravenscroft, Entec UK Ltd, Cambridge, United Kingdom
© 2011 Elsevier B.V. All rights reserved.

Abbreviations

ADI  average daily intake
ARP  arsenic removal plant
BGS  British Geological Survey
BMI  body mass index
DALY  disability-adjusted life years
DOC  dissolved organic carbon
DPHE  Department of Public Health Engineering
FAO  Food and Agriculture Organization
HEALS  Health Effects of Arsenic Longitudinal Study
IDE  International Development Enterprise
LGM  Last Glacial Maximum
MDI  maximum daily intake
NGO  nongovernmental organization
O&M  operation and maintenance
OM  organic matter
PSF  pond sand filter
STW  shallow tubewell
WHO  World Health Organization

The Country

Bangladesh has a population of 153 million living in an area of 134 000 km2, and lies at the north end of the Bay of Bengal. The land comprises mostly floodplains of the combined deltas of the Ganges, Brahmaputra, and Meghna rivers (also referred to as the Bengal Basin), with slightly elevated terraces in the north (the Barind and Madhupur tracts) and hills in the east (Figure 1). The climate is tropical-monsoonal, with 1.5–3.5 m of rain a year, falling mostly between June and September. Although the cities are expanding rapidly, the population remains largely rural, and depends on subsistence farming of rice and wheat, which is heavily reliant on small-scale irrigation. Before c.1970, the majority of the population used surface water for drinking and domestic use, which resulted in high mortality due to diarrheal disease. Since then, with the support of international aid agencies, groundwater has replaced surface water as the main source of drinking water. Today, more than 90% of drinking water is drawn from shallow sandy aquifers that underlie most of the country. The water is obtained from millions of hand-pumped wells, each serving a group of interrelated households. Groundwater is easily accessible because it lies at shallow depth, with the water table falling to between 3 and 10 m below ground in the dry season. Urban supplies are also drawn mainly from groundwater, but usually from deeper wells, and the water may be treated to reduce high iron concentrations. Despite its importance for potable supply, the vast majority of groundwater abstracted is used for dry season irrigation. The water abstracted is fully replaced every year by infiltrating monsoon rain and floodwater, except around Dhaka city, where intensive and continuous municipal pumping leads to ever-declining water levels. The provision of tubewells for water supply was hailed as a success story because the switch away from polluted ponds and streams resulted in a massive reduction in morbidity and mortality from diarrheal disease. The benefits were real, but came at a high cost. Today, we know that the country is more affected by arsenic pollution of groundwater than any other.

Discovery and Measurement of Arsenic Pollution in Bangladesh

Arsenic pollution was identified in the adjoining Indian state of West Bengal in 1983, and documented in international health journals in the late 1980s. Despite its apparent detection in Chapai Nawabganj in 1993, the issue was virtually unknown in Bangladesh until 1995. A few water samples from Dhaka city had been tested for arsenic in 1990, but all were below detection limits. The turning point was the conference in Kolkata organized by Dipankar Chakraborti in early 1995, which led to surveys in Bangladesh.


Figure 1  Major physical features in Bangladesh (A, Araihazar; B, Brahmanbaria; CN, Chapai Nawabganj; Co, Comilla; Ct, Chittagong; F, Faridpur; J, Jessore; Kh, Khulna; Ku, Kushtia; Ma, Manikganj; Mu, Munshiganj; N, Naogaon; Pa, Paba; Pn, Pabna; R, Rangpur; Rj, Rajshahi; Sa, Satkhira; Se, Senbagh; Sr, Sreepur; Sy, Sylhet; T, Tala. Landforms: BT, Barind Tract; MT, Madhupur Tract).


These surveys began on a small scale and led to national surveys between 1997 and 1999. The transition from almost complete ignorance to systematic national mapping and widespread awareness among the population took less than 4 years. Initially, surveys were constrained by the limited capabilities of local laboratories, and relied heavily on the use of field test kits. When introduced in the 1990s, these kits were not reliable indicators of water safety at the 50 µg l−1 level. Since then they have been considerably improved and, with appropriate quality control, provide a practical tool at this level, although their performance at the 10 µg l−1 level remains problematic.
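Kit performance against a regulatory threshold is usually summarized by sensitivity and specificity relative to laboratory analysis, as in the minimal sketch below; all counts are hypothetical, not taken from any Bangladeshi evaluation.

# Sensitivity/specificity arithmetic for judging a field kit against
# laboratory analysis at the 50 ug/L level. Counts are made up.
tp, fn = 80, 20    # lab >50 ug/L: kit flags 80 wells, misses 20
tn, fp = 170, 30   # lab <=50 ug/L: kit passes 170 wells, wrongly flags 30

sensitivity = tp / (tp + fn)   # chance a truly unsafe well is flagged
specificity = tn / (tn + fp)   # chance a truly safe well is passed
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")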

Distribution of As-Pollution

The only systematic national survey of As-concentrations in wells (Table 1) was produced in 1998–99 by DPHE and BGS, who tested 3534 water samples, of which 42% contained >10 µg l−1 and 25% contained >50 µg l−1 arsenic. The wells were predominantly rural hand tubewells, and therefore represent exposure among the rural population. The range and depth-distribution of As-concentrations are shown in Table 1. The geographical distribution of As in water from tubewells is mapped in Figure 2 as the probability of a well exceeding thresholds between 10 and 400 µg l−1. Such maps prioritize areas for mitigation. The apparent absence of As from the coastal area (Figure 2) is the result of high salinity in shallow groundwater. Here, communities utilize deep (150–350 m) wells to supply potable needs. The data in Table 1 represent contamination in well water and can only approximate the distribution of pollution in aquifers.

The DPHE national surveys indicated that in 1998–99 approximately 25–30 million people consumed water containing more than the national standard of 50 µg l−1, and as many as 50 million used water containing more than the World Health Organization (WHO) guideline value of 10 µg l−1 As. Although based on testing of only a tiny fraction (perhaps 0.03%) of wells, these remain the best estimates of exposure at that time. Additional testing until 2007 of approximately 5 million wells indicates that approximately 20% of tubewells contained >50 µg l−1 and were used by 20 million people. These lower estimates reflect an unknown combination of the effects of actual water-supply mitigation, the use of field kits (less reliable than laboratory analysis) for As analysis, different demographic parameters, and possibly real changes in the aquifers. In 2316 villages, every well tested exceeded 50 µg l−1 of As. Both Figure 2 and Table 1 represent the distribution of pollution just before knowledge of arsenic began to influence well construction. The natural (prepumping) distribution of As-concentrations can never be known with certainty because the aquifers had been used with increasing intensity since the mid-1970s. Subsequent survey data are influenced both by real changes in groundwater quality (i.e., the effects of abstraction) and by deliberate actions to abandon polluted wells and to install new wells in low-As horizons.
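The arithmetic behind such exposure estimates is a simple proportion-times-population calculation, sketched below. The well fractions are taken from the survey counts in Table 1; the 120 million figure for the population served by tubewells is an assumption chosen for illustration, and with it the calculation reproduces numbers of the same order as those quoted above.

# Rough population-exposure arithmetic of the kind behind the 1998-99
# estimates. The served-population figure is an assumed value.
population_on_tubewells = 120e6     # assumed rural population using tubewells
frac_over_50 = 878 / 3523           # fraction of surveyed wells >50 ug/L (~25%)
frac_over_10 = 1482 / 3523          # fraction of surveyed wells >10 ug/L (~42%)

print(f"exposed above 50 ug/L: ~{population_on_tubewells * frac_over_50 / 1e6:.0f} million")
print(f"exposed above 10 ug/L: ~{population_on_tubewells * frac_over_10 / 1e6:.0f} million")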

Geology and Hydrogeology of the Affected Aquifers

The As-affected areas of Bangladesh are underlain by alluvial sediments of the Bengal Basin that are exploited, to depths of 350 m, as the main source of water supply for drinking and irrigation. During the Last Glacial Maximum (LGM; 18 000–30 000 years ago), the major rivers incised deep channels (100 m) across the landscape, leaving blocks such as the Madhupur and Barind tracts (Figure 1) and lower, but extensive, paleosol-capped interfluves (20–50 m below the present land surface) standing high above sea level and the regional water table. The Pleistocene sediments forming these blocks (the Dupi Tila Formation) were oxidized and thoroughly flushed, giving them their characteristic brown color. During the post-LGM sea-level rise, the main channels and lower delta were filled with gray sand and silt, abundant organic matter (OM), and peat layers in the south. The Holocene sands have, by and large, remained saturated since deposition, and had not been significantly flushed before the onset of intensive pumping in the 1970s.

Table 1    Distribution of As-concentrations in wells in Bangladesh in 1998–99

                               As-concentration class (µg l−1)
Well depth (m)                 <10     10–50   50–…    …       …–500   >500    Total
All wells (no.)                2041    604     313     324     183     58      3523
Depth class 1, shallowest (%)  17.7    16.1    15.0    18.2    19.7    32.8    17.6
Depth class 2 (%)              47.6    50.3    52.4    59.0    66.7    67.2    50.8
Depth class 3 (%)              17.8    23.7    21.4    19.1    10.4    8.6     18.7
Depth class 4 (%)              1.7     7.8     10.5    3.4     0.5     0.0     3.6
Depth class 5 (%)              1.2     1.0     0.3     0.0     0.0     0.0     0.9
Depth class 6 (%)              8.4     1.2     0.0     0.3     0.0     0.0     5.1
Depth class 7, >300 m (%)      5.5     0.0     0.3     0.0     0.0     0.0     3.2

Note: The first row of data shows the number of wells in each concentration class; the rows below give the percentage of that class's wells in each depth class, from shallowest to deepest. Thus columns sum to 100%, but the rows do not. Ellipses mark class boundaries that could not be recovered.
Source: Data courtesy of Department of Public Health Engineering, Bangladesh.


Figure 2  Distribution of arsenic in groundwater in Bangladesh. The four map panels show the probability of a tubewell exceeding As-concentration thresholds of 10, 50, 200, and 400 µg l−1 (probability classes: <10%, 10–30%, 30–60%, 60–100%, and No Data; scale bar 0–120 km). Surfaces calculated from the Department of Public Health Engineering (Bangladesh) data set using the ArcView Spatial Analyst® software.
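The probability surfaces in Figure 2 were produced with ArcView Spatial Analyst; the sketch below illustrates only the general indicator-interpolation idea behind such maps (code each well as exceeding or not exceeding a threshold, then interpolate the indicator), using made-up coordinates and a simple inverse-distance weighting rather than the software's actual algorithm.

# Indicator interpolation sketch: estimate P(As > threshold) at a point
# from nearby wells. Coordinates and concentrations are hypothetical.
import math

wells = [((0.0, 0.0), 120), ((1.0, 0.2), 8), ((0.3, 0.9), 60), ((0.9, 1.1), 15)]

def prob_exceed(point, wells, threshold=50.0, power=2.0):
    num = den = 0.0
    for (x, y), conc in wells:
        d = math.hypot(point[0] - x, point[1] - y) or 1e-9  # avoid divide-by-zero
        w = d ** -power                                      # inverse-distance weight
        num += w * (1.0 if conc > threshold else 0.0)        # indicator value
        den += w
    return num / den

print(f"P(As > 50 ug/L) at (0.5, 0.5): {prob_exceed((0.5, 0.5), wells):.2f}")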

The occurrence of As is systematically related to the geology of the sediments, as expressed through their depth (Table 1), age, color, and depositional environment. The probability of a randomly located well being contaminated increases rapidly in the first few tens of meters, with peak As-concentrations typically occurring at 20–40 m, below which both maximum concentrations and the probability of exceeding 10 and 50 µg l−1 decrease rapidly. Both Holocene and Pleistocene sands form high-yielding freshwater aquifers, but as a consequence of their recent geological history, they have different properties. Owing to the oxidation and the formation of secondary clays and iron oxides, the brown Pleistocene sands have lower permeabilities (20–30 m day−1) but better water quality, with low concentrations of iron.

Older people (>30 years old, and especially >60 years) and those with low BMI had higher likelihoods of developing skin lesions. A significantly increased risk of skin lesions was observed even in people drinking water with arsenic above 8.1 µg l−1.

Bisphenol A

Biomonitoring surveys indicate that >90% of persons have detectable BPA. Ingestion is considered the main route of human exposure to BPA. It is well established that BPA can leach out of polycarbonate plastics or epoxy resins, particularly in the presence of high heat, physical manipulation, or repetitive use. Thus, microwaving food in plastic containers or plastic films could result in BPA leaching out of these plastics and into the food. BPA has also been detected in several samples of paper or cardboard used in fast-food packaging. Specific foods that have been associated with increased BPA exposure include canned soup, bottled water, and fast/takeout foods. Breastfeeding infants may ingest BPA via breastmilk. As surface water and groundwater have been found to have BPA contamination, it is not surprising that drinking (tap) water may also contain BPA; however, this is generally considered a relatively small source of exposure compared with ingestion from food products.

Exposure to BPA may also occur via non-food routes. For example, dermal exposure to BPA may result from handling thermal paper. The uptake of BPA from thermal paper is increased when fingers are greasy or wet. Some toys may be made using BPA; children mouthing or chewing these toys could also be exposed to BPA via ingestion. The use of BPA in dental sealants could result in BPA leaching into saliva, which could then be swallowed. Some hospital-based studies have also identified higher BPA concentrations among neonates who used a higher number of medical devices or products. Elevated BPA concentrations in air, which can lead to inhalation exposure, have been documented in several factories that manufacture or use BPA. Several recent studies have documented that workers in facilities that produce or use BPA (such as thermal paper manufacturing, plastic injection molding, or epoxy resin manufacturing companies) have elevated BPA exposure.

BPA is rapidly metabolized and excreted from the body in urine. The half-life of BPA in human adults has been estimated at 4–5 h; the half-life in children is estimated to be somewhat longer. This half-life has been calculated under the assumption that the majority of BPA exposure occurs via ingestion. However, there is some debate regarding whether inhalation and/or dermal exposure are also major contributors to overall BPA body burden.
This could affect the BPA half-life in that inhalation and dermal exposures would not first pass through the liver to be conjugated for rapid elimination; thus, the half-life of a BPA dose received by inhalation or dermal routes could be substantially longer than the 4–5 h estimate. Urine is the most widely used matrix for measuring BPA exposure; it is considered the best method because of the rapid elimination of BPA in urine. However, a variety of factors may influence this measurement, including the timing and frequency of urine collection, assessment of urinary dilution, and whether conjugated or total BPA is measured. Urinary BPA assessments based on collection of multiple samples are thought to better represent an individual's typical BPA exposure, as urinary BPA can vary dramatically over time. Additionally, urine density can vary within and between individuals, which will also affect concentrations. Thus, urinary BPA values are frequently corrected for urinary dilution using creatinine or specific gravity.
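The practical meaning of the 4–5 h half-life discussed above follows from the first-order elimination formula C(t) = C0·e^(−kt) with k = ln 2 / t½, sketched below; the 24 h example time is arbitrary.

# First-order elimination implied by a 4-5 h half-life.
import math

def fraction_remaining(t_hours, half_life_hours=4.5):
    k = math.log(2) / half_life_hours   # elimination rate constant, per hour
    return math.exp(-k * t_hours)

print(f"{fraction_remaining(24):.1%} of a dose remains after 24 h")  # ~2.5%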

Table 1    Sources of human exposure to BPA

Exposure route    Source
Inhalation        Indoor air (esp. occupational exposures); Combustion of plastics
Ingestion         Drinking water; Bottled water (a); Food (a, b); Breast milk; Mouthing/contact with toys; Dental sealants
Dermal            Thermal paper/receipts; Personal care products
Other             Flame retardants; Prenatal exposure; Medical equipment (c)

(a) BPA contamination occurs via leaching from plastic containers.
(b) BPA contamination occurs via leaching from epoxy resins in canned foods.
(c) BPA contamination via leaching from plastics in medical equipment, for example, intravenous equipment.


It is also possible to measure either total BPA or conjugated BPA. It can be challenging to compare urinary BPA measurements collected using different criteria; these differences, in part, are thought to contribute to differences observed between individual epidemiology studies. As noted above, urinary BPA is detectable in >90% of the United States population. The geometric mean total urinary BPA concentration from the 2013–14 National Health and Nutrition Examination Survey (NHANES) was 1.28 µg/g creatinine (95% confidence interval: 1.18–1.39); this is an estimate of the average concentration within the United States population. The geometric mean urinary BPA among 6–11 year old children in the United States was 1.81 µg/g creatinine (95% confidence interval: 1.68–1.96). Data from this and other studies have consistently shown that urinary BPA concentrations are higher in children than in adults.
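A geometric mean such as the NHANES values above is the exponential of the mean log concentration, as the sketch below shows; the five urine values are made up for illustration.

# Geometric mean of creatinine-corrected urinary BPA. Values are hypothetical.
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

urinary_bpa = [0.5, 0.9, 1.4, 2.2, 3.1]   # ug/g creatinine, made up
print(f"GM = {geometric_mean(urinary_bpa):.2f} ug/g creatinine")  # ~1.34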

Health Impacts of BPA Exposure

BPA is a member of the class of chemicals referred to as endocrine disrupting chemicals (EDCs): exogenous chemicals that affect the activity or function of hormones. Hormones are produced by endocrine glands and distributed through the circulatory system to serve as signaling molecules for distant tissues or organs. They regulate and maintain various physiological processes related to development, metabolism, and reproduction. Hormones work most effectively at concentrations within a physiologically relevant range; detrimental health effects may be seen when concentrations either exceed or fall below the ideal range. For example, hypothyroidism occurs when too little thyroid hormone is present, whereas hyperthyroidism occurs in the presence of excessive amounts of thyroid hormone. Additionally, prenatal and early-life development is a period with tight regulation of both the timing and quantity of hormones; this serves to coordinate the complex processes of organogenesis and tissue differentiation. Alterations to this highly regulated process, such as through exposure to BPA, may disrupt the trajectory of development, which in turn can lead to lifelong impacts on health.

Sigmoidal or U-shaped (nonmonotonic) dose-response curves have been observed for BPA and other endocrine disruptors. Hormones act through high-affinity receptors and are intended to be active at low concentrations, so even small shifts in concentration can affect the outcome. For some outcomes, impacts are seen at low concentrations but not at higher concentrations, which makes identification of a threshold value for a "no effect" concentration problematic. Illustrating this, many studies have reported effects at doses lower than the current regulatory guidelines, which were based on establishing a NOAEL, a threshold-based value.

BPA is an estrogen mimic, meaning that it acts in a manner similar to estradiol in the body. However, increasing evidence suggests that BPA toxicity is mediated through multiple mechanisms of action, not just estrogen-mediated activity. BPA has been shown to bind to and activate estrogen receptor alpha and estrogen receptor beta; the affinity of BPA is approximately 10 times higher for estrogen receptor beta than for alpha. Non-genomic steroidal actions resulting from BPA binding to membrane estrogen receptors have been shown to be as strong as those of estradiol. BPA has also been shown to increase expression of progesterone receptors. BPA is noted as being an antiandrogen, and also an antagonist of the thyroid receptor. Exposure to BPA, particularly early in life, can result in epigenetic modification, including altered global DNA methylation and activation of genes related to prostate cancer. Inflammation can occur following BPA exposure due to BPA's induction of proinflammatory cytokines and chemokines. Epigenetic activity is also a potential mechanism for intergenerational impacts of BPA; this is an area of current active research.
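A toy illustration of the nonmonotonic (U-shaped) dose-response pattern described above is sketched below; the quadratic-in-log-dose form is purely illustrative and is not a fitted model for BPA.

# Toy U-shaped dose-response: lowest response at an intermediate dose,
# higher responses both below and above it. Not a fitted BPA model.
import math

def u_shaped_response(dose):
    x = math.log10(dose)
    return 1.0 + (x - 1.0) ** 2   # minimum response at dose = 10 (arbitrary units)

for dose in (0.1, 1, 10, 100, 1000):
    print(dose, round(u_shaped_response(dose), 2))  # 5.0, 2.0, 1.0, 2.0, 5.0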
Several hundred studies in animals and over a hundred studies in humans have been conducted on the health impacts of BPA; these have been thoroughly reviewed in several papers (see Further Reading). Bisphenol A has been associated with numerous health outcomes, including obesity, diabetes, cardiovascular disease, female reproductive health, male reproductive health, hormone-sensitive cancers, prostate and thyroid-related impairments, and neurodevelopment. Specific effects vary with dose and timing of exposure; fetuses and neonates are considered most vulnerable to the effects of BPA exposure. Sex-specific effects have been noted for several outcomes. There is currently more evidence for health effects of BPA from animal than from human studies; there is a particular need for additional longitudinal studies in humans. Some key health outcomes suspected to result from BPA exposure are summarized below:







• Cardiovascular and metabolic disease. Several cross-sectional studies using NHANES data identified associations between urinary BPA and self-reported type 2 diabetes or HbA1c. Other cross-sectional and longitudinal studies also identified higher BPA to be associated with diagnosed cardiovascular disease as well as physiological endpoints indicative of cardiovascular disease. Animal studies have consistently identified an association between BPA exposure and obesity, particularly following prenatal exposure. Numerous cross-sectional studies in humans have supported this; however, there is currently limited longitudinal evidence regarding an association between BPA and obesity in humans.
• Reproductive health. Elevated BPA exposure has been linked with several male and female reproductive health outcomes. Multiple prospective studies among women undergoing in vitro fertilization treatment found that higher BPA was associated with poorer reproductive outcomes, including poorer ovarian response, reduced number of mature oocytes, and lower serum 17-beta estradiol. Occupational and cross-sectional studies have found that higher BPA in men is associated with reduced sperm quality. Numerous studies have found BPA to be associated with changes in circulating sex hormone concentrations in both men and women. There is also some evidence that elevated BPA may be associated with miscarriage.
• Cancer. Animal studies have consistently provided evidence that BPA exposure, particularly prenatal exposure, is associated with increased mammary (or breast) cancer. In contrast, studies on BPA and breast cancer in humans do not suggest an association; but these are, to date, limited and do not include prenatal assessment of BPA exposure. Both rodent and in vitro studies suggest BPA may be associated with the development of prostate cancer, but human studies are, again, limited.
• Neurodevelopmental and neuroendocrine impacts. Evidence suggests BPA exposure affects child neurodevelopment. Several longitudinal studies have suggested that early-life exposure to BPA may be related to impaired neurobehavioral development in children. Several studies have also reported sex-specific impacts. However, studies to date are somewhat inconsistent regarding which time period of exposure is important and which sex is more vulnerable to these effects. Changes in thyroid function in relation to BPA exposure have also been reported, including sex-specific impacts; however, there is relatively less literature in this area.

Policy and Regulation

The use of BPA in food packaging and food contact materials in the United States is regulated by the US Food and Drug Administration (FDA), which most recently updated its standards in 2014. Currently, BPA is allowed for use in most food contact materials, except for some products for infants and children. The American Chemistry Council petitioned the FDA to restrict the use of BPA in sippy cups and baby bottles after manufacturers had eliminated the use of polycarbonates in these products. Thus, the use of BPA-containing polycarbonate plastics in baby bottles and sippy cups was eliminated in 2012, and the use of BPA-containing epoxy resins in infant formula containers was eliminated in 2013. Based on its scientific reviews, the FDA states that BPA is safe at the levels currently present in foods. Meanwhile, as of 2014, 12 states had implemented bans on the use of BPA in child and infant food containers.

The United States Environmental Protection Agency (EPA) established an oral Reference Dose (RfD) for BPA of 50 µg/kg bw/day in 1988; this has not yet been updated to incorporate any of the substantial amount of research completed since the 1980s. A BPA Action Plan released in 2010 recommended that EPA consider including BPA on the Toxic Substances Control Act (TSCA) Concern List due to its potential environmental impacts, develop data to determine whether or not BPA poses a risk to the environment, and initiate collaborative activities to encourage reductions in the use of BPA and its release to the environment. In 2014, a document describing alternatives to the use of BPA in thermal paper was released; however, no regulatory action on BPA under TSCA is anticipated at this time.

The European Food Safety Authority (EFSA) revised its tolerable daily intake (TDI) for BPA to 4 µg/kg bw/day in January 2015; the TDI was previously 50 µg/kg bw/day. Based on its review of the scientific literature, the EFSA concluded that current human exposure to BPA occurs at levels well below the current TDI and, thus, that current uses of BPA are safe. This TDI is scheduled to be reviewed again in 2018, following the anticipated publication of new toxicological assessments. In September 2018, the European Parliament's Environment, Public Health and Food Safety Committee reduced the migration limit for BPA from food packaging to 0.05 mg/kg (previously 0.6 mg/kg) and prohibited the migration of BPA into any baby food or infant formula.
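Guideline values such as the EFSA TDI are used in screening calculations that compare an estimated intake, normalized by body weight, against the guideline. The sketch below uses a hypothetical dietary intake and an assumed body weight; neither value comes from this article.

# Screening-level comparison of an estimated BPA intake against the 2015
# EFSA TDI. Intake and body weight are assumed values for illustration.
tdi_ug_per_kg_day = 4.0     # EFSA temporary TDI (2015)
intake_ug_per_day = 30.0    # hypothetical dietary BPA intake
body_weight_kg = 70.0       # assumed adult body weight

hazard_quotient = (intake_ug_per_day / body_weight_kg) / tdi_ug_per_kg_day
print(f"HQ = {hazard_quotient:.2f}")   # <1 means the estimated intake is below the TDI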

Controversies and Future Research

New Paradigms in Risk Assessment

Evaluation of the health risks of BPA and other endocrine disruptors has faced substantial controversy over the past few decades. In part, this stems from the fact that the field of endocrine-disruption research is still in its relative infancy; the term "endocrine disruptor" itself is <30 years old. This field has produced some important paradigm shifts in risk assessment; however, these have yet to be fully incorporated into the formalized risk assessment processes used by government regulators. The paradigm shifts highlighted by research on endocrine disruptors include the presence of nonmonotonic dose-response curves, the existence of low-dose effects, and the importance of timing of exposure. As noted above, substantial research has described and documented that dose-response curves of BPA toxicity are not always monotonic, that effects can occur at lower doses even when no impacts are observed at higher doses, and that the health impacts resulting from exposure can vary substantially with the timing of exposure. Extensive evidence suggests prenatal exposures are of the greatest concern. These concepts challenge the classic paradigms of toxicology that "the dose makes the poison" and that a threshold model for risk assessment is sufficient to identify a "safe" exposure dose. The concept that the dose makes the poison, originally stated by Paracelsus, suggests that at higher doses one would expect increasingly greater harm to occur. A threshold model for risk assumes that once a dose causing no observable effect has been identified, any exposure below this level will not cause appreciable harm.

Risk assessment is historically based on the concepts of monotonic dose-responses and the existence of a safety threshold. The experimental designs used to assess these risks are often not suited to identifying low-dose effects, nonmonotonic effects, or early-life exposure impacts. At the same time, some risk assessments require that studies document that they have followed Good Laboratory Practices (GLP) standards, a series of recordkeeping procedures more commonly maintained by industry-based research studies than by academic studies, owing to the additional administrative burden involved in maintaining GLP. The risk assessment underlying the FDA's conclusion that current uses of BPA are safe is based on two studies that have been criticized as insufficiently sensitive to detect endocrine-system perturbations; it also did not incorporate >100 well-designed studies from academic researchers because they did not follow GLP. Thus, the concern from the research community is that the FDA's determination may substantially overestimate the safety of BPA. To address these concerns, the Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY-BPA) was launched by the National Toxicology Program, the National Institute of Environmental Health Sciences, and the Food and Drug


Administration in 2013. This consortium-based research program is a novel method to enhance collaboration between regulatory and academic research in order to further toxicology research on the health effects of BPA. The program consists of a main intramural, GLP-compliant, perinatal 2-year toxicity study of BPA, with additional studies contributed by 12 extramural researchers selected through a competitive proposal process. Some initial study results have recently become available; an integrated report is expected in fall 2019.

Substitution of BPA With Bisphenol Analogues

Over the past several years, there has been extensive public concern regarding BPA exposure and toxicity among infants and children. As noted above, this concern led many companies to voluntarily remove BPA from products intended for infants and children. The US FDA, several US states, and several European countries have also formally banned the use of BPA in baby bottles and sippy cups. However, compounds used as replacements for BPA were not thoroughly tested prior to their use in commerce, and there are growing concerns that the replacement compounds in BPA-free products may themselves have some toxicity. Many compounds used as BPA replacements are chemically very similar to BPA in that they also have two phenol rings. At least 16 of these bisphenol analogues are currently being used in a variety of products, including epoxy resins, thermal paper, and personal care products. The more commonly used include bisphenol S (BPS), bisphenol F (BPF), and bisphenol AF (BPAF). Publications from the past few years have documented the presence of these bisphenol analogues in food, dust, sediment, sludge, water, and human urine. Toxicology research to date suggests that the health impacts of exposure to bisphenol analogues are similar to those of BPA. In vitro and in vivo animal studies have identified estrogenic, antiandrogenic, cytotoxic, and genotoxic activity of these compounds. Moreover, several studies have found that the average potency of these compounds is of a similar magnitude to that of BPA. The health impacts of these bisphenol analogues in humans have not yet been well described; nevertheless, understanding the potential health consequences of substituting BPA with other compounds is an important area of ongoing research.

See also: Exposure Science: Contaminant Mixtures; Phthalates: Exposure and Health Effects; Phthalates: Human Exposures; Phthalates: Occurrence and Human Exposure; Tetrafluoroethylene: For Production of Teflon, Fluoroplastics, and Fluoroelastomers; The Exposome: An Approach Toward a Comprehensive Study of Exposures in Disease.

Further Reading

Braun, J.M., 2017. Early life exposure to endocrine disrupting chemicals and childhood obesity and neurodevelopment. Nature Reviews Endocrinology 13 (3), 161–173.
Calafat, A.M., Ye, X., Wong, L.Y., Reidy, J.A., Needham, L.L., 2008. Exposure of the U.S. population to bisphenol A and 4-tertiary-octylphenol: 2003–2004. Environmental Health Perspectives 116, 39–44.
Corrales, J., Kristofco, L.A., Steele, W.B., Yates, B.S., Breed, C.S., Williams, E.S., Brooks, B.W., 2015. Global assessment of bisphenol A in the environment: Review and analysis. Dose-Response 13, 1559325815598308.
Diamanti-Kandarakis, E., Bourguignon, J.-P., Giudice, L.C., Hauser, R., Prins, G.C., Soto, A.M., Zoeller, T., Gore, A.C., 2009. Endocrine-disrupting chemicals: An Endocrine Society scientific statement. Endocrine Reviews 30, 293–342.
Gore, A.C., Chappell, V.A., Fenton, S.E., Flaws, J.A., Nadal, A., Prins, G.S., Toppari, J., Zoeller, R.T., 2015. EDC-2: The Endocrine Society's second scientific statement on endocrine-disrupting chemicals. Endocrine Reviews 36, E1–E150.
Heindel, J.J., Newbold, R.R., Bucher, J.R., Camacho, L., Delclos, K.B., Lewis, S.M., Vanlandingham, M., Chruchwell, M.I., Twaddle, N.C., McLellen, M., Chidambaram, M., Bryant, M., Woodling, K., de Costa, G.G., Ferguson, S.A., Flaws, J., Howard, P.C., Walker, N.J., Zoeller, R.T., Fostel, J., Favaro, C., Schug, T.T., 2015. NIEHS/FDA CLARITY-BPA research program update. Reproductive Toxicology 58, 33–44.
Rochester, J., 2013. Bisphenol A and human health: A review of the literature. Reproductive Toxicology 42, 132–135.
Rochester, J., Bolden, A.L., 2015. Bisphenol S and F: A systematic review and comparison of the hormonal activity of bisphenol A substitutes. Environmental Health Perspectives 123, 643–650.
Rubin, B.S., 2011. Bisphenol A: An endocrine disruptor with widespread exposure and multiple effects. Journal of Steroid Biochemistry and Molecular Biology 127, 27–34.
Schug, T.T., Heindel, J.J., Camacho, L., Delclos, B., Howard, P., Johnson, A.F., Aungst, J., Keefe, D., Newbold, R., Walker, N.J., Zoeller, T., Bucher, J.R., 2013. A new approach to synergize academic and guideline-compliant research: The CLARITY-BPA research program. Reproductive Toxicology 40, 35–40.
Seachrist, D.D., Bonk, K.W., Ho, S.-M., Prins, G.S., Soto, A.M., Keri, R.A., 2016. A review of the carcinogenic potential of bisphenol A. Reproductive Toxicology 59, 167–182.
Vandenberg, L., Maffini, M.V., Sonnenschein, C., Rubin, B.S., Soto, A.M., 2009. Bisphenol-A and the great divide: A review of controversies in the field of endocrine disruption. Endocrine Reviews 30, 75–95.

Relevant Websites

https://www.niehs.nih.gov/health/topics/agents/sya-bpa/index.cfm – Bisphenol A. United States National Institute of Environmental Health Sciences.
https://www.cdc.gov/biomonitoring/BisphenolA_FactSheet.html – BPA Factsheet. United States Centers for Disease Control and Prevention.
https://pubchem.ncbi.nlm.nih.gov/compound/Bisphenol_A – Bisphenol A. Open Chemistry Database, United States National Library of Medicine.
https://ntp.niehs.nih.gov/go/bpa – CLARITY-BPA Program. United States National Toxicology Program.
https://www.efsa.europa.eu/en/topics/topic/bisphenol – Bisphenol A. European Food Safety Authority.
https://www.fda.gov/NewsEvents/PublicHealthFocus/ucm064437.htm – Bisphenol A: Use in food contact application. United States Food and Drug Administration.

Blastocystis spp., Ubiquitous Parasite of Human, Animals and Environment

Shahira A Ahmed, Suez Canal University, Ismailia, Egypt
Panagiotis Karanis, Qinghai University, Xining, Qinghai, P.R. China; and University of Cologne, Cologne, Germany
© 2019 Elsevier B.V. All rights reserved.

Introduction

Blastocystis is an anaerobic, noninvasive, unicellular protozoan parasite of the large intestine. It is a pleomorphic organism with multiple stages: the water-resistant infective cyst represents the transmissible stage, while the amoeboid form has been suggested to be linked with pathogenicity. The different forms of the Blastocystis life cycle are hard to recognize microscopically; without staining, the organism is often confused with other enteric organisms such as yeasts or Cyclospora spp. Morphologically, isolates of Blastocystis spp. from different hosts (human and animal) are very similar to each other, so morphology alone cannot be used as the sole criterion to differentiate one isolate from another. Genetically, however, isolates display marked variability on the basis of sequence homology of the small subunit ribosomal DNA (SSU rDNA) gene. Based on the diversity within SSU rDNA, 17 different subtypes (STs) have been demonstrated. Human colonization is associated with nine STs (ST1–ST9), whereas the others have been found exclusively in nonhuman hosts.

Interest in this parasite has increased in the last decade, since it is capable of infecting humans and a wide range of animal hosts, including amphibians, birds, cattle, insects, monkeys, reptiles, rodents, and pigs. As a ubiquitous parasite, its transmissible form, the cyst, is present in the environment and can achieve transmission via fecal–oral, waterborne, and foodborne routes (Fig. 1). Blastocystis species are widely distributed across all of the world's continents; globally, over one billion humans have been estimated to be infected. Prevalence can reach 30% in developed countries and approaches 100% in some developing countries owing to poor hygienic practices. Such high prevalence has raised scientific concern about the future health impacts of this parasite.

Fig. 1 Life cycle of Blastocystis spp. with different ways of transmission.

Blastocystis spp. are reported in asymptomatic and symptomatic individuals and are opportunistic in immunocompromised patients, such as HIV and cancer patients. Because this organism colonizes healthy asymptomatic individuals with high carriage rates and prolonged gut colonization, knowledge of its mechanisms of pathogenicity and of its diversity remains fragmentary; however, putative virulence factors have been identified by in vitro and in vivo studies coupled with recent genomic data. Gut colonization with Blastocystis is associated with nonspecific gastrointestinal disorders, including diarrhea, abdominal pain, flatulence, vomiting, anorexia, and dysentery. Infection might play a pathogenic role in irritable bowel syndrome and be linked to other diseases such as

urticaria, colorectal cancer, chronic liver disease, ulcerative colitis, and anemia. The higher frequency of Blastocystis spp. in those who are in contact with animals (animal handlers and feeders) strongly reinforces the zoonotic nature of this parasite. Despite advances in water treatment technologies, Blastocystis spp. are increasingly reported in different water resources, an issue that led the WHO guidelines for drinking-water quality to include Blastocystis. In the context of the diagnosis of other enteric parasites, this parasite appears to be largely underestimated. Strategies for better surveillance and reporting systems are needed to assess its real prevalence and the risks of waterborne and foodborne transmission.

Taxonomic Aspects and Life Cycle of Blastocystis spp.

The taxonomic classification of Blastocystis spp. has proven challenging. In 1900, Blastocystis was considered a harmless gastrointestinal saprophytic yeast. Seventy years later, based on the presence of more than one nucleus, mitochondria, a Golgi apparatus, and endoplasmic reticulum, Zierdt reclassified the organism as a protist. In 1996, molecular analysis of the small subunit ribosomal DNA (SSU rDNA) and elongation factor-1alpha (EF1a) genes helped to classify Blastocystis within the eukaryotic phylum Heterokontophyta, or stramenopiles (algae, diatoms, slime molds, and oomycetes). Blastocystis thus became a new member of this complex group of "botanical protists." The stramenopiles are characterized by flagella surrounded by lateral hair-like mastigonemes; Blastocystis, however, lacks these morphological characteristics and was therefore placed in a newly created kingdom Chromista, subkingdom Chromobiota, infrakingdom Heterokonta, subphylum Opalinata, class Blastocystea (Tan, 2008).

The life cycle of Blastocystis is still incompletely elucidated owing to the lack of an animal model (Fig. 1). The cyst is the transmissible form, shifting between the environment and the affected host. Hosts acquire the infection via the fecal–oral route, by drinking contaminated water, and/or by eating aquatic plants contaminated with cysts; unclean hands have also been reported to serve as fomites from infected individuals. Once a suitable host ingests the cyst, continuation of the life cycle depends on the compatibility of the ST with the affected host. In the large intestine, the cyst excysts to the vacuolar form. The vacuolar form can transform into any of the other forms, and it encysts in the intestinal lumen to form cysts that are later shed in the feces for further transmission. Other forms (amoeboid, multivacuolar, and avacuolar) may be seen in the diarrhea of symptomatic individuals. Blastocystis has diverse modes of replication, including binary fission, plasmotomy, budding, multiple fission, schizogony, and endodyogeny; however, binary fission of the vacuolar form is the most commonly observed and best established mode of reproduction. The large reservoir of Blastocystis spp. among various animal populations should be kept in mind, as humans are potential hosts for numerous zoonotic subtypes (Table 1).

Morphotypes of Blastocystis spp.

Various microscopic morphotypes of Blastocystis have been described. Some forms (cyst, amoeboid, granular, and vacuolar) are commonly seen in the stools of symptomatic hosts, whereas the avacuolar and multivacuolar forms are less frequently encountered. The morphological forms of Blastocystis are variable in size and shape: the cyst is the smallest form, while the vacuolar form is the largest. The forms also differ in the presence of a vacuole, in shape, and in sensitivity to temperature (Table 2).

The vacuolar ("central vacuole") form is the most frequently observed form in stool and in culture. It displays large size variations, which occur within and between isolates. A large vacuole occupies the center of the cell, pushing the cytoplasm and its organelles into a thin peripheral rim. The vacuole contains fine material (carbohydrates/lipids) that might serve a storage role for the organism, and it has been suggested to play a role in schizogony-like reproduction. This form can change into any of the other forms.

The granular form is identical to the vacuolar form except that the central vacuole and cytoplasm contain granules. The granules are heterogeneous and may be myelin-like inclusions, crystalline granules, small vesicles, or lipid droplets. Certain reproductive granules have been reported to represent progeny of Blastocystis.

The amoeboid form is rarely reported. It contains one or two pseudopod-like extensions that are not involved in locomotion, and its cytoplasmic organelles are present within these pseudopod-like structures. It has been variously described as having or lacking a vacuole, and even as having or lacking pseudopodia; genotypic variation among Blastocystis isolates might explain these differing descriptions. A nutritional role for this form has been suggested because bacteria and bacterial remnants are found within it. The amoeboid form has also been reported to have pathogenic potential, since it appears in symptomatic carriers.

The cyst form is small, which can result in confusion with fecal debris. It is protected by a multilayered cyst wall that provides resistance to environmental conditions. The cyst is definitively the transmissible form, as proven by successful experimental infection of different animal hosts.

The other, infrequent forms (avacuolar and multivacuolar) are usually missed during microscopic examination owing to a lack of awareness of them; their small size might also reflect strain variation. The avacuolar form is so named for the absence of a vacuole; conversely, multiple vacuoles are present in the multivacuolar form. Such forms are the predominant ones in vivo.

Table 1  Recent reports on Blastocystis with various STs in different hosts

| Country | Host | Group of study | State of carrier (a) | Method | STs in studied group (b) | References |
|---|---|---|---|---|---|---|
| Turkey (Aydin) | Human | Cancer patients | Symptomatic | PCR-STS analysis | ST3, ST1, ST2 | Yersal et al. (2016) |
| Brazil (Rio de Janeiro) | Human | Rural valley inhabitants | Asymptomatic | Nested PCR-sequencing | ST3, ST1, ST2, ST4, ST8 | Barbosa et al. (2018) |
| Malaysia | Human | Community members | Asymptomatic | PCR-sequencing | ST3, ST1, ST2, ST4 | Noradilah et al. (2017) |
| Iran (Tehran) | Human | Diarrheic and non-diarrheic patients | Symptomatic and asymptomatic | PCR-sequencing | ST2, ST3, ST1 | Alinaghizade et al. (2017) |
| Cyprus (North) | Human | Volunteers from main cities and surrounding rural areas | Symptomatic and asymptomatic | PCR-sequencing | ST3, ST2, ST1, ST4, ST6, ST7 | Seyer et al. (2017) |
| Italy | Human | 41-year-old man | Symptomatic | RFLP-PCR | ST1 | Angelici et al. (2018) |
| Colombian Amazon basin | Human | Children under 15 years old | Not provided | qPCR | ST1, ST2, ST3, ST4, ST6 | Sánchez et al. (2017) |
| France | Animal | Mammals, birds, reptiles, and insects | Asymptomatic | Screen: qPCR, sequencing; mixed infections: further non-qPCR, sequencing | ST1, ST2, ST3, ST4, ST5, ST7, ST8, ST10, ST13, ST14, ST15; prevalence differed among animal groups | Cian et al. (2017) |
| China | Animal | Pigs, cattle, sheep, goats | Asymptomatic | PCR-sequencing | Pigs: ST5; cattle: ST10, ST14, and ST3 at equal prevalence; sheep: ST10, with ST14, ST1, and ST5 at equal prevalence; goats: no Blastocystis detected | Wang et al. (2018) |
| England (South East) | Animal | 27 vertebrate species | Asymptomatic | Nested PCR, cloning, sequencing | ST4, ST10, ST14, ST1, ST5, and a potentially new subtype | Betts et al. (2018) |
| Brazil (Triângulo Mineiro) | Animal | Pigs, sheep, cattle, cats, and dogs | Asymptomatic | PCR-STS; RFLP-PCR | PCR-STS: pigs, ST1; other isolates negative. PCR-RFLP: isolates produced possible genotypes ST3/ST4/ST8 or ST5/ST7 | Moura et al. (2018) |
| China (Qinling mountains) | Animal | 37 wild animal species | NP | Nested PCR-sequencing | 13 subtypes, including 8 known subtypes (ST1–ST3, ST5, ST10, ST12–ST14) and five possible novel subtypes (temporarily named ST18–ST22); ST10 the predominant subtype | Zhao et al. (2017) |

NP, not provided; PCR-STS, PCR-sequence tagged site.
(a) A symptomatic carrier can present with one or more clinical pictures.
(b) STs (subtypes) are listed in descending order of prevalence in the affected population.

Table 2  Special characterization of different morphological forms of Blastocystis

| Morphological form | Size range | Shape | Number of vacuoles | Function | Sensitivity to temperature | Where present | Reference(s) |
|---|---|---|---|---|---|---|---|
| Vacuolar (central vacuole form) | 2–200 µm (average 4–15 µm) | Spherical | One vacuole | Central vacuole: schizogony-like reproduction and storage; cytoplasm: programmed cell death | Sensitive to temperature changes | Feces; culture | Tan et al. (2002); Tan (2004); Parija and Jeremiah (2013) |
| Granular | 15–25 µm (largest 80 µm) | Spherical | No vacuole; the vacuole is replaced by granules | Role in reproduction (granules may represent progeny of Blastocystis) | Sensitive to temperature changes | Feces of symptomatic host; culture | Tan et al. (2002); Tan (2004); Parija and Jeremiah (2013) |
| Cyst | Small (2–5 µm); does not exceed 10 µm | Spherical to oval | No vacuole | Transmissible infective form | Resistant; survives for 19 days at room temperature but is fragile at extremes of heat and cold | Feces; culture; environment | Tan et al. (2002); Tan (2004) |
| Amoeboid | 2.6–7.8 µm | Irregular outline with pseudopod-like extensions | May or may not have a vacuole | Role in endocytosis/phagocytosis | NR | Feces of symptomatic host; culture | Tan et al. (2002); Tan (2004) |
| Avacuolar | 5–8 µm | Spherical to oval | No vacuole | Vegetative form | NR | Feces (usually missed); predominant in vivo; culture | Parija and Jeremiah (2013) |
| Multivacuolar | 5–8 µm | Spherical to oval | More than one vacuole of different sizes | Vegetative form | NR | Feces (usually missed); predominant in vivo; culture | Parija and Jeremiah (2013) |

NR, not reported.


Clinical Pictures

It is difficult to establish that presenting clinical signs and symptoms are attributable to Blastocystis infection, because the infection is characterized by nonspecific clinical features that vary widely at the intestinal level. Some individuals are apparently completely healthy, with asymptomatic colonization, whereas symptomatic individuals present with vague abdominal complaints: pain, flatulence, bloating, anorexia, vomiting, and diarrhea. Diarrhea has been reported to be mild in immunocompetent and severe in immunocompromised individuals. Acute presentation of gastrointestinal symptoms has been reported to be linked with the finding of five parasites per high-power field (×400), or fewer per field under the oil objective. Colonization with different STs or strains of this organism might explain the variable clinical presentations. Blastocystis spp. have been detected more frequently in patients with irritable bowel disease, suggesting an as yet unknown pathogenic role; in the context of irritable bowel syndrome, Blastocystis spp. are able to induce physiological disturbances, which might include microinflammation, host cell apoptosis, and modulation of the host immune response.

Regarding extraintestinal symptoms, this parasite has frequently been reported in association with cutaneous lesions (acute/chronic urticaria, palmoplantar neuritis, and chronic angioedema), and the presence of Blastocystis infection in patients with urticaria has been taken to indicate a causal role of the parasite. Links have also been reported between Blastocystis spp. and colorectal cancer, ulcerative colitis, anemia, chronic liver disease, and reactive arthritis. The parasite molecules involved in causing extraintestinal symptoms have not yet been identified; however, Blastocystis might resemble other parasites whose antigens stimulate T helper cell mechanisms and/or initiate the complement cascade. Evidence exists that infection with Blastocystis might exert effects on the inflammatory status of the intestine: it has been proposed that host-secreted IgA is degraded by Blastocystis proteases, which might disrupt the intestinal epithelial barrier and increase the production of proinflammatory cytokines. Variability in virulence might extend to the intra-ST level, because the same ST is commonly found in both symptomatic and asymptomatic hosts. In most cases, the infection is self-limited.

Genetic Diversity Within Blastocystis Species

Blastocystis is a parasite of high genetic diversity, a confounding factor that makes its pathogenicity a matter of debate. Isolates of this parasite from humans and other animals have been reported to be morphologically indistinguishable, and molecular tools have therefore been adopted to differentiate the genetic variability of numerous Blastocystis isolates. Molecular tools used in different studies include polymerase chain reaction (PCR)-restriction fragment length polymorphism, random amplified polymorphic DNA, sequence-tagged site PCR, pyrosequencing, qPCR assays, single-strand conformational polymorphism, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Isolates from mammalian and avian hosts have been classified into 17 genetically divergent ribosomal lineages, termed subtypes (STs), which are arguably separate species. Most subtypes have been isolated from both humans and animals, except ST9, which has been isolated exclusively from humans. This confirms the low host specificity and zoonotic potential of Blastocystis and suggests that animals might serve as a large potential reservoir for transmitting infection.

The epidemiological frequency of Blastocystis subtypes in symptomatic and asymptomatic patients has been reported in different countries and varies with geographical distribution. It is unusual for humans to harbor multiple subtypes; however, mixed infections have been reported, and such coinfection with different subtypes could arise from multiple sources of infection. Subtypes 3, 2, and 1 were the most abundant in previous studies (Table 1), with ST3 the most frequent among them. Even though there is an apparent correlation between parasite genotype and host specificity, no clear correlation has been reported between ST and the pathogenicity or symptomatic colonization of the parasite; however, ST1 and ST3 appear to be more often involved in causing blastocystosis and are more likely to have pathogenic potential than other subtypes.
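As a schematic illustration of how sequence-based subtyping works, the sketch below assigns an SSU rDNA fragment to the reference subtype with the highest pairwise identity. The reference sequences and the 97% cutoff are invented placeholders; real subtyping relies on curated barcode databases and proper alignment tools rather than this toy comparison.

```python
# Toy subtype caller: all sequences and the cutoff are hypothetical placeholders.
REFERENCES = {
    "ST1": "ATGCGTACGTTAGCCTAAGG",
    "ST3": "ATGCGTTCGTTAGCATAAGG",
    "ST4": "TTGCGTACGATAGCCTACGG",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

def call_subtype(query: str, cutoff: float = 0.97):
    """Return the best-matching subtype, or 'unassigned' below the cutoff."""
    best_st, best_id = max(
        ((st, identity(query, ref)) for st, ref in REFERENCES.items()),
        key=lambda pair: pair[1],
    )
    return (best_st, best_id) if best_id >= cutoff else ("unassigned", best_id)

print(call_subtype("ATGCGTACGTTAGCCTAAGG"))  # ('ST1', 1.0)
```

Queries falling below the identity cutoff against all references are the kind of isolates that, in the studies of Table 1, get flagged as potentially novel subtypes.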

Blastocystis as a Part of Intestinal Microbiota

It is still unclear whether Blastocystis is a pathogen, a commensal, or even a beneficial member of the human gut microbiome. Blastocystis has been reported to be a member of the normal mammalian intestinal microbiota, given its long-term colonization of hosts without prompting disease. Almost all subtypes of Blastocystis are commonly associated with humans and are able to colonize the gut stably, and the parasite has a high global prevalence in humans and animals. An increased detection rate of this microorganism is noticed in nonwesternized populations, and in developing countries, where people live under more stressful conditions, Blastocystis is more prevalent and apparently more pathogenic. The widespread occurrence of Blastocystis in healthy as well as symptomatic individuals is consistent with its being part of the intestinal microbiota; moreover, the microbiota of some animals includes Blastocystis subtypes. Previous sequencing of Blastocystis and bacteria in healthy individuals showed that Blastocystis is less diverse and more patchily distributed than bacteria. Potential direct interactions of Blastocystis with specific bacterial members of the gut microbiome have been proposed, with the presence and abundance of Blastocystis strongly correlated with those of archaea. Whether or not this organism is pathogenic, its action seems to depend on the effect of infection on the gut bacterial microbiota and on the host immune response.


Blastocystis Cysts in Water

Since Blastocystis spp. infect humans and almost all animals, water contamination is an inevitable outcome. A vicious cycle forms when human and animal fecal waste is disposed of into water systems and then returns, contaminated, to both hosts through domestic use or recreational activities. The fecal cyst is the environmentally resistant, transmissible form: Blastocystis cysts have reportedly been able to survive in water for more than 1 month at 25°C and more than 2 months at 4°C, and to resist chlorine treatment. Reports of Blastocystis spp. in different water systems are increasingly documented, an urgent situation that led the WHO to determine that the Blastocystis parasite should not be present in drinking water. Such environmental potential usually arises from an abundance of risk factors. Human behavior and activities (drinking untreated water, waste disposal into different water systems, washing animals in rivers), lack of sanitation and hygiene, climatic differences, seasonal variations, and the parasite burden in both animals and humans are risk factors that significantly affect the wide spread of this parasite and allow its cycle to be completed and repeated.

Conclusions

Blastocystis, whether commensal, pathogen, or part of the human microbiota, is a ubiquitous parasite cycling among humans, animals, and the environment. This parasite is increasingly reported almost everywhere worldwide, in animals, humans, and even water; nevertheless, it remains underestimated and underreported in comparison to other protozoa. Risk factors are abundant, especially in the developing world, providing good conditions for blastocystosis to flare up. Since Blastocystis acts silently and opportunistically, it should be carefully considered even if an individual looks healthy and has no symptoms, and its presence in water affects water quality for drinking. Human hygiene and water practices should be governmentally monitored to decrease transmission of Blastocystis infection.

The diversity of Blastocystis at the genetic and morphotype levels uniquely characterizes this parasite. Even though its pathogenic mechanism remains unresolved, the diversity of its subtypes appears to play a still mysterious role in its manifestations. The varied morphology of Blastocystis and the wide variability in size among its forms make microscopic diagnosis a difficult task; moreover, the similarity in size of the vacuolar form to Cyclospora spp. and of the cyst form to some yeasts confuses microscopists. Some developing countries still do not consider Blastocystis a necessary organism to include in patients' stool analysis reports; consequently, the organism will remain underreported and its prevalence will increase without medical intervention, a situation that also leaves genetic subtypes among patients unevaluated and potential species unknown. It would be helpful if this parasite were taken seriously and if the governments of developing countries included it in their lists of pathogenic organisms. Clinicians should also bear in mind the nonspecific manifestations of Blastocystis; requesting a simple stool analysis from suspected patients is enough to rule blastocystosis in or out, and advising infected but healthy patients to follow up if symptoms appear would make a difference in management. Standardization of diagnostic tools, axenization of new STs, and the provision of animal models will be beneficial in answering unresolved questions about Blastocystis spp.

See also: Water and Health: A Review of the Practices in the MENA Region.

References

Alinaghizade, A., et al., 2017. Inter- and intra-subtype variation of Blastocystis subtypes isolated from diarrheic and non-diarrheic patients in Iran. Infection, Genetics and Evolution 50, 77–82.
Angelici, M.C., et al., 2018. Blastocystis hominis transmission by non-potable water: A case report in Italy. The New Microbiologica 41 (1).
Barbosa, C.V., et al., 2018. Intestinal parasite infections in a rural community of Rio de Janeiro (Brazil): Prevalence and genetic diversity of Blastocystis subtypes. PLoS One 13 (3), e0193860.
Betts, E.L., et al., 2018. Genetic diversity of Blastocystis in non-primate animals. Parasitology, 1–7. Cambridge University Press.
Cian, A., et al., 2017. Molecular epidemiology of Blastocystis sp. in various animal groups from two French zoos and evaluation of potential zoonotic risk. PLoS One 12 (1), e0169659.
Moura, R.G.F., et al., 2018. Occurrence of Blastocystis spp. in domestic animals in the Triângulo Mineiro area of Brazil. Revista da Sociedade Brasileira de Medicina Tropical 51 (2), 240–243.
Noradilah, S.A., et al., 2017. Molecular epidemiology of blastocystosis in Malaysia: Does seasonal variation play an important role in determining the distribution and risk factors of Blastocystis subtype infections in the Aboriginal community? Parasites & Vectors 10 (1), 360.
Parija, S., Jeremiah, S., 2013. Blastocystis: Taxonomy, biology and virulence. Tropical Parasitology 3 (1), 17–25.
Sánchez, A., et al., 2017. Molecular epidemiology of Giardia, Blastocystis and Cryptosporidium among indigenous children from the Colombian Amazon Basin. Frontiers in Microbiology 8, 248.
Seyer, A., et al., 2017. Epidemiology and prevalence of Blastocystis spp. in North Cyprus. The American Journal of Tropical Medicine and Hygiene 96 (5), 1164–1170.
Tan, K.S.W., 2004. Blastocystis in humans and animals: New insights using modern methodologies. Veterinary Parasitology 126 (1–2), 121–144.


Tan, K.S.W., 2008. New insights on classification, identification, and clinical relevance of Blastocystis spp. Clinical Microbiology Reviews 21 (4), 639–665.
Tan, K.S.W., Singh, M., Yap, E.H., 2002. Recent advances in Blastocystis hominis research: Hot spots in terra incognita. International Journal for Parasitology 32 (7), 789–804.
Tan, K.S.W., et al., 2010. Current views on the clinical relevance of Blastocystis spp. Current Infectious Disease Reports 12 (1), 28–35.
Yersal, O., et al., 2016. Blastocystis subtypes in cancer patients: Analysis of possible risk factors and clinical characteristics. Parasitology International 65 (6), 792–796.
Zhao, G.H., et al., 2017. Molecular characterization of Blastocystis sp. in captive wild animals in Qinling Mountains. Parasitology Research 116 (8), 2327–2333.

Further Reading

Ajjampur, S.S.R., et al., 2016. Ex vivo and in vivo mice models to study Blastocystis spp. adhesion, colonization and pathology: Closer to proving Koch's postulates. PLoS One 11 (8), e0160458.
Alfellani, M.A., et al., 2013. Genetic diversity of Blastocystis in livestock and zoo animals. Protist 164 (4), 497–509.
Gentekaki, E., et al., 2017. Extreme genome diversity in the hyper-prevalent parasitic eukaryote Blastocystis. PLoS Biology 15 (9), e2003769.
Koloren, Z., Gulabi, B.B., Karanis, P., 2018. Molecular identification of Blastocystis sp. subtypes in water samples collected from Black Sea, Turkey. Acta Tropica 180, 58–68.
Mohamed, A.M., et al., 2017. Predominance and association risk of Blastocystis hominis subtype I in colorectal cancer: A case control study. Infectious Agents and Cancer 12, 21.
Sekar, U., Shanthi, M., 2015. Recent insights into the genetic diversity, epidemiology and clinical relevance of Blastocystis species. The Journal of Medical Research 1 (11), 33–39.
Wang, J., et al., 2018. Subtype distribution and genetic characterizations of Blastocystis in pigs, cattle, sheep and goats in northeastern China's Heilongjiang Province. Infection, Genetics and Evolution 57, 171–176.
WHO, 2011. Microbial fact sheets. In: WHO Guidelines for Drinking-Water Quality, 4th edn. Gutenberg, Malta.

Bolivia: Mining, River Contamination, and Human Health

Jerry R Miller and Lionel F Villarroel, Western Carolina University, Cullowhee, NC, United States
© 2019 Elsevier B.V. All rights reserved.

Change History: October 2017. Jerry R. Miller and Lionel F. Villarroel updated all sections to varying degrees; the last two rows of Table 1 were updated; figures remained the same. This is an update of J.R. Miller, L.F. Villarroel, Bolivia: Mining, River Contamination, and Human Health, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 421–441.

Introduction

The Andes Mountains are one of the highest and widest mountain ranges in the world, extending for > 8900 km along the western margin of South America. Most geologists believe that the Andes were formed during the last 27 million years, the majority of the uplift occurring during a period of intensive orogenic activity between 6 and 15 million years ago as the Nazca tectonic plate collided with, and was subducted beneath, the South American Plate. An important component of the Andes is the Central Andean Plateau, which, in spite of its name, consists of two subparallel, approximately north–south trending mountain ranges of considerable relief, referred to as the Western Cordillera (Cordillera Occidental) and the Eastern Cordillera (Cordillera Oriental) (Fig. 1). The Western Cordillera is a volcanic arc produced by magmas generated by tectonic subduction; its peaks rise > 12,000 m above the floor of the Peru–Chile trench off the Chilean coast. Its geology is distinctly different from that of the Eastern Cordillera, which is dominated by extensively folded and faulted marine and nonmarine rocks of Paleozoic and Cenozoic age. Sandwiched between the two mountain ranges is a low-relief depression, called the Altiplano in Bolivia, that stands at an elevation of between 3400 and 3900 m asl. The Altiplano is a formidable feature measuring > 800 km in length and 130 km in width. It possesses no external drainage to the ocean; rather, waters accumulate in Lake Titicaca, or in the smaller Lake Poopó, to which Lake Titicaca drains via the south-flowing Desaguadero River.

The entire length of the Eastern Cordillera has been intruded by felsic magmas. These magma bodies allowed hot solutions and molten geological materials to enter the surrounding rocks, producing numerous dikes and veins containing concentrations of economically valuable minerals. Archeologists and historians have shown that minerals containing tin, copper, and lead were exploited by pre-Colonial civilizations, including the Tiahuanacu and, later, the Incans. However, it was gold and silver that first caught the attention of Spanish explorers in 1530 and that ultimately led to the Incan demise. The first Spanish silver mine in Bolivia was located at Oruro on the edge of the Altiplano. Although Oruro would eventually become an important mining district, it was the enormously rich silver mines at Cerro Rico, the first of which began operation on April 1, 1545, that dominated the mining industry of Bolivia during the 16th and early 17th centuries (Table 1). The City of Potosí, located adjacent to Cerro Rico, grew rapidly into the largest city in South America and, by 1650, was one of the largest cities in the world. Initially, miners extracted native silver from the top of Cerro Rico using a variety of metallurgical methods, but by the early 1570s silver was extracted from lower-quality ore using a mercury amalgamation process. The amalgam process produced silver at Potosí and other mines throughout the region that sustained the Peruvian and European economies for more than three centuries. Mining, however, has not been limited to silver, but has included a host of other metals and metalloids, including antimony, copper, gold, lead, tin, and zinc, from deposits that extend the length of the Andes in Bolivia (Fig. 2; Table 2). An unfortunate legacy of mining has been the widespread contamination of rivers by a variety of toxic trace metals and metalloids (e.g., antimony, arsenic, cadmium, lead, mercury, and zinc).
The potential risks that these substances pose to ecosystem and human health have only begun to be documented and understood. It is clear, however, that the type, magnitude, and extent of metal contamination, as well as the potential effects on human health, vary from river to river as a function of the metal contaminant(s) and the climatic, hydrologic, geological, and biotic characteristics of the region. In the following pages, two climatically and topographically contrasting "end-member" catchment areas are examined as examples of the dispersal, exposure pathways, and potential health risks associated with toxic trace metals and metalloids (hereafter referred to collectively as metals) within riverine environments in Bolivia: (1) the Rio Beni–Rio Madeira basin within the humid to hyperhumid tropical rainforests in the north, and (2) the Rio Pilcomayo basin, a semiarid, heavily impacted system draining the eastern flank of the Andes in the south. The impacts of historic and contemporary mining operations on water and sediment quality have been documented for a number of other catchments as well, including the high-altitude rivers that drain into Lake Titicaca and Lake Poopó within the Altiplano, rivers in and around the Oruro mining district, and high-altitude rivers north of La Paz. Additional details pertaining to these catchments can be found in the papers listed in the Further Reading section.

The Rio Madeira Basin

Mining and River Contamination

The majority of pre-Colonial gold was derived from exposed granitic batholiths at the highest elevations of the Tipuani and other river basins along the eastern flank of the Andes located north of La Paz (Fig. 2).

Fig. 1 (A) Satellite image of Bolivia showing the major physiographic units including the Central Andes, sub-Andes, and the Brazilian plains (image from Image Science and Analysis, NASA-Johnson Space Center). (B) Cross section showing topography and general geology of the area. Modified from Redwood, S. (1987) Going for gold in Bolivia. New Scientist, 20, 41–43.


Table 1  Major phases of mining activity in Bolivia

| Mining period | General characteristics |
|---|---|
| Pre-Columbian: Tiwanaku (2000 BC–AD 1120) | Achieved a high level of mineral extraction technology, including copper foundries and silver smelters |
| Pre-Columbian: Inca (AD 1380–1550) | Mined and crafted tin, lead, silver, and gold; gold objects attracted the Spanish in 1530; smelted silver ores |
| 1530–1545 | Spanish arrive and discover rich silver deposits at Cerro Rico and Oruro; first Spanish mine opened at Cerro Rico in April 1545 |
| Colonial period (1545–1825) | Silver is the predominant commodity; mined silver is minted and shipped to Spain; silver changes the economy of the Peruvian region as well as Europe |
| Republican period (1825–1929) | An increase in the world price of silver brought Bolivia a measure of relative prosperity and political stability in the late 1800s. During the early 20th century, tin became the country's most important source of wealth; tin production slowly increased, eventually exceeding silver as the leading mineral export and reaching a maximum of 47,000 tons in 1929 |
| 1929–1970s | Tin production decreases, particularly during the 1930s, 1950s, and 1960s; the mining industry recovers during the 1970s; polymetallic deposits of zinc–lead–silver gain in importance. Antimony production also increases significantly; in 1970 Bolivia becomes the world leader in antimony production |
| 1980–1990 | Importance of mining to the Bolivian economy decreases during the early 1980s and drops sharply following the collapse of the tin market in 1985. Production of polymetallic minerals, particularly zinc, increases; zinc production exceeds that of tin for the first time in 1990 |
| 1990–2006 | Mining industry rebounds with foreign investment; concentration on polymetallic ore deposits. New low-grade, high-volume deposits discovered, such as Bolivar, Porco, Kori Kollo (Inti Raymi), Kori Chaca, and the world-class San Cristobal deposit |
| 2006–present | Mining industry nationalized, resulting in a contraction of foreign investment and a reduction in ore production. New foreign investments in mineral exploration are primarily Asian. Production is dominated by preexisting mining operations and cooperatives exploiting nationalized mines; nationalized polymetallic exploration projects were taken over by Comibol. Few recent discoveries |
Gold was also derived at the time from old alluvial (river) gravels associated with uplifted and entrenched terraces positioned downstream of the lode deposits. In fact, these Tertiary placer deposits host some of the richest zones of alluvial gold in Bolivia, if not South America. While native gold immediately captured the attention of the Spanish, it took approximately 30 years for them to find its source, in part because of the region's rugged and inaccessible terrain, and in part because their attention was focused on the rich deposits of silver at Cerro Rico and other localities. After its rediscovery in 1566, gold was sporadically mined from the Tipuani, Mapiri, and other eastern drainages until the late 1960s and 1970s, when high gold prices initiated a new gold rush that brought literally thousands of miners to the area. These deposits continue to be mined today, as do alluvial (placer) deposits along downstream reaches of the Rio Beni and its downstream counterpart, the Rio Madeira (Fig. 3).

The method of gold extraction, both in the past and today, predominantly relies on mercury (Hg) amalgamation. For processing of lode deposits, mercury amalgamation typically involved grinding or pulverizing the ore and then mixing it with water, salt, and liquid (metallic) mercury, often for periods of several weeks. During the process, gold binds with the mercury, creating dense mercury–gold amalgam particles. These dense particles could then be extracted from the rest of the sediment by various means of gravity separation, after which they were heated to drive the mercury off as a vapor, leaving a semipure form of the precious metal. Mercury amalgamation was also used to extract gold from alluvial placer deposits. In this case, some procedure (e.g., a gold mining pan or sluice box) is used to separate the heavy minerals from the rest of the alluvial sediments, after which the heavy mineral fraction is mixed with mercury to concentrate and remove the fine gold particles. In yet other instances, mercury is placed above the riffles of a sluice box, where it captures the gold as the water and sediment pass over its surface. Regardless of the precise nature of the amalgamation process, mercury is released to the atmosphere as a vapor and, in metallic form, directly to the river. Maurice-Bourgoin and his colleagues from the French Research Institute for Development estimated that > 200 mining cooperatives working in the Tipuani, Mapiri, and Kaka rivers utilized 250–500 kg of mercury per year following the onset of the gold rush, 50%–70% of which was released to the atmosphere, river waters, or local soils. A total of 330 t may have been released to the environment between 1952 and about 2002.

Given the potential ecological and human health effects of mercury (Table 3), its release to the environment by mining activities is a significant concern, not only in Bolivia but in a great number of other areas of the Amazon basin, such as portions of Brazil, Venezuela, Colombia, Ecuador, French Guiana, Guyana, and Suriname, where Hg amalgamation is extensively used. The first detailed studies of the environmental consequences of mercury amalgamation mining on tropical ecosystems in South America began in the mid-1980s, the majority in Brazil. What emerged is a generally accepted hypothesis that mercury concentrations in water, air, sediments, soils, and aquatic biota (primarily fish) are well above global averages. These elevated mercury levels are presumably related, in part, to both historic and modern mercury amalgamation mining.
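A quick back-of-the-envelope check of the usage figures quoted above helps put the cooperative-level releases in context. The ranges are taken directly from the estimates cited; the arithmetic, and the assumption that the 330 t figure refers to the wider region rather than these cooperatives alone, are the only additions.

```python
# Ranges quoted above for the Tipuani, Mapiri, and Kaka cooperatives;
# treated here as assumptions for a rough cumulative estimate.
years = 2002 - 1952                 # ~50 years of activity
use_per_year_kg = (250.0, 500.0)    # Hg used per year by the >200 cooperatives
release_frac = (0.50, 0.70)         # fraction released to air, water, or soils

low_t = years * use_per_year_kg[0] * release_frac[0] / 1000.0
high_t = years * use_per_year_kg[1] * release_frac[1] / 1000.0
print(f"Cumulative cooperative releases: ~{low_t:.1f}-{high_t:.1f} t over {years} years")
# ~6.3-17.5 t. If the ~330 t total quoted above refers to the wider basin,
# these direct amalgamation releases would be a small share of it, which is
# consistent with the role of other sources discussed below.
```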
However, the link between amalgamation mining and measured mercury in the environment is not as straightforward as initially presumed.


Fig. 2 Map of the major mining districts in Bolivia, and the primary commodities that are associated with them. Mineralization is more widespread within the Rio Pilcomayo basin than farther north in the Rio Beni–Madeira basin. Descriptions of mining districts and their associated mines are found in Table 2.

Table 2  Major mining districts and possible associated contaminants

| Mining district | Major mines | Period of mining | Description | Major commodities | Minor commodities | Potential contaminants |
|---|---|---|---|---|---|---|
| 1. Madera–Amazon basin; 1.a. Rio Beni | | | | | | |
| Apolobamba | Sunchuli, Suches, Kantantika, and Sural | Begun in the 1500s; more active from the 1990s to present | Quartz–gold veins hosted by metasedimentary rocks of the Andean segment called the Cordillera de Apolobamba | Au | Pb–Cu–Zn | Hg, Pb, Cu, Zn, Cd |
| Yani–Aucapata | San Vicente, Pallaya, and La Suerte | Begun in Incan times; active from the 1900s to present | Quartz–gold veins hosted by metasedimentary rocks | Au | Ag–Pb–Zn | Hg, Pb, Cu, Zn, Cd |
| Tipuani–Mapiri–Guanay | Tipuani, Mapiri, and Guanay | Begun in 1566; colonial gold rush in the 17th and 18th centuries; 1960s to present | The largest gold placer deposits in Bolivia; gold is contained in modern alluvial sediments and old Tertiary conglomeratic terraces | Au | Ag | Hg |
| Illimani | | 1800s to present | Quartz–gold–tin veins hosted by metasediments; genetically associated with Oligocene intrusions | Sn–Au | W | As, Zn, Cu, Mo |
| Viloco–Caracoles | Viloco; Caracoles | 1800s to present; more active from 1860 to 1985; since 1985 operated by cooperatives | Quartz–gold–tin veins hosted by metasediments; genetically associated with Oligocene intrusions | Sn (Viloco); Sn, W (Caracoles) | Cu, Mo | As, Zn, Cu, Mo |
| Colquiri | Colquiri | 1800s to present | Polymetallic veins hosted by Paleozoic rocks, mainly slates and shales | | | |
| Chicote Grande | Chicote Grande | 1800s to present | Quartz–wolframite veins hosted by Paleozoic rocks | W | | |
| Quioma | Quioma | 1800s to present | Polymetallic veins hosted by Paleozoic rocks | | | |
| Kami | Kami | 1800s to present | | W, Sn | | |
| 1.b. Rio Mamore | | | | | | |
| Siglo XX–Llallagua | Llallagua, Siglo XX, and Uncia | 1800s to present | Quartz–cassiterite (Sn) veins hosted by Silurian rocks, genetically associated with a subvolcanic Miocene intrusion | Sn | | |
| Colquechaca | Colquechaca | 1800s to present | Polymetallic deposit with a complex mineralogy; veins hosted by Paleozoic rocks and genetically associated with a Miocene intrusion | Zn–Ag–Sn | Pb–Zn–Ag | Pb–Zn |
| 1.c. Rio Madera | | | | | | |
| Araras–Nueva Esperanza | Araras; Nueva Esperanza | 1900s to present | Alluvial gold deposit; gold particles are contained in modern alluvial sediments, sourced from gold veins in the Andean headwaters | Au | | Hg |
| 2. Pilcomayo basin; 2.a. Upper Pilcomayo | | | | | | |
| Cerro Rico de Potosí | Numerous mines operated by several cooperatives | Discovered in 1544; extensive Ag exploitation from 1545 through 1860; Sn mined from 1860 to 1985; Ag–Zn ores exploited from 1985 to present | The world's largest silver deposit; sulfide-dominated, polymetallic vein systems enriched in silver and zinc, hosted by Paleozoic rocks and intrusive rhyodacitic rocks; associated with a subvolcanic Miocene intrusion | Zn–Ag–Sn | Pb–Cu–W–Sb | Hg |
| Malmisa | Malmisa | From the late 1800s to the 1980s | Tin veins hosted by Paleozoic rocks; genetically associated with a Miocene intrusion | Sn | | |
| Colavi–Canutillos | Colavi | 1800s; depleted mine | Layers of carbonaceous rocks containing disseminated cassiterite | Sn | | |
| Huari-Huari | Huari-Huari | 1800s to present | Polymetallic veins hosted by highly tectonized Paleozoic rocks | Zn–Ag–Sn | Sb | Pb–Zn–Sb |
| 2.b. Tumusla–Pilaya | | | | | | |
| Porco | Porco | 1500s to present | The major producer of zinc in Bolivia; consists of a swarm of fissure-filling veins that cut dacitic tuff; genetically associated with a Miocene stock and caldera environment | Ag–Zn | | |
| Kari Kari | Andacaba; Kumurana | 1800s | Polymetallic veins associated with Miocene intrusions | | | |
| Quechisla | | 1800s | Polymetallic deposit exploited for Ag and Zn, with Sn enriched at depth; associated with a subvolcanic Miocene intrusion | Ag–Zn–Pb–Sn | | |
| Chorolque | Chorolque | 1800s | Polymetallic veins enriched in Sn and base metals, associated with a Miocene intrusion in a caldera environment | Sn | Sn–W | |
| 2.c. Tupiza–San Juan del Oro | | | | | | |
| Tatasi–Portugalete | Tatasi | 1800s | Polymetallic veins enriched in Sn and base metals, associated with a Miocene intrusion in a caldera environment | Pb–Zn–Ag | | |
| Chilcobija–Avaroa | Chilcobija | 1900s to present | Predominantly Sb veins associated with folded Paleozoic rocks; locally, gold mineralization was exploited | Sb (Au) | Au | Pb–Zn–Sb |
| | Avaroa | 1900s to present | Predominantly Sb veins associated with folded Paleozoic rocks | Sb | Pb | Pb–Zn–Sb |
| Other basins; 1. Lago Titicaca basin | | | | | | |
| Matilde | Matilde | | Polymetallic veins hosted by Paleozoic rocks | Zn | | Pb–Zn–Cu |
| 2. Lago Poopó basin | | | | | | |
| Kori Kollo | Kori Kollo | Discovered in the 1800s; copper exploited from 1952 to 1980; massive Au–Ag exploitation from the 1980s until the early 2000s, when ore reserves were depleted | Sulfide vein and disseminated Au–Ag deposit hosted within a dacitic volcanic dome; a volcanic-hosted epithermal deposit and the most important Bolivian Au mine | Au, Ag | | As, Cu, Pb, Sb, Zn |
| Corocoro | Corocoro | Discovered in the 1800s; copper exploited until 1987; since then mined by cooperatives at small scale | Stratabound copper ores containing native copper and copper sulfide minerals associated with continental redbeds | Cu | Ag | |
| Antequera | | | Polymetallic veins filled with Zn–Pb–Ag sulfides, hosted by Silurian rocks | | | Pb–Zn–Cu |
| Bolivar–Poopó | Bolivar; Poopó | Discovered in the early 1800s; tin extracted from 1880 to beyond 1910; Zn–Ag exploited from 1977 to present | Polymetallic veins filled with Zn–Pb–Ag sulfides, together with Sn veins, hosted by Silurian rocks | Zn–Ag | Pb | Pb–Zn–Cu |
| Huanuni | Huanuni | Discovered in the 1500s; tin production from the early 1900s to present | The most important Bolivian tin mine; polymetallic veins hosted by Silurian rocks | Sn | | |
| 3. Salar de Uyuni basin | | | | | | |
| San Cristobal | San Cristobal | Discovered and exploited since colonial times (1500s); exploited at medium scale, including an open pit, until the late 1990s; new operation begun in 2007 | Considered the most important, world-class Ag–Zn deposit; veins and disseminated Pb–Zn–Ag sulfide and sulfosalt mineralization; genetically associated with andesitic and dacitic Miocene intrusions | Ag–Zn–Pb | Cd–In | Pb–Zn–Sb–Cd |
| Pulacayo | Pulacayo | Discovered and extensively exploited in the 1800s; small-scale production begun during the mid-1900s | One of the most important Ag producers of the early 1900s; the system consisted of two veins hosted by Paleozoic rocks and associated with a Miocene subvolcanic intrusion | Ag | | Pb–Zn |
| Carguaycollo | Carguaycollo | Discovered in the 1500s | A silver deposit consisting of sulfide and sulfosalt veins; another example of polymetallic deposits associated with subvolcanic intrusions | | | |


Fig. 3 Upper left: ferralitic soil exposed along the Rio Madeira. These soils have been naturally enriched in mercury, and their erosion contributes large quantities of mercury to the river. Upper right: small-scale amalgamation gold mining along the Rio Tipuani at the headwaters of the Rio Beni–Madeira basin; miners use mercury to capture the fine gold particles (photo courtesy of Danilo Bocangel). Lower left: annually inundated floodplain along the Madeira river; floodplains serve as important regulators of mercury flux to downstream areas of the Amazon basin. Lower right: a fish collected for mercury analysis along the Madeira river.

Table 3  Potential health effects associated with various forms of mercury

| Mercury species | Major health effects |
|---|---|
| Elemental (metallic) mercury (Hg⁰) | Vapor can be absorbed through the lungs; major organs affected include the kidneys and central nervous system. May also cause respiratory, cardiovascular, and gastrointestinal effects, as well as death at high exposures. Common symptoms include restlessness, trembling, headaches, insomnia, gingivitis, and rapid mood swings |
| Methylmercury | Common exposure is through the consumption of contaminated fish, which accumulate methylmercury. The primary effect is impaired neurological development, particularly in children and babies exposed in the womb. May cause coronary disease in children |
| Dimethylmercury | Extremely toxic; causes brain and liver damage |
| Inorganic and organic compounds in general | High exposure to inorganic compounds affects the nervous system, gastrointestinal tract, and/or the kidneys; Hg²⁺ compounds are more toxic than Hg⁺ salts. HgS (cinnabar) is generally nonbioavailable, as the Hg is locked in the crystalline structure of the mineral |

For example, high mercury concentrations in water, sediment, and biota in comparison to global averages were shown to occur where there was no known mining activity. In addition, concentrations of mercury measured in sediments of the Rio Madeira of Brazil and other river channels were found to be similar to the concentrations observed in ferralitic soils developed in old, topographically high terrace deposits (> 100 ng g⁻¹) (Fig. 3). Mercury concentrations within these old soils (which were not affected by flooding or the deposition of contemporary river sediments) are relatively high in comparison to soils found in most temperate or Nordic environments (where mercury concentrations are often on the order of 20–40 ng g⁻¹). M. Roulet and his colleagues were able to demonstrate that over long periods of time (10⁵–10⁶ years), mercury could accumulate in ferralitic soils through the slow but continuous atmospheric deposition of mercury over the landscape. This naturally occurring mercury was derived largely from crustal degassing (volcanism) and the weathering of mercury-containing sulfides and other minerals. Once deposited, mercury is carried to depth within the profile by the leaching of soluble humic–mercury complexes, where the complexes are then broken down; the released mercury is subsequently readsorbed by Fe and Al oxyhydroxides. Mercury in the soils might also be derived from the atmospheric deposition of mercury released during biomass burning and, more recently, the processing of mercury–gold amalgam, although the importance of these sources is currently unknown and remains a topic of controversy. The important point is that the ferralitic soils developed in these very old landscapes appear to be a natural mercury sink in humid, tropical regions of South America. The erosion of these


soils, such as occurs during deforestation, can release large quantities of mercury to the aquatic environment. In fact, within some Brazilian basins, amalgamation mining was responsible for < 5% of the total mercury content of the water column (Roulet et al., 1998).

In spite of the influence of natural mercury contained in ferralitic soils on mercury concentrations in the environment, Maurice-Bourgoin and his colleagues have shown that headwater tributaries exploited for gold along the upper elevations of the Eastern Cordillera exhibit higher mercury levels and downstream mercury fluxes than those that have not been affected by gold mining (see Further Reading). In 1999–2000, total mercury fluxes were more than four times higher in rivers with gold mining than in those without. The association of higher mercury fluxes with the onset of intensive mining in the late 1960s and 1970s is also supported by changes in normalized mercury concentrations within floodplain deposits sampled downstream along the Madeira river. The ratio of observed to expected mercury concentrations was approximately 1 from 1900 to 1965, but increased thereafter as a result of inputs from the gold-exploited Andean drainages. While mercury inputs to the basin appear to be higher in areas of gold mining, the prevailing thought is that the direct contribution of mercury by amalgamation mining is relatively small in comparison to other potential sources, such as the erosion and influx of sediment from mercury-enriched soils and from black shales containing high mercury contents. The input of millions of tons of these sediments in response to road construction, colonization, and agriculture, all of which are closely linked to gold exploitation, appears to overwhelm the mercury released as part of the amalgamation process.

Interestingly, the highest concentrations of mercury are not found in river waters within the gold-exploited tributaries, but occur > 200 km downstream, where the major rivers, such as the Beni, enter the low-relief Amazonian plains (llanos) (Fig. 1). This apparent disconnect between zones of mining and relatively high mercury concentrations in river waters results from the interaction of several factors, including the regional physiography, the climatic regime, the relatively undisturbed nature of the channel, floodplain, and surrounding catchment, and the predominant mode of mercury transport. Of particular importance is the strong affinity of mercury for organic matter and fine-grained particles, which allows the majority of the mercury to move downstream in particulate form as part of the suspended sediment load. During the 1999 water year, for example, the total mercury flux along the Rio Beni was estimated by Maurice-Bourgoin's team to be 33 tons, 98% of which was associated with particulate matter during the wet season. The association of mercury with sediment is important because geographical patterns in concentration and flux coincide with spatial variations in the rivers' ability to transport fine-grained sediment. In the Beni–Madeira river basin, steep slopes and concentrated flows within narrow upstream valleys allow sediments eroded within the gold-exploited tributaries to be rapidly transported downstream. Deposition farther downstream is limited as the rivers continue to traverse steep, narrow valleys along the eastern flank of the Andes.
Upon reaching the Amazonian plains in Bolivia, 40% or more of the sediment load is deposited upon floodplains as channel gradients dramatically decrease and floodwaters annually inundate extensive floodplain surfaces (covering an area of about 150,000 km²). Estimates conducted along the Rio Beni indicate that 4.5 tons of clay-associated mercury may be deposited annually upon the floodplain after exiting the Andes. As much as 47% (2.1 tons) of this mercury may reenter the river as the floodplain deposits are eroded, primarily by lateral channel migration. Considering that Andean rivers, including the Beni, supply 99% of the sediment to the Amazon river, and that most of the mercury is transported with particulates, the exchange of sediment and mercury between the rivers and their floodplains serves as an extremely important regulator of mercury delivery from the gold-exploited Andean tributaries to downstream reaches of the Amazon.
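The relative magnitudes of these fluxes can be checked with a simple mass balance built from the figures quoted above. The short sketch below (Python) is purely illustrative; the numbers come from the text, while the single-budget framing and variable names are simplifying assumptions, not the cited authors' method.

```python
# Back-of-envelope mass balance for mercury transport along the Rio Beni,
# using the annual figures quoted in the text. Variable names are
# illustrative only.

total_flux_t = 33.0          # total Hg flux, 1999 water year (tons)
particulate_fraction = 0.98  # share carried by suspended sediment

particulate_flux_t = total_flux_t * particulate_fraction  # ~32.3 t
dissolved_flux_t = total_flux_t - particulate_flux_t      # ~0.7 t

floodplain_deposit_t = 4.5   # clay-associated Hg deposited on the floodplain
reentry_fraction = 0.47      # share later re-eroded into the channel

reentry_t = floodplain_deposit_t * reentry_fraction       # ~2.1 t (as in text)
net_storage_t = floodplain_deposit_t - reentry_t          # ~2.4 t retained

print(f"Particulate flux: {particulate_flux_t:.1f} t/yr; dissolved: {dissolved_flux_t:.1f} t/yr")
print(f"Floodplain re-entry: {reentry_t:.2f} t/yr; net floodplain storage: {net_storage_t:.2f} t/yr")
```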

Potential Effects of Mercury on Human Health

The geochemistry of mercury is complex as it can exist in a number of inorganic and organic chemical forms. Inorganic forms, including metallic mercury (Hg⁰), mercurous mercury (Hg₂²⁺), and mercuric mercury (Hg²⁺), occur naturally in the environment and are produced by a wide variety of industrial activities. The mercury utilized in amalgamation mining is initially in the metallic state. Exposure to metallic mercury primarily occurs through the inhalation of mercury vapors generated by the heating of amalgam particles (often called amalgam burning) during the refining process. Most studies have shown that about 80% of the inhaled mercury is retained in the body because it is highly diffusible and lipid soluble. In marked contrast, < 0.01% of metallic mercury is absorbed through the gastrointestinal tract following ingestion. Inorganic forms of mercury, including metallic mercury, can be transformed into organic mono- and dimethylmercury species by methanogenic bacteria and/or abiotic reactions in aquatic sediments. Monomethylmercury (or simply methylmercury) is the more common of the two organic species and is readily accumulated and biomagnified in biota. In humans, about 95% of ingested organic mercury is absorbed, most commonly by means of consuming contaminated fish. Detailed epidemiological studies of the effects of mercury on human health in the Rio Beni–Rio Madeira basin are lacking. Analyses of the degree of mercury exposure are also limited, and are based primarily on mercury concentrations measured in the hair of miners and other downstream populations living along the Rio Beni–Rio Madeira system. Hair is generally considered an acceptable indicator of organic mercury exposure, although it may contain inorganic mercury as well. The National Research Council suggests that mercury values exceeding 10 µg g⁻¹ in hair represent the threshold for toxic effects. A number of studies focused on villages located along rivers in the Amazonian basin of Brazil, such as the Tapajós River, have shown that mercury concentrations in human hair range from approximately 10 to 20 µg g⁻¹, thereby exceeding the toxic threshold level. Interestingly, Barbieri and his colleagues found that inhabitants of villages located in similar geographic settings along the lower Rio Beni possessed mercury levels that were only about one-third of those found in Brazil. In addition, mercury


levels varied semisystematically between villages located between the foothills of the Andes and more secluded downstream reaches in Bolivia. Low mercury levels, with averages ranging from approximately 2.3 to 3.6 µg g⁻¹, were associated with communities that relied heavily on farming for subsistence, whereas higher concentrations (with averages ranging between about 7.2 and 9.2 µg g⁻¹) were associated with communities that relied on commercial fishing and/or that diversified their resources. The noted spatial/socioeconomic differences in mercury levels between these communities suggest that the consumption of fish is a primary exposure pathway for populations located along the lower Rio Beni–Rio Madeira system. Indeed, intermediate concentrations have been found in secluded downstream areas that participate in the illegal logging market, an activity that modulates their consumption of fish. In addition, median mercury values exceeded the 10 µg g⁻¹ threshold in hair from two of the studied communities in which fish is a dietary staple. The analysis of fish from the Beni, Tuichi, and Quiquibey rivers showed that 86% of the samples exceeded the maximum permissible level of 0.50 µg g⁻¹ set by the World Health Organization (WHO). Mercury concentrations varied between the 24 collected species, primarily as a function of their dietary habits (Maurice-Bourgoin and Quiroga, 2002). Concentrations in herbivorous and omnivorous species ranged from 0.009 to 0.193 µg g⁻¹ (wet weight). In contrast, total mercury concentrations in piscivorous fish averaged 0.986 µg g⁻¹. The fish data, then, support the hypothesized exposure pathway, and suggest that mercury may pose a human health risk to communities that consume large quantities of fish along the lower Rio Beni and Rio Madeira. Upstream, within the Andes, mercury concentrations in hair are generally lower. These lower values are thought to reflect lower fish consumption, as many of the small headwater tributaries do not contain edible species. Miners, on average, consumed fish less than twice per month, and some communities consume little if any fish. As would be expected, then, the hair of gold miners in these upstream areas is dominated by inorganic mercury derived from the inhalation of mercury vapor generated during amalgam "burning." Higher levels of inorganic mercury in hair appear to be associated with miners who use heavy mining equipment rather than with artisanal mining methods. Laffont and his colleagues argue that the former may lead to the increased use and frequency of amalgam burning. It is important to note that the available data show that mercury exposure is significantly higher in downstream populations who, because of the physical processes responsible for mercury dispersal, are well removed (by tens to hundreds of kilometers) from the mine sites. These downstream populations receive little economic benefit from gold-mining activities.
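To see how such fish concentrations translate into human exposure, the hedged sketch below estimates a weekly methylmercury intake for a hypothetical riverside consumer. The fish concentration is the piscivorous-fish mean quoted above; the meal size, meal frequency, body weight, and the JECFA provisional tolerable weekly intake of 1.6 µg kg⁻¹ week⁻¹ for methylmercury are outside assumptions, not values from this article.

```python
# Rough weekly methylmercury intake from fish consumption, for comparison
# with the hair concentrations reported above. Meal size, meal frequency,
# body weight, and the PTWI are assumptions for illustration.

fish_hg_ug_per_g = 0.986   # mean Hg in piscivorous fish, wet weight (from text)
meal_size_g = 200.0        # assumed portion size
meals_per_week = 5         # assumed for a fish-dependent riverside community
body_weight_kg = 60.0      # assumed adult body weight

weekly_intake_ug = fish_hg_ug_per_g * meal_size_g * meals_per_week
weekly_intake_per_kg = weekly_intake_ug / body_weight_kg   # ~16 ug/kg/week

ptwi_ug_per_kg = 1.6       # assumed JECFA PTWI for methylmercury
print(f"Estimated intake: {weekly_intake_per_kg:.1f} ug/kg/week "
      f"({weekly_intake_per_kg / ptwi_ug_per_kg:.0f}x the assumed PTWI)")
```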

The Rio Pilcomayo Basin

The eastern flank of the Andes in southern Bolivia between approximately 17° and 19° south latitude is drained by the Rio Pilcomayo (Fig. 4). The Rio Pilcomayo flows for > 670 km from its headwaters on the edge of the Altiplano through the Andean fold-thrust belt to the Argentine border (Fig. 4). In Bolivia, its basin encompasses an area of roughly 91,100 km². In contrast to the terrain farther north, the Pilcomayo basin exhibits more extensive mineralization and contains a wider variety of economically viable ore deposits. Numerous historic and modern mining districts are spread throughout the basin, where such minerals as silver, tin, lead, zinc, and antimony have been extracted (Fig. 2). Contamination of the Rio Pilcomayo and its tributaries by toxic trace metals is extensive, and can be attributed to (1) pre-Colonial smelting of Andean silver ores, (2) historic mining operations that predate the use of modern ore processing and environmental management technologies, (3) present-day small- to medium-scale mining operations, which rely on a combination of historic and modern technologies but apply only marginal environmental management techniques, (4) acid mine drainage from the historic and contemporary mines, and (5) the failure of mine and mill tailings impoundments, resulting in the near instantaneous release of contaminated effluents and particulates to the adjacent drainages. Metal contamination may also result from the influx of urban runoff and waste products from the city of Potosí and other communities. Details of how each of these contaminant sources affects the axial channel of the Rio Pilcomayo and its tributaries vary across the catchment. Nevertheless, certain commonalities in riverine contamination and contaminant dispersal processes exist, the majority of which are displayed by the upper Rio Pilcomayo (see Further Reading).

The Upper Rio Pilcomayo: Mine History and Contaminant Sources

One of the oldest and most important mining districts in the Rio Pilcomayo basin is the Cerro Rico de Potosí district, which hosts precious metal–polymetallic tin deposits. These deposits, located in a conical hill called Cerro Rico, were first exploited by pre-Incan metalsmiths who smelted silver-bearing ores. This initial phase of smelting was conducted in wind-drafted kilns called huayras, and culminated with the Incas. Colonial silver mining at Cerro Rico (Fig. 5) began in 1545 and continued until 1880. Extraction after about 1574 relied on the patio mercury amalgamation process described in the previous section for gold. Silver mining was eventually replaced by a period of intensive tin mining from approximately 1880 to 1985, when global tin prices fell. Mining then shifted to the extraction of lead, zinc, and, to a much lesser degree, silver. At the time of this writing, lead, zinc, and silver continue to be the primary commodities extracted from Cerro Rico, and ore concentration (beneficiation) is conducted using a combination of froth flotation for lead and zinc and cyanidation for silver.

Fig. 4 Map of the Rio Pilcomayo basin and the sampling locations referred to in Fig. 6. Mapped features include Cerro Rico and Potosí, the Porco Mine, the communities of Yocalla, La Puerta, Mondragon, Tasapampa, Tuero Chico, Sotomayor, Quila Quila, San Antonio, Uyuni, and Villa Montes, and the Rio Tarapaya, Rio La Ribera, Rio Pilaya, and Rio Pilcomayo; symbols distinguish numbered sample locations (including 2004-only sites), communities, and mines and mills.

Fig. 5 Upper left: Cerro Rico as viewed from Potosí; the first Spanish mine was located near the top of the hill. Upper right: Mill processing facility near Potosí in 2001, prior to the use of tailings impoundments; gray effluent was funneled down the hillside and directly entered the Rio de La Ribera, a tributary to the Rio Pilcomayo. Lower left: River water in the Rio Tarapaya approximately 2 km downstream of the mines and mills in 2001. Lower right: River water in the Rio Pilcomayo in 2001 near Mondragon, located approximately 25 km from the mines and mills at Cerro Rico. Reddish water in the foreground is acidic drainage produced by the oxidation of pyrite and other sulfide minerals released from the mills and deposited along the river. All photographs were taken during the dry season.


Recent studies of atmospheric mercury emissions, conducted by coring, sampling, dating, and analyzing sediment from Laguna Lobato, a lake located about 6 km downwind from Cerro Rico, have demonstrated that mercury concentrations in the sediments began to rise significantly above background values around 1100 CE in response to pre-Colonial smelting. Mercury values reached a peak around 1250 CE before steadily declining after about 1300 CE toward the top of the core (Cooke et al., 2011). Presumably, the mercury emitted during pre-Colonial smelting was present as an impurity in the silver ores. Interestingly, atmospheric mercury emissions associated with the use of the mercury amalgamation process never attained the peak values that occurred around 1250 CE, although the values were well above background. Mercury released from pre-Colonial and Colonial era smelting and mining operations has been identified in historic floodplain deposits of the Rio Pilcomayo > 600 km downstream. Contemporary channel deposits, however, exhibit mercury concentrations that decrease rapidly downstream and, beyond about 200 km, are only slightly above background values (Fig. 6). Until approximately 2004, the most significant contemporary source of toxic trace metals to the upper Rio Pilcomayo was the release of froth flotation effluent and tailings materials directly into the Rio de La Ribera and its tributaries by processing mills located in the vicinity of Potosí (Fig. 5). The released effluent (containing contaminated liquids and particles) typically possessed a pH > 10, was highly enriched in trace metals, and exhibited a dark gray color that could be visually traced for > 175 km downstream (Fig. 5). In 2004, many of the mills were relocated and the majority of the effluent and tailings from the mills were captured in tailings impoundments, reducing their direct influx to the tributaries of the Rio Pilcomayo. As a result, it is thought that the primary source of metals and metalloids to the river today is acid mine drainage produced by the weathering of sulfide minerals associated with the ore deposits. Regardless of the primary source, detailed analyses of river waters and sediments have shown that the channel remains highly contaminated by a wide range of toxic trace metals for distances of 200 km or more downstream of Cerro Rico. The dispersal of trace metals along the upper Rio Pilcomayo (and presumably other rivers draining the eastern flank of the Andes) is complicated by a number of interrelated processes. The fundamental concepts inherent in the most important of these transport processes are summarized in the following paragraphs.

(1) The dispersal of 95% or more of the trace metals occurs by the physical downstream transport of metal-enriched particles, or sediment to which the contaminants are attached (sorbed). Sulfide minerals (e.g., pyrite, galena, and sphalerite) associated with ore deposits, and released during the milling process, are particularly important carriers of trace metal contaminants.

(2) Concentrations of trace metals in modern channel bed sediment are typically elevated above background levels, as well as above commonly used threshold effect and probable effect values, for kilometers to hundreds of kilometers downstream of the mines. In the case of the upper Pilcomayo, concentrations of arsenic, antimony, cadmium, copper, lead, mercury, silver, thallium, and zinc in modern channel sediments are elevated above regional background values for approximately 200 km downstream of Potosí (Fig. 6). Farther downstream, concentrations of copper, lead, mercury, and zinc are only slightly elevated above background values, and silver, cadmium, antimony, and thallium cannot be distinguished from background levels. In other tributaries where metal inputs are more modest, such as the Rio Chilco, trace metal concentrations decrease within a few tens of kilometers of the mine sites.

(3) Downstream decreases in concentration within channel bed sediments are produced by the complex interaction of numerous chemical and physical processes, in addition to the quantity of released waste materials, including hydraulic sorting, dilution (associated both with the distribution of the metals over a larger area and with the influx of "clean" sediments; a minimal mixing calculation is sketched at the end of this section), and channel bed aggradation and storage. Hydraulic sorting is often most important close to mine sites and is primarily associated with the separation and preferential deposition of relatively dense sulfide minerals in the channel bed, because they settle more quickly, are transported less rapidly, or are moved less frequently than hydraulically lighter grains of similar size that normally dominate alluvial (river) sediments (e.g., quartz and feldspars). In contrast, dilution and channel bed aggradation and storage can be important along most reaches of the channel. Channel bed aggradation, involving the continuous deposition of sediment on the channel bed and its rise in elevation through time, is a widespread phenomenon throughout the upper Pilcomayo basin, and results in the removal and storage of large quantities of metals below the modern channel floor (Fig. 7).

(4) Where present, floodplain and historic terrace deposits usually exhibit higher trace metal concentrations than the adjacent channel, and serve as important sources of contamination to the river. Studies of the river's alluvial landforms and deposits show that metal-contaminated particles are transported downstream and deposited on floodplains during large runoff events. The accumulation of mercury from pre-Colonial smelting and historic amalgamation mining operations on the floodplain surface has been found to be particularly important (Fig. 8). More recent deposits are characterized by a wide range of trace metals. Where these metal-enriched deposits exist, they can be eroded and reintroduced to the river, increasing metal concentrations. However, lateral channel migration and other geomorphic processes associated with channel bed aggradation have largely reworked and/or buried the deposits along all but the most downstream reaches of the Rio Pilcomayo, greatly reducing their significance as a metal source in this area. Floodplains, however, are extensive along many of the Pilcomayo's tributaries.

(5) The transport and deposition of contaminated particles vary abruptly along the axial channel and its tributaries as a function of valley morphology (particularly its width and gradient). The upper Río Pilcomayo and its tributaries can be subdivided into distinct morphological segments, each of which possesses a specific suite of landforms and varies in its ability to transport and store contaminated sediment. The distribution of these segments is strongly influenced by the local geology, particularly the

[Fig. 6 panels: concentration (ppm) in channel bed sediments at sampling sites R0–R6 for copper, lead, arsenic, silver, cadmium, zinc, antimony, thallium, and mercury; separate symbols denote the low-water channel, high-water channel, and historic deposits.]

Fig. 6 Changes in metal concentrations within the channel bed sediments of the Rio Pilcomayo. Low-water deposits are inundated continuously, whereas high-water deposits are inundated and reworked during the wet season. Historic deposits are found in the floodplains and/or terraces. Sampling locations are shown on Fig. 4. Most metals decrease in concentration downstream. Trends in mercury reflect releases during the onset of pre-Colonial smelting as well as the use of amalgamation mining.


Fig. 7 Many, if not most, of the rivers in the Rio Pilcomayo basin are aggrading. Rises in channel bed elevation are easily recognized by examining changes in the depth of the channel beneath bridge structures, and by the existence of multiple generations of groins built one on top of the other to reduce bank erosion and protect communities from flooding. Burial of contaminated sediment within the aggrading channel bed reduces trace metal concentrations along the river by removing the sediment from the sediment–water interface. However, these contaminated deposits may be reworked, leading to renewed downstream contamination years to decades later.

[Fig. 8 photograph annotations: historic sediment (mercury contaminated) overlying pre-Colombian sediment.]

Fig. 8 Floodplain deposits located along the Rio Pilcomayo upstream of Villamontes. The reddish sediments predate both pre-Colonial smelting and mercury amalgamation mining and possess low levels of mercury. The sediments above the dashed line are of historic age and are highly contaminated with mercury. The dark buried soil that parallels the white dashed line roughly corresponds to pre-Colonial smelting and is also enriched in mercury. Erosion of the upper sediments can lead to contamination of the aquatic environment.

erodibility of the underlying rock units. In general, highly resistant rock units produce narrow, steep valleys characterized by bedrock walls, which efficiently transport sediment through the reach with little if any sediment deposition and storage. In contrast, less-resistant units allow for the formation of wide, low-gradient streams within which sediments tend to be deposited. Zones of sediment storage not only tend to contain the majority of the contaminated sediment transported downstream from the mine sites, but are also characterized by extensive floodplains used for agricultural purposes (Fig. 9).


Fig. 9 Abrupt changes in valley morphology occur along the Rio Pilcomayo (top photo) in response to changes in erodibility of the underlying bedrock. Sediment-associated trace metals are preferentially deposited along stream segments characterized by wide valleys with low gradients (lower left); narrow bedrock controlled valleys (lower right) store little contaminated sediment as it is efficiently transported through the reach.

(6) Dissolved concentrations of trace metals and metalloids do not necessarily parallel the geographical patterns of concentration found within the channel bed sediments, but may exhibit higher concentrations at significant distances from the mine sites. In the case of the upper Rio Pilcomayo, concentrations of arsenic, antimony, cadmium, and lead generally exceed WHO guideline values for drinking water from Potosí downstream to Puente Sucre, a distance of about 150 km. However, the dissolved concentrations of these substances were often higher at Puente Sucre (just downstream of Tuero Chico) than immediately adjacent to the mills in Potosí (Fig. 4). The elevated downstream concentrations are probably due to variations in the decomposition of sulfide minerals, which in the case of the Rio Pilcomayo is likely to be impeded close to the mine sites by high-pH (~10), low-oxygen waters from the mills. The presence of sulfur- and iron-oxidizing bacteria (e.g., Thiobacillus ferrooxidans, Thiobacillus thiooxidans, and Leptospirillum ferrooxidans) may also play a role in the oxidation process.

(7) Changes in dissolved, particulate, and channel bed sediment concentrations occur through time. Temporal changes in concentration can be attributed in part to fluctuations in the discharge of mine and mill effluent to the river's tributaries and to differences in precipitation and runoff from the basin, which together alter water chemistry, the relative quantity of effluent in the river, and the exchange of trace metals with channel bed and floodplain sediments. Dramatic seasonal variations have been observed in both the dissolved and particulate concentrations of river waters in response to large changes in flood discharges during the wet and dry seasons. In general, contaminant concentrations decrease during the wet season as mine effluent is diluted by meteoric waters, although the downstream flux of contaminants generally increases because the sheer volume of water and sediment moving through the channel can be enormous. Longer-term changes in channel bed concentrations, such as those shown in Fig. 10 for lead, also occur along the Rio Pilcomayo. The observed differences in lead content, which declined from a mean value of 1370 ± 966 µg g⁻¹ in 2000 to 130 ± 231 µg g⁻¹ in 2004, presumably reflect changes in dispersal and grain-size dilution mechanisms associated with interannual changes in discharge (a minimal mixing calculation is sketched at the end of this section). During relatively wet years, the highly dynamic nature of the channel allows large quantities of tailings and other contaminated particles from the mines and mills to be transported downstream over long distances, producing high concentrations within the river bed materials. The proportion of lead and other contaminants from the mines, however, is reduced during these wet years in downstream areas as a result of an increase in the delivery of uncontaminated, fine-grained sediment to the channel from tributary basins. During dry years, such as those that characterized the period from 2002 through 2004, the quantity of contaminated particles transported and the distance they travel decrease, but the proportion of the metals in the channel bed derived from the mines increases because the influx of clean sediment to the river from adjoining tributaries is limited.

Considerable public attention has been given in recent years to the potential ecological and human health effects of several tailings impoundment failures in the Rio Pilcomayo basin. Studies following the 2003 failure of the Abaróa tailings impoundment

[Fig. 10 panel: Pb concentration (µg g⁻¹) versus distance downstream (km) for the 2000, 2002, and 2004 channel surveys, showing the spatial decline in concentration from 2000 to 2004.]

Fig. 10 Changes in lead concentration in channel bed sediments from 2000 to 2004. Lead concentrations decreased during the period in response to drier climatic conditions.

(Fig. 11), which released approximately 5500 m³ into the Rio Chilco–Rio Tupiza drainage system, found that 6 months after the event, the impacts of the failure could not be separated from the degradation in sediment quality caused by past mining operations. The tailings impoundment failure at the Porco Mine in 1996 was much larger, releasing 235,000 m³ of waste materials into the headwaters of the Rio Pilaya, a tributary to the Rio Pilcomayo. While the short-term degradation of water quality was significant over a larger area, extending downstream to at least Villamontes (Fig. 4), the longer-term impacts of the failure on sediment quality cannot be separated from those caused by historic releases of trace metals. These investigations indicate that the impacts of contamination on river reaches downstream of historic and modern mining operations may last for decades, if not centuries.
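The dilution mechanism described in point (3), and invoked above to explain the interannual changes in lead concentrations, can be sketched as a simple two-endmember mixing calculation. All concentrations and sediment masses below are hypothetical; they illustrate only the arithmetic, not measured values from the Rio Pilcomayo.

```python
# Minimal two-endmember mixing sketch for downstream dilution of
# mine-derived sediment by "clean" tributary sediment. All inputs are
# hypothetical placeholders.

def mixed_concentration(c_mine, m_mine, c_background, m_background):
    """Metal concentration in channel bed sediment after mixing two sources.

    c_* : metal concentration of each sediment source (ug/g)
    m_* : sediment mass contributed by each source (arbitrary units)
    """
    return (c_mine * m_mine + c_background * m_background) / (m_mine + m_background)

# Wet year: large tributary inputs of uncontaminated fines dilute the signal.
print(mixed_concentration(c_mine=3000.0, m_mine=1.0, c_background=30.0, m_background=4.0))

# Dry year: little clean sediment enters, so the mine-derived proportion rises.
print(mixed_concentration(c_mine=3000.0, m_mine=1.0, c_background=30.0, m_background=0.5))
```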

Potential Effects of Trace Metals on Human Health

A primary concern with regard to human health is the potential effect of riverine contamination on the inhabitants in and around Potosí as well as on downstream riparian communities. In contrast to the Rio Beni–Rio Madeira basin, a large number of potential exposure pathways exist. These include the consumption of contaminated drinking water, vegetables, fish, poultry, and livestock; the ingestion of contaminated soils, particularly by children; the inhalation of contaminated dust from soils or the river bed; and the direct use of, or inadvertent contact with, contaminated river and irrigation waters (Fig. 12). To date, the importance of these potential trace metal exposure pathways has been poorly studied; the investigations that have been conducted have focused largely on agricultural soils and the produce grown on them, as well as on the quality of river, irrigation, and drinking water supplies. Given the extent to which mercury was emitted to the atmosphere by pre-Colonial and Colonial smelting and amalgamation mining, mercury has the potential to have impacted inhabitants in and around Potosí (see Further Reading). In fact, retrospective modeling of air mercury levels during Colonial mining (specifically in 1715) found that the entire community was exposed to mercury vapor concentrations that exceeded modern acute inhalation reference concentrations for the general public (Hagan et al., 2011). These values are thought to reflect smelting activities and the emissions of mercury from contaminated soils, the latter of

Fig. 11 Newly constructed dam across the tailings impoundment at the Abaróa Mine. Approximately 5500 m3 of the contaminated tailings were eroded and transported downstream during a flood which breached the impoundment in 2003.


Fig. 12 Exposure to toxic trace metals may occur through a variety of pathways. Suggested routes of importance include the inhalation or ingestion of contaminated dust blown out of the river valley (lower left), the ingestion of contaminated vegetables (upper right), the ingestion of contaminated soils attached to unwashed produce (upper left, small girl on right), and contact with, or ingestion of, contaminated river and irrigation waters (lower right).

which exhibit significantly elevated values in comparison to global averages. Interestingly, however, current mercury emissions from soil appear to be relatively low, possibly because of changes in mercury speciation (form) since the cessation of amalgamation mining (Higueras et al., 2012). In addition to the direct emission of mercury from contaminated soils, it has been suggested that mercury may have been incorporated into local soils that were (and are) used to produce adobe bricks. Thus, it was hypothesized that mercury contained within these bricks may be released as dust into the home environment, and that ingestion of the dust could thereby pose a health risk. An analysis of bricks and dust within adobe brick structures showed, however, that concentrations of bioaccessible mercury within the majority of the 49 households sampled in Potosí did not pose a health risk. The concentrations of arsenic and lead, however, were found to pose a potential risk (McEwen et al., 2016), a conclusion that may hold true for other mining areas in Bolivia as well. Perhaps more problematic is the consumption of contaminated produce, particularly within agricultural communities located along the river downstream of Cerro Rico. Produce in these communities is primarily grown on floodplain soils, many of which are contaminated by the deposition of trace metal-enriched river sediments on the floodplains during runoff events, or by the use of contaminated irrigation waters from the Rio Pilcomayo. For example, the degree to which agricultural floodplain soils are contaminated has been examined in four riverside communities (Mondragón, Tasapampa, Tuero Chico, and Sotomayor) located along a downstream transect from Cerro Rico (Fig. 4). In general, metal contents of these agricultural soils were found to (1) semisystematically decrease downstream, (2) decrease with terrace (field) height above the channel, and (3) reflect the use of contaminated irrigation water (Fig. 13). However, only the upstream-most sites contained agricultural soils where cadmium, lead, and zinc concentrations exceeded recommended guideline values for agricultural use. Farther downstream, metal concentrations were above background values, but below accepted guidelines for agricultural use. Subsequent studies, using lead isotopic fingerprinting and modeling methods, found that mining represented the most significant lead source, accounting for > 80% of the lead in agricultural fields upstream at Mondragon, and for as much as 15%–35% of the lead downstream at Sotomayor, located approximately 170 km from the mills (Fig. 14). The data for lead are probably applicable to many of the other metals as well. With regard to water supplies, irrigation waters are frequently derived from the Rio Pilcomayo because other sources are limited during the dry season (Fig. 13). Both river and irrigation waters exceeded local background values severalfold, and some metals frequently exceeded WHO guidelines for drinking water in upstream areas (within ~50 km of the mines). However, the use of Río Pilcomayo waters for domestic purposes is limited because communities are usually able to find alternative upland sources of domestic water. Thus, dissolved concentrations of metals and arsenic in actual drinking water supplies are generally lower than WHO guideline values.
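The isotopic apportionment cited above rests on mixing calculations of the following general form. The sketch below shows a schematic two-endmember version; the 206Pb/207Pb ratios are hypothetical placeholders, and the actual study used a more elaborate model with several rock-unit endmembers (Fig. 14).

```python
# Schematic two-endmember mixing calculation of the kind used in lead
# isotope fingerprinting. The isotope ratios below are hypothetical
# placeholders, not measured values from the Rio Pilcomayo study.

def mining_fraction(r_sample, r_mining, r_background):
    """Fraction of Pb from mining for a measured 206Pb/207Pb ratio, assuming
    simple two-endmember mixing of Pb mass (concentration effects ignored)."""
    return (r_sample - r_background) / (r_mining - r_background)

r_mining = 1.170      # hypothetical ore-derived 206Pb/207Pb
r_background = 1.210  # hypothetical ratio of uncontaminated catchment rocks

for r_sample in (1.175, 1.195):
    f = mining_fraction(r_sample, r_mining, r_background)
    print(f"206Pb/207Pb = {r_sample:.3f} -> {100 * f:.0f}% mining-derived Pb")
```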
Metal and arsenic concentrations within agricultural produce from these four communities were generally below existing guidelines for heavy metal content in commercially sold vegetables, suggesting that the metals may not be taken up by plants. Lead represented a possible exception as 37% and 55% of the carrots, lettuce, and beetroot samples from Sotomayor and Tuero Chico,


Fig. 13 Most riparian communities must rely on contaminated waters from the Rio Pilcomayo to irrigate their agricultural plots. Contaminated dark gray waters commonly characterize both the irrigation canals and the surface soils adjacent to the crops.

respectively, exceeded recommended guidelines. While many factors influence metal accumulation in plants, the fact that most of the metals are associated with sulfide minerals, and are therefore not readily bioavailable, is likely to have been a major factor limiting metal uptake. A closely related form of exposure that may be of importance, particularly for children, is the ingestion of contaminated soil particles attached to the produce. Exposure in this case is associated with the common practice of consuming vegetables from a field before they have been washed or otherwise prepared (Fig. 12). Other studies have examined the potential accumulation of metals (specifically arsenic, cadmium, and lead) in potato tubers, which are a staple of the local diet. For example, the hazard quotient has been calculated for potatoes grown on contaminated soils in communities directly affected by mine contamination within the vicinity of Potosí. The hazard quotient is defined as the ratio of the estimated site-specific exposure to a contaminant over a given time period to a reference dose below which adverse health effects are unlikely; values above 1 therefore flag daily exposure levels at which adverse health effects may occur (a worked example appears at the end of this section). The analysis found that elevated hazard quotients existed for arsenic, cadmium, and lead in children consuming potatoes grown on contaminated floodplain soils in nearly all of the mining-impacted areas, whereas hazard quotients for adults were elevated only for arsenic and cadmium (Garrido et al., 2017). Combined with other studies of the potential uptake of metals, particularly cadmium, in potatoes, the risks of consuming contaminated potatoes, of which > 100 types are grown in the area, may be more significant than originally assumed. Hair, urine, and blood measurements of metal and arsenic exposure of individuals along the Rio Pilcomayo are limited. J. Archer and her colleagues found in 2005 that arsenic concentrations in hair (37–2110 µg kg⁻¹) and urine (11–891 µg g⁻¹) from four riparian communities (Molina, Tasapampa, Tuero Chico, and Sotomayor) frequently exceeded published reference values for nonoccupationally exposed subjects, occasionally by severalfold. However, comparison of arsenic concentrations in these communities with a control site (Cota) revealed no statistical difference, raising the question as to whether the high arsenic values were primarily associated with mining operations or resulted from other sources, such as the region's highly mineralized rocks. The potential impacts of cadmium, mercury, and lead on human health were assessed by conducting physical examinations, urine analyses, and, on a subset of participants, blood analyses on individuals from five communities (three impacted by mining and two control sites) within 40 km of Potosí (Farag et al., 2015). Participants from the mining areas exhibited significantly higher frequencies of hematuria, hypertension, and ketonuria. Hematuria was elevated among those eating homegrown grain and/or watering livestock from sources located downstream from mines. Higher blood concentrations of lead were also observed in participants with hematuria. While the exact causes of the noted increases in hematuria, hypertension, and ketonuria could not be determined without further study, the data suggest that efforts to limit metal exposure are warranted (Farag et al., 2015). Farther downstream, lead and cadmium concentrations have been examined in the hair of Weenhayek communities.
The Weenhayek, an indigenous population that traditionally hunts, fishes, and gathers, live along the Rio Pilcomayo between Villamontes and the Argentinean border (Fig. 4). Lead concentrations (but not cadmium) were found to be two to five times higher in hair from the Weenhayek than in a local control group. In addition, the study found an increased risk for smaller families and the


[Fig. 14 panels: (A) Pb concentration (ppm) in successive fields downstream at Mondragón, Tuero Chico, and Sotomayor; (B) percentage of Pb derived from mining, Mesozoic rocks, and Ordovician rocks in the same fields.]

Fig. 14 (A) Average lead concentrations measured in agricultural soils of several communities located along the Rio Pilcomayo; contaminated irrigation waters are used at all three communities. (B) Estimates of the percentage of lead derived from the mines and mills at Cerro Rico and from the various rock units that underlie the river basin. Estimates are based on a mixing model developed by Gail Mackin of Northern Kentucky University, United States, as applied to geochemically fingerprinted sediments. In some agricultural soils from Sotomayor, located approximately 170 km from Cerro Rico, 15%–30% of the lead comes from upstream mining.

delayed onset of walking among Weenhayek who lived along the Rio Pilcomayo (Stassen et al., 2012). Lead concentrations in the soils of this area are relatively low in comparison to those observed upstream, but are still above background values. In addition, the source of the lead is unclear; it may be derived from naturally mineralized rocks in the catchment, from mining at Cerro Rico or the numerous other mines in the basin, or, most likely, from all three. Nonetheless, it appears that lead may pose a risk to communities living along the entire length of the Rio Pilcomayo in Bolivia.
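As an illustration of the hazard quotient calculations discussed in this section, the sketch below applies the definition given earlier (estimated daily dose divided by a reference dose). Every number in it, including the reference dose, is a hypothetical placeholder rather than a value from the cited potato studies.

```python
# Illustration of the hazard quotient (HQ): the ratio of an estimated
# average daily dose (ADD) to a reference dose (RfD). All inputs below are
# hypothetical placeholders.

def hazard_quotient(conc_mg_per_kg, intake_kg_per_day, body_weight_kg, rfd_mg_per_kg_day):
    """HQ = average daily dose / reference dose; HQ > 1 flags potential risk."""
    add = conc_mg_per_kg * intake_kg_per_day / body_weight_kg
    return add / rfd_mg_per_kg_day

# Hypothetical child scenario: cadmium in homegrown potatoes.
hq_child = hazard_quotient(conc_mg_per_kg=0.3,       # Cd in potato (assumed)
                           intake_kg_per_day=0.2,    # potato consumption (assumed)
                           body_weight_kg=15.0,      # child body weight (assumed)
                           rfd_mg_per_kg_day=0.001)  # assumed oral reference dose for Cd
print(f"Child HQ for Cd: {hq_child:.1f}")  # -> 4.0, i.e., above 1
```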

Concluding Comments

Predicting the continued and future impacts of mining operations on river and human health is a difficult process, often plagued by considerable error. Qualitatively, however, the degree of mercury pollution in the Rio Beni–Rio Madeira basin is likely to increase in the future, as there is little evidence that small- to medium-scale artisanal gold mining using mercury amalgamation, and its associated deforestation, is going to end any time soon. A wider range of commodities are mined within drainages located farther


south in Bolivia that drain the eastern flank of the Andes, including the Rio Pilcomayo. Future mining operations are likely to vary between these drainages. However, the contaminant data collected thus far indicate that trace metal contamination will remain a problem for the foreseeable future. In fact, geomorphic models of landscape and channel evolution suggest that trace metals deposited and stored within the channel and floodplains as a result of channel bed aggradation may be reexcavated over periods of tens to hundreds of years and transported downstream, increasing the extent of contamination. The widespread nature of trace metal contamination will make remediation extremely difficult, and will require a much more detailed understanding of ecological and human exposure to the contaminants than currently exists.

See also: Malaysia: Environmental Health Issues; Mining Activities: Health Impacts; Uruguay: Environmental Conditions in the Coast of Montevideo; Ghana: Ecology, Politics, Society and Environmental Health.

References

Cooke, C.A., Balcom, P.H., Kerfoot, C., Abbott, M.B., Wolfe, A.P., 2011. Pre-Colombian mercury pollution associated with the smelting of argentiferous ores in the Bolivian Andes. Ambio 40, 18–25.
Farag, S., Das, R., Strosnider, W.H.J., Wilson, R.T., 2015. Possible health effects of living in proximity to mining sites near Potosí, Bolivia. Journal of Occupational and Environmental Medicine 57, 543–551.
Garrido, A.E., Strosnider, W.H.J., Taylor Wilson, R., Condori, J., Nairn, R.W., 2017. Acid mine drainage, soils, potatoes and human health in Potosí. Environmental Geochemistry and Health 39, 681–700.
Hagan, N., Robins, N., Hsu-Kim, H., Halabi, S., Morris, M., Woodall, G., Zhang, T., Bacon, A., Richter, D.B., Vandenberg, J., 2011. Estimating historic atmospheric mercury concentrations from silver mining and their legacies in present-day surface soil in Potosí, Bolivia. Atmospheric Environment 45, 7619–7626.
Higueras, P., Llanos, W., García, M.E., Millán, R., Serrano, C., 2012. Mercury vapor emissions from the Ingenios in Potosí. Journal of Geochemical Exploration 116, 1–7.
Maurice-Bourgoin, L., Quiroga, I., 2002. Total mercury distribution and importance of the biomagnification process in rivers of the Bolivian Amazon. In: The Ecohydrology of South American Rivers and Wetlands, 6. IAHS Special Publication, pp. 49–65.
McEwen, A.R., Hsu-Kim, H., Robins, N.A., Hagan, N.A., Halabi, S., Barras, O., deB. Richter, D., Vandenberg, J.J., 2016. Residential metal contamination and potential health risks of exposure in adobe brick houses in Potosí, Bolivia. Science of the Total Environment 562, 237–246.
Roulet, M., Lucotte, M., Canuel, R., Rheault, I., Tran, S., De Freitos Gog, Y.G., Farella, N., Souza do Vale, R., Sousa Passos, C.J., De Jesus Da Silva, E., Amorim, M., 1998. Distribution and partition of total mercury in waters of the Tapajós River basin, Brazilian Amazon. The Science of the Total Environment 213, 203–211.
Stassen, M.J.M., Preeker, N.L., Ragas, A.M.J., van de Ven, M.W.P.M., Smolders, A.J.P., Roeleveld, N., 2012. Metal exposure and reproductive disorders in indigenous communities living along the Pilcomayo River, Bolivia. Science of the Total Environment 427–428, 26–34.

Further Reading

Akagi, H., Malm, O., Kinjo, Y., Harada, M., Branches, F.J.P., Pfeiffer, W.C., Kato, H., 1995. Methylmercury pollution in the Amazon, Brazil. The Science of the Total Environment 175, 85–95.
Archer, J., Hudson-Edwards, K.A., Preston, D.A., Howarth, R.J., Linge, K., 2005. Aqueous exposure and uptake of arsenic by riverside communities affected by mining contamination in the Río Pilcomayo basin, Bolivia. Mineralogical Magazine 69, 719–736.
Barbieri, F.L., Cournil, A., Gardon, J., 2009. Mercury exposure in a high fish eating Bolivian Amazonian population with intense small-scale gold-mining activities. International Journal of Environmental Health Research 19, 267–277.
Harada, M., Nakanishi, J., Yasoda, E., Pinheiro, M.C.N., Oikawa, T., Guimarães, G.A., Cardoso, B.S., Kizaki, T., Ohno, H., 2001. Mercury pollution in the Tapajos River basin, Amazon: mercury levels of head hair and health effects. Environment International 27, 285–290.
Hudson-Edwards, K.A., Macklin, M.G., Miller, J.R., Lechler, P.J., 2001. Sources, distribution and storage of heavy metals in the Rio Pilcomayo, Bolivia. Journal of Geochemical Exploration 72, 229–250.
Maurice-Bourgoin, L., Aalto, R., Rhéault, I., Guyot, J.L., 2003a. Use of ²¹⁰Pb geochronology to explore the century-scale mercury contamination history and the importance of floodplain accumulation in Andean tributaries of the Amazon River. Short Papers, IV South American Symposium on Isotope Geology, 449–452.
Maurice-Bourgoin, L., Alanoca, L., Fraizy, P., Vauchel, P., 2003b. Sources of mercury in surface waters of the upper Madeira erosive basins, Bolivia. Journal of Physics IV France 107, 855–858.
Miller, J.R., Orbock Miller, S.M., 2007. Contaminated Rivers: A Geomorphological-Geochemical Approach to Site Assessment and Remediation. Springer, Berlin.
Miller, J.R., Hudson-Edwards, K.A., Lechler, P.J., Preston, D., Macklin, M.G., 2004. Heavy metal contamination of water, soil and produce within riverine communities of the Río Pilcomayo basin, Bolivia. Science of the Total Environment 320, 189–209.
Miller, J.R., Lechler, P.J., Mackin, G., Germanoski, D., Villarroel, L.F., 2007. Evaluation of particle dispersal from mining and milling operations using lead isotopic fingerprinting techniques, Rio Pilcomayo Basin, Bolivia. Science of the Total Environment 384, 355–373.
Redwood, S., 1987. Going for gold in Bolivia. New Scientist 20, 41–43.
Roulet, M., Lucotte, M., 1995. Geochemistry of mercury in pristine and flooded ferralitic soils of a tropical rain forest in French Guiana, South America. Water, Air, and Soil Pollution 80, 1079–1088.
Roulet, M., Lucotte, M., Farella, N., Serique, G., Coelho, H., Sousa Passos, C.J., De Jesus Da Silva, E., Scavone De Andrade, P., Mergler, D., Guimarães, J.R.D., Amorim, M., 1999. Effects of recent human colonization on the presence of mercury in Amazonian ecosystems. Water, Air, and Soil Pollution 112, 297–313.
Santos, E.C.O., Câmara, V.M., Jesus, I.M., Brabo, E.S., Loureiro, E.C.B., Mascarenhas, A.F.S., Fayal, K.F., Sá Filho, G.C., Sagica, F.E.S., Lima, M.O., Higuchi, H., Silveira, I.M., 2002. A contribution to the establishment of reference values for total mercury levels in hair and fish in Amazonia. Environmental Research Section A 90, 6–11.
Tschirhart, C., Handschumacher, P., Laffly, D., Bénéfice, E., 2012. Resource management, networks and spatial contrasts in human mercury contamination along the Rio Beni (Bolivian Amazon). Human Ecology 40, 511–523.
Villarroel, L.F., Miller, J.R., Lechler, P.J., Germanoski, D., 2006. Lead, zinc, and antimony contamination of the Rio Chilco-Rio Tupiza drainage system, southern Bolivia. Environmental Geology 51, 283–299.

Boron: Environmental Exposure and Human Health M Korkmaz, Celal Bayar University, Manisa, Turkey © 2011 Elsevier B.V. All rights reserved.

Introduction

Boron (B) is widespread in nature and is commonly used in industrial fields, and its presence in different manufacturing areas continues to grow. Consequently, its extensive use raises the question of whether boron pollution in the environment will constitute a risk to humans in the future. The epidemiological studies conducted so far, although limited, have not found endemic disease even in mining areas where boron concentrations and exposures are high. On the contrary, the findings suggest that boron supplementation could play a preventive role against some diseases. Studies investigating the reproductive toxicity of boron in animal models suggest that exposure to boron may induce negative health effects; however, given the limited epidemiological evidence, these findings cannot be taken as conclusive. Nevertheless, concerns regarding reproductive toxicology have arisen and remain. New studies in this direction will form a foundation for the toxicological risk assessment of boron exposure.

Boron and the Environment

B is a natural element that is found in soil, water, rocks, and air. In the earth's crust, it is found in concentrations averaging 10 ppm, with a range from 5 ppm in basalt to 100 ppm in shale. The richest source of the element is seawater; 1 L of seawater contains approximately 5 mg of boron. Elsewhere, it can be found naturally in surface water and, depending on the geographic location, its concentration can range between

The highest dietary cadmium intakes (around 200 µg d⁻¹ or more) are reported in Japan and China, where rice grown on contaminated soils is a major dietary component. The variation depends on the exposure levels of cadmium in food and on the food's cadmium bioavailability, but it may also vary depending on the dietary assessment method used for estimating the cadmium intake. The methods used to estimate dietary cadmium are the duplicate diet method and the market basket method. In the duplicate diet method, everything consumed is duplicated and analyzed for cadmium. The advantage of this technique, when correctly performed, is that the actual amount of cadmium consumed is determined, and it gives a good estimate of the range of intake among individuals, which is important in identifying risk groups with increased cadmium intake. It is, however, a costly and time-consuming method, which limits the size of the study group. Like all prospective dietary assessment methods, the duplicate diet method may affect the subject's intake. In the market basket method, cadmium is measured


Table 1 The European Union maximum cadmium levels (ML) in foods (mg kg⁻¹ wet weight)ᵃ and water for human consumption (µg L⁻¹)ᵇ,ᶜ. EU limits are often lower than CODEX (WHO/FAO) limits without clear technical cause.

Meat and offal
  Meat (excluding offal) of bovine animals, sheep, pig, and poultry: 0.050
  Horsemeat, excluding offal: 0.20
  Liver of bovine animals, sheep, pig, poultry, and horse: 0.50
  Kidney of bovine animals, sheep, pig, poultry, and horse: 1.0
Seafood
  Muscle meat of fish, excluding the species listed below: 0.050
  Bonito (Sarda sarda), common two-banded seabream (Diplodus vulgaris), eel (Anguilla anguilla), gray mullet (Mugil labrosus), horse mackerel or scad (Trachurus species), louvar or luvar (Luvarus imperialis), mackerel (Scomber species), sardine (Sardina pilchardus), sardinops (Sardinops species), tuna (Thunnus species, Euthynnus species, Katsuwonus pelamis), and wedge sole (Dicologoglossa cuneata): 0.10
  Muscle meat of bullet tuna (Auxis species): 0.20
  Muscle meat of anchovy (Engraulis species) and swordfish (Xiphias gladius): 0.30
  Crustaceans, excluding brown meat of crab and head and thorax meat of lobster and similar large crustaceans (Nephropidae and Palinuridae): 0.50
  Bivalve mollusks: 1.0
  Cephalopods (without viscera): 1.0
Cereals
  Cereals, excluding bran, germ, wheat, and rice: 0.10
  Bran, germ, wheat, and rice: 0.20
Legumes
  Soybeans: 0.20
Vegetables, fruit, and fungi
  Vegetables and fruit, excluding leaf vegetables, fresh herbs, fungi, stem vegetables, root vegetables, and potatoes: 0.050
  Stem vegetables, root vegetables, and potatoes, excluding celeriac: 0.10
  Leaf vegetables, fresh herbs, celeriac, and the fungi Agaricus bisporus (common mushroom), Pleurotus ostreatus (oyster mushroom), and Lentinula edodes (shiitake mushroom): 0.20
  Cocoa: 0.80
  Fungi, excluding those listed earlier: 1.0
Food supplements
  Food supplements, excluding those listed below: 1.0
  Food supplements consisting exclusively or mainly of dried seaweed or products derived from seaweed: 3.0
Water
  Water for human consumption except mineral watersᵇ: 5.0
  Natural mineral watersᶜ: 3.0

For example, the EU limit for Cd in rice is 0.2 mg kg⁻¹ while the limit for Cd in Japan and CODEX is 0.4 mg kg⁻¹. Note: See the legislation for specific application instructions.
ᵃAccording to Commission Regulation (EC) No. 488/2014 (2014).
ᵇAccording to Council Directive 98/83/EC.
ᶜAccording to Council Directive 2003/40/EC.

Fig. 2 For most people, the major part of the cadmium intake is through food of plant origin.


in all food bought to reflect consumption according to food statistics or population-based dietary surveys. It is simpler to conduct but does not cover all food consumed. The contribution of cadmium from different food groups can, however, be estimated. As cadmium intake is lognormally distributed, a small fraction of the population will have an intake that is much higher than the population average. The intake may also be higher among some populations with specific dietary habits. For example, vegetarians and high shellfish consumers have a higher intake of cadmium than omnivores, although their intake of bioavailable cadmium may not be higher. To date, only contaminated rice has caused dietary cadmium disease in populations. In 1972, the Joint Food and Agriculture Organization (FAO)/World Health Organization (WHO) Expert Committee on Food Additives (JECFA) established a Provisional Tolerable Weekly Intake (PTWI) for cadmium. The tolerable weekly intake was set at 7 µg cadmium per kilogram body weight per week. This value was maintained in the later risk assessments performed in 1988, 1993, and 2004. Thus, the PTWI for a 70 kg person corresponds to a daily cadmium intake of 70 µg. A new risk assessment and re-evaluation of the tolerable weekly intake of cadmium by the EFSA (2009) led to a lower intake recommendation (2.5 µg kg⁻¹ week⁻¹), in disagreement with JECFA. JECFA subsequently re-reviewed tolerable cadmium intake and slightly lowered its recommendation, adopting a provisional tolerable monthly intake (25 µg Cd kg⁻¹ month⁻¹), on the grounds that cadmium risk requires decades of consumption, making the day-to-day variability of cadmium intake of little importance (JECFA, 2010). Cadmium in drinking water contributes less than a few percent of the total cadmium intake.
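The three intake limits mentioned above can be put on a common footing by expressing each as an equivalent daily intake for a 70 kg adult; a minimal arithmetic sketch follows. The limits are those cited in the text, while the 70 kg body weight and the 30-day month are simplifying assumptions.

```python
# Equivalent daily cadmium intakes implied by the cited limits, for an
# assumed 70 kg adult.

body_weight_kg = 70.0

jecfa_ptwi_ug_per_kg_week = 7.0    # JECFA PTWI (1972-2004)
efsa_twi_ug_per_kg_week = 2.5      # EFSA (2009) tolerable weekly intake
jecfa_ptmi_ug_per_kg_month = 25.0  # JECFA (2010) provisional tolerable monthly intake

daily_jecfa_old = jecfa_ptwi_ug_per_kg_week * body_weight_kg / 7    # 70 ug/day
daily_efsa = efsa_twi_ug_per_kg_week * body_weight_kg / 7           # 25 ug/day
daily_jecfa_new = jecfa_ptmi_ug_per_kg_month * body_weight_kg / 30  # ~58 ug/day

print(f"JECFA PTWI: {daily_jecfa_old:.0f} ug/day")
print(f"EFSA TWI:   {daily_efsa:.0f} ug/day")
print(f"JECFA PTMI: {daily_jecfa_new:.0f} ug/day")
```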

Air and Dust

Cadmium concentrations in ambient air are generally low. Air cadmium contributes less than a few percent of the total absorbed dose of cadmium in the body. As in the case of contaminated water, cadmium-polluted air may occur in the vicinity of some metal industries. In Europe and North America, emissions of trace metals have shown a decreasing tendency over the past decades, which may, however, not be applicable to other parts of the world. The decrease in emissions is attributed to the reduction in coal use, the development of industrial manufacturing processes, the tightening of environmental legislation, and the closure of outdated industrial plants. In areas with contaminated soils, house dust is a potentially important route of exposure to cadmium, even after the closure of the cadmium-emitting source, although the bioavailability of cadmium in ingested soil and dust is reduced by zinc and iron in the same soils.

Smoking

Tobacco smoking is an important source of cadmium exposure. Tobacco leaves accumulate cadmium in a manner similar to leafy green vegetables. A cigarette contains approximately 1 µg Cd g⁻¹ dry tobacco (varying with type and brand). With about 10% of the cadmium content inhaled and approximately 50% of the inhaled cadmium absorbed by the lungs, tobacco smoking is a major source of kidney cadmium among the smoking population. It is estimated that a person smoking 20 cigarettes per day will absorb approximately 1 µg of cadmium daily, roughly doubling the kidney cortex cadmium concentration by age 50. Counterfeit cigarettes usually contain appreciably higher cadmium levels (as high as 5 µg g⁻¹), an important and less recognized cadmium risk. In contrast, there is little exposure from environmental tobacco smoke.
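The absorbed-dose estimate above follows from simple multiplication, as the sketch below shows. The tobacco mass per cigarette (~0.7 g) is an assumption introduced here for illustration; the other figures are those given in the text.

```python
# Back-of-envelope check of the smoking estimate above.

cigarettes_per_day = 20
tobacco_g_per_cigarette = 0.7  # assumed tobacco mass per cigarette
cd_ug_per_g_tobacco = 1.0      # ~1 ug Cd per g dry tobacco (from text)
fraction_inhaled = 0.10        # ~10% of the Cd content is inhaled
fraction_absorbed = 0.50       # ~50% lung absorption

absorbed_ug_per_day = (cigarettes_per_day * tobacco_g_per_cigarette *
                       cd_ug_per_g_tobacco * fraction_inhaled * fraction_absorbed)
# ~0.7 ug/day, consistent with the "approximately 1 ug daily" estimate above.
print(f"Absorbed cadmium: ~{absorbed_ug_per_day:.1f} ug/day")
```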

Kinetics and Biological Indicators of Exposure and Dose

Cadmium Kinetics

The absorption of cadmium following inhalation varies between 10% and 50%, depending mainly on the particle size. The gastrointestinal absorption is much lower, considered to be below 10% and most likely around a few percent. Body iron stores influence cadmium absorption. Intestinal cadmium absorption increases when body iron stores are depleted and at overt iron deficiency, conditions more prevalent in women of fertile age than in men. The generally higher concentrations of cadmium in blood, urine, and kidney in women compared with men may to a great extent be explained by the close correlation between cadmium absorption and the expression of the divalent metal transporter-1 (DMT-1), which transports cadmium and iron into the duodenal mucosal cell in a competitive manner. This situation is exacerbated during pregnancy, when enterocytes, to optimize micronutrient absorption, have an increased DMT-1 density at the apical surface. After absorption into blood, cadmium is bound either to high-molecular-weight proteins such as albumin or to low-molecular-weight proteins such as metallothioneins. The type of ligand may depend on the dose and route of exposure and on the ligands present in food. Cadmium bound to high-molecular-weight proteins is transported to the liver, where the protein is degraded and free cadmium induces the synthesis of metallothionein. Metallothioneins are small, cysteine-rich, metal-binding proteins that participate in an array of protective stress responses. They evolved as a mechanism to regulate zinc levels and distribution within cells and organisms, and they can also protect against toxic metals and oxidative-stress-inducing agents. Cadmium can substitute for zinc in the proteins. The cadmium–metallothionein complex is redistributed to various tissues and is the main transporting complex for cadmium into the kidneys. The small size of metallothionein enables the protein to be filtered through the kidney glomerular membrane (Fig. 3). It is then reabsorbed into proximal tubular cells. Cadmium not bound to metallothionein does not enter the kidneys to the same extent. After tubular reabsorption, cadmium accumulates in the kidney cortex with a half-time of 10–30 years.
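Long-term accumulation with a half-time this long can be illustrated with a one-compartment, first-order model. The sketch below is a deliberate simplification (real cadmium toxicokinetics are described by multi-compartment models such as the Kjellström–Nordberg model), and the assumed net annual transfer to the kidney is a hypothetical placeholder.

```python
import math

# One-compartment sketch of long-term cadmium accumulation with constant
# uptake and first-order elimination, using a half-time within the 10-30
# year range quoted above. The uptake rate is hypothetical.

def accumulated_amount(uptake_per_year, half_time_years, years):
    """Amount in the compartment after `years` of constant uptake."""
    k = math.log(2) / half_time_years  # first-order elimination rate
    return (uptake_per_year / k) * (1.0 - math.exp(-k * years))

# Hypothetical net transfer of 15 ug Cd to the kidney per year, 20 y half-time:
for t in (10, 30, 50):
    print(t, "years:", round(accumulated_amount(15.0, 20.0, t), 1), "ug")
```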


Fig. 3 The functional unit (nephron) of the kidney with the glomerulus and tubulus. Cadmium toxicity mainly affects the proximal part of the tubulus.

Cadmium in Blood

Cadmium is localized mainly in the red blood cells, while the levels in plasma are low and generally difficult to determine analytically. Blood cadmium is considered the best marker of recent exposure. Blood analyses can therefore help identify individuals with particularly high exposure so that preventive measures can be taken at an early stage, which is most relevant in occupational settings. After cessation of high exposure, the decrease in blood cadmium displays a fast component with a half-time of 3–4 months and a slow component with a half-time of approximately 10 years. The longer half-time reflects the influence of the accumulated cadmium in the body (body burden) on blood cadmium levels. Thus, after long-term low-level exposure, cadmium in blood may serve as a good indicator of the cadmium body burden.
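
The two-component decline after cessation of exposure can be written as a biexponential decay. In the sketch below, the half-times are those quoted above, while the 60/40 amplitude split between the fast and slow components is an illustrative assumption, not a value from the text.

import numpy as np

# Biexponential decline of blood Cd after cessation of high exposure.
t_half_fast = 0.29            # years (~3.5 months, from the text)
t_half_slow = 10.0            # years (from the text)
a_fast, a_slow = 0.6, 0.4     # assumed amplitude split (illustrative)

def blood_cd_fraction(t):
    """Fraction of the initial blood Cd level remaining t years after cessation."""
    return (a_fast * np.exp(-np.log(2) * t / t_half_fast)
            + a_slow * np.exp(-np.log(2) * t / t_half_slow))

for t in (0.25, 0.5, 1, 2, 5, 10):
    print(f"{t:5.2f} y after cessation: {blood_cd_fraction(t):.2f} of initial level")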

Cadmium in Urine

The urinary cadmium concentration is mainly influenced by the body burden and is proportional to the concentration in the kidney. Ideally, the best measure of cadmium in urine is the amount excreted over 24 h. In practice, however, collection of 24-h samples is cumbersome and carries a considerable risk of incomplete sampling due to forgotten or lost urine specimens, so spot urine samples are commonly used instead. Spot urine, however, can vary considerably in composition (water and solutes) within and between individuals. Two main strategies, adjustment to urinary density and to the urinary creatinine concentration, are used to correct cadmium concentrations for variation in dilution in spot samples, thus allowing a comparison of excretion rates within and among individuals. Creatinine adjustment is the most common method, but as creatinine excretion is affected by muscle mass and meat intake, it is higher in men than in women. This difference should be considered when comparing creatinine-adjusted cadmium concentrations between sexes and between different populations. Alternatively, cadmium excretion can be assessed using timed sampling. There is a close relationship between cadmium concentrations in urine and kidneys. Assuming a linear relationship, urinary cadmium of 5 µg g⁻¹ creatinine (≈5 nmol mmol⁻¹ creatinine) corresponds to approximately 100 mg kg⁻¹ in the renal cortex, and 2.5 µg g⁻¹ creatinine to 50 mg kg⁻¹. In cases of tubular damage, the normal reabsorption of the cadmium–metallothionein complex decreases and the urinary cadmium concentration increases. Thus, in the long run, cadmium-induced kidney damage will give rise to low cadmium concentrations in the kidney and urine, even though the tubular damage remains. In general, tubular dysfunction may begin when Cd in the kidney cortex exceeds 200 mg kg⁻¹ wet weight.
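
The creatinine adjustment and the linear urine-to-cortex correspondence described above amount to two one-line calculations; the sample values in this sketch are hypothetical, and the scaling holds only as a rough population-level approximation.

# Creatinine adjustment of a spot urine sample and a rough linear estimate of
# the renal cortex concentration. Sample values are hypothetical; the
# 5 µg/g-creatinine to 100 mg/kg-cortex correspondence is the one quoted above.
u_cd = 0.9          # measured urinary Cd, µg/L (hypothetical)
u_crea = 1.2        # measured urinary creatinine, g/L (hypothetical)

cd_adjusted = u_cd / u_crea            # µg Cd per g creatinine
cortex = cd_adjusted * (100.0 / 5.0)   # mg Cd per kg renal cortex (linear scaling)

print(f"Urinary Cd: {cd_adjusted:.2f} µg/g creatinine")
print(f"Estimated renal cortex Cd: {cortex:.0f} mg/kg")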

Cadmium in the Kidneys

Cadmium accumulates mainly in the liver and the kidney, and the highest concentrations are found in the renal cortex. With long-term environmental exposure, because of the long half-time and the transfer of cadmium by metallothionein from liver and other tissues to the kidney, renal accumulation continues during the major part of the human life span. At about 60 years of age, the rise in renal Cd concentration levels out, and the concentration starts to decrease at older ages. The cadmium–metallothionein complex that has been filtered through the renal glomeruli is reabsorbed preferentially in the proximal convoluted tubule (S1 and S2), which is also the site of cadmium-induced nephrotoxicity (Fig. 3). After tubular reabsorption, the protein portion of the complex is rapidly degraded in lysosomes and the released cadmium is retained in the kidney, where it induces renal metallothionein synthesis. Cadmium bound to metallothionein is less toxic to the tubular cells; nephrotoxicity induced by cadmium is probably due to the unbound cadmium in the cells.

Environmental Cadmium Exposure and Kidney Effects

Normal Function of the Kidney

The kidney has a central role in human health; life without renal function is not possible. Up to 500 L of blood pass through the kidneys each day to be cleared of waste products. The kidney is also involved in the regulation of pH, fluid, and electrolyte balance; in the production of hormones; and in the activation of vitamin D. Several potentially toxic substances are efficiently eliminated via the kidney. The filtration of blood through millions of glomeruli produces approximately 125 mL of primary urine (an ultrafiltrate of plasma) every minute (180 L per day). In the renal tubule, more than 99% of the substances in the primary urine are reabsorbed, whereas waste and other substances not reabsorbed are collected in the urine and excreted from the body. The kidney's sensitivity to toxic substances is attributed to its high blood flow (25% of the cardiac output), high metabolic activity, high oxygen consumption, high local concentrations of filtered substances, and a large endothelial surface area with high cellular transport activity.
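
As a quick consistency check of the filtration figures quoted above (the resulting urine volume is an implication of those numbers, not a value stated in the text):

# Consistency check of the filtration arithmetic quoted above.
gfr_mL_per_min = 125
primary_urine_L_per_day = gfr_mL_per_min * 60 * 24 / 1000   # -> 180 L/day
reabsorbed = 0.99                                           # "more than 99%"
final_urine_L_per_day = primary_urine_L_per_day * (1 - reabsorbed)

print(f"Primary urine: {primary_urine_L_per_day:.0f} L/day")
print(f"Final urine (at exactly 99% reabsorption): {final_urine_L_per_day:.1f} L/day")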

Biomarkers of Renal Effects

The most serious renal conditions are those that lower the glomerular filtration rate (GFR) (not caused by Cd), thereby reducing the kidney's ability to clear the blood of waste products. In clinical practice, the most commonly used methods for determining the GFR measure how fast an injected foreign substance is cleared from the bloodstream via the kidneys. In large epidemiological studies, this method has to be replaced by noninvasive, indirect methods, for example, the serum concentrations of creatinine and of the more recently introduced cystatin C; both are small substances that are easily filtered through the glomeruli, and both increase in serum with decreasing GFR. A pathological elevation of serum creatinine does, however, not occur until the GFR is reduced to half of its normal value. Other types of glomerular damage lead to excessive protein leakage (Fig. 4). Normally, only plasma substances with a molecular weight below 50,000 are filtered through the glomeruli; an increased permeability of the glomeruli is detected by the presence of, for example, albumin in urine (Fig. 4). In tubular damage, the normal capacity of the tubular cells to reabsorb substances from the primary urine is reduced (Fig. 4). There are several sensitive methods for determining tubular function. Urinary concentrations of low-molecular weight proteins, such as α1-microglobulin (protein HC), β2-microglobulin, and retinol-binding protein (RBP), are valid markers of tubular reabsorption (Fig. 4). The urinary concentration of N-acetyl-β-D-glucosaminidase (NAG), an enzyme localized in the lysosomes of the tubular cells, is a sensitive marker of leakage from damaged tubular cells.

Cadmium and Renal Effects

The critical organ in adverse cadmium health effects is the kidney tubule. It is very well established that excessive occupational and environmental cadmium exposure can cause renal injury. Numerous studies have also demonstrated that cadmium-induced renal damage is characterized by proximal tubular reabsorptive dysfunction. This tubular damage is the critical effect, that is, the first toxic effect to appear due to excessive exposure to bioavailable Cd from the diet. The earliest manifestation of cadmium-induced renal damage is an increased excretion of low-molecular weight proteins, and the amount excreted is proportional to the severity of the damage. Although not in itself diagnostic of renal damage caused by cadmium, a dose–response relationship between high cadmium exposure and tubular proteinuria strongly supports an effect of cadmium. The most advanced form of chronic cadmium intoxication was identified in Japan in 1969, where contamination of water and rice from upstream mining activities was responsible for the outbreak of the so-called Itai-itai ("ouch-ouch") disease. The disease, a combination of osteomalacia and osteoporosis, affected almost exclusively older women (about 1% of the populations showing adverse renal tubular effects of Cd) and was characterized by severe pain and multiple fractures that had often occurred spontaneously. The disease was, however, the tip of an iceberg, with tubular dysfunction of the kidneys as an earlier adverse effect. Severe cadmium-induced tubular damage is irreversible and results in progressive deterioration, even after cessation of exposure, with depressed glomerular function with regard to both the filtration rate and increased permeability of the glomeruli. In workers exposed to Cd aerosols, Cd that rapidly accumulates in the liver continues to be transferred to the kidney, substantially increasing long-term renal Cd risks compared with persons exposed via the diet. In contrast to the more severe cadmium-induced tubular damage, the clinical significance of slight tubular damage, caused by long-term low-level cadmium exposure, has been an issue of discussion for many years. It is clear that the low-molecular weight proteinuria in itself does not give rise to any subjective symptoms or disease, and in its early stage it is not accompanied by any histological changes. Based on current research, it is not clear whether these early tubular changes increase the risk of progression of the renal injury to clinically manifest renal failure such as uremia. With regard to the irreversibility of this tubular damage, there are indications suggesting reversibility of a mild tubular proteinuria after a distinct reduction of the cadmium exposure. It is, however, important to emphasize that any such reversibility needs to be considered in the light of the long half-time of cadmium in the environment and in the kidney, which hampers a marked reduction of the exposure. Despite diverging opinions, it is important to underline that increased excretion of low-molecular weight proteins is a widely accepted indicator of kidney damage that, irrespective of progression to severe or clinically relevant renal disease, should be considered an adverse effect. Indeed, the purpose of having sensitive markers of very early effects is to detect the earliest possible onset of toxicity at a stage when it is still possible to prevent adverse health effects, even in the most sensitive groups of the population. Nevertheless, it is prudent to conclude that there is a need for prospective longitudinal studies of health effects in populations with well-characterized exposure to cadmium. Studies such as Ezaki et al. (2003) added important data for women exposed to dietary Cd on average 3–5 times higher than in Europeans.

Fig. 4 Different types of kidney damage. (A) The green arrow symbolizes albumin, which under normal conditions is not filtered through the glomerulus. The black arrow symbolizes a low-molecular weight protein (e.g., β2-microglobulin) that under normal conditions is reabsorbed in the proximal tubuli. (B) Glomerular damage with increased permeability of the glomeruli is detected as an increased excretion (leakage) of high-molecular weight proteins and blood cells into the urine. (C) In case of tubular damage, the reabsorption of low-molecular weight proteins is decreased, resulting in increased concentrations of these small proteins in urine.

Dose–Response Assessment and Benchmark Dose

Dose–response assessment involves characterization of the relationship between the dose of exposure and the biological effects that are produced. This analysis is usually performed in controlled animal experiments, but it is of course of wide interest to use epidemiological data as a basis for the dose–response analysis when such data are available. The final aim of the dose–response assessment is to determine the dose level of the toxic substance that may be used as a starting point in establishing acceptable exposure levels for the human population, including sensitive groups. For nongenotoxic effects it is assumed that there exists an exposure threshold below which there are no biologically significant adverse outcomes, although it should be emphasized that it may be problematic to experimentally prove the presence or absence of such a threshold. The traditional approach in health risk assessment involves establishing a no-observed-adverse-effect level (NOAEL), defined as the highest dose or exposure level for which the response is not significantly different from the response in the control group or the group with the lowest exposure.

Because of some shortcomings associated with the use of the NOAEL, the benchmark dose (BMD) method has been suggested as an alternative approach in health risk assessment. The BMD is defined as the dose causing a predetermined change in response. The lower 95% confidence limit of the BMD, the BMDL, which reflects the sample size, has been proposed to replace the NOAEL. One major advantage of the BMD/BMDL approach is that it utilizes the whole dose–response curve, so the BMD/BMDL is based on more information than the NOAEL. The BMD method also takes the shape of the dose–response relationship into account to a greater extent, and the BMD is not limited to being one of the experimental dose levels. The use of the lower confidence bound (BMDL) appropriately reflects the sample size of the study: larger studies tend to give shorter confidence intervals and thus lower uncertainty. The BMD method is increasingly used in the health risk assessment of environmental contaminants.

The BMD concept was readily introduced for quantal data (i.e., when subjects are categorized as responders or nonresponders), whereas its application to continuous dose–response information needed further development. Typical continuous responses are changes in organ weight or enzyme activity. One category of procedures for defining the BMD for continuous end points focuses on making statements in terms of probability (or risk) while avoiding dichotomization; this procedure is sometimes referred to as "the hybrid approach." With the hybrid approach, the concept of risk can be used for a continuous outcome (or effect) without dichotomizing it, so no information is lost in transforming data into the categories of responders and nonresponders. Accordingly, the statistical validity and efficiency of the BMD are higher with the hybrid approach than with methods involving dichotomization of a continuous outcome. When epidemiological data are used in risk assessment, outcomes that are not easily categorized into "disease" or "no disease" can thus be handled with the hybrid approach.
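
For quantal data, the BMD/BMDL machinery can be sketched compactly: fit a dose–response model by maximum likelihood, invert it at the benchmark response (here 5% extra risk), and characterize the BMDL from sampling variability. Everything below is illustrative: the data are hypothetical, the log-logistic model is only one of several plausible choices, and the parametric bootstrap is only one way to obtain a lower confidence bound.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical quantal data: dose groups, subjects examined, responders.
dose = np.array([0.0, 0.5, 1.0, 2.0, 5.0])   # e.g., urinary Cd, µg/g creatinine
n    = np.array([100, 100, 100, 100, 100])
resp = np.array([  5,   9,  14,  28,  62])

def prob(d, bg, slope, ed50):
    # Log-logistic dose-response with background response bg.
    p = np.full_like(d, bg, dtype=float)
    pos = d > 0
    p[pos] = bg + (1 - bg) / (1 + (ed50 / d[pos]) ** slope)
    return p

def nll(theta, resp, n):
    # Binomial negative log-likelihood.
    p = np.clip(prob(dose, *theta), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

def fit(resp, n):
    res = minimize(nll, x0=[0.05, 1.0, 3.0], args=(resp, n),
                   bounds=[(1e-4, 0.5), (0.1, 10.0), (0.01, 100.0)])
    return res.x

def bmd(theta, bmr=0.05):
    # Dose at which the extra risk (P(d) - P(0)) / (1 - P(0)) equals the BMR.
    _, slope, ed50 = theta
    return ed50 * (bmr / (1 - bmr)) ** (1.0 / slope)

theta_hat = fit(resp, n)
boot = [bmd(fit(rng.binomial(n, prob(dose, *theta_hat)), n)) for _ in range(200)]
print(f"BMD  = {bmd(theta_hat):.2f}")
print(f"BMDL = {np.percentile(boot, 5):.2f} (parametric-bootstrap 5th percentile)")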


Cadmium and the Benchmark Dose for Renal Effects

Earlier risk assessments, mainly based on cross-sectional studies of occupationally exposed workers, indicated that a urinary cadmium concentration of 10 µg g⁻¹ creatinine, roughly corresponding to 200 mg cadmium per kilogram kidney cortex, caused tubular proteinuria in 10% of the population. This assessment was also the basis for the establishment of the PTWI of 7 µg cadmium per kilogram body weight, under which the levels in the renal cortex, to avoid tubular damage, were not to exceed 50 mg kg⁻¹ after 50 years of exposure. Later evaluations, performed on environmentally exposed populations with varying degrees of exposure and with the use of more advanced exposure assessment methods, arrived at considerably lower critical exposure concentrations: several studies have reported urinary cadmium concentrations between 0.5 and 3 µg g⁻¹ creatinine as the point of departure for renal tubular effects.

This substantial reduction in the urinary cadmium concentration considered critical for the development of tubular damage raised some alternative interpretations of the associations between urinary cadmium and the tubular effect markers at these particularly low levels. These alternative explanations included a possible competition between the cadmium–metallothionein complex and the low-molecular weight proteins at the tubular reabsorption sites, or a parallel phenomenon in which cadmium-independent kidney deterioration causes increased excretion of both cadmium and the low-molecular weight proteins. Although cross-sectional studies preclude conclusions with respect to causality, a causal relationship is supported by the observed dose–response association and by the fact that urinary cadmium is a marker of long-term renal accumulation. Further support comes from the observations that cadmium in blood is also associated with the kidney effect markers, implying that cadmium exposure, and not cadmium excretion, is associated with the tubular effects, and that cadmium in both blood and urine has been shown to be associated with decreasing glomerular function as measured by cystatin C in serum. Furthermore, lowering of bone mineral density and increased risk of fractures have been reported at cadmium concentrations considerably below 5 µg g⁻¹ creatinine. Nutritional and genetic influences on bone mineral density, however, raise questions about the role of Cd in food-Cd-exposed Japanese populations.

The BMD of cadmium has been assessed in environmentally exposed populations in only a few studies. The BMDL (5% additional probability) ranged between 4 and 12 µg cadmium per gram creatinine for various kidney effect markers in contaminated areas in China, and between 0.9 and 1.2 µg cadmium per gram creatinine (10% additional probability) in another Chinese population that was coexposed to arsenic. Lower BMDLs were observed in Japanese women aged 40–59 years in a non-cadmium-polluted area (0.6–1.8 µg g⁻¹ creatinine for 5% additional probability), in Japanese men (0.3–0.6 µg g⁻¹ creatinine), and in Swedish women (0.5 µg g⁻¹ creatinine for 5% additional probability and 0.8 µg g⁻¹ creatinine for 10% additional probability, applying the hybrid approach). The GFR was also estimated using cystatin C among the Swedish women; the corresponding 5% and 10% additional probability values were 1.1 and 1.8 µg g⁻¹ creatinine, respectively. From another viewpoint, the studies of urban Japanese women showed none of the possible early adverse signs of dietary Cd disease (Ezaki et al., 2003).
To conclude, rice is the dominant source of Cd exposure in Japan and China, where dietary Cd intake is far higher than in the EU and North America. Important progress is being made in breeding or bioengineering rice with much lower Cd levels. Agronomic remediation of soil Cd problems (liming, adding MnO2) and delaying rice field drainage until the rice grain is fully ripe can substantially lower rice grain Cd levels. Improved understanding of the low bioavailability of Cd in other foods reduces concerns compared with views held before dietary Cd bioavailability was so well demonstrated. Nutritional science, in addition to toxicological science, must be part of dietary Cd risk assessment: because of differences in bioavailability, not all dietary Cd constitutes a risk.

See also: Cadmium Neurotoxicity; Cadmium and the Welfare of Animals; Exposure Science: Ingestion; Monetary Valuation of Trace Pollutants Emitted Into Air by Industrial Facilities; Renal and Neurological Effects of Heavy Metals in the Environment.

References

Chaney, R.L., 2015. How does contamination of rice soils with Cd and Zn cause high incidence of human Cd disease in subsistence rice farmers? Current Pollution Reports 1, 16–22.

European Commission Regulation No. 488/2014, 2014. Amending Regulation EC No. 1881/2006 as regards maximum levels of cadmium in foodstuffs. Official Journal of the European Union 138, 75.

Ezaki, T., Tsukahara, T., Moriguchi, J., Furuki, K., Fukui, Y., Ukai, H., Okamoto, S., Sakurai, H., Honda, S., Ikeda, M., 2003. No clear-cut evidence for cadmium-induced renal tubular dysfunction among over 10,000 women in the Japanese general population: A nationwide large-scale survey. International Archives of Occupational and Environmental Health 76, 186–196.

Kirchmann, H., Mattsson, L., Eriksson, J., 2009. Trace element concentration in wheat grain: Results from the Swedish long-term soil fertility experiments and national monitoring program. Environmental Geochemistry and Health 31, 561–571.

Further Reading

Åkesson, A., 2005. Tubular and glomerular kidney effects in Swedish women with low environmental cadmium exposure. Environmental Health Perspectives 113, 1627–1631.

Berglund, M., 1994. Intestinal absorption of dietary cadmium in women depends on body iron stores and fiber intake. Environmental Health Perspectives 102, 1058–1066.

de Burbure, C., 2006. Renal and neurologic effects of cadmium, lead, mercury, and arsenic in children: Evidence of early effects and multiple interactions at environmental exposure levels. Environmental Health Perspectives 114, 584–590.

European Food Safety Authority, 2010. Statement on tolerable weekly intake of cadmium. EFSA Journal 9 (2), 1975. http://www.efsa.europa.eu/en/efsajournal/doc/980.pdf.


Friberg, L., Elinder, C.G., Kjellström, T., Nordberg, G. (Eds.), 1986. Cadmium and Health: A Toxicological and Epidemiological Appraisal, vol. II. CRC Press Inc, Boca Raton, FL.

Gunshin, H., 1997. Cloning and characterization of a mammalian proton-coupled metal-ion transporter. Nature 388, 482–488.

Hong, F., 2004. Risk assessment on renal dysfunction caused by co-exposure to arsenic and cadmium using benchmark dose calculation in a Chinese population. Biometals 17, 573–580.

Ikeda, M., Ezaki, T., Moriguchi, J., Fukui, Y., Okamoto, S., Ukai, H., Sakurai, H., 2006. No meaningful increase in urinary tubular dysfunction markers in a population with 3 µg cadmium/g creatinine in urine. Biological Trace Element Research 113, 35–44.

Ishikawa, S., Ishimaru, Y., Igura, M., Kuramata, M., Abe, T., Senoura, T., Hased, Y., Arao, T., Nishizawa, N.K., Nakanishi, H., 2012. Ion-beam irradiation, gene identification, and marker-assisted breeding in the development of low-cadmium rice. Proceedings of the National Academy of Sciences USA 109, 19166–19171.

International Agency for Research on Cancer (IARC), 1992. Cadmium in the Human Environment: Toxicity and Carcinogenicity. IARC, Lyon. Scientific Publications No. 118.

Järup, L., 1998. Health effects of cadmium exposure: A review of the literature and a risk estimate. Scandinavian Journal of Work, Environment & Health 24 (Supplement 1), 1–51.

Järup, L., Hellström, L., Alfven, T., et al., 2000. Low level exposure to cadmium and early kidney damage: The OSCAR study. Occupational and Environmental Medicine 57 (10), 668–672.

JECFA, 2010. Summary and Conclusions. In: Seventy-third meeting, Geneva, 8–17 June 2010. JECFA/73/SC, 17 pp. http://www.who.int/foodsafety/publications/chem/summary73.pdf.

Jin, T., 2004. Environmental epidemiological study and estimation of benchmark dose for renal dysfunction in a cadmium-polluted area in China. Biometals 17, 525–530.

Noonan, C.W., Sarasua, S.M., Campagna, D., et al., 2002. Effects of exposure to low levels of environmental cadmium on renal biomarkers. Environmental Health Perspectives 110 (2), 151–155.

Nordberg, G.F., 2007. Cadmium. In: Nordberg, G.F., Fowler, B.A., Nordberg, M., et al. (Eds.), Handbook on the Toxicology of Metals, 3rd edn. Elsevier, London, UK, pp. 1–992.

Olsson, I.M., 2002. Cadmium in blood and urine: Impact of sex, age, dietary intake, iron status, and former smoking; association of renal effects. Environmental Health Perspectives 110, 1185–1190.

Reeves, P.G., Chaney, R.L., 2008. Bioavailability as an issue in risk assessment and management of food cadmium: A review. Science of the Total Environment 398, 13–19.

Reeves, P.G., Chaney, R.L., Simmons, R.W., Cherian, M.G., 2005. Metallothionein induction is not involved in cadmium accumulation in the duodenum of mice and rats fed diets containing high-cadmium rice or sunflower kernels and a marginal supply of zinc, iron, and calcium. Journal of Nutrition 135, 99–108.

Suwazono, Y., 2006. Benchmark dose for cadmium-induced renal effects in humans. Environmental Health Perspectives 114, 1072–1076.

Uno, T., 2005. Health effects of cadmium exposure in the general environment in Japan with special reference to the lower limit of the benchmark dose as the threshold level of urinary cadmium. Scandinavian Journal of Work, Environment & Health 31, 307–315.

Vahter, M., Berglund, M., Nermell, B., Åkesson, A., 1996. Bioavailability of cadmium from shellfish and mixed diet in women. Toxicology and Applied Pharmacology 136, 332–341.

Cadmium Neurotoxicity

Camilo Rios and Marisela Méndez-Armenta, Instituto Nacional de Neurología y Neurocirugía, Mexico, Mexico

© 2019 Elsevier B.V. All rights reserved.

Change History: August 2018. Camilo Rios and Marisela Méndez-Armenta were involved in preparing the update. They updated Introduction; Chemistry and Physical Properties; Role of Blood-Brain Barrier in Cadmium Toxicity; Effect of Cadmium in Calcium Homeostasis; Oxidative Stress and Cadmium; Effect of Cadmium on Neurotransmitters; Role of Metallothionein in Neurotoxicity; Cell Death Mechanisms in Neurotoxicity of Cadmium; Cadmium Neuropathology; Epidemiological Findings; Final Commentary; and Figure 1. This is an update of Marisela Mendez-Armenta, Camilo Rios, Cadmium Neurotoxicity, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 474–481.

Encyclopedia of Environmental Health, 2nd edition, Volume 1. https://doi.org/10.1016/B978-0-12-409548-9.11571-4

Introduction

Cadmium is an abundant, nonessential element that is generating concern because of its accumulation in the environment resulting from industrial waste emissions. Approximately 30,000 tons of cadmium are released into the atmosphere every year, of which an estimated 4000–13,000 tons result from human activities. Cadmium is a naturally occurring metallic element used in many industrial processes, including electroplating and galvanization, production of pigments, and manufacturing of batteries, plastics, metal containers, and fertilizers. It is also a by-product of zinc, lead, and copper metallurgy. Air pollution and agricultural activities contribute to the dispersion of cadmium from soil and water. Cadmium is therefore found in foods (fish, offal, vegetables, grains, and cereals), water, and tobacco leaves; in fact, cigarettes are one of the major nonoccupational sources of inhaled cadmium in humans. In humans and other mammals, the absorption of cadmium occurs through a process similar to the absorption of essential metals such as iron. Pulmonary absorption of cadmium is relatively more efficient than absorption by the gastrointestinal (GI) tract. In the GI tract, cadmium is poorly absorbed, with only 5%–8% of the ingested load being retained; this absorption is influenced by dietary deficiencies of calcium, zinc, iron, or protein. Absorbed cadmium is transported through blood bound to red blood cells and to proteins in the plasma (particularly albumin or metallothionein) and is distributed throughout the tissues, mainly to liver and kidney. In the liver, cadmium induces the synthesis of metallothionein (MT); the MT–cadmium complex is then exported via blood to the kidney, where it may accumulate in lysosomes. Cadmium is excreted slowly by urinary and fecal routes under normal conditions; however, experimental results indicate that the urinary excretion of cadmium increases with cadmium-induced nephropathy. Once absorbed, cadmium has a very long half-life, estimated at approximately 17–30 years in humans. Exposure to high levels of cadmium may result in extensive damage to several organs, including kidney, lung, bone, placenta, testes, and brain. A number of studies examining the dose–response relationship between cadmium exposure and renal effects have been carried out in the general population. These studies suggest that a significant proportion (10%) of the population showed evidence of renal damage associated with urinary cadmium concentrations exceeding 2–4 µg/g creatinine. Urinary cadmium is often used as an estimate of the internal dose of cadmium in the general population. Cadmium contents of human tissues show large variation between individuals; for example, kidney cortex cadmium concentrations may vary from an average of 4.5 to 44.2 µg/g wet tissue weight, depending on the age group of the subject. Additionally, cadmium exposure may damage the blood-brain barrier and the nervous system. The molecular mechanisms of cadmium toxicity are not yet well understood; a central feature is cadmium's ability to enhance lipid peroxidation (LPO) by increasing the production of free radicals, thereby affecting antioxidant enzyme activity in several organs. The cellular damage induced by cadmium depends on a number of factors, including the developmental stage of the organism and the dose, route, and duration of exposure.

Chemistry and Physical Properties

Cadmium (atomic number 48; relative atomic mass 112.40) is a metallic element belonging, together with zinc and mercury, to group IIB of the periodic table. It is rarely found in the pure state; it is present in various types of rocks, soils, and water, as well as in coal and petroleum. Cadmium is a soft, ductile, bluish-white, electropositive metal and forms mineral compounds with other elements such as oxygen (cadmium oxide), chlorine (cadmium chloride), or sulfur (cadmium sulfate, cadmium sulfide). Some of the cadmium salts, such as the sulfide, carbonate, or oxide, are practically insoluble in water, while the metal vapor oxidizes quickly in air to produce cadmium oxide.

Role of Blood-Brain Barrier in Cadmium Toxicity

The blood-brain barrier (BBB) provides both anatomical and physiological protection for the central nervous system (CNS); Paul Ehrlich introduced the concept of the BBB in 1906. The BBB is composed of four main cellular elements: endothelial cells, astrocytes, microglial cells, and pericytes. First, the nervous system maintains an interface with the bloodstream, and the cellular basis for the BBB appears to be the tight junctions between endothelial cells, present only in the nervous system; second, these capillaries are enclosed by the astrocyte foot processes, which essentially separate the capillaries from neurons and also act as an active barrier. This barrier is a specialized structure that protects the brain from changes in blood levels of ions, amino acids, and other substances, thereby maintaining homeostasis and the neuronal microenvironment in the CNS. The effectiveness of the BBB varies from one area of the brain to another, and it also varies by chemical compound. For most solutes and macromolecules, permeability across the BBB depends on lipophilicity and size. Less is known about the mechanisms by which metals cross the BBB and enter neurons and glial cells. Transporters for each of the major essential metals (calcium, iron, and zinc) have been identified: transporter-mediated uptake is the most common mechanism by which cells and neurons acquire iron, transport of zinc and manganese across the BBB appears to be receptor operated, and voltage-dependent calcium channels mediate cadmium uptake. In adults, cadmium is reported to be rarely neurotoxic because the BBB and circumventricular epithelial cells with tight junctions limit the pathways of entry for cadmium into the CNS. Cadmium therefore accumulates only in the regions of the CNS without a BBB, such as the choroid plexus, the pineal gland, the olfactory bulb, and the hypophysis; this restriction does not apply in developing organisms, in which the barrier is not fully developed. MT in glial cells and ependymal cells near circumventricular organs also serves to minimize cadmium infusion into other parts of the brain. However, in humans, altered neurobehavioral functions, including slowing of visuomotor function, symptoms of fatigue, mental irritability, headache, syncope, and hyposmia or anosmia, have been reported in patients and workers acutely or chronically exposed to cadmium. Moreover, the effectiveness of the BBB can decline with aging; hence, the effects of cadmium in the CNS could well be exacerbated in older people exposed to cadmium. Cadmium crosses the BBB poorly in adult animals; however, in experimental studies with adult rats, high concentrations of cadmium have been found in the brain when a vehicle such as ethanol is employed, owing to the ability of ethanol to diffuse across all biological membranes and thereby allow cadmium to penetrate the BBB. Other studies in experimental animals have shown that cadmium is able to cross the BBB in newborns. Cadmium induces damage in the BBB, and its primary effect is on blood vessels: it produces endothelial cell alterations including vacuolization, thinning of endothelial cells, and widening of interendothelial gaps, leading to hemorrhagic lesions and necrotic changes in nerve cells. In developing rats, cadmium exposure decreased the activity of antioxidant systems and increased LPO, which in turn may weaken antioxidant defenses, leading to significant alterations in membrane fluidity, BBB dysfunction, and more cadmium entering the brain. Several experimental studies have shown that cadmium is more toxic in newborn than in adult rats. These age differences in susceptibility may be due to differences in BBB maturation; the BBB is not fully developed at birth, and this is one of the reasons why some chemicals are more toxic to newborns than to adults.

Effect of Cadmium in Calcium Homeostasis

Experimental evidence indicates that cadmium may interact with membrane transporters involved in the uptake of Ca2+. In neurons, voltage-gated Ca2+ channels are highly selective for Ca2+. Excitation of the neuron causes a transient increase in intracellular Ca2+, primarily a passive flux of ions through transmembrane channels; the process is voltage dependent and also involves release from intracellular Ca2+ stores. The restoration of the intracellular Ca2+ level is carried out quickly by extrusion from the neuron via adenosine triphosphate (ATP)-driven Ca2+ pumps and the Na+/Ca2+ exchanger, and by Ca2+-binding proteins, with Ca2+ then stored in the endoplasmic reticulum. The plasma membrane Ca2+-ATPase pump transports Ca2+ outward against a large electrochemical gradient and in turn mediates the neuronal response. Alteration of these processes by cadmium during cell injury can therefore result in inhibition of Ca2+ extrusion or intracellular compartmentation mechanisms, as well as in enhanced Ca2+ influx and release of Ca2+ from intracellular stores. Cadmium can upregulate the internal concentration of calcium and interfere with the functioning of several enzymes and signaling molecules that control intracellular Ca2+ signaling pathways. Cadmium inhibits all of the known pathways of cellular Ca2+ influx: it acts as a potent blocker of Ca2+ channels, interferes with Ca2+ pumps and the function of Ca2+-modulated proteins (e.g., calmodulin and Ca2+-dependent protein kinases), and blocks the release of stored Ca2+ by inhibiting the activity of inositol trisphosphate (IP3). Experimental evidence has shown that approximately one-third of the cadmium entering cells does so through Ca2+ channels. These interactions may be due to the fact that the ionic radius of Cd2+ is similar to that of Ca2+, so cadmium can mimic Ca2+ to gain entry into the cell. Cadmium may interact with calcium in other ways as well: it acts as a competitive ion to Ca2+ in voltage-dependent neurotransmitter release, and cadmium ions interact with thiol groups of proteins involved in intracellular Ca2+ sequestration. In addition, Ca2+ may be involved in the alteration of the mitochondrial membrane potential, generating an increase in the production of reactive oxygen species (ROS) through the respiratory chain by reducing the activity of complexes II and III; cadmium thus affects the uptake and intracellular distribution of Ca2+, altering calcium homeostasis. More recently, it has been proposed that this disturbance of calcium homeostasis can elicit two cellular responses, physiological adaptation to reestablish homeostasis or a pathological response leading to cell death, depending on the magnitude of the cell stress induced by Ca2+ and/or ROS (see Thévenod and Lee, 2013 for details).

Oxidative Stress and Cadmium

Oxidative stress affects several cellular components (deoxyribonucleic acid (DNA), lipids, and proteins) through oxidation reactions. Oxidative stress refers to the cytopathological consequences of a mismatch between the production of free radicals and the ability of the cell to defend against them. A free radical is a molecule or molecular fragment that contains one or more unpaired electrons in its outer orbital and is a product of cellular aerobic metabolism. Superoxide anion (O2•−) and hydroxyl radical (•OH) are the predominant cellular free radicals, while hydrogen peroxide (H2O2) and peroxynitrite (ONOO−), although not radicals themselves, are also reactive species present in cells. Together, these molecules are referred to as reactive oxygen species (ROS). Cadmium is able to induce the production of a variety of ROS, including H2O2, O2•−, and •OH. Active oxygen species are continuously produced in tissues by mitochondrial electron transport. Mitochondria are the major source of ROS production in cells, and complexes II and III of the electron transport chain are more sensitive to cadmium than complexes I, IV, and V; complex III is the only site where ROS are produced in the presence of cadmium. Free radicals produce the oxidative lipid injury called LPO and induce a progressive loss of membrane fluidity, thus reducing membrane potential and increasing permeability to ions such as Ca2+. Overproduction of free radicals may result from indirect interactions of cadmium at critical cellular sites or as a consequence of inhibition of protective mechanisms. LPO has long been considered the primary mechanism of cadmium toxicity. The ability of cadmium to induce oxidative stress in brain cells has been reported in both in vitro and in vivo systems. In neuroblastoma cells, inhibition of the Janus kinases (Jak1 and Jak2) by cadmium actively increases intracellular levels of oxidative stress and mediates inhibition of signal transduction; this may be a novel mechanism by which cadmium exerts neurotoxic effects. Cadmium is able to induce cell death through mitochondrial damage, producing a deficit in ATP production and breakdown of the mitochondrial membrane potential, which in turn increase LPO and ROS formation in cultured cortical neurons. Likewise, several animal studies have shown that exposure of adult rats to low or moderate doses of cadmium induces LPO in all tissues, mainly in lung and brain. These studies indicate that LPO is an early and sensitive consequence of cadmium exposure. It has been clearly demonstrated that oxidative stress interferes with the expression of genes via several transcription factors, such as nuclear factor erythroid 2-related factor 2 (Nrf2), a master regulator of antioxidant defenses. Nrf2 binds to a promoter element called the antioxidant response element (ARE) present in these genes, which are up-regulated in response to various stressors of the cellular environment. ROS may function as secondary messengers that deregulate gene expression and induce cell transformation when cells are exposed to Cd. The activation of Nrf2 by Cd involves stabilization of the Nrf2 protein, increasing formation of the Nrf2/Keap1 complex in the cytoplasm. In many cell types, exposure to Cd generates oxidative stress; this, in turn, has been associated with elevated Nrf2 activity that results in increased expression of some antioxidant enzymes. An important aspect to consider is that the level of Nrf2 expressed in astrocytes is much higher than in neurons and also varies with developmental age. Several antioxidant defense systems have been postulated to protect against the biological oxidative damage induced by oxygen free radicals.
These systems include glutathione peroxidase (GPx) for hydrogen peroxide and lipid peroxides, superoxide dismutase (SOD) for superoxide, and catalase (CAT) for hydrogen peroxide, as well as ceruloplasmin, heme oxygenase-1 (HO-1), and metallothionein, among others. Because of the low activity of these enzymes in the brain and its high content of easily peroxidizable polyunsaturated fatty acids, the brain is hypothesized to be highly susceptible to oxidative stress. Inhibition or decrease of the activity of several antioxidant enzymes by cadmium has been reported in both in vitro and in vivo systems. Cadmium can inhibit the activity of SOD, increase the level of lipid peroxides, and reduce the activities of glutathione reductase (GR) and CAT and the levels of the metabolites glutathione (GSH) and oxidized glutathione (GSSG). This interference with antioxidant defense systems can primarily lead to cadmium-induced alteration of the structural integrity of lipids; secondary effects include alteration of membrane composition and disturbances in membrane fluidity in different brain regions. Cadmium binds readily to the –SH groups of cell membrane proteins and can replace zinc (Zn) in Zn-dependent enzymes such as Zn-SOD (zinc-dependent superoxide dismutase) or manganese (Mn) in Mn-SOD (manganese-dependent superoxide dismutase). In rats, it has been proposed that the enhancement of LPO by cadmium is the consequence of a decrease in SOD and CAT activity together with a decrease in glutathione levels. Moreover, cadmium induces a significant increase in free radical production and, consequently, LPO in the brain of developing rats; cadmium is more toxic to newborn and young rats than to adult rats, and this age difference in susceptibility may result from differences in the BBB. Exposure of pregnant rats to cadmium also increases LPO in the offspring at 21 days of age, probably owing to the copious availability of lipid substrate as well as the rapid myelination associated with brain development. These findings indicate that oxidative stress could be implicated in the mechanism by which cadmium induces brain tissue damage.

Effect of Cadmium on Neurotransmitters

Intracellular communication is achieved in the nervous system through the synapse, where information is transferred via the release of neurotransmitter. The neurotransmitter may be a molecule of small molecular weight that can be synthesized within the neuronal cell body; the neurotransmitter released from an axon acts as the first messenger. The CNS has been shown to be sensitive to disturbances in the trace element concentrations required for normal brain development. Although cadmium does not accumulate in significant quantities in the brain following exposure, it disturbs the metabolism of copper (Cu), Zn, and Ca2+. After the action potential, cadmium blocks the influx of Ca2+ into the nerve terminal by interfering with the membrane channels; this decrease in calcium influx may be associated with altered transmitter release. Cadmium also inhibits the function of the (Na+/K+)-ATPase and Mg2+-ATPase of the brain, thus altering choline transport in synaptosomes. When an action potential arrives at a nerve terminal, Ca2+ flows into the terminal via voltage-gated Ca2+ channels and triggers neurotransmitter release, and alterations in this release mechanism have been observed in cadmium neurotoxicity. The serotoninergic system appears to be particularly sensitive to cadmium exposure during the lactation period. In growing rats, cadmium produced a decrease in 5-hydroxytryptamine (5-HT) and 5-hydroxyindole-3-acetic acid (5-HIAA) in the cerebellum and corpus striatum, but an increase in the hippocampus. The same cadmium administration to adult rats, however, produced an increase in 5-HT and 5-HIAA levels in all brain regions, suggesting that differences in 5-HT turnover may exist among brain regions. The decrease in 5-HIAA concentration indicates an effect of cadmium on 5-HT uptake, as a cadmium-induced block of 5-HT uptake may prevent 5-HT from being metabolized by intraneuronal monoamine oxidase. Likewise, dopamine release is affected by cadmium exposure, and dopamine content can be increased or decreased in different brain areas. For example, increased dopamine release from striatal slices and reduced tyrosine hydroxylase activity are observed in developing rats exposed to cadmium; these changes are accompanied by a high dopamine turnover rate after cadmium exposure, probably as a result of the increased dopamine release from the presynapse. The levels of excitatory neurotransmitters (glutamate and aspartate) are decreased, whereas those of the inhibitory neurotransmitters (glycine and gamma-aminobutyric acid, GABA) are increased, in the amygdala of cadmium-exposed animals. In pubertal animals, cadmium exposure reduced GABA content in the posterior hypothalamus, striatum, and prefrontal cortex; similar results have been observed for taurine, and both GABA and taurine changed in the prefrontal cortex of adult rats. These results suggest that cadmium affects GABA and taurine content in several brain regions as a function of age and affects the excitation/inhibition balance of synaptic neurotransmission. Similar results have been observed for glutamate, aspartate, and glutamine in pubertal and adult animals exposed to cadmium: the concentrations of glutamate and aspartate decrease in both, and pubertal cadmium exposure generally inhibits glutamate metabolism and, to a lesser extent, aspartate metabolism. Cadmium exposure is therefore able to differentially modify glutamate, aspartate, and glutamine concentrations in the brain as a function of age, and this effect also varies by brain area.

Role of Metallothionein in Neurotoxicity

MTs are a group of low-molecular weight (~6000 Da), cysteine-rich (30%) intracellular proteins with high affinity for both essential (zinc and copper) and nonessential (cadmium and mercury) metals. The amino acid sequences of mammalian MTs correspond to a single-chain protein containing approximately 61 amino acids of remarkably similar composition, of which 20 are cysteine residues. MT can incorporate up to 7 divalent or 12 monovalent metal ions, distributed in two domains, the α- and β-clusters. The binding affinity varies between metals, with copper bound most stably, followed by cadmium, zinc, and mercury. Based on their structural similarities, MTs have been divided into four classes: MT-I, MT-II, MT-III, and MT-IV. MT-I and MT-II are present and expressed in almost all tissues, MT-III is present only in the brain, and MT-IV is specific for squamous epithelium and is expressed only in keratinocytes. Several factors can induce the synthesis of MT in vivo, including the presence of free zinc and cadmium, activated oxygen species, glucocorticoids, and cytokines. Zn has been identified as a central component of more than 300 enzymes involved in cell metabolism, and it plays an essential structural function in an entire class of transcription factors. The regulation of metals such as zinc and copper has been regarded as the primary function of MTs. Other functions ascribed to MTs include regulation of the biosynthesis and activity of zinc metalloproteins, the sequestration and distribution of metal ions, compartmentalization of zinc, protection against ionizing radiation, and detoxification of heavy metals. Moreover, MT may be induced in response to oxidative stress and may protect tissues from oxidative damage. Likewise, all isoforms of zinc-bound MTs are antioxidant agents because the zinc–sulfur cluster is sensitive to changes in the cellular redox state: oxidizing conditions induce transfer of zinc from its binding sites in MTs to sites of lower affinity in other proteins, and the redox properties of Zn-MTs are therefore crucial for their protective role against the cytotoxic effects of ROS. The diffusible second messenger nitric oxide (NO) serves as an intracellular messenger in the CNS, facilitating neurotransmitter release and effectively coupling chemical pathways. NO in the brain is formed by nitric oxide synthase (NOS). In the presence of superoxide anion, NO can be converted into reactive nitrogen species: it is transformed into peroxynitrite (ONOO−), which then forms peroxynitrous acid (ONOOH). Both peroxynitrite and peroxynitrous acid are potent oxidizers. As thiols are known targets for both ONOO− and ONOOH, it is plausible that MT isoforms may intercept both of these oxidizers; the formation of NO in brain is most pronounced in certain neurons and astrocytes. MT-I and MT-II are expressed throughout the brain and spinal cord in protoplasmic and fibrous astrocytes. These proteins are also found in ependymal cells, epithelial cells of the choroid plexus, meningeal cells of the pia mater, and endothelial cells of blood vessels; the microglia and oligodendrocytes, however, are essentially devoid of MT-I and MT-II. In contrast, MT-III is predominantly expressed in neurons, specifically in those that sequester zinc in synaptic vesicles. Owing to the relative abundance of Zn in brain, it is speculated that MTs regulate intracellular Zn in the brain.
Thus, changes in MT expression would be expected when the physiological processes involved in Zn regulation are modified by specific exogenous stimuli, such as hormones, cytokines, and metals, implying that MTs might act in cellular proliferation and differentiation as well as in other cellular mechanisms. The expression of MT-I and MT-II is regulated at the transcriptional level. The promoter regions of these genes contain several metal regulatory elements (MREs), plus glucocorticoid-responsive elements and enhancers that respond to housekeeping transcription factors; the promoter regions are essential for basal expression and for induction by metals. MT-I and MT-II are inducible in astrocytes, whereas MT-III is relatively unresponsive to induction; MT-III thus appears to be transcriptionally regulated in a manner different from both MT-I and MT-II. The brain is less responsive than the liver to the induction of MT-I and MT-II by chemicals; in fact, there is a remarkable difference in the expression of MT genes in the liver and brain of mice and rats after induction with cadmium. The expression of the MT-I and MT-III isoforms at different stages of development, together with the cadmium concentration, determines the relationship between MT expression and cadmium distribution. In the brain, the best inducer of MT is dexamethasone. Several studies have shown that histopathological lesions, LPO, and MT levels are increased in developing rats exposed to cadmium; these increases can be partially prevented by concomitant dexamethasone administration, suggesting a possible neuroprotective effect of dexamethasone-induced MT against cadmium toxicity.

Cell Death Mechanisms in Neurotoxicity of Cadmium

Apoptosis is genetically programmed and is biochemically and morphologically distinct from other forms of cell death. This type of cell death can be triggered by various stimuli, including cytokines, hormones, viruses, and toxic insults. It is an active process associated with cellular changes including shrinkage, loss of contact with neighboring cells, formation of cytoplasmic vacuoles, and nuclear condensation. Release of cytochrome c from mitochondria, caspase activation, phosphatidylserine externalization, and formation of apoptotic bodies are also initiated. The caspases, a family of cysteine–aspartate proteases, are divided into two groups: initiator caspases (such as caspase-8, caspase-9, and caspase-12), whose main function is to activate downstream caspases, and executioner caspases (such as caspase-3, caspase-6, and caspase-7), which are responsible for dismantling cellular proteins. Most of the morphological changes observed in apoptosis are caused by this set of caspases. Several reports have shown, in both in vivo and in vitro models, that cadmium induces apoptosis in many tissues and cells, although the precise mechanism through which cadmium induces apoptosis is not yet clear. Three apoptotic pathways have been described: (1) the mitochondria-dependent pathway, (2) the death receptor-dependent pathway, and (3) the endoplasmic reticulum pathway. Each pathway has its own initiator: caspase-8 in the death receptor-dependent pathway, caspase-9 in the mitochondrial pathway, and caspase-12 in the endoplasmic reticulum pathway. The two main apoptotic pathways, the mitochondrial and death receptor pathways, are thus activated by caspase-9 and caspase-8, respectively, both of which are found in the cytoplasm, while caspase-12 is specifically involved in apoptosis resulting from stress in the endoplasmic reticulum. Cadmium likely induces apoptosis through the mitochondrial pathway. In cortical neurons exposed to cadmium, the decrease in intracellular ATP levels at high cadmium concentrations was accompanied by ATP release, indicating rupture of mitochondrial and cytosolic membranes. The breakdown of the mitochondrial membrane potential opens the transition pores and stimulates the release of cytochrome c; its release from mitochondria into the cytosol and its association with the adapter molecule Apaf-1, recruiting pro-caspase-9, is the signal that starts the apoptotic process. Active caspase-9 in turn cleaves and activates effector caspases (such as pro-caspase-3 and pro-caspase-7) and triggers the proteolytic cascade. Cadmium is a potent inducer of apoptosis in in vitro models. The activation by cadmium of caspase-9 and its downstream caspases before the appearance of DNA fragmentation reflects the involvement of cadmium in cytochrome c release, suggesting that cadmium-induced apoptosis is highly dependent on the mitochondrial pathway. Studies also show that cadmium can generate apoptosis or necrosis depending on the concentration and duration of exposure. Several authors have reported that cadmium can induce an increase in brain LPO and intracellular calcium (two well-known triggers of neuronal apoptosis), which produce loss of mitochondrial membrane potential and formation of free oxygen species.
This suggests that alteration of intracellular Ca2+ homeostasis and an increase in reactive oxygen species may be involved in the induction of apoptosis, associated with changes in nuclear DNA condensation and fragmentation. Other studies show that cadmium is a potent inducer of apoptosis in C6 glioma cells, a model of astroglial cells, supporting the hypothesis that induction of apoptosis is an important mechanism of toxicity for this heavy metal ion. Moreover, cadmium induces systemic DNA damage, interferes with DNA repair processes, and enhances genotoxic damage. DNA is continuously damaged by endogenous and exogenous agents, and attack by ROS is considered one of the major sources of endogenous DNA damage. Cadmium affects the genome mainly via an increase in oxidative stress. In female rats exposed to low doses of cadmium, an increase in single-strand DNA breaks has been reported in brain nuclear DNA. Owing to its high metabolic activity and rapid oxygen consumption, the brain is susceptible to free radical attack and may therefore be a critical target for cadmium-induced inhibition of the repair of oxidative DNA damage. Several types of DNA repair systems are employed to manage this continuous damage, including base excision repair, nucleotide excision repair, and mismatch repair. Mismatch repair is involved in the correction of replication mistakes and also in the repair of other DNA lesions induced by oxidative stress. Cadmium inhibits mismatch repair and may also alter the binding and enzymatic activity of MutSα, which plays an important role in the repair of particular DNA structures such as hairpins. Likewise, the cadmium ion directly inhibits the nucleotide excision repair system; this inhibition could be caused by competition with zinc ions (essential for DNA polymerases) in the polymerization or ligation step of excision repair. Furthermore, in vitro studies of brain cell cultures from Pleurodeles larvae (an amphibian) exposed to cadmium have shown an increase in DNA strand breaks, an indirect result of diminished repair capacity.

Cadmium Neuropathology

Neurons have a characteristic morphology, with cell processes such as axons extending over long distances from the cell body to their target sites. Cadmium is known to damage a number of tissues, including the nervous system. Early experimental studies performed on newborn animals exposed to high concentrations of cadmium showed extensive hemorrhages in the cerebral and cerebellar cortices. This is in agreement with recent studies that have reported damage in the brain and cerebellar cortices (pyknosis, interstitial edema, and alteration of Purkinje cells) in rats perinatally exposed to cadmium.


CNS damage was also demonstrated in newborn rats exposed to low doses of cadmium: the brain cortex, cerebellum, caudate nuclei, and putamen showed extensive necrosis and hemorrhage, and the endothelium of some blood vessels was also slightly damaged. The vascular endothelium is sensitive to cadmium-induced damage, which leads to edema, hemorrhage, thrombosis, and necrosis. It seems that administration of cadmium initially affects the integrity and permeability of the vascular endothelium, followed by necrotic changes in nerve cells. Moreover, diverse studies demonstrate that cadmium is more toxic in newborn than in adult rats, mainly owing to differences in BBB maturation and the resulting susceptibility. Few histopathological reports document neurological effects of human exposure to cadmium. For example, a case study of a 2-year-old boy who died a sudden accidental death reported both high cadmium concentrations and marked cerebral swelling with herniation; there was histopathological evidence of marked cerebral edema with perivascular protein leakage, indicating BBB disruption. Such cadmium-generated neuropathological changes can lead to neurological disturbances. Other related studies reported ultrastructural alterations, namely swollen mitochondria with disrupted cristae, in the optic nerve of rats exposed to cadmium (Fig. 1).

Epidemiological Findings

Exposure levels of 30–50 µg cadmium per day have been estimated for adults, and those levels have been linked to increased risks of bone fracture, cancer, kidney dysfunction, and hypertension. Epidemiological studies from several regions of the world have concluded that the toxicity of cadmium increases with age and duration of exposure and differs between the sexes; only a few epidemiological studies have explored the neurotoxicity of cadmium in humans. The first epidemiological study of the neurobehavioral effects of cadmium exposure was published in 1989; it reported that workers exposed to cadmium for 15 years, as a result of brazing operations in the manufacture of refrigerator coils, showed neurobehavioral effects involving attention, psychomotor speed, and memory disorders.

Fig. 1  A schematic representation of systems and cellular processes that are affected by cadmium in the brain. Cd enters cells through Ca2+-mediated channels and modifies the activity of the antioxidant defense systems glutathione peroxidase (GPx), catalase (CAT) and superoxide dismutase (SOD), producing reactive oxygen species (ROS) and increasing lipid peroxidation (LPO), with disruption of cellular and mitochondrial membranes. Increased LPO and Ca2+, together with decreased intracellular ATP levels, indicate mitochondrial membrane damage; the collapse of the mitochondrial membrane potential opens the transition pores, releasing cytochrome c (Cyt. C) and activating the caspase cascade. Cd alters gene expression (GPx, CAT, SOD and MT) through Nrf2 and inhibits DNA repair processes. Moreover, metallothionein (MT) is activated and acts mainly as a scavenger of free radicals and of Cd.


Neurological symptoms such as fatigue, mental irritability, headache, muscle weakness, syncope, insomnia, and anosmia have been reported in workers acutely or chronically exposed to cadmium. More recently, human epidemiological studies of chronic occupational exposure to cadmium have shown diminished attention and memory, slowing of psychomotor functions, reduced visuomotor functioning, and an increased risk of peripheral neuropathy. Likewise, testing of olfactory performance in workers exposed to airborne cadmium fumes revealed a decline in olfactory function after chronic exposure to moderate levels of this metal.

Final Commentary

Cadmium is a widely used heavy metal that can induce several neuronal dysfunctions in human beings. We have summarized the findings pertaining to cadmium-induced neuronal damage and the possible mechanisms involved. Oxidative stress and DNA damage are the effects most frequently associated with cadmium exposure; interference with calcium and zinc metabolism and dysfunction of protein expression systems, among other effects, induce a variety of cellular problems that can contribute to injury of macromolecules such as lipids and proteins, leading to cell death (Fig. 1). The mechanism of MT-mediated protection against cadmium is related to its ability to sequester cadmium. More studies are needed in order to understand cadmium-induced neurotoxicity in both animals and humans.

See also: Cadmium Exposure in the Environment: Dietary Exposure, Bioavailability and Renal Effects; Cadmium and the Welfare of Animals; Cadmium Exposure in the Environment: Renal Effects and the Benchmark Dose.

Further Reading

Agency for Toxic Substances and Disease Registry (ATSDR), 2003. Toxicological profile for cadmium. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control, Atlanta, GA.
Bertin, G., Averbeck, D., 2006. Cadmium: Cellular effects, modifications of biomolecules, modulation of DNA repair and genotoxic consequences (a review). Biochimie 88, 1549–1559.
Correale, J., Villa, A., 2009. Cellular elements of the blood–brain barrier. Neurochemical Research 34, 2067–2077.
Esquifino, A.I., Seara, R., Fernández-Rey, E., Lafuente, A., 2001. Alternate cadmium exposure differentially affects the content of gamma-aminobutyric acid (GABA) and taurine within the hypothalamus, median eminence, striatum and prefrontal cortex of male rats. Archives of Toxicology 74, 127–133.
Hidalgo, J., Aschner, M., Zatta, P., Vasák, M., 2001. Roles of the metallothionein family of proteins in the central nervous system. Brain Research Bulletin 55, 133–145.
Lafuente, A., Fernández-Rey, E., Seara, R., Pérez-Lorenzo, M., Esquifino, A.I., 2001. Alternate cadmium exposure differentially affects amino acid metabolism within the hypothalamus, median eminence, striatum and prefrontal cortex of male rats. Neurochemistry International 39, 187–192.
López, E., Arce, C., Oset-Gasque, M.J., Cañadas, S., González, M.P., 2006. Cadmium induces reactive oxygen species generation and lipid peroxidation in cortical neurons in culture. Free Radical Biology and Medicine 40, 940–951.
Mendez-Armenta, M., Barroso-Moguel, R., Villeda-Hernández, J., Nava-Ruíz, C., Rios, C., 2001. Histopathological alterations in the brain regions of rats after perinatal combined treatment with cadmium and dexamethasone. Toxicology 161, 189–199.
Montes, S., Juárez-Rebollar, D., Nava-Ruíz, C., Sánchez-García, A., Heras-Romero, Y., Rios, C., Méndez-Armenta, M., 2015. Immunohistochemical study of Nrf2-antioxidant response element as indicator of oxidative stress induced by cadmium in developing rats. Oxidative Medicine and Cellular Longevity, 570650. https://doi.org/10.1155/2015/570650.
Stankovic, R.K., Chung, R.S., Penkowa, M., 2007. Metallothioneins I and II: Neuroprotective significance during CNS pathology. The International Journal of Biochemistry & Cell Biology 39, 484–489.
Thévenod, F., Lee, W.K., 2013. Cadmium and cellular signaling cascades: Interactions between cell death and survival pathways. Archives of Toxicology 87, 1743–1786.
Wätjen, W., Cox, M., Biagioli, M., 2002. Cadmium-induced apoptosis in C6 glioma cells: Mediation by caspase 9 activation. Biometals 15, 15–25.

Cancer and the Environment: Mechanisms of Environmental Carcinogenesis☆

P Irigaray, Association for Research and Treatments Against Cancer (ARTAC), Paris, France; and European Cancer and Environment Research Institute (ECERI), Brussels, Belgium
D Belpomme, Association for Research and Treatments Against Cancer (ARTAC), Paris, France; European Cancer and Environment Research Institute (ECERI), Brussels, Belgium; and Paris V University Hospital, Paris, France
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
AhR Aryl hydrocarbon receptor
BBP Benzyl butyl phthalate
CMR Carcinogenic, mutagenic and/or reprotoxic
CYP Cytochrome P450
DEHP Diethylhexyl phthalate
EBV Epstein–Barr virus
EMF Electromagnetic fields
HBV Hepatitis B virus
HCV Hepatitis C virus
HHMMTV Human homolog of mouse mammary tumor virus
HHV Human herpes virus
HIV Human immunodeficiency virus
HPV Human papilloma viruses
HTLV-1 Human T-cell lymphotropic virus type 1
IARC International Agency for Research on Cancer
NAT N-acetyltransferase
NOC N-nitroso compounds
PAH Polycyclic aromatic hydrocarbons
PCB Polychlorinated biphenyls
ROS Reactive oxygen species
UV Ultraviolet

Introduction

The increasing incidence of a variety of cancers since the mid-20th century confronts scientists with the question of their origin. We have previously shown that demographic expansion and aging, as well as progress in cancer detection using new diagnostic methods and screening tests, cannot fully account for the observed growing incidence. Moreover, we have listed many environmental factors rated as certainly or potentially carcinogenic by the International Agency for Research on Cancer (IARC) that, in addition to lifestyle-related factors, may in fact be involved in human carcinogenesis. We have therefore put forth the hypothesis that the post-WWII change in our environment may have contributed to the increased number of cancers detected, and consequently that environmental carcinogens (that is, viruses and other microorganisms, radiation and xenochemicals) may play a more important role in carcinogenesis than was expected. In this article we describe the different mechanisms whereby environmental carcinogens may induce and generate cancers. We first review the present theories of cancer and analyze the general mechanisms of carcinogenesis. We then analyze the specific and common mechanisms whereby viruses, radiation and environmental chemicals can contribute to cancer.


☆ Change History: March 2019. Irigaray P has updated the text throughout the article. This is an update of P. Irigaray, D. Belpomme, Mechanisms of Environmental Carcinogenesis, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 655–665.



What Are Environmental Carcinogens?

We define environmental carcinogens as physical, chemical and biological exogenous agents that cause cancer after having penetrated the organism through one of several possible routes: respiratory (air pollutants), digestive (food contaminants and additives), cutaneous (radiation and cosmetics), sexual (viruses), and other (including fetal contamination by maternal blood during pregnancy). The risk fraction attributable to environmental carcinogens is still a matter of controversy. There are currently two opposite interpretations of the growing incidence of cancer. The classical interpretation considers that environmental carcinogens can make only a minor contribution to overall changes in cancer incidence, and therefore that increases in cancer detection and life expectancy, as well as lifestyle-related influences, can explain the current growing incidence of cancer. Conversely, our challenging interpretation is that involuntary exposure to multiple and diverse environmental carcinogens accounts for a significant portion of the increase. This new theory results mainly from the observation that our environment has changed over the same time scale as the recent rise in cancer incidence, and that this change has caused the accumulation of many new carcinogenic and cocarcinogenic agents in the environment.

Distinction Between Exogenous and Endogenous Carcinogens

As they come from the environment, exogenous carcinogens must be distinguished from endogenous carcinogens, which by definition result from the normal metabolism of individuals who are not exposed to a polluted environment (i.e. who inhale nonpolluted air, ingest noncontaminated food, are not subjected to radiation and are not contaminated by pathological microorganisms). Endogenous mutagens mainly include free oxyradicals and particularly aldehydes and ketones that arise naturally from respiration, that may be contained in food, and/or that are molecular intermediates of the metabolism of food or of the bacteria of the endogenous microflora. A key point, however, is that oxidative DNA damage caused by these natural, potentially carcinogenic endogenous molecules can commonly be repaired, whereas DNA alterations resulting from exogenous carcinogens are generally not correctly repaired by the different repair systems and so are mutagenic and carcinogenic. The distinction between exogenous and endogenous carcinogens may therefore be critical for determining the cause of cancers, since endogenous carcinogens might be associated with a limited number of sporadic cancers occurring spontaneously, while exogenous carcinogens, including oncogenic viruses, radiation and xenochemicals, may cause numerous acquired sporadic cancers. We have thus modified Knudson's classification and distinguish three main categories of cancer according to their causal origin: hereditary cancers resulting from highly penetrant inherited germinal mutations, which represent no more than 1% of overall cancer cases (Group 1); spontaneously occurring sporadic cancers resulting from a possible mutagenic effect of naturally occurring endogenous carcinogens or from spontaneous mutations (Group 2) (see further); and sporadic cancers caused by exogenous carcinogens, which include tobacco smoking-related cancers and environmental cancers, and which may represent the majority of cancer cases overall (Group 3) (Table 1).

Distinction Between Lifestyle-Related Risk Factors and Cancer-Causing Agents

In order to better delineate the respective contributions of endogenous and exogenous factors in the process of carcinogenesis, lifestyle-related factors should be clearly distinguished from cancer-causing agents. Several lifestyle-related behaviors, including sun exposure (UV), having multiple sexual partners (risk of sexual transmission of papilloma and hepatitis B viruses) and tobacco smoking (chemical mutagens and promoters in smoke and tars), are examples of lifestyle-related risk factors involving genuine exogenous carcinogens. In order to clarify the role of the different factors involved in the process of carcinogenesis, we have proposed to distinguish lifestyle-related risk factors, as determined by epidemiological studies, from cancer-causing agents, as mainly determined by toxicological and biological studies. Indeed, lifestyle-related factors are not cancer-causing agents, but risk behaviors that may contribute to the direct or indirect action of cancer-causing agents (e.g. exogenous carcinogens in smoke and tars resulting from the combustion of tobacco, radiation and viruses) and therefore to cancer occurrence.

Table 1  Proposed revised Knudson's classification of cancer according to genetic and environmental factors

Group     | Etiological viewpoint         | Endogenous carcinogens | Exogenous carcinogens^a | Genetic factors      | Presumed attributable fraction
Group I   | Hereditary cancers            | ?                      | ?                       | Germinal mutations   | ~1%
Group II  | Spontaneous cancer occurrence | +++                    | 0                       | ?                    | ~5%–10%
Group III | Acquired carcinogenesis       | ?                      | +++                     | Genetic polymorphism | ~90%–94%

?, not determined but certainly weak; +++, important causal factor.
^a Includes tobacco smoking-associated carcinogens and environmental carcinogens.


General Mechanisms of Carcinogenesis

Spontaneous Carcinogenesis

Theoretically, in any dividing cell, spontaneous mutations are supposed to arise from miscopying of a damaged DNA template or from inaccuracy of DNA replication, in the absence of faithful pre- and postreplication DNA repair. However, these stochastic events (wear-associated errors occurring in DNA synthesis and replication) are normally very infrequent and commonly repaired, and thus occur at rates too low to account for the high incidence of cancers observed in humans and animals. In addition, as previously indicated, endogenous carcinogens may contribute to the spontaneous occurrence of some sporadic cancers. Indeed, on the basis of analysis of the old medical literature and of observations of animal breeding, it is assumed that most cancers do not occur spontaneously but are caused by acquired factors, among which exogenous carcinogens may act predominantly.

Distinction Between Genotoxic and Nongenotoxic Carcinogens

Cancer is generally defined as a multistep process involving the accumulation of mutations in specific genes, which leads an initial clone of transformed cells to progress irreversibly and expand in the organism. DNA mutations are indeed a critical rate-limiting step in carcinogenesis. Based on sequencing analysis of the human cancer genome, recent data identifying genetic alterations in cancer cells have considerably reinforced the somatic mutation theory of carcinogenesis. At a molecular level, mutations of the three major classes of candidate cancer genes (oncogenes, tumor suppressor genes and DNA mismatch repair genes) have been described. Yet it has been proposed that aneuploidy (the occurrence of DNA and/or chromosome rearrangements during mitosis) plays a more critical role in carcinogenesis than point mutations or small mutations, and that only "driver" mutations, that is, mutations that confer a clonal cell growth advantage, are determinant for cancer progression. Moreover, it has been recognized that not all DNA lesions necessarily induce mutations, and that a prerequisite for mutagenesis is that cells with DNA lesions survive and divide. Furthermore, silencing of gene expression due to temporary cell dysfunction or to epigenetic changes, such as aberrant hyper- or hypomethylation of DNA, may also indirectly cause mutations. Indeed, it has been proposed that carcinogenesis is more than mutagenesis; that is, in addition to mutagenesis, many epigenetic alterations contribute to carcinogenesis.

These considerations led us to attempt to characterize carcinogens, particularly environmental carcinogens, according to their genotoxic or nongenotoxic potentials. A genotoxic carcinogen directly causes stable DNA damage that cannot be faithfully repaired and therefore results in mutation after cell division, whereas a nongenotoxic carcinogen does not interact with DNA but can affect gene expression and cell functions and/or modify the normal phenotype through epigenetic alterations. Nongenotoxic carcinogens are frequently characterized by pleiotropic effects. They can promote cell division and cell survival and/or contribute to cell transformation and/or tumor progression. Moreover, some nongenotoxic carcinogens may in fact be indirectly mutagenic, by inducing secondary mutations. Cells are indeed extremely vulnerable to gene dysfunction during mitosis, and this is particularly the case during fetal development, when tissue alteration and disorganization can occur following exposure to xenochemicals. Consequently, any agent that acts through epigenetic mechanisms, albeit without interacting with DNA, may indirectly induce mutations and particularly aneuploidy. On the other hand, genotoxic carcinogens may also have cancer-promoting properties, so that repeated exposure to such agents may be an effective way to induce cancers in experimental animal models. We conclude that the distinction between genotoxic and nongenotoxic carcinogens is questionable and generally insufficient to account for a precise definition of the mechanisms of action of carcinogens.

Distinction Between Carcinogens and Cocarcinogens; and Among Carcinogens, Between Tumor Initiators, Promoters and Progressors

By definition, carcinogens are not cocarcinogens. We define a carcinogen as a genotoxic or nongenotoxic cancer-causing agent, whereas a cocarcinogen is not carcinogenic itself but can activate a carcinogen or enhance its carcinogenic effects. Accordingly, oncogenic viruses, radiation and xenochemicals such as benzo[a]pyrene, aromatic amines (AAs) and heterocyclic amines (HAAs), N-nitroso compounds (NOCs) and dioxins such as the prototypical 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD) are environmental carcinogens; while exogenous agents that deplete the organism of endogenous detoxifying molecules such as glutathione (GSH), or that activate environmental pro-carcinogens into carcinogens through the induction of cytochrome P450 (CYP) enzymes, are cocarcinogens. However, carcinogenesis is an extremely complex multifactorial, multigenic and multistage process. On the basis of experimental data, this process has been modeled into three sequential and successive phases (initiation, promotion and progression), during which promotion is a classically reversible rate-limiting phase, because it determines the latent period of premalignant tumor formation. Accordingly, we define tumor initiators as carcinogens capable of inducing a first irreversible driver mutation in a single stem cell or progenitor cell, through direct or indirect mutagenesis, so that an initial clone of initiated cells can emerge (see above); and tumor promoters as nongenotoxic agents capable of causing clonal expansion of initiated cells, that is, of inducing a reversible proliferation of mutated cells and preventing their apoptotic loss, so that the possibility of additional genetic and epigenetic changes is preserved.


Lastly, we define tumor progressors as carcinogens that irreversibly contribute to the acquisition of the complete set of phenotype hallmarks of transformed cells, i.e. the capacity of these cells to invade normal tissues, to induce neoangiogenesis, to organize themselves as a tumor in association with host-related stroma cells and, finally, to progress and disseminate in the organism. Consequently, mutagens, which comprise initiators and progressors, can theoretically be clearly distinguished from promoters, while promoters may be difficult to distinguish from cocarcinogens. Moreover, owing to their pleiotropic cellular effects and multiple mechanisms of disturbance of tissue homeostasis, tumor promoters may also be secondarily genotoxic and thus difficult to distinguish from mutagens. A typical example of an exogenous chemical promoter is the phorbol ester phorbol 12-myristate 13-acetate (TPA), which has been characterized by numerous phenotypic biological properties. Also, environmental promoters such as endocrine disruptors or immunosuppressors may be associated with pleiotropic effects. Many endogenous hormones or growth factors have also been shown to be tumor promoters, so it clearly appears that tumor promoters may in fact be of endogenous or exogenous origin. Because mutations cannot occur in nondividing cells, endogenous and/or exogenous tumor promoters are absolutely necessary as long as the clone of premalignant cells has not become promoter-independent. Therefore, during initiation and promotion, mutagens and promoters cooperate intimately, so that a critical number of driver mutations can be reached, leading the premalignant clone of transformed cells to become promoter-independent and fully malignant. Finally, while some exogenous carcinogens have the capacity to induce and generate all three phases of the carcinogenesis process (so-called complete carcinogens), many others are "partial" carcinogens because they need to act together to generate the complete process. An example is cigarette smoke and tars, which contain a mixture of different mutagenic and tumor-promoting cancer-causing agents equivalent to a complete carcinogen, so that tobacco smoking is a lifestyle-related risk factor that fully contributes to inducing and generating cancers. A similar observation may apply to the numerous exogenous chemicals present in the environment, where they exist as mixtures of all types of carcinogens and cocarcinogens, so that they may interact with each other (cocktail effects). Table 2 summarizes the general mechanisms of environmental factors that are involved in carcinogenesis according to their mutagenic, promoting and/or cocarcinogenic effects.

Specific Mechanisms of Environmental Carcinogens

Viruses and other microorganisms, radiation and many xenochemicals can cause cancer through diverse and specific mechanisms.

Viruses and Other Microorganisms

Viruses can induce and generate cancer through two distinct mechanisms: directly, by inducing mutations, and indirectly, by inducing inflammation and/or immunosuppression. Oncogenic DNA viruses can induce mutations by inserting their own genomic DNA into the cell DNA, while oncogenic RNA viruses induce mutations by inserting a complementary DNA copy of their RNA genome into the target cell DNA, thanks to a reverse transcriptase (an RNA-dependent DNA polymerase). There are three groups of double-stranded DNA viruses, namely the human papilloma viruses (HPV), mostly HPV type 16, the hepatitis B virus (HBV) and the Epstein–Barr virus (EBV); there are also two groups of diploid RNA viruses, the hepatitis C virus (HCV) and the human T-cell lymphotropic virus type 1 (HTLV-1), which have been shown to be associated with several human solid cancers and leukemias. There are several different mechanisms whereby oncogenic viruses can induce mutations. A direct mechanism is the insertion of one or several viral oncogenes into the cell DNA and/or, after insertion, the activation of cellular proto-oncogenes into oncogenes. Other mechanisms are possible: while HPV-16 has been shown to be mutagenic by inserting the viral oncogenes E6 and E7 into cell DNA, thus producing proteins that inhibit p53, HBV is thought to be mutagenic by producing reactive oxygen species (ROS). Likewise, while the retrovirus HTLV-1 has been shown to be directly mutagenic, HCV, like HBV, is thought to be indirectly mutagenic by producing ROS in infected cells.

We define oncogenic viruses as viruses that directly induce mutations. However, nononcogenic viruses may also play an indirect role in carcinogenesis, by inducing immunosuppression and/or inflammation. This has been clearly demonstrated for the two human immunodeficiency viruses (HIVs) which, albeit not mutagenic, have been classified as carcinogenic by IARC because they promote the mutagenic effect of oncogenic viruses through the induction of immunosuppression. Also, infectious but nonmutagenic viruses might be involved in the carcinogenic process of some presumably virus-induced neoplasia, such as the common form of childhood acute leukemia. In addition, some microflora bacteria of the gastrointestinal tract, such as Helicobacter pylori for gastric cancer, and parasites such as Opisthorchis viverrini for bile duct cancer and Schistosoma haematobium for bladder cancer, have been shown to be cofactors causally implicated in carcinogenesis, through inflammation induction and free radical production.

Finally, an important finding is that several types of microorganisms can cooperate intimately in carcinogenesis. For example, it has been shown that in areas of endemic EBV infection, environmental tumor promoters such as extracts of a commonly used plant, Euphorbia tirucalli, and mosquito-borne infections such as malaria and arbovirus infection are cofactors that cooperate in the genesis of Burkitt lymphoma. Likewise, it has been shown that HBV infection and aflatoxin exposure cooperate in the genesis of hepatocellular carcinoma. Aflatoxins are produced by the contaminating molds Aspergillus flavus and Aspergillus parasiticus, and transgenic mouse models carrying HBV targeted to the liver have shown that the frequency of hepatocellular carcinoma increases with aflatoxin exposure.
Moreover, it was found that the relative risk (RR) of developing hepatocellular carcinoma was 3.4 for people exposed to aflatoxins, but 59.4 for people who, in addition to aflatoxin exposure, tested positive for hepatitis B.
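Using only the two relative risks quoted above, the scale of this cooperation can be made explicit (a back-of-the-envelope reading added here for illustration, not a calculation from the original study):

\[
\frac{\mathrm{RR}_{\text{aflatoxin}+\text{HBV}}}{\mathrm{RR}_{\text{aflatoxin alone}}} \;=\; \frac{59.4}{3.4} \;\approx\; 17.5
\]

That is, concurrent HBV infection multiplies the aflatoxin-associated risk roughly 17-fold, consistent with the cooperation between the two agents described above.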


Table 2  Proposed classification of exogenous environmental factors according to their carcinogenic and cocarcinogenic properties

The table classifies each factor as mutagen (M), promoter (P) and/or cocarcinogen (C); (?) denotes an effect that is not proved but possible.

Microorganisms: EBV, HBV/HCV, HHMMTV, HHV-8, HIV, HTLV-1, HPV, Helicobacter pylori
Radiation: radioactivity, UV, EMF
Particles and xenochemicals: air fine particles^a, asbestos, arylamines^b, azoic dyes, bisphenol A, β-naphthylamine, benzene and derived molecules, DEHP and BBP, dioxins, formaldehyde and derivatives, hormonal residues, metals and metalloids, N-nitroso compounds^c, NO2, organochlorines, PAH^d, PCB, pesticides^e, vinyl chlorides (monomers)

^a Air carbonaceous particles, especially PM2.5, are vectors for chemicals, including PAH and organochlorines (pesticides).
^b Include aromatic amines (AAs) and heterocyclic aromatic amines (HAAs).
^c Nitrates, nitrites, nitrosamines, nitrosamides.
^d PAH of high molecular weight (five to seven rings) induce DNA adduction and so are mutagenic, while PAH of low molecular weight (two to four rings) are nongenotoxic promoters.
^e Usually act as endocrine disruptors or immunosuppressors (promoters), but some of them can also be mutagenic.

It is estimated that oncogenic viruses are involved in about 16% of human cancers worldwide, whereas the figure would be only about 5% in high-income countries with less exposure to pathogenic viruses. On the basis of our previous analysis, it can be assumed that virus-induced cancers in humans are probably more frequent in high-income countries than is usually recognized; in these countries, they might account for up to 10%–15% of total cancers.

Ionizing and Nonionizing Radiation

Radiation can cause cancer by inducing mutations, and can also contribute to carcinogenesis through promoting and/or cocarcinogenic effects, including the induction of immunosuppression. Radiation-induced cancers are stochastic late effects of ionizing and nonionizing radiation. They include some leukemias and lymphomas, thyroid cancers, skin cancers, some sarcomas, some lung and breast carcinomas, and some brain tumors. Studies of radiation-exposed populations were initially based on occupational exposure; this began to change with the study of the survivors of the atomic bombs exploded above Hiroshima and Nagasaki. Moreover, improved understanding of the molecular basis of radon-induced cancers has provided support for considering low-level radon exposure a cause of approximately 10% of lung cancers. Ionizing radiation induces point mutations, dimerization and major chromosomal changes involving DNA breakages and rearrangements.


Exposure to low linear energy transfer radiation increases the frequency of chromosome aberrations in proportion to the square of the radiation dose. However, radiation-induced cancers depend on many variables and, as for low-dose chemicals, low doses of ionizing radiation must be considered a significant risk for somatic heritable mutations.

Nonionizing radiation comprises ultraviolet (UV) rays and pulsed electromagnetic fields (EMF). Exposure to UV is a dose-dependent risk factor that can cause skin cancers and melanoma. UVB can directly cause mutations, while UVA can indirectly damage DNA through ROS production. Whether pulsed EMF of very low or extremely low frequency can induce cancers has been the object of many scientific debates. Despite differences in the design and setting of epidemiological studies, recent results appear sufficiently convincing to consider that children living near high-voltage power lines are at an increased relative risk of leukemia, and that daily prolonged use of mobile phones over a long-term period (10 years or more) is associated with an increased risk of brain tumors. Because direct mutagenesis depends on energy, and the energy level of such fields is not sufficient to cause direct breakage of DNA, we propose an alternative explanation: pulsed EMF may be indirectly mutagenic by inducing epigenetic changes. As discussed in a recent international consensus meeting (see the Bioinitiative report: www.bioinitiative.org), EMF-related health endpoints include several types of biological responses, such as genotoxicity, immune system deregulation and inflammation, and several types of related diseases, such as cancers, particularly childhood leukemia, brain tumors and breast carcinoma. Because EMF-related biological responses are extremely complex, we have suggested that, according to the second law of thermodynamics, they may in fact depend on entropy (that is, loss of structural information) rather than on energy-related effects, consequently causing epigenetic dysregulation and structural tissue disorganization. Here too, a possible indirect molecular mechanism for epigenetic dysregulation, tissue disorganization and carcinogenesis could be the production of free radicals. This might explain why EMF-related cancers seem to require a longer period of exposure than ionizing radiation and why EMF-related cancers appear overall to be associated with a weaker relative risk.

Overall, it may be hypothesized that radiation-related cancers represent up to 10% of total cancer cases. There is, however, no clear assessment of this population attributable risk, which needs to be further clarified. If we assume, as we have previously shown, that about 25% of cancer cases overall are associated with tobacco smoking, 15% are virus-induced and 10% radiation-induced, then approximately 50% of the total number of cancer cases might be caused by chemicals, and the fraction of cancer attributable to the environment stricto sensu might be on the order of 75% of overall cancer cases.
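The arithmetic behind this closing estimate can be written out explicitly; the percentages are the authors' own rough attributions, not measured values:

\[
100\% - (\underbrace{25\%}_{\text{tobacco}} + \underbrace{15\%}_{\text{viruses}} + \underbrace{10\%}_{\text{radiation}}) \;\approx\; \underbrace{50\%}_{\text{chemicals}},
\qquad
15\% + 10\% + 50\% \;=\; 75\% \;\;\text{(environment stricto sensu)}.
\]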

Xenochemicals

The petrochemical age and other aspects of the industrial revolution, globalized by the second half of the 20th century, had consequences in domains such as energy, transport, agriculture, food and health; among other things, this caused the synthesis, production and introduction into the environment of millions of tons of tens of thousands of different man-made xenobiotic chemicals, thousands of them produced at high volumes. Such products can contaminate air, soil, water and food and persist as pollutants in the environment. Many pollutants are carcinogenic, mutagenic and/or reprotoxic (CMR) substances, which therefore act as mutagens, tumor promoters or both, or as cocarcinogens. To date, hundreds of pollutants are also known to modify multiple biological processes that affect epigenetic mechanisms, including DNA methylation, histone codes, and miRNA expression. They can play a major role in the genesis of many cancers and thus may account for their currently growing incidence. A wide variety of xenochemicals and chemical classes can cause cancer in animals and humans. Experimental animal models have reproduced every major type of human cancer, showing that exposure to specific chemical carcinogens can induce organ-specific tumors.

Lipophilicity as a basic property of many organic pollutants

Chemical-related DNA damage can occur directly from environmental exposure, or indirectly after metabolic activation of xenochemicals to DNA-reactive molecules. A basic condition for activation and DNA damage is that exogenous chemical carcinogens enter cells. All nonplant organisms use their cell membrane as a hydrophobic permeability barrier to control access to their internal milieu. Diffusion of polar (hydrophilic) pollutants across the cell membrane is mediated by transport proteins that specifically select substrates from the extracellular milieu, so polar compounds cannot enter cells unless they are recognized by specific transporters. Nonpolar (hydrophobic) pollutants, by contrast, can enter cells unless they are first metabolized into polar molecules by specific detoxification enzymes. Because the organism is frequently unable to fully metabolize man-made nonpolar pollutants, many of them, such as polycyclic aromatic hydrocarbons (PAHs), dioxins and polychlorinated biphenyls (PCBs), can enter cells owing to their lipophilic and liposoluble properties; consequently, they can bioaccumulate in the adipose tissue of multicellular organisms and may thus contaminate many trophic ecosystems, including the whole human food chain.
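Although the text does not use it explicitly, the standard quantitative index of the lipophilicity discussed here is the octanol–water partition coefficient (introduced for illustration, as an assumption of this sketch rather than a claim of the article):

\[
\log P_{\mathrm{ow}} \;=\; \log_{10}\!\left(\frac{[\text{solute}]_{\text{octanol}}}{[\text{solute}]_{\text{water}}}\right)
\]

Persistent lipophilic pollutants such as dioxins and PCBs typically have log P_ow values of roughly 5 or more, which is consistent with their partitioning into adipose tissue and their bioaccumulation along food chains.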

DNA adduction and mutagenicity

A major specific and basic property of exogenous chemical carcinogens is that they can form stable and irreversible "bulky adducts", that is, covalent bonds with macromolecules, many of which, DNA adducts in particular, cannot be correctly repaired by the cell repair systems; whereas, as previously indicated, the adducts (particularly DNA adducts) formed by endogenous chemical carcinogens can normally be easily repaired. An explanation could be that most exogenous carcinogens or their metabolites are "hard" electrophiles that may irreversibly adduct "hard" nucleophilic sites on DNA, whereas endogenous molecules such as unsaturated aldehydes and ketones are "soft" electrophiles that react reversibly with "soft" nucleophiles on the DNA.
Because there is a good correlation between the ability to form stable DNA adducts and the capacity to induce tumors in animals, DNA is considered the ultimate target for most chemical carcinogens. Mutagenic carcinogens may transfer simple alkyl or (complexed) aryl-alkyl groups to specific sites on DNA bases, or transfer arylamine residues to DNA. Metabolic activation to yield DNA-reactive alkylating and aryl-alkylating agents involves oxidation at carbon atoms, while activation to yield DNA-reactive aryl-aminating agents involves either oxidation or reduction at nitrogen atoms. Among the alkylating and aryl-alkylating agents are PAHs, NOCs and aliphatic epoxides, while among the aryl-aminating agents are AAs, HAAs and amino-azo dyes. DNA-reactive chemicals are usually mutagenic but, as aforementioned, carcinogenicity is more than mutagenicity: in a serial analysis of chemicals, 16% of tested carcinogens were not found to be mutagenic, while 66% of noncarcinogens were found to be mutagenic. Furthermore, because the interaction of genotoxic carcinogens with DNA has been thought not to be random, it has been hypothesized that mutagenic xenochemicals may induce specific and reproducible mutations. In fact, this "fingerprint" hypothesis has not been validated, because most mutagenic xenochemicals can actually form several types of mutations, depending on the conformation of DNA and on the type and location of the adducts.
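If these two percentages are read as the performance of mutagenicity assays in predicting carcinogenicity (an interpretation added here for clarity, assuming the figures refer to standard short-term tests), they imply reasonable sensitivity but poor specificity:

\[
\text{sensitivity} \;\approx\; 100\% - 16\% \;=\; 84\%,
\qquad
\text{specificity} \;\approx\; 100\% - 66\% \;=\; 34\%,
\]

which is one way of quantifying the statement that carcinogenicity is more than mutagenicity.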

Metabolic activation of environmental chemical carcinogens

A number of metabolic pathways activate or detoxify exogenous chemical carcinogens. These pathways are complex and interactive. Many enzymes involved in them are inducible, and their activity may thus be modified by additional environmental exposures, hormones and diet, adding further complexity to the process of chemical carcinogenesis. Normally the host is able to detoxify many environmental chemical pollutants thanks to phase I, II and III enzymes and other proteins such as GSH and ATP-binding cassette (ABC) efflux proteins, all of which are involved in the metabolism of xenobiotics. However, during this process, pro-carcinogens can be transformed into active carcinogens (more precisely, into promoters and/or mutagens) by several enzymes. A basic general mechanism of biotransformation has been put forward, pointing out that a parent molecule, generally a "soft" electrophile, may be converted into an oxidative metabolite that is a "hard" electrophile, so that the parent molecule and its oxidative metabolite exhibit distinct electrophilic capacities. This difference in electrophilicity accounts for the different nucleophilic targets of the parent molecule and of its metabolite, and can predict whether a given molecule can adduct DNA.

Among the phase I detoxifying enzymes that mainly contribute to the activation of chemical carcinogens is the CYP system, which acts in addition to phase II conjugating enzymes such as N-acetyltransferases (NATs) and sulfotransferases, and comprises important carcinogen-activating, ubiquitous intracellular enzymes. The CYP system, which comprises more than 40 isoforms, can activate high molecular weight PAHs (more than four rings; hPAHs), nitrosamines and other NOCs, and AAs and HAAs, while peroxidases (phase I enzymes) can activate AAs. In addition, a phenotype of slow or fast metabolic activation may lead to different cancers. A genotypically recessive slow acetylation phenotype involving NAT1 has been found to be associated with occupationally induced bladder cancer in dye workers exposed to AAs, whereas a genotypically dominant rapid acetylator phenotype involving both NAT1 and NAT2 has been found to be associated with colon cancer in people exposed to dietary HAAs. Moreover, in addition to these different substrates, inter-ethnic and inter-individual genetic polymorphisms are major contributing factors that determine the type and quantity of synthesized enzymes, and therefore the type of pathways and the intensity of the activation process.

For many carcinogens, the activation process takes place in the host. This is the case for benzo[a]pyrene and other hPAHs, for which the activation of CYP1A1 and/or CYP2D6 and/or CYP2E1 has been found to be associated with an increased risk of lung cancer. Endogenous bacteria may also contribute to activating carcinogens, for example AAs, for which the glucurono-conjugated, CYP1A2-induced hydroxylamine is deconjugated in the colon by a bacterial glucuronidase, so that the hydroxylamine can be acetylated by NAT2. This is also the case for NOCs. Nitrates are not per se carcinogenic; however, nitrates can be reduced to nitrites by the microflora bacteria of the digestive tract, and nitrites can then give rise, through nitrosation, to the highly mutagenic NOCs, alkylnitrosamines and alkylnitrosamides, which are further activated by CYP2E1, CYP2A6 and CYP2D6 to form stable DNA adducts in target tissues.
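The NOC activation chain described above can be summarized schematically (enzyme assignments as given in the text; the scheme itself is an illustrative condensation):

\[
\mathrm{NO_3^-}
\;\xrightarrow{\text{reduction by gut microflora}}\;
\mathrm{NO_2^-}
\;\xrightarrow{\text{nitrosation of amines/amides}}\;
\text{alkylnitrosamines/-amides}
\;\xrightarrow{\text{CYP2E1, CYP2A6, CYP2D6}}\;
\text{DNA adducts}
\]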

The central role of the AhR in activating and inducing CYP systems in environmental chemical carcinogenesis

A number of xenochemicals that cause cancer in laboratory animals are not demonstrably mutagenic. These xenochemicals are environmental pollutants such as dioxins (prototypically 2,3,7,8-TCDD), dioxin-like PCBs, organochlorine pesticides and low molecular weight PAHs (two, three or four rings; lPAHs) that can act as promoters and/or cocarcinogens. A basic finding is that many xenochemicals, such as PAHs as well as dioxins, dioxin-like PCBs and other organochlorines, act through a common ubiquitous molecular pathway involving the aryl hydrocarbon receptor (AhR). The AhR is a ligand-activated transcription factor known to mediate the pleiotropic effects of many environmental pollutants. Pollutants that bind to and activate the AhR cause the transcription of many genes involved in cell proliferation, cell differentiation and cell survival, and consequently induce a broad spectrum of systemic promoting effects. In addition, a major event following AhR activation is the activation of several CYP response genes, causing cocarcinogenic effects. Because the CYP system is a major determinant of the activation of many environmental chemical carcinogens, both the AhR-activating and the inducible CYP systems are central in environmental chemical carcinogenesis. In addition to inducing promoting and cocarcinogenic effects, some environmental chemical carcinogens may also induce mutagenic effects.

Metals and metalloids as environmental carcinogens

Several metals and metalloids have been rated as certain or probable carcinogens by IARC, albeit their mechanisms of action are not clear. Metals and metalloids could act as cocarcinogens by activating pro-carcinogens in the liver or by increasing the promoting effect of estrogens. They could also act by replacing the natural enzyme-complexed metal, thus inactivating the metabolic pathway and function of key enzymes.
Carcinogenic metals and metalloids, such as arsenic, cadmium and nickel, and some putative carcinogens such as cobalt and lead, can inhibit zinc finger-containing DNA repair proteins. Damage to zinc fingers in DNA repair proteins can be regarded as a novel mechanism in carcinogenesis. Moreover, some metals and metalloids may also be mutagenic through other mechanisms; indeed, many of them can interact with DNA. Metal compounds such as chromium(VI) are taken up by cells as chromate anions and are reduced intracellularly, via reactive intermediates, to stable Cr(III), which can directly interact with DNA. The resulting Cr(III) species may affect DNA by terminating replication or reducing replication fidelity, thus leading to mutations; Cr(III) can also form DNA–protein and DNA–amino acid crosslinks and glutathione crosslinks. Platinum compounds (e.g. cis-diamminedichloroplatinum) are well-known DNA strand breakers: they can form DNA crosslinks and DNA–protein crosslinks leading to mutations. There is evidence that nickel may act via an epigenetic mechanism involving heterochromatic regions of the genome. Finally, many studies have focused on metal-induced carcinogenicity, emphasizing the mutagenic role of metals such as iron, copper, chromium, nickel, cadmium and arsenic through the production of ROS. Metal-mediated formation of free radicals can indeed cause various modifications of DNA bases and other intracellular molecular changes that can contribute to carcinogenesis. A typical example is asbestos-induced cancers, which may be caused by the generation of free radicals due to the presence of oxidative iron.
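The chromium pathway sketched above can be written as a stepwise reduction cascade; the Cr(V) and Cr(IV) intermediates are the usual assumption in the literature, although the text names only the endpoints:

\[
\mathrm{CrO_4^{2-}}\ \text{(Cr(VI), taken up as chromate)}
\;\xrightarrow{e^-}\;
\mathrm{Cr(V)}
\;\xrightarrow{e^-}\;
\mathrm{Cr(IV)}
\;\xrightarrow{e^-}\;
\mathrm{Cr(III)}\ \text{(stable, DNA-reactive)}
\]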

Search for Basic Properties and Mechanisms Common to Environmental Carcinogens

A basic assumption about environmental carcinogens is that they act at repeated low doses, and that chronic exposure is more relevant than dose intensity. Furthermore, environmental carcinogens may act according to several common cellular and molecular mechanisms.

Low Dose Effects and Role of Exposure Duration

It is commonly claimed that environmental carcinogens, especially radiation and xenochemicals, are not released in sufficient quantities in the environment to reach levels that can cause cancer. There are many counter-arguments to this claim. Cancer is fundamentally a disease caused by chronic exposure to low-dose carcinogens. Just as for radiation, for which there is no safe dose threshold, mutagenic xenochemicals can induce mutations at extremely low levels. A similar consideration has been demonstrated for environmental organochlorine pollutants such as dioxins, and more generally may apply to tumor promoters and environmental endocrine-disrupting chemicals, for which the promoting effect depends on the sensitivity of receptors: endocrinology indeed provides examples of nonmonotonic, inverted-U- or J-shaped dose-response relationships, indicating more risk at low than at high dose levels. A similar consideration might apply to viruses, for which a small number of particles with infectious capacity may result in malignant transformation or immunosuppressive effects. Consequently, it appears that environmental carcinogens, be they mutagens or promoters, may in fact be carcinogenic at doses lower than the no-observed-effect levels of classical rodent tests. In addition, since many environmental chemical pollutants can bioaccumulate in the adipose tissue (see further), we have proposed that they can be released into the blood circulation at doses that do not correspond to those found in the environment, and thus may be carcinogenic at extremely low environmental doses. Moreover, according to the current concept of carcinogenesis, the duration of exposure (i.e. repeated low doses) rather than the dose intensity of carcinogens should be considered: the older a person is, the longer his or her period of exposure to carcinogens, and hence the greater the probability of cancer occurrence.
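The role of exposure duration can be illustrated with an elementary probability sketch (a deliberate simplification assuming a constant, small, independent annual probability p of an initiating event, which real carcinogenesis certainly violates):

\[
P(\text{at least one event in } t \text{ years}) \;=\; 1 - (1-p)^{t} \;\approx\; p\,t \quad \text{for small } p,
\]

so even a very small annual probability accumulates roughly linearly with years of exposure, which is one way of seeing why duration, rather than dose intensity alone, dominates.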

Chronic Inflammation and Immunosuppression as Cellular Mechanisms Common to Many Environmental Cancer Types

Environmental carcinogens can cause cancer through the induction of several pathological conditions such as inflammation and immunosuppression. Nononcogenic as well as oncogenic viruses, other microorganisms, radiation and xenochemicals can induce cancer through the induction of chronic inflammation, more precisely through the production of ROS and other free radicals such as nitric oxide and hypochlorite by phagocytes and neutrophils, and through a cascade activation of many pro-oxidant cytokines and growth factors. It is only recently that gastritis, ulcerative colitis, chronic pancreatitis and hepatitis have been recognized as risk factors that may contribute to the genesis of many cancers, perhaps about one-third of all cancers. Environmental carcinogens that elicit an inflammatory response are potent generators of humoral immunity, which cooperates with cellular immunity and effectively suppresses the antitumor immune response while simultaneously enhancing angiogenesis. Also, the generation of ROS can damage host cells and DNA and be associated with promoting effects. Consequently, environmental agents that induce inflammation could contribute to tumor initiation, tumor promotion and tumor progression.

In addition, viruses, radiation and environmental xenochemicals can be immunosuppressive through inducing direct damage or disorganization of the immune system, and may also cause such damage or disorganization indirectly by producing ROS. On the basis of animal experiments and observations of cancer incidence in immunosuppressed patients, immunosuppression was first recognized as a factor contributing to carcinogenesis. Immunosuppressors, especially T cell immunosuppressors, may lead to tumor promotion and progression and/or cocarcinogenic effects. Immunosuppression-induced tumor promotion and progression have been observed in experimental animal models and may combine suppressed cellular immunity with enhanced humoral immunity. We believe that the cocarcinogenic effect of immunosuppression, and thus of environmental immunosuppressors, be they of viral, physical or chemical origin, mainly concerns microorganisms, and particularly oncogenic viruses.


Immunosuppressive xenochemicals such as pesticides could therefore be an important etiological factor that may account for the recently observed growing incidence of virus-induced leukemia and lymphoma.

Overweight/obesity, type II diabetes and cancer

Owing to their lipophilicity and liposolubility, many environmental chemical carcinogens can bioaccumulate in the adipose tissue. We have shown that overweight/obesity can be experimentally induced by benzo[a]pyrene, and we propose that adipose tissue acts as a reservoir for lipophilic, liposoluble environmental carcinogens, so that chemical pollution may in fact generate both overweight/obesity and the accumulation of more chemical carcinogens. This concept is further supported by epidemiological studies showing that overweight/obesity is a risk factor associated with certain types of cancer, including breast, colorectal and endometrial cancers and lymphoma. Also, it has been recognized that there is an increased risk of cancer in patients with type II diabetes and that type II diabetes may be induced by environmental chemical pollutants. We conclude that the association of overweight/obesity, type II diabetes and cancer may constitute a specific clinical syndrome comprising environmental diseases, and therefore that this syndrome should prompt the search for environmental pollutants as a common cause.

Cancer induction following disorganization of fetal and neonatal tissues by environmental pollutants

Many clinical studies have revealed the extreme vulnerability of the fetus to environmental pollutants, that is, to viruses, radiation, hormones and xenochemicals. Indeed, the high rate of cell proliferation and differentiation, and the lower capacity for metabolic detoxification and DNA repair, render the cells of the fetus and developing child more susceptible to mutations and/or epigenetic alterations than adult cells. We distinguish three periods during which exposure to environmental carcinogens may take place: the preconceptual period (i.e. effects on parental germ cells), the prenatal period (i.e. exposure of the embryo or fetus via the mother's placenta) and the postnatal period, which corresponds to the direct exposure of children to environmental carcinogens. The enormous complexities of development, which are often especially active at specific times, create fetal and postnatal "windows of vulnerability" to disruption by exogenous factors, during which there may be, inter alia, an increased risk of subsequent development of cancer. This fetal window of vulnerability, together with the necessarily prolonged latent promotion phase, may explain why current epidemiological and experimental studies may find no correlation or causation when performed during adulthood.

Because there is no protective barrier between the developing fetus and its mother, transplacental exposure of the fetus to natural or synthetic estrogenic hormones and environmental endocrine disruptors can occur and may result in cancer. Many studies in animal models have confirmed the existence of a causal link between prenatal and/or neonatal exposure to environmental chemical carcinogens, such as estrogens and endocrine disruptors, and the subsequent development of cancers. From these experiments, an intriguing finding is that tumor promoters such as the ubiquitous estrogenic endocrine disruptor bisphenol A, administered during the pre- or postnatal periods, can result in the occurrence of prostate or mammary cancer in adulthood. Perturbations of fetal organogenesis and cell differentiation may therefore be a mechanism whereby exposure to environmental carcinogens could lead to epigenetic alterations and indirect mutagenesis, and thus to the subsequent development of cancers later in life.

It is well known that epigenetic mechanisms are essential for normal development in mammals, and global changes in the epigenetic landscape are a hallmark of cancer. Advances in the field of epigenetics have shown that human cancer cells harbor global epigenetic abnormalities in addition to numerous genetic alterations. The initiation and progression of cancer, traditionally seen as a genetic disease, is now realized to involve epigenetic abnormalities along with genetic alterations. Environmental chemicals may modify multiple biological processes that affect epigenetic mechanisms, including DNA methylation, histone codes, and miRNA expression. Indeed, in vitro, animal, and human investigations have identified several classes of environmental chemicals that modify epigenetic marks, including metals (cadmium, arsenic, nickel, chromium, methylmercury), peroxisome proliferators (trichloroethylene, dichloroacetic acid, trichloroacetic acid), air pollutants (particulate matter, black carbon, benzene), and endocrine-disrupting/reproductive toxicants (diethylstilbestrol, bisphenol A, persistent organic pollutants, dioxin).
These environmental chemicals may modify chromatin organization and condensation and gene expression, and may affect cancer risk. Increasingly, research finds that short-term exposures to some chemicals during pregnancy and early development can damage the reproductive system, alter body weight, and even increase the risk of cancer in the great-grandchildren of exposed animals. For example, Michael Skinner and his team were able to show in rats that various environmental pollutants caused adverse health effects for three generations after the exposed animal's offspring. Finally, perinatal exposure, and preconceptual paternal and even grandpaternal exposure, to environmental carcinogens might be a causal factor accounting for the recently observed growing incidence of some cancers, both in children and in adults.

Free radical production as a common molecular mechanism

As indicated above, a common and necessary rate-limiting molecular mechanism in carcinogenesis is DNA mutagenesis. Environmental carcinogens may specifically inhibit DNA repair (and hence increase the probability of mutations) and induce genomic destabilization in dividing cells through two mechanisms: direct inactivation of repair enzymes and/or inhibition of the expression of repair-associated genes. Two hypotheses have so far been proposed for the second mechanism. According to the first, a mutator phenotype has been postulated to account for the high rate of nonexpanded (random) mutations observed in cancer cells, and for the genomic instability that may result from this high mutation rate. Such a mutator phenotype appears to have been evidenced in early-stage sporadic colorectal cancers, where it may be related to a mismatch repair deficiency. However, to our knowledge, no specific gene alteration, and no proven causal effect of specific mutagens, has clearly established that a mutator phenotype is a necessary rate-limiting step in carcinogenesis. Hence this hypothesis remains to be validated, especially since, according to the classical concept of carcinogenesis, cells can acquire mutations and destabilize their genome as a result both of a stochastic process associated with the occurrence of clonal (driver) mutations and of Darwinian selection acting as a driving force during the clonal expansion of mutated cells. As previously mentioned, a cancer-associated molecular mechanism common to many environmental carcinogens might be the production of free radicals (the second hypothesis). Free radicals are molecules or molecular fragments containing one or more unpaired electrons, which confer a considerable degree of reactivity on these species. Oxidative stress is the cumulative production of ROS and reactive nitrogen species that leaves cells in an unbalanced redox state, with oxidants prevailing over reductants. Under normal redox conditions free radicals act as secondary messengers in intracellular signaling cascades and, at physiological concentrations, may consequently contribute to tumor promotion. However, as indicated in Fig. 1, at intermediate, higher concentrations, when the redox potential of the system (that is, the redox buffering capacity of cells) is saturated by an excess of ROS, ROS can damage macromolecules (some of which may themselves become free radicals) and thus induce oxidative DNA lesions and adducts; at the highest concentrations they induce cell death. Superoxide, a precursor of many free radicals and ROS, has been shown to regulate major epigenetic processes. Because mitochondrial DNA is more susceptible than nuclear DNA to the mutagenic effects of toxicants, and because toxicants can contribute to excessive intracellular free radical production, it is our hypothesis that during cancer progression epimutations and other epigenetic changes may be caused not only by toxicants themselves but also by ROS, in the general context of a toxicant-induced oxidative redox change. Adapting the free radical theory of development to cancer may help elucidate the distinct genotypic and phenotypic properties of cancer cells within the framework of a more general genomic/epigenomic theory of carcinogenesis.

Fig. 1 Schematic representation of a dose-dependent hypothetical relationship between oxygen free radicals and cancer genesis during oxidative stress, according to Dreher and Junod. Local doses of free radicals capable of cancer genesis are subtoxic: doses capable of inducing promotion are lower than doses involved in mutagenesis, and mutagenic doses (that is, potentially carcinogenic doses) are lower than doses inducing cell death.
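The dose-window idea sketched in Fig. 1 can be made concrete with a toy classifier. This is a minimal illustration only; the threshold values below are invented for the example and are not taken from Dreher and Junod.

```python
def ros_response(dose: float) -> str:
    """Classify a local free-radical dose into the qualitative response
    windows sketched in Fig. 1 (thresholds are purely illustrative)."""
    # Hypothetical dose thresholds, in arbitrary units.
    PROMOTION, MUTAGENESIS, CELL_DEATH = 1.0, 10.0, 100.0
    if dose < PROMOTION:
        return "physiological signaling"        # secondary-messenger role
    elif dose < MUTAGENESIS:
        return "tumor promotion"                # lowest pathological window
    elif dose < CELL_DEATH:
        return "mutagenesis (potentially carcinogenic)"
    return "cell death"                         # overtly toxic doses

for d in (0.5, 5, 50, 500):
    print(d, "->", ros_response(d))
```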

Conclusion

Because all types of environmental carcinogens can generate ROS, it has been proposed that free radicals might be central to carcinogenesis through direct and indirect mutagenesis and the induction of promotion and progression. As noted above, such presumed carcinogenic effects clearly depend intimately on the intracellular concentration of free radicals. However, in vitro and/or in vivo tests of intracellular oxidative stress are not sufficient to causally implicate free radicals in mutagenesis and carcinogenesis. It has been observed that many tumor promoters may have a strong inhibitory effect on cellular antioxidant defense mechanisms, but a lack of dietary antioxidants seems to be associated with no more than 5% of overall cancer cases, and administration of antioxidants has not yet been shown to have preventive anticancer effects in numerous studies. There are, in any case, other hallmarks of and routes to carcinogenesis; and, as in all threatening aspects of life, prevention is better than cure. The most effective anticancer strategy for public officials would be to reduce the production of environmental carcinogens. This is especially true in view of global climate change: intense rainfall is expected to increase, and with it the run-off into water bodies of pollutants such as pesticides and herbicides, metalloids and trace metals, nutrients, endocrine-disrupting chemicals (EDCs), pharmaceuticals, and dioxins.


Further Reading

Armitage, P., Doll, R., 1954. The age distribution of cancer and a multistage theory of carcinogenesis. British Journal of Cancer 8, 1–12.
Belpomme, D., Irigaray, P., 2011. Environment as a potential key determinant of the continued increase of prostate cancer incidence in Martinique. Prostate Cancer 2011, 819010. https://doi.org/10.1155/2011/819010.
Belpomme, D., Irigaray, P., 2016. Replicative random mutations as an unproven cause of cancer: A technical comment. Molecular and Clinical Oncology 4, 497–499.
Belpomme, D., Irigaray, P., Hardell, L., et al., 2007a. The multitude and diversity of environmental carcinogens. Environmental Research 105, 414–429.
Belpomme, D., Irigaray, P., Sasco, A.J., et al., 2007b. The growing incidence of cancer: Role of lifestyle and screening detection (review). International Journal of Oncology 30, 1037–1049.
Belpomme, D., Irigaray, P., Hardell, L., 2008. Electromagnetic fields as cancer-causing agents. Environmental Research 107, 289–290.
Belpomme, D., Irigaray, P., Ossondo, M., et al., 2009. Prostate cancer as an environmental disease: An ecological study in the French Caribbean islands, Martinique and Guadeloupe. International Journal of Oncology 34, 1037–1044.
Dreher, D., Junod, A.F., 1996. Role of oxygen free radicals in cancer development. European Journal of Cancer 32A, 30–38.
Hanahan, D., Weinberg, R.A., 2000. The hallmarks of cancer. Cell 100, 57–70.
Hanson, M.A., Skinner, M.K., 2016. Developmental origins of epigenetic transgenerational inheritance. Environmental Epigenetics 2, dvw002.
Hinson, J.A., Roberts, D.W., 1992. Role of covalent and noncovalent interactions in cell toxicity: Effects on proteins. Annual Review of Pharmacology and Toxicology 32, 471–510.
Irigaray, P., Belpomme, D., 2010. Basic properties and molecular mechanisms of exogenous chemical carcinogens. Carcinogenesis 31, 135–148.
Irigaray, P., Mejean, L., Laurent, F., 2005. Behaviour of dioxin in pig adipocytes. Food and Chemical Toxicology 43, 457–460.
Irigaray, P., Ogier, V., Jacquenet, S., et al., 2006. Benzo[a]pyrene impairs beta-adrenergic stimulation of adipose tissue lipolysis and causes weight gain in mice: A novel molecular mechanism of toxicity for a common food pollutant. FEBS Journal 273, 1362–1372.
Irigaray, P., Newby, J.A., Clapp, R., et al., 2007a. Lifestyle-related factors and environmental agents causing cancer: An overview. Biomedicine and Pharmacotherapy 61, 640–658.
Irigaray, P., Newby, J.A., Lacomme, S., Belpomme, D., 2007b. Overweight/obesity and cancer genesis: More than a biological link. Biomedicine and Pharmacotherapy 61, 665–678.
Irigaray, P., Lacomme, S., Mejean, L., Belpomme, D., 2009. Ex vivo study of incorporation into adipocytes and lipolysis-inhibition effect of polycyclic aromatic hydrocarbons. Toxicology Letters 187, 35–39.
Jirtle, R.L., Skinner, M.K., 2007. Environmental epigenomics and disease susceptibility. Nature Reviews Genetics 8, 253–262.
Jones, P.A., Baylin, S.B., 2002. The fundamental role of epigenetic events in cancer. Nature Reviews Genetics 3, 415–428.
Jones, P.A., Baylin, S.B., 2007. The epigenomics of cancer. Cell 128, 683–692.
Knudson, A.G., 1993. Antioncogenes and human cancer. Proceedings of the National Academy of Sciences 90, 10914–10921.
Landau-Ossondo, M., Rabia, N., Jos-Pelage, J., et al., 2009. Why pesticides could be a common cause of prostate and breast cancers in the French Caribbean island, Martinique: An overview of key mechanisms of pesticide-induced cancer. Biomedicine and Pharmacotherapy 63, 381–395.
Nilsson, E.E., Sadler-Riggleman, I., Skinner, M.K., 2018. Environmentally induced epigenetic transgenerational inheritance of disease. Environmental Epigenetics 4, dvy016.

Relevant Websites

http://www.bioinitiative.org/ – BioInitiative Report: A Rationale for a Biologically-based Public Exposure Standard for Electromagnetic Fields (ELF and RF).
http://monographs.iarc.fr/ – IARC Monographs on the Evaluation of Carcinogenic Risks to Humans.
http://www.artac.info – The Paris Appeal, International Declaration on diseases due to chemical pollution.

Cancer Risk Assessment and Communication
Stacey A Fedewa, American Cancer Society, Surveillance Research Department, Atlanta, GA, United States
© 2019 Elsevier B.V. All rights reserved.

Change History: March 2019. Stacey A. Fedewa has updated the text throughout the article. This is an update of B.W. Stewart, Cancer Risk Assessment and Communication, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 482–488.

Restricting the Term "Environmental Cancer"

The term "environmental cancer" has been used both broadly, to describe cancers attributed to all non-genetic risk factors, and in a more restricted sense, for cancers attributed to involuntary risk factors such as those in the workplace, air, water, and soil. To the public, "environmental cancer" is often understood to mean cancer caused by "pollutants" (McGuinn et al., 2012). Nongenetic causes of cancer encompass numerous factors, including several known and well-documented carcinogens such as tobacco smoke, poor diet, alcoholic beverages, excess body weight, and ultraviolet radiation. Under the broad definition, environmental factors include voluntary individual behaviors that are in part shaped by the communities and environments in which people live. For example, the growing burden of cancers due to excess body weight is attributed in part to less healthful dietary patterns, increasing portion sizes, and consumption of energy-dense, sweetened food influenced by the food environment (Sung et al., 2018). For many purposes, these factors have been deemed voluntary and thus potentially modifiable. Involuntary exposures, which involve agents encountered in the workplace or polluting the general environment, are considered differently. In the current article, the term "environmental cancer" refers to involuntary exposures, either in the workplace or outside of work consequent on air, water, soil, or other contamination. Thus, risk assessment will be discussed here primarily in relation to involuntary exposures, with reference to lifestyle factors.

Involuntary exposures account for a lower fraction of the cancer burden according to present-day literature, though some consideration needs to be paid to how attributable risk is calculated and interpreted, as discussed below in the risk communication section. A recent assessment of modifiable causes of cancer in the United States estimates that 42% of cancers and 45% of cancer deaths can be attributed to potentially modifiable risk factors, including tobacco smoke, excess weight, physical inactivity, alcohol consumption, dietary factors, excess exposure to ultraviolet light, and six infectious agents, with the highest proportion attributable to tobacco (19.0% of cancers and 28.8% of cancer deaths) (Islami et al., 2018). Another study estimated that 39.0% of incident cancers in the United Kingdom are attributable to lifestyle factors (Parkin et al., 2011). By comparison, occupational exposures were estimated to account for about 2%–8% of cancers in the United Kingdom and United States, with a higher proportion in men (3%–14%) than women (1%–2%) (Purdue et al., 2015). Doll and Peto's landmark report on the causes of cancer, published in 1981, estimated that about 4% of cancer deaths in the United States were due to occupational exposures, which at the time was much lower than the US government's Occupational Safety and Health Administration (OSHA) estimate that 20% of cancers were work-related (Doll and Peto, 1981; Bridbord et al., 1981). Although current estimates of cancers attributable to workplace and environmental exposures are generally smaller than those for tobacco smoke and other cancer risk factors, they remain important for several reasons. First, even a seemingly limited percentage of cancers attributed to an agent or group of agents can translate into large absolute numbers of cancers.
For example, the International Labor Organization estimated that workplace exposures accounted for 742,235 cancer deaths worldwide in 2015 (Hamalainen and Kiat, 2017), and residential radon exposure is estimated to be the second-leading cause of lung cancer deaths behind tobacco smoke, accounting for 21,000 lung cancer deaths in the United States annually (Gaskin et al., 2018; Environmental Protection Agency, 2003). Furthermore, socioeconomically deprived communities are disproportionately affected by exposure to environmental and occupational carcinogens, contributing to disparities in cancer burden in underserved populations; this is especially disconcerting given that these groups also have limited access to early detection and curative treatment (Menvielle et al., 2010; Kioumourtzoglou et al., 2016). Additionally, occupational and environmental cancers are expected to increase in low-to-middle-income countries as a result of expanding industrialization, lack of regulation, and increases in life expectancy (Hashim and Boffetta, 2014). The community expectation is that involuntary exposures leading to cancer development will be identified and eliminated (Stewart, 2012). Moreover, it is acknowledged that exposure to carcinogens either in the workplace or because of environmental pollution may be prevented by policy and regulatory measures.
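The arithmetic behind "a small attributable percentage can still mean many cases" is simple multiplication. A minimal sketch; the death count and attributable fraction below are illustrative placeholders, not estimates from the sources cited above.

```python
# Absolute burden implied by an attributable fraction (illustrative numbers).
total_cancer_deaths = 600_000      # hypothetical annual cancer deaths in a country
occupational_paf = 0.04            # hypothetical 4% population attributable fraction

attributable_deaths = total_cancer_deaths * occupational_paf
print(f"{attributable_deaths:,.0f} deaths/year attributable")  # 24,000 deaths/year
```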

Cancer Risk Assessment

Risk assessment is defined by the US National Academy of Sciences as "the factual base to define the health effects of exposure of individuals or populations to hazardous materials and situations" (National Academy of Sciences, 1983). It is used in developing environmental health policy, and incorporates the magnitude of the risk and the population exposed and affected. In 1983, the United States National Academy of Sciences outlined four key steps in quantitative risk assessment: hazard identification, exposure assessment, dose–response assessment, and risk characterization, described in further detail below (National Academy of Sciences, 1983).

Hazard Identification

The first step in quantitative risk assessment is hazard identification. The term "hazard" describes whether an agent is capable of causing cancer, whereas the term "risk" refers to the probability of an event, such as developing cancer, during a specified period of time (Rothman and Lash, 2008). The two are sometimes used interchangeably, but distinguishing them is important: the primary question of hazard identification is whether an agent is capable of causing cancer in humans, without reference to the number of cancers, or the elevation in risk, that it may cause. The identification of such hazards is undertaken by international or national agencies, namely the International Agency for Research on Cancer (IARC, an arm of the World Health Organization) and the National Toxicology Program in the United States. In this article, the structure of IARC's Monographs on the Evaluation of Carcinogenic Risks to Humans is used to describe hazard identification (IARC, n.d.). Of note, IARC points out that its title includes the term "risk" for historical reasons; the primary purpose of the monographs is to identify hazards.

IARC has a structured process to evaluate agents suspected of being carcinogenic (International Agency for Research on Cancer, 2015). The term "carcinogenic" is used not only to describe an agent capable of inducing disease, but also one that can shorten latency (i.e., lead to sooner clinical appearance of disease) and/or influence the severity of a neoplasm (Rothman and Lash, 2008; International Agency for Research on Cancer, 2015). Available evidence on the following is considered during IARC's evaluation: 1) information on the agent, such as its underlying biologic, chemical and physical properties, as well as exposure routes; 2) epidemiologic studies of humans that compare cancer incidence or mortality among exposed relative to unexposed populations, ecologic studies and correlational data; 3) experimental evidence from chronic toxicity testing of the agent in experimental animals; and 4) mechanistic data, including toxicokinetic data and mechanisms of carcinogenesis, which have been increasingly incorporated into the IARC process (Smith et al., 2016). These components, as well as other relevant information, including the quality of studies and the consistency and clarity of results, are considered by multidisciplinary teams. For example, there are rarely randomized clinical trial data on agents in humans, so experts rely on observational studies of various designs (cohort, case-control, and sometimes ecologic data), which carry biases such as confounding, exposure misclassification, and selection bias that scientists must consider when interpreting and integrating findings.

In theory, determining whether an agent can cause cancer in humans involves only a binary categorization: either an agent is carcinogenic, or it is not. The use of multiple hazard identification categories is indicative of the complexity of the task. IARC classifies agents as shown in Table 1 below. Group 1 comprises those agents that are definitively established to cause cancer in humans. Typically, such agents are those for which there is sufficient epidemiological evidence of carcinogenicity. Group 2A agents are probably carcinogenic to humans, and Group 2B agents are possibly carcinogenic to humans.
For many agents in Groups 2A and 2B there is sufficient evidence of carcinogenicity in animals, but epidemiological data fall short of establishing causality (typically Group 2A) or no such evidence is available (typically Group 2B). Agents for which the available evidence of carcinogenicity does not warrant categorization under Group 2B are regarded as those for which evidence of carcinogenicity is inadequate to indicate causation of cancer in humans (Group 3). Group 4 indicates that an agent is probably not carcinogenic. As of November 2018, there were 120 agents classified as Group 1 (i.e., carcinogenic to humans), 82 agents as Group 2A (i.e., probably carcinogenic to humans), 311 as Group 2B (i.e., possibly carcinogenic to humans), 499 as Group 3 (i.e., not classifiable), and one as Group 4 (i.e., probably not carcinogenic). A particular hazard identification is nonbinding; for each individual agent, determinations are made on the basis of current evidence. Additionally, the circumstance of exposure may be considered. Exposure to hair dyes as a hairdresser belongs to Group 2A, whereas exposure due to personal use of hair dyes belongs to Group 3, because the extent of exposure of individuals handling hair dyes in the course of their work is greater than that of consumers who encounter the same agents episodically. Similarly, relevant worker exposure is usually greater than that experienced by those who are exposed as a result of living in a community near a point source of pollution. Exposure of such residents may be less than occurs in the workplace, but in the case of, for example, communities located adjacent to asbestos mining and milling, increased carcinogenic risk may be evident. Specification that an agent is definitely capable of causing cancer in humans, or probably has that activity, or any other outcome of hazard identification, is a central element so far as primary prevention of cancer is concerned, but it is an intermediate endpoint. The following additional steps need to be taken to assess risk in a population.

Exposure Assessment

Exposure assessment principally answers questions on the amount, intensity and duration of exposure in a population, the routes of exposure (inhalation, ingestion, and/or dermal contact) and the pathways (e.g., drinking water, showering/bathing). Exposures may be assessed in epidemiologic studies, occupational health settings, as part of routine surveillance, and in risk assessment, which is the primary application discussed herein. One of the first steps in assessment is defining the exposed populations, whether the general public or selected groups, to distinguish between different circumstances of exposure. For example, the circumstances of exposure, and the exposure routes, differ between patients who face a risk of second cancers following treatment with certain cytotoxic drugs and the oncology nurses and pharmacists who handle those drugs at work.

Table 1 The International Agency for Research on Cancer's classification (descriptions according to the IARC monograph preamble; agent counts as of November 2018)

Group 1: The agent is carcinogenic to humans (120 agents). Sufficient evidence of carcinogenicity in humans; or, in some instances, evidence of carcinogenicity in humans that is less than sufficient, but with sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity.

Group 2A: The agent is probably carcinogenic to humans (82 agents). Limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent is classified in this category when there is inadequate evidence of carcinogenicity in humans, sufficient evidence of carcinogenicity in experimental animals, and strong evidence that carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent may be classified on the basis of limited evidence of carcinogenicity in humans.

Group 2B: The agent is possibly carcinogenic to humans (311 agents). Limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. May also be used when there is inadequate evidence of carcinogenicity in humans but sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent for which there is inadequate evidence in humans and less than sufficient evidence in experimental animals, plus supporting evidence from mechanistic and other relevant data, may be categorized in this group. An agent may also be classified based solely on strong mechanistic and other relevant data.

Group 3: The agent is not classifiable as to its carcinogenicity to humans (499 agents; this does not amount to a determination of non-carcinogenicity, and often means that more research is needed). Most commonly used for agents for which evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals. Exceptionally, used for agents for which evidence of carcinogenicity is inadequate in humans but sufficient in experimental animals, when there is strong evidence that the mechanism of carcinogenicity in experimental animals does not operate in humans.

Group 4: The agent is probably not carcinogenic to humans (1 agent). Used for agents for which there is evidence suggesting lack of carcinogenicity in humans and in experimental animals. In some instances, agents for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of mechanistic and other relevant data, may be classified in this group.

Carcinogenic risk is influenced by cumulative lifetime dose, as discussed in further detail in the dose-response section below. Because of this, the lifetime average daily dose (LADD) is the desired quantitative metric, though actual measurements of exposures over a lifetime are usually not available. To address this, direct measurements taken at a point in time, as well as stochastic modeling, may be used to estimate the LADD. Critical periods of exposure, such as those in childhood or in utero, may also be considered (US Environmental Protection Agency, 2005). The US National Institute for Occupational Safety and Health has standards to estimate workplace exposures, such as time-weighted averages, cumulative doses, and peak exposures (high-intensity exposures over short periods of time). Researchers have also developed job-exposure matrices to obtain population-based estimates of exposure in the workplace; depending on the specificity of the job title and grouping, such exposure assessment can be quantitative or semi-quantitative, and there is more uncertainty for broader job titles and/or for roles without information on exposure intensity (Dopart and Friesen, 2017). For residential exposures, Geographic Information Systems (GIS) may be used to map where people lived and the concentration of a particular exposure, building an exposure history. Qualitative exposure assessment mainly concerns the context in which exposure is characterized, as well as the uncertainties of quantitative exposure assessment, or may be used in the absence of quantitative assessment.
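A minimal sketch of the LADD arithmetic described above, using the generic intake equation common in regulatory exposure guidance (concentration × intake rate × exposure frequency × duration, averaged over body weight and lifetime). All parameter values below are hypothetical.

```python
def lifetime_average_daily_dose(
    concentration: float,   # contaminant concentration, e.g. mg/L in drinking water
    intake_rate: float,     # L/day ingested
    exposure_freq: float,   # days/year exposed
    exposure_dur: float,    # years of exposure
    body_weight: float,     # kg
    lifetime_years: float = 70.0,
) -> float:
    """Generic LADD (mg per kg body weight per day): cumulative intake
    averaged over a full lifetime."""
    averaging_time_days = lifetime_years * 365.0
    cumulative_intake = concentration * intake_rate * exposure_freq * exposure_dur
    return cumulative_intake / (body_weight * averaging_time_days)

# Hypothetical residential scenario: 0.002 mg/L in tap water, 2 L/day,
# 350 days/year for 30 years, 70 kg adult.
ladd = lifetime_average_daily_dose(0.002, 2.0, 350, 30, 70.0)
print(f"LADD = {ladd:.2e} mg/kg-day")   # ~2.3e-05 mg/kg-day
```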


Dose-Response

Dose-response is a critical component of risk assessment that answers the question of how risk varies with exposure. It may be informed by epidemiologic data from humans, and it provides a basis for comparing carcinogens. For the vast majority of substances, however, dose-response has been determined using data from experimental studies involving rodents. Rodents are small, short-lived animals, differing from humans in a number of important physiological, genetic, and biochemical parameters, and there is some evidence that genetic processes underlying cell proliferation in humans are markedly different from those in rodents. These considerations limit the confidence that may be vested in dose–response relationships based on rodent studies. In properly conducted animal bioassays, the amounts of a chemical carcinogen required to induce tumors may vary by a factor of 10⁸ depending on species, route of administration, and other parameters. Extrapolation is also required beyond the range of doses for which observed data exist; the boundary of the observed range is usually referred to as the point of departure (POD). Beyond the POD, data are extrapolated to predict responses at levels not observed in studies, and the mode of action is incorporated into mathematical models. For example, linear models assume that risk is monotonically proportional to dose, whereas linear-quadratic or supralinear shapes have accelerated responses at certain dose levels. The slope of the line, or its upper bound, is sometimes used as an indication of carcinogenic potency. In some instances, nonlinear and threshold models of dose-response are recognized as appropriate, as there may be some doses at which no response is anticipated. Inadequate dose-response and exposure data may severely limit the application of quantitative risk assessment; thus, risk assessment in relation to pollutants such as pesticides and solvents may be inconclusive. For some such agents, tissue damage may not occur until a particular dose is exceeded, after which cytotoxicity, necrosis, apoptosis, and a proliferative response may occur. Such agent-induced tissue change may result in tumorigenesis in experimental animals, and the status accorded to such tumors as indicators of carcinogenicity is inherently contentious. As noted at the outset, quantitative risk assessment is subject to recognized uncertainties, specifically concerning the estimation of human cancer risk from exposure to environmental carcinogens at low doses.
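The contrast between linear no-threshold and threshold extrapolation below the POD can be sketched in a few lines. This is a minimal illustration, not a regulatory model; the POD, threshold, and response values are hypothetical.

```python
def linear_extrapolation(dose: float, pod: float, risk_at_pod: float) -> float:
    """Linear no-threshold extrapolation below the point of departure:
    excess risk is proportional to dose, with the slope anchored at the POD."""
    slope = risk_at_pod / pod          # the 'slope factor' discussed above
    return slope * dose

def threshold_model(dose: float, threshold: float, pod: float,
                    risk_at_pod: float) -> float:
    """Threshold alternative: no excess risk is anticipated below the threshold."""
    if dose <= threshold:
        return 0.0
    return risk_at_pod * (dose - threshold) / (pod - threshold)

# Hypothetical POD: 10% excess tumor incidence observed at 1 mg/kg-day.
for d in (0.001, 0.01, 0.1):
    print(d,
          linear_extrapolation(d, pod=1.0, risk_at_pod=0.10),
          threshold_model(d, threshold=0.05, pod=1.0, risk_at_pod=0.10))
```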

Risk Characterization and Regulation

Risk characterization captures the extent of risk to the population, taking into account an agent's hazard, exposure and dose-response assessments, and is used for regulatory purposes. It is specifically used in determining the impact of chemicals on the workplace as well as the public, while integrating the statistical and biological uncertainties of the effect of an exposure. Regulatory agencies may mandate what level of residual risk is acceptable. For example, the US Safe Drinking Water Act regulates drinking water with a usual acceptable risk of 10⁻⁵ to 10⁻⁷. Ethical frameworks for defining acceptable risks include the "as low as reasonably achievable" (ALARA) principle and the precautionary principle. The ALARA principle was introduced after epidemiologic studies of atomic bomb survivors improved understanding of the link between radiation dose and health outcomes. This principle is still used today by the International Commission on Radiological Protection, which states that "any decision that alters the radiation exposure situation should do more good than harm" (Anon, 2007). The precautionary principle is more commonly used in Europe than in the United States and underlies the European Union's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) guidelines. It calls for action to reduce risk when evidence is uncertain or there is a potential for harm (European Commission, 2017). The greatest value of risk assessments is their utility in providing comparative assessment of agents and modes of exposure for the purpose of determining priorities, particularly where exposures to multiple substances are involved. Again, there are limitations. For complex mixtures, such as tobacco smoke, or in the context of air pollution or exposure to solvent vapor, determination of cumulative risk is only rarely made by reference to the individual risks accorded to certain known agents within the mixture. With that consideration acknowledged, reference to the risk determined for separate components may reasonably indicate priorities for action. Likewise, different modes of exposure may be ranked as less or more desirable once relevant risk determinations have been made. Despite these caveats, the numerous limitations of quantitative risk assessment, specifically in relation to carcinogenic hazards, rarely preclude quantitative risk assessments from being made. Generally speaking, such assessments are critical to the determination of priorities at a national, state, or local level.
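Under the linear low-dose assumption discussed in the previous section, risk characterization often reduces to multiplying a slope factor by the estimated dose and screening the product against an acceptable-risk target. A minimal sketch with hypothetical numbers (the slope factor below is invented, and the LADD is the one from the exposure-assessment sketch):

```python
# Screening a lifetime excess risk against regulatory targets (illustrative).
slope_factor = 0.5          # hypothetical cancer slope factor, (mg/kg-day)^-1
ladd = 2.3e-5               # lifetime average daily dose, mg/kg-day

excess_lifetime_risk = slope_factor * ladd   # linear low-dose characterization
for target in (1e-5, 1e-6, 1e-7):
    verdict = "acceptable" if excess_lifetime_risk <= target else "exceeds target"
    print(f"target {target:.0e}: risk {excess_lifetime_risk:.1e} -> {verdict}")
```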

Risk Communication

Risk communication is defined as the exchange of information on harms and benefits to improve knowledge and understanding of a particular risk by a consumer of information, whether in the media, among patients or healthcare providers in a clinic, or in society at large. The definition has been further extended to a two-way exchange of information in which opinions about these harms and benefits can be asserted. Accurate risk communication relies on the initiator's ability to correctly interpret and present data and on the recipient's ability to correctly interpret the information provided. Risk communication is critical to all aspects of cancer control. The present discussion is concerned only with risk communication about cancer causation, though some information on effective risk communication is informed by studies of people considering cancer screening and/or treatment options. There are several challenges in communicating risk effectively, the most prominent being the difficulty of framing probabilities in a way the general public can understand. Quantitative data may range from basic mathematical quantities, such as an integer, through intermediate ones, such as a percentage or a risk ratio, to specialized concepts such as population attributable risk.


There are also measures of uncertainty, such as a variance or a confidence interval (Nelson et al., 2009). Problems in communicating the carcinogenic risk of some circumstance of exposure to an agent may occur at two levels. First, the significance of findings may be misunderstood. Second, misunderstanding may arise once relevant information has been conveyed, because of limitations inherent in qualitative risk assessment. Health professionals, and oftentimes cancer control advocates, inherently conceptualize and communicate risk in numerical terms (Schwartz and Meslin, 2008). Risk communication centrally involves expressing quantitative risk in everyday terms. Understanding of risk is a necessity for life in a complex society, and any consideration of cancer risk may be placed in the context of day-to-day experience. Even so, there is often confusion over the magnitude of various risks (e.g., a 1% versus a 10% risk) (Lipkus et al., 2001). Relative risks are often presented in scientific studies, but they are overestimated and misunderstood by the public. For example, a review of the effect of mammography on breast cancer death reported a 20%–25% relative risk reduction, corresponding to an absolute reduction of about 4–5 breast cancer deaths in every 1000 women (Nystrom et al., 2002). In a survey of UK women, slightly over a quarter thought this relative risk meant that for every 1000 women screened, 200 fewer women would die from breast cancer (Gigerenzer et al., 2007). Generally, people find "one in a certain number" formats (e.g., one in seven, one in 10) more interpretable than relative risks, but relative risks are more commonly reported in the scientific literature. Population attributable fractions (PAFs), or attributable risk, are an important way to convey a particular risk across a population and are often used to inform policy makers. PAFs are a function of the prevalence of a particular exposure and the magnitude of its effect. They too can be misunderstood, even among researchers and public health practitioners, as there are assumptions about lack of confounding, and confusion about necessary and sufficient causes (Rockhill et al., 1998).

An important consideration for cancer prevention is how to present information about risk. Investigators in risk communication may emphasize loss aversion by demonstrating that people are more likely to be risk seeking when given risk information framed in terms of losses than when given functionally equivalent messages framed in terms of gains. In a relevant study, the investigators devised risk messages promoting mammography that focused either on the losses of not getting tested (e.g., failure to identify a malignancy) or on the gains associated with mammography (e.g., providing reassurance); women were more likely to get screened when provided with messages in the loss frame. Hazard identification gives rise to categorization according to whether nominated agents definitively cause cancer in humans, probably have that effect, possibly cause cancer, or are unable to be so categorized. Such distinctions are not generally appreciated in the community, and lifestyle risks tend to generate less concern than consumer products, workplace exposures, and environmental pollution. Surveys taken in various developed countries typically indicate that tobacco smoking is correctly perceived as presenting the greatest risk of cancer (AICR, 2015; Peretti-Watel et al., 2016).
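The conversions that trip readers up (relative risk reduction to absolute "natural frequencies", and Levin's formula for the population attributable fraction) are straightforward to compute. A minimal sketch; the baseline rate, prevalence, and relative risk are illustrative values chosen to echo the mammography example above.

```python
# Re-expressing a relative risk reduction in absolute, natural-frequency terms.
baseline_deaths_per_1000 = 20      # hypothetical deaths per 1000 unscreened women
relative_risk_reduction = 0.20     # the ~20% reported in trial overviews

absolute_reduction = baseline_deaths_per_1000 * relative_risk_reduction
print(f"{absolute_reduction:.0f} fewer deaths per 1000 screened")   # 4 per 1000

# Population attributable fraction (Levin's formula):
# PAF = p(RR - 1) / (1 + p(RR - 1)), with p = exposure prevalence, RR = relative risk.
p, rr = 0.25, 2.0                  # hypothetical prevalence and relative risk
paf = p * (rr - 1) / (1 + p * (rr - 1))
print(f"PAF = {paf:.0%}")          # 20% of cases attributable under these assumptions
```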
Typically, involuntary exposures consequent on pollution are deemed a greater threat than exposures arising from lifestyle choices or behavior. For example, in a 2015 survey of Americans, 84%–89% identified inherited factors such as cancer genes, radiation, and industrial pollution as causes of cancer, whereas only about half identified obesity (AICR, 2015). Those who perceive themselves as powerless to prevent cancer are less likely to have adopted preventive behaviors (Peretti-Watel et al., 2016). Risk communication might be regarded as the most difficult element of communication concerning cancer generally. Carcinogenic risk pertains to the causation of malignant disease and hence provides the foundation for reducing incidence; communication of carcinogenic risk is an essential element in achieving that goal. Initiatives to prevent cancer by limiting or preventing exposure to carcinogens extend across the distinction sometimes made between lifestyle choices and involuntary exposures. Once actual or potential carcinogenic exposures are known, there is an obligation to alter circumstances so that preventable disease does not continue to occur. This may involve action by the individual or action on behalf of the community as a whole. Intervention as a consequence of risk assessment is referred to as risk management. Risk management is directed at the same goal of cancer prevention but involves a societal rather than an individual perspective. Risk management issues include cost, political feasibility, and pressure from special interests. Detailed discussion of risk management is beyond the scope of this article, but its recognition is appropriate: without risk management, both qualitative and quantitative risk assessment are reduced to little more than intellectual exercises and legal compliance. Consequent on risk management in the context of carcinogenic exposures, the burden of cancer in a community may be reduced.

See also: Environmental Agents and Childhood Cancer; Environmental Carcinogens and Regulation; History of the Dose Response; Nutrition and Cancer – An Update on the Roles of Dietary Factors in the Etiology, Progression and Management of Cancer; Pesticide Exposure and Human Cancer.

References

AICR, 2015. Cancer risk awareness survey report. http://www.aicr.org/assets/docs/pdf/education/aicr-awareness-report-2015.pdf
Anon, 2007. The 2007 recommendations of the International Commission on Radiological Protection. Annals of the ICRP 37 (2–4), 1–332.
Bridbord, K., Decoufle, P., Fraumeni Jr., J.F., 1981. Estimates of the fraction of cancer in the United States related to occupational factors. In: Peto, R., Schneiderman, M. (Eds.), Banbury report: Quantification of occupational cancer. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY.
Doll, R., Peto, R., 1981. The causes of cancer: Quantitative estimates of avoidable risks of cancer in the United States today. Journal of the National Cancer Institute 66 (6), 1191–1308.
Dopart, P.J., Friesen, M.C., 2017. New opportunities in exposure assessment of occupational epidemiology: Use of measurements to aid exposure reconstruction in population-based studies. Current Environmental Health Reports 4 (3), 355–363.
Environmental Protection Agency, 2003. Assessments of risks from radon in homes. Environmental Protection Agency, Office of Radiation and Indoor Air, Washington, DC.
European Commission, 2017. Science for environment policy future brief: The precautionary principle: Decision-making under uncertainty. European Commission DG Environment by the Science Communication Unit, Bristol.
Gaskin, J., Coyle, D., Whyte, J., Krewski, D., 2018. Global estimate of lung cancer mortality attributable to residential radon. Environmental Health Perspectives 126 (5), 057009.
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L.M., Woloshin, S., 2007. Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest 8 (2), 53–96.
Hamalainen, P.T.J., Kiat, T.B., 2017. Global estimates of occupational accidents and work-related illnesses 2017. Singapore Workplace Safety and Health Institute.
Hashim, D., Boffetta, P., 2014. Occupational and environmental exposures and cancers in developing countries. Annals of Global Health 80 (5), 393–411.
IARC, n.d. IARC monographs on the evaluation of carcinogenic risks to humans. https://monographs.iarc.fr/agents-classified-by-the-iarc/
International Agency for Research on Cancer, 2015. IARC monographs on the evaluations of carcinogenic risk to humans: Preamble. International Agency for Research on Cancer, Lyon, France.
Islami, F., Goding Sauer, A., Miller, K.D., Siegel, R.L., Fedewa, S.A., Jacobs, E.J., McCullough, M.L., Patel, A.V., Ma, J., Soerjomataram, I., et al., 2018. Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States. CA: A Cancer Journal for Clinicians 68 (1), 31–54.
Kioumourtzoglou, M.A., Schwartz, J., James, P., Dominici, F., Zanobetti, A., 2016. PM2.5 and mortality in 207 US cities: Modification by temperature and city characteristics. Epidemiology 27 (2), 221–227.
Lipkus, I.M., Samsa, G., Rimer, B.K., 2001. General performance on a numeracy scale among highly educated samples. Medical Decision Making 21 (1), 37–44.
McGuinn, L.A., Ghazarian, A.A., Ellison, G.L., Harvey, C.E., Kaefer, C.M., Reid, B.C., 2012. Cancer and environment: Definitions and misconceptions. Environmental Research 112, 230–234.
Menvielle, G., Boshuizen, H., Kunst, A.E., Vineis, P., Dalton, S.O., Bergmann, M.M., Hermann, S., Veglia, F., Ferrari, P., Overvad, K., et al., 2010. Occupational exposures contribute to educational inequalities in lung cancer incidence among men: Evidence from the EPIC prospective cohort study. International Journal of Cancer 126 (8), 1928–1935.
National Academy of Sciences, 1983. Risk assessment in the federal government: Managing the process. National Academy of Sciences, Washington, DC.
Nelson, D.E., Hesse, B.W., Croyle, R.T., 2009. Making data talk: Communicating public health data to the public, policy makers, and the press. Oxford University Press, New York, NY.
Nystrom, L., Andersson, I., Bjurstam, N., Frisell, J., Nordenskjold, B., Rutqvist, L.E., 2002. Long-term effects of mammography screening: Updated overview of the Swedish randomised trials. Lancet 359 (9310), 909–919.
Parkin, D.M., Boyd, L., Walker, L.C., 2011. The fraction of cancer attributable to lifestyle and environmental factors in the UK in 2010. British Journal of Cancer 105 (Suppl 2), S77–S81.
Peretti-Watel, P., Fressard, L., Bocquier, A., Verger, P., 2016. Perceptions of cancer risk factors and socioeconomic status: A French study. Preventive Medicine Reports 3, 171–176.
Purdue, M.P., Hutchings, S.J., Rushton, L., Silverman, D.T., 2015. The proportion of cancer attributable to occupational exposures. Annals of Epidemiology 25 (3), 188–192.
Rockhill, B., Newman, B., Weinberg, C., 1998. Use and misuse of population attributable fractions. American Journal of Public Health 88 (1), 15–19.
Rothman, K.J., Greenland, S., Lash, T.L., 2008. Modern epidemiology. Lippincott Williams and Wilkins, Philadelphia, PA.
Schwartz, P.H., Meslin, E.M., 2008. The ethics of information: Absolute risk reduction and patient understanding of screening. Journal of General Internal Medicine 23 (6), 867–870.
Smith, M.T., Guyton, K.Z., Gibbons, C.F., Fritz, J.M., Portier, C.J., Rusyn, I., DeMarini, D.M., Caldwell, J.C., Kavlock, R.J., Lambert, P.F., et al., 2016. Key characteristics of carcinogens as a basis for organizing data on mechanisms of carcinogenesis. Environmental Health Perspectives 124 (6), 713–721.
Stewart, B.W., 2012. Priorities for cancer prevention: Lifestyle choices versus unavoidable exposures. The Lancet Oncology 13 (3), e126–e133.
Sung, H., Siegel, R.L., Torre, L.A., Pearson-Stuttard, J., Islami, F., Fedewa, S.A., Goding Sauer, A., Shuval, K., Gapstur, S.M., Jacobs, E.J., et al., 2018. Global patterns in excess body weight and the associated cancer burden. CA: A Cancer Journal for Clinicians.
US Environmental Protection Agency, 2005. Guidelines for carcinogen risk assessment. US Environmental Protection Agency, Washington, DC.

Relevant Websites

https://cordis.europa.eu/project/rcn/74967/factsheet/en – Environmental Cancer Risk, Nutrition and Individual Susceptibility.
http://monographs.iarc.fr/ – International Agency for Research on Cancer.
http://ntp.niehs.nih.gov/ntp/roc/toc11.html – US Department of Health and Human Services.
https://www.epa.gov/risk/human-health-risk-assessment – US EPA human health: Toxicity (hazard identification and dose response).
https://www.who.int/cancer/modules/Prevention%20Module.pdf?ua=1 – World Health Organization.

Carbon Farming
Jerome Nriagu, University of Michigan, Ann Arbor, MI, United States
© 2019 Elsevier B.V. All rights reserved.

Change History: April 2019. Jerome Nriagu updated the text. This article includes little from our chapter in the first edition: the content is new and different, all the tables are new, and most of the references are new. This is an update of S. Mandlebaum, J. Nriagu, Carbon Sequestration and Agriculture, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 498–504.

Abbreviations

CF Carbon farming
COP21 21st yearly session of the Conference of the Parties (COP), held in 2015
DOC Dissolved organic carbon
GHG Greenhouse gas
Gt Gigaton (one billion metric tons, or 10¹⁵ g)
IGP Indo-Gangetic plains
IPCC Intergovernmental Panel on Climate Change
IWM Integrated weed management
LOSU Level of scientific understanding
Mha Million hectares
NOAA National Oceanic and Atmospheric Administration
NT No-till
RF Radiative forcing
SOC Soil organic carbon
SSA Sub-Saharan Africa

What is Carbon Farming?

Global food production has resulted in massive transformation of natural ecosystems into managed areas. At present, 30%–50% of the Earth's land cover has been substantially modified by land use, including about 15 million km2 of cropland and 34 million km2 of pasture that have replaced natural land cover. Roughly 92% of natural grasslands/steppes have been converted to human use, including grazing and croplands. Expansion of agriculture has led to the appropriation of about 40% of terrestrial photosynthesis for human use. The impacts of agriculture extend well beyond changes in the land surface to virtually every facet of the biosphere. Both the energy balance and the hydrological cycle have been significantly affected, while the global carbon, nitrogen and trace metal cycles have been severely altered by present-day agricultural activities. The conversion of natural land cover to agricultural systems has resulted in huge losses (50%–60%) of the original soil organic carbon (SOC) stocks in top soils. Most of the lost SOC is released to the atmosphere, and global food production has been estimated to contribute about one third of all global greenhouse gas (GHG) emissions from anthropogenic sources. Agriculture alone has contributed an estimated 35% of anthropogenic CO2 emissions and 20% of the annual increase in radiative forcing from greenhouse gases during the last 150 years, making it one of the key agents of human-related climate change.

Carbon farming (CF) is based on the principle that agroforestry practices can be transformed from being a net carbon emitter to being a net carbon sink. It is a recent concept used to describe cultivation techniques that take carbon dioxide out of the atmosphere (where it causes global warming) and convert it into carbon-based compounds in the soil that aid plant growth. The approach involves implementing practices that are known to improve the rate at which CO2 is removed from the atmosphere and converted to plant material and soil organic matter. Carbon farming is successful when the carbon gains resulting from enhanced land management or conservation practices exceed the carbon losses, especially losses to the atmosphere. In essence, we farm for carbon by storing it in our agricultural soils. It is an attractive option because it relies on the large magnitude of carbon that can be stored in soils and the vast land area covered by soils. Carbon farming is being promoted as agriculture's answer to climate change.

The concept of CF derives from a number of ideas and themes that have previously been used in farming practices and technologies over the past few decades to increase and sustain food production without ruining the biosphere. The most prominent of these approaches include (in alphabetical order) agroecology, agroforestry, climate-smart agriculture, conservation agriculture, organic agriculture, permaculture, regenerative agriculture, and sustainable intensification. Almost all of these approaches to food and farming systems reject pesticides and artificial fertilizers and build on the efficient use of locally available resources.
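The success criterion just stated (carbon gains exceeding carbon losses) is a simple balance. A minimal sketch with hypothetical per-practice rates; none of the figures below come from the sources discussed in this article.

```python
# Carbon farming 'succeeds' when management-induced gains exceed losses
# (all figures hypothetical, in tons of carbon per hectare per year).
gains = {"cover crops": 0.25, "compost amendment": 0.20, "reduced tillage": 0.15}
losses = {"erosion": 0.10, "mineralization": 0.30}

net = sum(gains.values()) - sum(losses.values())
print(f"net sequestration: {net:+.2f} t C/ha/yr ->",
      "net sink" if net > 0 else "net source")
```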


Carbon farming differs from these approaches by focusing on farming practices that promote the sequestration of atmospheric carbon dioxide in soils, with carbon trading (of sequestered CO2) as a potential collateral economic benefit for farmers. The attractive features and cobenefits of carbon farming are numerous and include:

Environmental: (i) improved air quality, by off-setting anthropogenic emissions of CO2 from fossil fuel combustion and deforestation; (ii) improved water quality, by reducing risks of accelerated erosion and nutrient run-off; (iii) improved soil quality, through better water and nutrient retention capacity; (iv) increased microbial activity and soil biodiversity; (v) increased adoption of sustainable pest and weed management; (vi) enhanced use efficiency of nutrient inputs in soils of managed ecosystems; (vii) restored quality of degraded soils and of their ecosystem functions and services.

Social and cultural: (i) increased social capital; (ii) empowerment of indigenous communities; (iii) enhanced knowledge sharing and education; (iv) better livelihoods and community cohesion; (v) better protection of sacred sites.

Economic: (i) increased farm productivity; (ii) diversified revenue streams for farmers and landholders; (iii) opportunities for new skills and career development; (iv) investment attracted regionally and in rural communities; (v) jobs created on the land.

Public health: (i) increased and sustained agronomic productivity, advancing food and nutritional security; (ii) reduced exposure to pesticides, occupationally and as residues on foods; (iii) reduced exposure to hazardous algal pollutants through nonuse of chemical fertilizers; (iv) reduced stress of farm work and improved physical and mental health of farmers and communities.

The numerous cobenefits and the concern over climate change have thrust carbon farming into the limelight. The approach has been embraced by most major players in the sustainable food movement, and in 2015, 25 countries pledged to pursue it during the COP21 talks on climate change in Paris. Currently, the biggest international effort to promote carbon farming is a French-led initiative called "4 per 1000: carbon sequestration in soils for food security and the climate" (http://4p1000.org/understand). This initiative was launched to demonstrate that agriculture, and in particular agricultural soils, can play a crucial role in ameliorating climate change and improving global food security. The program aims to increase the global soil organic carbon (SOC) stock by 0.4% (4‰) per year through a variety of agricultural and forestry practices; an annual growth rate of 0.4% in soil carbon stocks would halt the increase in the atmospheric CO2 concentration related to human activities. In order to achieve the 4 per 1000 target, the annual soil sequestration rate would need to be about 0.6 tons of carbon per hectare per year (see the sketch below). Although this rate of soil sequestration cannot be reached everywhere because of the high spatial heterogeneity of SOC stocks, several major studies suggest that rates in the range of 0.2–0.5 t C ha⁻¹ yr⁻¹ are feasible at many locations in the world (see below).
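The arithmetic behind the 0.6 t C ha⁻¹ yr⁻¹ figure is a 0.4% annual increase applied to a reference soil carbon stock. A minimal sketch; the 150 t C ha⁻¹ reference stock is an assumption chosen here to reproduce the quoted figure, and actual per-hectare stocks vary widely (see Table 1).

```python
# The '4 per 1000' arithmetic: a 0.4%/yr growth target applied to a soil
# carbon stock. A reference stock of ~150 t C/ha is assumed for illustration.
stock_t_per_ha = 150.0
target_rate = 0.004                       # 0.4% per year, i.e. 4 per mille

required_sequestration = stock_t_per_ha * target_rate
print(f"{required_sequestration:.1f} t C/ha/yr")   # 0.6 t C/ha/yr
```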

Land Use and Loss of Soil Organic Carbon

Agricultural soils occupy roughly 37% of the earth's surface and contain a large amount of carbon. It is estimated that the upper 1 m of soils contains 2000–2500 Gt of carbon, of which about 60% is organic carbon and about 40% inorganic carbon. The amount of carbon in soils is approximately three times the amount of carbon bound in aboveground biomass, and at least 230 times current annual global anthropogenic CO2 emissions. Soil organic carbon is an important component of soils and consists of a complex mixture of plant and animal residues and of microbial biomass at various stages of decomposition. Reported estimates of the SOC stock to a depth of 1 m cluster around 1500 Gt, about 2.7 times the biotic reservoir (560 Gt) and 1.7 times the atmospheric reservoir (867 Gt). In addition, about 1672 Gt of carbon is stored in frozen or permafrost soils (cryosols). Because of the relative sizes of these pools, even small changes in the soil carbon stock can strongly influence the atmospheric CO2 concentration: a decrease of the soil carbon stock by 1 Gt would increase the atmospheric CO2 concentration by about 0.47 ppm worldwide.

Globally, croplands store more than 140 Gt of carbon in the top 30 cm of soil. About 94% (132 Gt) of this carbon is stored on 15.9 million km2 (98% of global cropland) with a potential for significant carbon sequestration through improved soil management and farming practices. In general, SOC is lower in the tropics, where it is hotter and/or drier, and higher in the cooler, wetter, more northerly and, to a somewhat lesser extent, southerly latitudes (Table 1). The strong influence of temperature on the sequestration of SOC is evident from the low SOC values found across much of the equatorial belt compared with the high SOC density (400 t C ha⁻¹ or more) found in the northern croplands and farmed peat soils of the United States, Canada, Europe, and Russia. Recent estimates show that the greatest amounts of carbon (roughly 21 Gt each) are stored in the croplands of North America, Eurasia, and Europe; these regions together account for over 50% of all SOC stocks on cropland globally (Table 1). By contrast, the SOC stocks in the croplands of Central America, North Africa, and the Australian/Pacific regions are very low, amounting to 6.5 Gt of carbon together, or just under 5% of the global total. Reported stocks of SOC in Western Asia, South Asia, Southeast Asia, and East Asia range from 4.4 to 9.1 Gt; these regions together account for about 22% of the global total. Even though South America has a large expanse of farmland, its SOC stock is modest at 9.4 Gt, and only about 8.5% of the global SOC total is found in African soils. At the national level, the vast northern tracts of carbon-dense agricultural land give Russia the largest total amount of SOC stored on cropland (about 22 Gt, or 17% of the global total), followed by the United States (19 Gt), China (8.4 Gt), India (6.4 Gt), and Brazil (5.0 Gt) (see Table 1).

Typically, modern agriculture depletes the carbon in soils because agricultural land has lower net primary production (NPP) than natural systems and conventional tillage practices increase soil respiration. Conversion of natural ecosystems into agroecosystems leads to depletion of the SOC pool because of: (i) lower accumulation of biomass carbon; (ii) increased losses of SOC by erosion, mineralization and leaching; and (iii) the stronger influence of soil temperature and moisture regimes.
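The 0.47 ppm figure follows from the standard equivalence of roughly 2.13 Gt of carbon per ppm of atmospheric CO2; a minimal check of that conversion:

```python
# Converting a change in soil carbon stock to an atmospheric CO2 change,
# using the standard equivalence of ~2.13 Gt of carbon per ppm of CO2.
GT_C_PER_PPM = 2.13

def soil_loss_to_ppm(gt_carbon: float) -> float:
    """ppm CO2 increase if this much soil carbon (Gt) reaches the atmosphere."""
    return gt_carbon / GT_C_PER_PPM

print(f"{soil_loss_to_ppm(1.0):.2f} ppm")   # ~0.47 ppm, as stated above
```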

Table 1 Soil organic carbon (SOC) pool of cropland soils in different regions of the world

Region | Cropland area (Mha) | Total organic carbon (Gt) | Average organic carbon (t C ha⁻¹)
Australian/Pacific | 66.0 | 3.75 | 57
Central America | 14.0 | 1.22 | 87
Central Asia | 36.9 | 5.01 | 136
East Asia | 126.2 | 9.14 | 72
Eastern and Southern Africa | 105.5 | 5.64 | 53
Europe | 199.6 | 21.05 | 106
North Africa | 25.9 | 1.51 | 58
North America | 289.4 | 28.07 | 97
Russia | 126.1 | 21.94 | 174
South America | 123.6 | 9.42 | 76
South Asia | 175.3 | 7.68 | 44
South East Asia | 85.3 | 8.15 | 96
West and Central Africa | 130.7 | 4.83 | 37
Western Asia | 89.2 | 4.38 | 49
Global | 1593.5 | 131.81 | 81.61

Conversion of natural ecosystems into agroecosystems leads to depletion of the SOC pool because of (i) lower accumulation of biomass carbon, (ii) increased losses of SOC by erosion, mineralization, and leaching, and (iii) a stronger influence of soil temperature and moisture regimes. Depletion of SOC from croplands can also be accelerated by degradation processes such as changes in microbial abundance and composition, nutrient depletion, erosion, salinization, and decline in soil structure and aggregation. In general, agricultural soils contain 25%–75% less SOC than their counterparts in undisturbed or natural ecosystems. The conversion of forest to agricultural land use often results in more severe losses (50%–60%) of the original soil organic carbon (SOC) stock in the topsoil. It is estimated that land-use changes and soil cultivation have contributed about 136 Gt of carbon to the atmosphere from changes in biomass carbon since the beginning of the Industrial Revolution, and that the depletion of soil organic carbon has contributed another 78 Gt. The cumulative historical release of 214 Gt of carbon from the land-use sector is comparable to the estimated 270 Gt of carbon emitted from fossil fuel combustion. Average historical SOC depletion is estimated at about 20–30 t C ha⁻¹ in forest/woodland and 40–50 t C ha⁻¹ in steppe/savanna/grassland ecosystems.

Soils as a Sink for Atmospheric Carbon Dioxide

Plants release CO2 through the processes of respiration and biomass degradation, but they also capture and use carbon during photosynthesis (schematized as CO2 + H2O → CH2O + O2). A field of corn (Zea mays), for example, can capture about 400 times as much carbon as the annual increase from anthropogenic CO2 emissions in the entire column of air above the field, from the ground to the upper reaches of the atmosphere. In other words, plants fix carbon through photosynthesis, die, and start to decompose with the help of microorganisms. Depending on the conditions, the carbon that was fixed in the plant material can then end up in the labile (or recyclable) fraction or the passive fraction of the soil organic carbon pool. The passive fraction is the most important for sequestration because it is not readily decomposed and thus is not easily released as CO2. Charcoal fits into the passive category. Humus, which is made of humic and fulvic acids, is relatively passive, as it decomposes very slowly. The structure of humus is not specific; instead, it is described as a loose assembly of aromatic polymers made up of approximately 60% carbon and 6% nitrogen. Worldwide, the process of carbon fixation not only balances respiration but also results in a net uptake of CO2 from the atmosphere equivalent to approximately 60 Gt of carbon per year. The weathering of silicate and carbonate rocks is another pathway by which soils act as a carbon sink, and it is one of the principal processes of the long-term carbon cycle. In this process, atmospheric CO2 dissolves in rainwater, forming carbonic acid (H2CO3). This weak acid "weathers," or dissolves, silicate rocks on the continents, releasing Ca2+, Mg2+, bicarbonate (HCO3−), and dissolved silica (SiO2) into solution. Rivers and streams carry these dissolved materials into the ocean, where certain organisms use them to form calcium carbonate (CaCO3) shells. Most shells redissolve, but some sink to the sea floor and are buried in the sediments, along with any calcium carbonate that precipitates in the ocean. The carbon can remain stored for millions of years until the calcium carbonate recombines with silica and releases CO2 through volcanoes. Silicate weathering generally provides an important negative feedback on atmospheric CO2 levels: high levels of CO2 cause high global temperatures and greater rainfall on the continents, which leads to faster silicate weathering, which in turn removes atmospheric CO2. Plants are also known to enhance chemical weathering of silicate rocks through deep rooting and improved drainage. The importance of this pathway of sequestering atmospheric carbon in ameliorating global change is poorly understood at this time.
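The weathering pathway described above can be summarized by the standard textbook carbonate–silicate cycle reactions, shown here for a generic calcium silicate (these equations are a conventional simplification, not taken from this article):

\[
\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + 3\,\mathrm{H_2O} \rightarrow \mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} + \mathrm{H_4SiO_4}
\]
\[
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} \rightarrow \mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O}
\]
\[
\text{Net:}\quad \mathrm{CaSiO_3} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
\]

On geological timescales the net reaction buries one mole of CO2 as carbonate for each mole of silicate weathered, which is the negative feedback referred to above.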


Unlike the oceanic sink, the soil sink can absorb atmospheric carbon relatively quickly if managed correctly, implying that this particular pathway is suitable for ameliorating atmospheric CO2 levels. Carbon that has been incorporated into soil can vary in form (organic and inorganic) and amount. Inorganic forms include elemental carbon and carbonate minerals. Primary carbonates are derived from weathering of parent material; secondary carbonates are formed when CO2 dissolves to form carbonic acid and reprecipitates with Ca2+ or Mg2+. Carbon sequestered in the soil as inorganic carbon, through the formation of secondary carbonates or when bicarbonates leach into the groundwater, is not immediately reemitted as CO2. Carbon can also exist in organic form as humus, charcoal, or dissolved organic carbon (DOC). Total soil organic carbon can range from less than 1% in sandy soils to greater than 20% in soils of wetlands or bogs. The composition of a soil influences its ability to protect soil organic carbon and thereby store carbon. Organic carbon preservation depends on the sorptive properties of certain soil surfaces. Metal oxides are particularly effective in adsorbing and stabilizing organic carbon in soils. Clay minerals also help preserve organic carbon, but are not as effective as oxides. Interactions between minerals and oxides may further protect soil carbon from mineralization by forming complexes or aggregates. Unfortunately, carbon that has been sequestered in soil may not act as a permanent sink. In fact, the residence time of carbon sequestered in soil ranges from a few weeks to millennia, depending on the nature of the carbon-containing substances, the stability of the secondary carbonates formed, and the depth of leaching. In undisturbed ecosystems, the capacity of soils to store carbon is limited by an apparent equilibrium between the release of CO2 by decomposition of soil organic matter (SOM) and the formation of new SOC from plant residue inputs. The three major SOM stabilization mechanisms are (i) selective preservation of refractory SOM, (ii) occlusion in soil aggregates, and (iii) interaction with mineral surfaces. The last is regarded as quantitatively the most important in a wide range of soils, as indicated by a strong correlation of SOC stocks with clay contents. Since it is not easy to increase the formation of SOC beyond the natural capacity of agroecosystems, carbon farming is aimed at restoring the SOC depleted from soils over the last centuries. This also means that carbon sequestration through carbon farming should reach the limit of SOC storage after a certain period of time. Meta-analyses of field studies suggest that cropped soils will be able to sequester carbon for at least 20 years before reaching saturation, although in some instances significant sequestration can continue for 30 or even up to 40 years before a new equilibrium is reached. Several studies show that in eroded soil, a large proportion of the lost SOC can be replaced by resorption of organic carbon (OC) on newly exposed mineral surfaces. Thus, recarbonization of the soil (and the terrestrial biosphere) is often regarded as an important strategy for climate change adaptation and mitigation. A number of model scenarios have been reported showing how much carbon can be sequestered through improvements in agricultural practices and management (Table 2).

Table 2  Carbon sequestration potential of cropland soils in different biomes and regions of the world

By region:

Region                        Cropland area (Mha)   SOC stock to 30 cm (Gt)   Potential sequestration rate (t C ha⁻¹ y⁻¹)   Total sequestration potential (Mt C y⁻¹)
Australian/Pacific            66.0                  3.75                      0.58                                          38.3
Central America               14.0                  1.22                      0.55                                          7.7
Central Asia                  36.9                  5.01                      0.54                                          19.9
East Asia                     126.2                 9.14                      0.56                                          70.7
Eastern and Southern Africa   105.5                 5.64                      0.55                                          58.0
Europe                        199.6                 21.05                     0.57                                          113.8
North Africa                  25.9                  1.51                      0.64                                          16.6
North America                 289.4                 28.07                     0.61                                          176.5
Russia                        126.1                 21.94                     0.51                                          64.3
South America                 123.6                 9.42                      0.54                                          66.7
South Asia                    175.3                 7.68                      0.64                                          112.2
South East Asia               85.3                  8.15                      0.55                                          46.9
West and Central Africa       130.7                 4.83                      0.58                                          75.8
Western Asia                  89.2                  4.38                      0.61                                          54.4
Global                        1593.5                131.81                    0.58                                          924.2

By biome:

Biome                         Covered area (Mha)    Potential sequestration rate (t C ha⁻¹ y⁻¹)   Total sequestration potential (Mt C y⁻¹)
Arable (unirrigated)          1200                  0.35                                          420
Pastures                      3430                  0.15                                          514
Permanent crops               170                   0.65                                          110
Urban (lawns, forests)        400                   0.75                                          300
Degraded land areas           1970                  0.45                                          886
Forest areas                  2000                  1.3                                           2600
Forest plantations            50                    0.40                                          20


One analysis suggests that 0.9–1.85 Gt of carbon can be sequestered annually in the top 30 cm layer of available croplands, equivalent to a global average of 0.56–1.15 t C ha⁻¹ y⁻¹. Because of serious degradational losses of SOC, croplands across South Asia have a high sequestration potential (0.62–1.28 t C ha⁻¹ y⁻¹) on nearly 3 million km2 of land, which can store up to 2.2–4.5 Gt of carbon in total (Table 2). Africa likewise has a large potential for carbon sequestration (0.55–1.28 t C ha⁻¹ y⁻¹), with the potential to store 0.15–0.31 Gt of carbon per year. North America has the highest potential for total carbon storage (0.17–0.35 Gt y⁻¹), with South Asia and Europe the second highest among the regions (each with a storage potential of 0.11–0.23 Gt y⁻¹). One recent study reported that the sequestration potentials of tropical soils of the Indo-Gangetic Plains (IGP) and Sub-Saharan Africa (SSA) are 0.16–0.49 t C ha⁻¹ y⁻¹ and 0.26–0.96 t C ha⁻¹ y⁻¹, respectively (Table 2); these soils hence represent a potential sponge for atmospheric CO2. Because of prior depletion of SOC, croplands generally have a high potential for carbon sequestration in soils. The global rate has been estimated at 0.25–1.0 t C ha⁻¹ y⁻¹, implying that croplands can take up and store 0.5–1.2 Gt C y⁻¹ (Table 2). Grasslands (including rangelands, shrublands, savannas, and croplands grown in association with pasture and fodder crops), which cover about 3500 million hectares (Mha), or 26% of the global ice-free land area, have a lower potential for carbon sequestration (0.3–0.7 t C ha⁻¹ y⁻¹) than croplands (Table 2). Deforestation often results in a decline in the SOC of surface soils; thus, afforestation of soils with depleted SOC can be a good strategy, with the potential to sequester up to 4.5 Gt of CO2 from the atmosphere per year. These studies suggest that the annual soil sequestration rate of 0.6 t C ha⁻¹ y⁻¹ required to achieve the "4 per 1000" target may be attained through carbon farming in many regions of the world.
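As a cross-check on the magnitudes quoted above, the last column of Table 2 is simply cropland area times potential sequestration rate (1 Mha × 1 t C ha⁻¹ y⁻¹ = 1 Mt C y⁻¹). A minimal sketch in Python (region values taken from Table 2; the reference SOC stock of 150 t C ha⁻¹ used for the "4 per 1000" check is an illustrative assumption, not a figure from this article):

# Cross-check of Table 2: area (Mha) x rate (t C/ha/yr) = potential (Mt C/yr)
regions = {
    "Australian/Pacific": (66.0, 0.58),
    "Europe":             (199.6, 0.57),
    "North America":      (289.4, 0.61),
    "Global":             (1593.5, 0.58),
}
for name, (area_mha, rate_t_per_ha_yr) in regions.items():
    potential_mt = area_mha * rate_t_per_ha_yr   # 1 Mha x 1 t/ha = 1 Mt
    print(f"{name}: {potential_mt:.1f} Mt C/yr")
# Global comes to ~924 Mt C/yr, i.e. ~0.9 Gt C/yr -- the lower bound quoted above.

# '4 per 1000' target: sequester 0.4% of the existing SOC stock each year.
reference_stock_t_per_ha = 150           # illustrative assumption (t C/ha)
print(reference_stock_t_per_ha * 0.004)  # -> 0.6 t C/ha/yr, the rate quoted above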

Agricultural Practices That Can Enhance Carbon Sequestration in Soil

Since agricultural productivity depends so much on soil organic carbon and carbon cycling, how can fields best be managed to enhance soil organic carbon levels while also reducing carbon loss to the atmosphere within the framework of carbon farming? Farming practices that can increase soil organic carbon and reduce carbon loss to the atmosphere are shown in Table 3.

Table 3  Carbon farming practices that can increase soil organic carbon and reduce carbon loss into the atmosphere (management practice: functions and explanation)

Conventional tillage practices: Replaced by conservation tillage, no-till, and/or mulch farming. Conservation and no-till management aid in storing soil organic carbon, keeping the physical stability of the soil intact. When these practices are combined with residue management and manure management, soil organic carbon can increase over time.

Crop residue management: Leftover biomass is returned to the soil as mulch after harvest instead of being removed or burned. Returning crop residue to the soil adds carbon and helps to maintain soil organic matter.

Cover crops: Grow crops during the off-season instead of leaving croplands bare. Cover crops can increase soil carbon pools by adding both root and aboveground biomass. Covers also reduce the risk of soil erosion and the resulting loss of carbon with soil particles. Cover crops also enhance nutrient cycling and increase soil health over time.

Continuous monoculture and intensive cropping: Replaced by diverse crop rotations, intercrop plantings, and integrated farming practices such as permaculture (centered on simulating or directly utilizing the patterns and features observed in natural ecosystems), silvopasture (combining trees, forage plants, and livestock as an integrated, intensively managed system), or agroforestry.

Manure and compost: Use compost tea, animal manures, and thermal compost. Adding organic amendments such as manure or compost can directly increase soil carbon and also increase soil aggregate stability. This enhances the biological buffering capacity of the soil, resulting in greater yields and yield stability over time.

Livestock management: Replaced by animal integration, holistically managed grazing, and grass-fed livestock.

Intensive use of chemical fertilizers: Replaced by integrated nutrient management and precision farming.

Biochar addition to soil: Use biochar + compost mixtures to improve soil fertility and plant growth. Biochar application to agricultural soils is now considered a soil-based greenhouse gas mitigation strategy for sustainable environmental management.

Crop selection: Perennial crops eliminate the need for yearly planting and increase soil organic carbon through root and litter decomposition postharvest. Crops with greater root mass in general add to root decomposition and physically bond aggregates together. Using high-residue annual crops can also help reduce net carbon loss from cropping systems.

Surface flood irrigation: Replaced by drip, furrow, or subirrigation.

Use of pesticides: Replaced by integrated pest management techniques.

Marginal and degraded soils: Restored to their natural states instead of being used as cropland.

Border planting: Designed to provide habitat for bees and other beneficial insects that can effect pollination.


Significant proportions of global carbon emissions are inevitably linked to intensive agriculture through practices such as the conversion of natural ecosystems to farmland, deforestation, biomass burning, indiscriminate use of fertilizers and manures, and excessive tillage. However, restoring degraded soils and adopting appropriate agricultural practices could make soils a net sink for atmospheric carbon. Improved agronomic practices (such as using improved crop varieties; extending crop rotations, especially those with perennial crops that allocate more carbon below ground; and avoiding or reducing the use of bare/unplanted fallow) increase yields and therefore generate higher inputs of carbon residue into the soil. Also, adding nutrients, where deficient, can promote carbon gains. However, the benefits of nitrogen fertilizer can be offset by higher N2O emissions (another greenhouse gas) from soils and by CO2 from fertilizer manufacture. Planting temporary vegetative cover between successive agricultural crops, or between rows of tree or vine crops, can also add carbon to soils and may extract plant-available nitrogen unused by the preceding crop, thereby reducing N2O emissions. Water management practices can also improve the soil's ability to sequester carbon (Table 3). Increasing the use of irrigation, or using more efficient irrigation measures, can enhance yield and the resulting residue return; however, the energy needed to deliver the water may offset the carbon sequestered in the soil. Agroforestry, or growing trees for wood along with regular food crops or livestock, can increase carbon stores above ground, and planting trees may also increase soil carbon sequestration. Land cover change, such as converting cropland to permanently vegetated land integrated with animal husbandry, can increase carbon storage. Peatland ecosystems form organic soils, which are distinguished from mineral soils by their high carbon and nitrogen (N) contents. Peat deposits can have an organic matter content of over 90% and be several meters thick. Although peatlands constitute only ~3% of the terrestrial surface, they may store ~644 Gt of carbon, or 21% of the global total soil organic carbon stock. At present, human activity is draining or mining about 10% of global peatlands, in the process transforming them from long-term carbon sinks into sources through three main pathways: release of CO2 from microbial peat oxidation, leaching of dissolved organic matter, and emission of CO2, CO, and CH4 from peat fires and combustion of mined peat. It is estimated that drained peatlands cumulatively release about 81 Gt of carbon and 2.3 Gt of nitrogen, corresponding to an annual greenhouse gas emission of 1.91 (0.31–3.38) Gt CO2-equivalent that could be saved through peatland restoration. While soil carbon sequestration on all agricultural land has a comparable mitigation potential, additional nitrogen is required to build up a similar carbon pool in the organic matter of mineral soils, equivalent to 30%–80% of the global fertilizer nitrogen application annually. Consequently, peatland protection and restoration through rewetting and paludiculture are attractive options for reducing greenhouse gas emissions, restoring vegetation communities, and recovering biodiversity. Another attractive feature of restoring peatlands is that it is 3.4 times less costly in terms of nitrogen demand, and involves a much smaller land area, than mineral soil carbon sequestration. Integrating significant peatland preservation and restoration measures into land-use practices would be an important step toward mitigating global climate change. Carbon farming is also generating added interest in the use of biochar in agriculture. Biochar is the solid material obtained from carbonization of biomass.
When applied to soils, it can enhance soil carbon sequestration and provide other soil productivity benefits, such as reduced bulk density, improved water-holding capacity and nutrient retention, stabilization of soil organic matter, mediation of microbial activities, and heavy-metal sequestration. In addition, the application of biochar can enhance phosphorus availability in highly weathered tropical soils. Another attractive feature is that small-scale farmers can convert locally available feedstocks and farm wastes to biochar at reasonable cost. While these benefits and opportunities look attractive, several problems and bottlenecks remain to be addressed before production and use of biochar become widespread. In particular, the properties of biochar vary with both the feedstock from which it is produced and the method of production, and the availability of feedstock, as well as the economic merits, energy needs, and environmental risks of its large-scale production and use, remain to be investigated. Lastly, reducing or eliminating tillage can result in soil carbon gain (Table 3). In a study of progress toward sustainable agroecosystems, 286 projects in 57 countries were analyzed. A variety of systems were employed, but the farms using zero-tillage and conservation agriculture (reducing or eliminating tillage and increasing the coverage of the soil surface) sequestered the most carbon per hectare per year. With the use of herbicides, tilling has become unnecessary for many farms. No-till (NT) practices can increase soil organic carbon because they minimize soil disturbance, thereby avoiding the disruption of protective soil aggregates, and because they promote the retention of crop residues, which act as precursors of soil organic matter. The increase in carbon sequestration following NT practices is most pronounced at 0–10 cm depth. A meta-analysis showed that soils under NT gain carbon at rates of approximately 0.2–0.5 t C ha⁻¹ y⁻¹ for approximately 20 years after the practice is started; however, rates can vary widely depending on climate. The effect on climate change can also vary depending on how emissions of N2O are affected by NT practices. The global hotspots of SOC sequestration are generally believed to be areas with eroded, degraded, desertified, and depleted soils. Specific ecosystems where SOC sequestration is feasible (soils under-saturated with respect to SOC) include agricultural lands, urban lands, and eroded/degraded lands. The rate of SOC sequestration (t C ha⁻¹ y⁻¹) has been reported to be 0.25–1.0 in croplands, 0.10–0.175 in pastures, 0.5–1.0 in permanent crops and urban lands, 0.3–0.7 in salt-affected and chemically degraded soils, 0.2–0.5 in physically degraded soils prone to water erosion, and 0.05–0.2 in soils susceptible to wind erosion. The global potential for SOC sequestration is estimated to be in the range of 1.45–3.44 Gt C y⁻¹.
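The global range of 1.45–3.44 Gt C y⁻¹ can be roughly bracketed by combining the per-category rates just listed with land areas from the biome rows of Table 2. The mapping of rate categories to those areas is an illustrative assumption, so this sketch gives an order of magnitude rather than a reproduction of the published estimate:

# Rough global SOC sequestration potential: rate ranges (t C/ha/yr) x areas (Mha)
# Area figures follow the biome rows of Table 2; the category mapping is assumed.
categories = {
    "croplands":               (1200, 0.25, 1.0),
    "pastures":                (3430, 0.10, 0.175),
    "permanent crops + urban": (570,  0.5,  1.0),
    "degraded lands":          (1970, 0.05, 0.7),
}
low  = sum(area * lo for area, lo, hi in categories.values())  # Mt C/yr
high = sum(area * hi for area, lo, hi in categories.values())
print(f"global potential: {low/1000:.1f}-{high/1000:.1f} Gt C/yr")
# -> roughly 1.0-3.7 Gt C/yr, the same order as the 1.45-3.44 Gt quoted above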

Limitations of Carbon Farming for Climate Change Mitigation

Although carbon farming can pull carbon out of the air and into the soil, it will require a whole new way of thinking about how to tend the land. General limitations of the carbon farming approach have been noted in several publications. First, the capacity for SOC accumulation is finite, and the SOC concentration in a given soil can only be increased to a specific amount.


Second, SOC increases are reversible, and the climate change benefit depends on the new management practice being maintained over the long term. Third, the new SOC must be maintained in bio-refractory forms almost indefinitely; a tendency to form labile organic compounds instead of the bio-refractory forms that can accumulate in soils under carbon farming practices has been noted in some studies. Fourth, the largest amounts of SOC sequestration are obtained by taking agricultural land out of production and returning it to native grassland or forest, which conflicts with the use of that land to meet food security goals. Fifth, the mean residence time of the sequestered carbon is highly variable (a few seconds to millennia) and may depend on land use; soil, crop, water, and livestock management; the priming effect; the sensitivity of the SOC stock to changes in temperature and precipitation; and protective processes that can be physical, chemical, microbial, biochemical, and ecological. Sixth, compost and similar amendments require energy to produce, and it is still unclear whether such material can cause nitrogen pollution when applied to the land, or how much greenhouse gas the composting process itself generates. Seventh, reliable monitoring of the relatively small changes in SOC in agricultural soils that follow a management change is challenging because of soil variability and slow rates of change. Finally, there are more than 570 million farms in the world and more than 3 billion people living in rural areas who could benefit from carbon farming and need to be educated: a daunting task. Even with the right science and technology, the real barriers to changing our agricultural system will remain economic, social, and political. On a more positive note, carbon farming can, in principle, start immediately, and it does not depend on the development of new technologies. As noted previously, even small increases in SOC can have positive effects on a range of soil physical properties and ecosystem services, and can hence potentially contribute to climate change adaptation.

Conclusions

Changing agricultural practices can create a pedospheric sink that can moderate the increasing trend in atmospheric CO2 concentrations while technological and lifestyle measures to reduce emissions are being developed. The most prominent options include improved crop and grazing land management (e.g., improved agronomic practices, nutrient use, tillage, and residue management), restoration of organic soils that are drained for crop production, and restoration of degraded land. Other options include improved water and rice management, set-asides, land-use change (e.g., conversion of cropland to grassland), agroforestry, and improved livestock and manure management. The effects of carbon farming on other greenhouse gases, soil functions, and overall food production need to be carefully assessed. Practices such as NT farming must be maintained so that carbon that has been sequestered is not re-released into the atmosphere. Also, carbon farming practices must be tailored to the soils and weather of different locations. While numerous constraints to achieving the potential of carbon farming exist, carbon farming can start immediately, as it does not depend on the development of new technologies. In addition to offsetting anthropogenic emissions of greenhouse gases, carbon farming has numerous co-benefits, including advancing food quality and security, improving the environment, enhancing water quality and renewability, and increasing biodiversity and ecological resilience. It is also important that farmers and landowners are compensated through payments for ecosystem services; undervaluing SOC can lead to a tragedy of the commons. All things considered, carbon farming deserves much more attention than it currently receives.

See also: Human Health and the State of the Pedosphere; Land Quality: Environmental and Human Health Effects; Overview of How Ecosystem Changes Can Affect Human Health; Environmental Health, Planetary Boundaries and Four Futures.

Further Reading

Abbas, F., Hammad, H.M., Fahad, S., et al., 2017. Agroforestry: A sustainable environmental practice for carbon sequestration under the climate change scenarios – A review. Environmental Science and Pollution Research 24, 11177–11191.
Berner, R.A., 1998. The carbon cycle and CO2 over Phanerozoic time: The role of land plants. Philosophical Transactions of the Royal Society 353, 75–82.
Chen, S., Martin, M.P., Saby, N.P.A., et al., 2018. Fine resolution map of top- and subsoil carbon sequestration potential in France. Science of the Total Environment 630, 389–400.
Graff-Zivin, J., Lipper, L., 2008. Poverty, risk, and the supply of soil carbon sequestration. Environment and Development Economics 13, 353–373.
van Groenigen, J.W., van Kessel, C., Hungate, B.A., et al., 2017. Sequestering soil organic carbon: A nitrogen dilemma. Environmental Science & Technology 51 (9), 4738–4739.
IPCC, 2007. Agriculture. In: Metz, B., Davidson, O.R., Bosch, P.R., et al. (Eds.), Climate Change 2007: Mitigation, 4th edn. Cambridge University Press, Cambridge, pp. 498–532.
Kroodsma, D.A., Field, C.B., 2006. Carbon sequestration in California agriculture, 1980–2000. Ecological Applications 16 (5), 1975–1985.
Lal, R., 2018. Digging deeper: A holistic perspective of factors affecting soil organic carbon sequestration in agroecosystems. Global Change Biology 1–17. https://doi.org/10.1111/gcb.14054.
Lal, R., Negassa, W., Lorenz, K., 2015. Carbon sequestration in soil. Current Opinion in Environmental Sustainability 15, 79–86.
Leifeld, J., 2006. Soils as sources and sinks of greenhouse gases. Geological Society Special Publication 266, 23–44.
Leifeld, J., Menichetti, L., 2018. The underappreciated potential of peatlands in global climate change mitigation strategies. Nature Communications 9, 1071.
Manojlovic, M., Acin, V., Seremesic, S., 2008. Long-term effects of agronomic practices on the soil organic carbon sequestration in Chernozem. Archives of Agronomy and Soil Science 54, 353–367.
Monfreda, C., Ramankutty, N., Foley, J.A., 2008. Farming the planet: 2. Geographic distribution of crop areas, yields, physiological types, and net primary production in the year 2000. Global Biogeochemical Cycles 22, GB1022. https://doi.org/10.1029/2007GB002947.
Nair, V.D., Nair, P.K.R., Dari, B., et al., 2017. Biochar in the agroecosystem: Climate-change–sustainability nexus. Frontiers in Plant Science 8, 2051.


Nave, L.E., Domke, G.M., Hofmeister, K.L., et al., 2018. Reforestation can sequester two petagrams of carbon in US topsoils in a century. Proceedings of the National Academy of Sciences of the United States of America 115, 2776–2781.
Powlson, D.S., Stirling, C.M., Thierfelder, C., White, R.P., Jat, M.L., 2016. Does conservation agriculture deliver climate change mitigation through soil carbon sequestration in tropical agro-ecosystems? Agriculture, Ecosystems and Environment 220, 164–174.
Rimhanen, K., Ketoja, E., Yli-Halla, M., Kahiluoto, H., 2016. Ethiopian agriculture has greater potential for carbon sequestration than previously estimated. Global Change Biology 22, 3739–3749.
Smith, P., Ineson, P., 2007. The soil carbon dioxide sink. In: Reay, D., Hewitt, C.N., Smith, K., Grace, J. (Eds.), Greenhouse Gas Sinks, 1st edn. CAB International, Oxfordshire, pp. 50–57.
Sommer, R., Bossio, D., 2014. Dynamics and climate change mitigation potential of soil organic carbon sequestration. Journal of Environmental Management 144, 83–87.
Wiesmeier, M., Munro, S., Barthold, F., et al., 2015. Carbon storage capacity of semi-arid grassland soils and sequestration potentials in northern China. Global Change Biology 21, 3836–3845.
Zomer, R.J., Bossio, D.A., Sommer, R., Verchot, L.V., 2017. Global sequestration potential of increased organic carbon in cropland soils. Scientific Reports 7, 15554.

Relevant Websites

https://www.marincarbonproject.org/carbon-farming – Marin Carbon Project.
https://www.carbonfarmersofaustralia.com.au/carbon-farming/ – Carbon Farmers of Australia.
http://carbonmarketinstitute.org/wp-content/uploads/2017/11/Carbon-Farming-Industry-Roadmap.pdf – Carbon Market Institute: Carbon Farming Industry Roadmap.
https://www.leonardodicaprio.org/scaling-carbon-farming-and-regenerative-agriculture-in-california/ – Scaling Carbon Farming and Regenerative Agriculture in California.
https://www.4p1000.org/ – The "4 per 1000" Initiative.

Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies
Cristina M Villanueva, ISGlobal - Barcelona Institute for Global Health, Barcelona, Spain; Pompeu Fabra University, Barcelona, Spain; CIBER Epidemiology and Public Health, Madrid, Spain; and IMIM (Hospital del Mar Medical Research Institute), Barcelona, Spain
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

CI Confidence interval (95% by default)
CYP Cytochrome P450
DBP Disinfection by-product
EPA US Environmental Protection Agency
GST Glutathione S-transferase
IARC International Agency for Research on Cancer
NAT2 N-acetyltransferase 2
OR Odds ratio
PAH Polycyclic aromatic hydrocarbon
THM Trihalomethane
WHO World Health Organization

Introduction

Chlorination is a widely used and highly cost-effective technique for the disinfection of drinking water and has conferred important public health benefits. Since the first identification, in 1974, of toxic by-products formed by the reaction of chlorine with organic matter, a number of epidemiological studies have evaluated the cancer risk associated with this exposure. The initial epidemiological studies were ecological in design and suggested the bladder as one of the cancer sites associated with chlorinated water intake. Other cancers, such as colorectal cancer, were also identified as high-risk cancers in some of these studies. In "ecological" studies, comparisons are made between populations rather than between individuals. For example, mortality in communities supplied with ground water containing low levels of disinfection by-products (DBPs) was compared with mortality in urban populations supplied with river water containing high levels of DBPs. Subsequently, case-control studies based on death certificates strengthened these findings. In these mortality studies, the residence of a person who died from a specific cause, say bladder cancer, was retrieved from the death certificate, and exposure to DBPs was attributed on the basis of existing information on DBP levels in that residence or area. This information was compared with that for residences of subjects who died from causes not thought to be caused by DBP exposure. The International Agency for Research on Cancer/World Health Organization (IARC/WHO) evaluated chlorinated drinking water as a potential human carcinogen in 1991. At that time, most of the available studies were ecological or death certificate based. These studies had limited information on exposure to DBPs (usually crude estimates at the current residence or around the time of death) and a limited ability to consider other risk factors for the disease. The latter could be important, since comparing a rural community (with low levels of DBPs) with an urban community (with high levels of DBPs) is at the same time a comparison of many other potential risk factors, such as air pollution, lifestyle, nutrition, and occupation. The attribution of differences in disease between communities to varying levels of DBPs was therefore not straightforward. These methodological limitations led the IARC to conclude that the evidence for the carcinogenicity of chlorinated drinking water in humans was inadequate (group 3 in the IARC/WHO classification), although most of these studies had found a positive association between chlorinated drinking water and mortality from bladder and some other cancers. After this evaluation by IARC in 1991, several studies with improved exposure assessment at the individual level were published. Among them, the studies on bladder cancer reported the most consistent positive associations with exposure to chlorination by-products. Subsequently, the IARC has evaluated selected DBPs rather than water chlorination itself. These compound-specific evaluations have not incorporated the epidemiological evidence, since all epidemiological studies examine exposure to a mixture (chlorinated water) rather than to specific compounds of that mixture. Human exposure to chlorination by-products in water may occur through ingestion (drinking the water), inhalation, and dermal absorption. The latter routes of exposure occur during water-related activities such as


Change History: April 2018. Cristina Villanueva updated all sections of the text and Table 3. This is an update of M. Kogevinas and C.M. Villanueva, Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies, in Encyclopedia of Environmental Health, edited by J.O. Nriagu, Elsevier, 2011, pp. 505–515.



taking showers, cleaning dishes, or swimming in pools. For several of the volatile DBPs, such as the trihalomethanes (THMs), inhalation and dermal absorption contribute more to total uptake than ingestion. The most recent epidemiological studies on bladder and colorectal cancer have examined these routes of exposure. A bladder cancer study conducted in Spain, where THM levels were high in the past, identified increased risks for exposure through showering, bathing, and swimming in pools that were higher than the risks identified for ingestion. However, little difference between exposure routes was observed for colorectal cancer. This article presents the evidence for the human carcinogenicity of DBPs in drinking water, focusing on bladder and colorectal cancers, the two cancers for which the evidence is strongest. Evidence for other cancers is then presented, and the article concludes with some general remarks.

Epidemiological Studies on Bladder Cancer

Bladder cancer is among the most common cancers and is more frequent in men than in women. Transitional cell carcinoma is the dominant histological type in industrialized countries, whereas squamous cell carcinoma is a common histological form in developing countries such as Egypt, where there is a high prevalence of infection by the water parasite Schistosoma haematobium. Tobacco is by far the main cause of bladder cancer, responsible for about one-third to two-thirds of all bladder cancers in different parts of the world. If the association between exposure to DBPs in drinking water and bladder cancer is proven to be causal, then DBPs should probably be considered the second most important environmental cause of this cancer, at the exposure levels occurring before the regulation of drinking water. Occupation was evaluated as the second most common cause in the past; in recent decades, however, exposures to occupational carcinogens in industrialized countries have been effectively controlled, and it is therefore unlikely that they now cause a significant proportion of bladder cancer cases. Bladder cancer is one of the first cancers for which interactions between environmental exposures and genetic polymorphisms were demonstrated. Genetic polymorphisms refer to genetic variations that are fairly common in the general population (present in more than 1% of the population). Genetic variation may be associated with different disease risks, possibly reflecting the differential capacity of individuals to handle environmental exposure to toxic substances. Consistent associations with bladder cancer risk have been found for two genes that are important for the metabolism of several toxic substances, such as polycyclic aromatic hydrocarbons (PAHs) and aromatic amines: N-acetyltransferase 2 (NAT2) and glutathione S-transferase M1 (GSTM1). Subjects with a specific form of the NAT2 gene have a lower capacity to detoxify the aromatic amines that cause bladder cancer and have consistently been shown to have an approximately 40% increased risk of developing this cancer. Experimental evidence has shown that brominated THMs are activated by GSTT1 (glutathione S-transferase T1), haloacetic acids are detoxified by GSTZ1 (glutathione S-transferase Z1), and cytochrome P450 2E1 (CYP2E1) is responsible for the primary oxidation of THMs. Evaluation of the interactions with specific genetic variants (polymorphisms) in these genes has shown that subjects with different variants show different bladder cancer risks in association with DBP exposure. The consistency of these findings with experimental observations of GSTT1, GSTZ1, and CYP2E1 activity strengthens the hypothesis that DBPs cause bladder cancer and suggests possible mechanisms. The most informative studies of bladder cancer include four studies in the United States and one study each in Canada, Finland, France, and Spain. They all identify an association between exposure to DBPs and bladder cancer, although specific results, for example associations in men or women, or in smokers and nonsmokers, are not always consistent between studies. In addition, a meta-analysis of published studies and two pooled reanalyses of the raw data from these studies have been published, providing very strong epidemiological evidence for an association between exposure to DBPs in drinking water and bladder cancer. The US Environmental Protection Agency (EPA) conducted a formal risk assessment of the evidence using results from the meta-analysis and pooled analyses.

Bladder Cancer Studies in North America

The most recent study of bladder cancer in the United States was conducted in New England by Laura Beane Freeman and colleagues in the early 2000s. They recruited 1213 cases and 1418 population-based controls. Case-control studies enroll subjects with a disease of interest (in this case bladder cancer) and compare them with subjects without the disease; for both cases and controls, information is requested on past exposures that could be related to the disease. In this study, a trained interviewer visited participants' homes and administered a computer-assisted personal interview that elicited information on a variety of factors, including global positioning system (GPS) coordinates; showering, bathing, and pool-swimming habits; and a lifetime residential history that was used to reconstruct lifetime water-source information. This was combined with measurement data from water utilities in the study area to create personal THM exposure indices. A modest association with bladder cancer was observed for the highest category of THM levels (>45.7 μg/L, to which only 5% of the study population was exposed) versus the lowest (<6.8 μg/L). No association was found for exposure through showering, bathing, or swimming in pools. The largest study in the United States was conducted by Ken Cantor and colleagues in the late 1980s and included 2805 patients with urinary bladder cancer and 5258 population controls from 10 geographical areas. In this study, apart from providing information on exposures such as smoking, occupation, and coffee, subjects reported their consumption of tap water during a typical week. Information on lifetime residential history, with water sources, was also collected. A total of 1102 water utilities were visited, and utility personnel were interviewed.


This allowed the evaluation of the subjects' residences in relation to water sources and chlorination status (chlorinated/not chlorinated) during various time periods. Statistically significant trends (P = .02) with duration of residence in locations supplied with a chlorinated surface drinking water source were found for women whose tap water consumption was above the median. Similar associations were found for some other subgroups defined by smoking and amount of tap water consumption. Another case-control study of bladder cancer in the United States was conducted in the late 1980s by Cantor and colleagues in Iowa (reported 10 years later). Patients and controls provided information on sociodemographic characteristics, smoking history, and other potential risk factors for bladder cancer, as well as on the frequency of consumption of beverages containing tap water and other beverages inside and outside the home. Lifetime residential histories were recorded, and the water source at each place was identified. All 280 Iowa water utilities that served at least 1000 persons were contacted for historical information, and at each utility an interviewer collected one or two samples from the clear well where the water enters the distribution system. The risk increased significantly with increasing total lifetime dose of THMs and lifetime average total THM concentration in men, but not in women. In this type of study, the risk of disease among the exposed compared to the nonexposed is estimated by the odds ratio (OR), and the degree of certainty of this estimate is expressed through the 95% confidence interval (CI). ORs provide an approximate estimate of the risk of having the disease among the exposed compared to the nonexposed, who are assigned an OR of 1.0. The OR for men in the highest total lifetime THM category (over 2.4 g) was 1.8 (95% CI = 1.2–2.7). A smaller case-control study, including 327 cases of urinary bladder cancer and 261 controls, was conducted in Colorado (United States) by McGeehin and colleagues in the early 1990s. Subjects were contacted by telephone interview to obtain lifetime residential and water source histories. As in other studies, these data were linked to information from water utilities. Long-term exposure to chlorinated water (more than 34 years) was associated with a two- to threefold increased risk compared to subjects without such exposure. The total lifetime exposure to total THMs was calculated for each subject; the mean lifetime concentration was 620 μg L⁻¹ for cases and 420 μg L⁻¹ for controls (P < .001). A population-based case-control study of bladder cancer was conducted by King and Marrett in the early 1990s among residents of Ontario, Canada. The study included 696 bladder cancer patients and 1545 nondiseased controls with long-term information on DBP exposure. The study estimated exposure to DBPs back to 1950. THM levels were estimated by modeling data from 1988 to 1992 for 114 water treatment plants, using several predictors of THM formation, such as characteristics of the raw water and pretreatment and posttreatment procedures. Subjects provided information on several sociodemographic and potential risk factors for bladder cancer, including lifetime residence, water source history, and usual water consumption before diagnosis. Water exposures were estimated by linking residential histories with the relevant treatment plant data by time and geographic area. The risk of disease increased with increasing duration of use of a chlorinated surface source.
ORs were 1.0 for the referent group (defined as 0–9 years of exposure to chlorinated surface water), 1.04 (CI = 0.71–1.53) for 10–19 years of exposure, 1.15 (CI = 0.86–1.51) for 20–35 years, and 1.41 (CI = 1.09–1.81) for more than 35 years. Similar estimates were found for THM exposure. For example, having been exposed to more than 75 μg L⁻¹ of THMs for more than 35 years increased the risk by approximately 70% (OR = 1.68; CI = 1.06–2.67) compared to subjects who had been exposed for 0–9 years.
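The odds ratio and confidence interval calculations referred to throughout this section can be illustrated with a small worked example; the 2x2 counts below are hypothetical and are not taken from any of the studies discussed:

import math

# Hypothetical case-control 2x2 table (not data from any study cited here)
a, b = 120, 80    # cases:    exposed, unexposed
c, d = 100, 110   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)                  # cross-product ratio = 1.65
se_log_or  = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's method for SE of ln(OR)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")
# -> OR = 1.65, 95% CI = 1.12-2.44; a CI excluding 1.0 indicates statistical significance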

Bladder Cancer Studies in Europe

The study in Finland by Koivusalo and colleagues was conducted in the early 1990s and included 732 bladder cancer cases and 914 controls. As in other studies, information was requested on several factors potentially associated with bladder cancer, such as tobacco use, socioeconomic status, intake of coffee and other beverages, and water source history. Historical exposure estimates used information on past residence, past water source, and historical data on water quality and treatment. In contrast to other studies, exposure was expressed as an estimate of the mutagenic potency of the drinking water based on the Ames test. Exposure estimates covered the period 1950–87, and only persons with at least 30 years of exposure information were included in the analysis. Overall, the bladder cancer OR was 1.22 (CI = 0.92–1.62) per increase of 3000 net revertants per liter. The French study has been the only one to evaluate both chlorination and ozonation, because drinking water has been ozonated in France for decades. This case-control study of bladder cancer was conducted by Cordier and colleagues in the mid-1980s in seven French hospitals. It included 281 bladder cancer patients and 272 controls for whom extensive information on residential exposure to drinking water contaminants was available over a 30-year period. Various other risk factors for bladder cancer were examined and taken into account in the analysis. The risk of bladder cancer decreased as the duration of exposure to ozonated water increased (OR = 0.60, 95% CI = 0.3–1.3 for 1–9 years of ozonation; OR = 0.31, 95% CI = 0.1–0.7 for 10 years or more of ozonation). An OR of 0.31 corresponds approximately to a 70% decrease in the risk of disease among subjects with long-term use of ozonated water compared to those who did not use ozonated water (Table 1). Correspondingly, the risk of bladder cancer increased with the duration of exposure to chlorinated surface water and with the estimated THM content of the water. Subjects exposed to an average of more than 50 μg L⁻¹ of THMs had an approximately threefold increased risk compared to those exposed to less than 1 μg L⁻¹. The Spanish study by Villanueva and colleagues was the first cancer study to evaluate lifetime exposure to THMs through ingestion, inhalation, and dermal absorption. It was a hospital-based case-control study conducted between 1998 and 2001 in five areas of Spain that enrolled 1219 patients with bladder cancer and 1271 controls. Study subjects provided individual information on water-related habits, including residential and occupational history, drinking water source at each residence and job, amount of water consumed, and frequency and duration of showering, bathing, and swimming pool attendance. THM levels, water source history, and the year when chlorination started in the study areas were ascertained through measurements in drinking water


Table 1  Association of bladder cancer with duration of exposure to ozonated water and cumulative exposure to THMs in the French study

Duration of exposure to ozonated water (years)   Cases   Controls   OR (95% CI)
0                                                178     151        1.0
1–9                                              53      44         0.52 (0.2–1.1)
10–30                                            38      57         0.25 (0.1–0.5)

Cumulative exposure to THMs (μg L⁻¹ years)       Cases
0                                                49
1–150                                            104
151–1500                                         63
>1500                                            53
[Table residue: Duration of shower and bath × average residential THM level (min/day × μg L⁻¹): 0–10, OR = 0.9 (0.6–1.3); >10–35, OR = 1.2 (0.8–1.7); >35, OR = 1.4 (0.9–2.0); p trend = 0.09.]

In the noise effects reaction scheme, the chain of effects runs: sound > annoyance/disturbance > stress > (biological) risk factors > disease > mortality. Noise affects the organism either directly, through synaptic nervous interactions, or indirectly, through the emotional and cognitive perception of sound. It should be noted that the "direct" pathway is relevant even at low sound levels, particularly during sleep, when the organism is at its nadir of arousal. Both the objective noise exposure (sound level) and the subjective noise exposure (annoyance) may be predictors of the relationship between noise and health endpoints.

Epidemiology

Occupational Noise

Epidemiological studies of the effects of occupational noise mostly refer to noise exposures above the hearing-damage criterion of 85 dB(A) for the 8-h average noise level. In older studies from countries where hearing conservation programs were not established, higher risks of high blood pressure were found in exposed subjects; however, confounding factors were often not adequately considered, and more sophisticated studies showed weaker associations in this context. An older review of 20 occupational noise studies that met certain methodological standards concluded that there is sufficient evidence of a causal relationship between occupational noise at levels above 85 dB(A) and the prevalence of hypertension.


An average relative risk of 1.7 was estimated for workers exposed to high occupational noise compared with workers in less exposed areas. Increases in mean systolic and diastolic blood pressure of 3.9 and 1.6 mmHg, respectively, were measured. A study of pilots showed higher mean blood pressure readings (systolic/diastolic: 1.4/5.9 mmHg) in pilots flying turboprop planes compared with pilots flying less noisy jet planes. Possible confounding due to comparisons between white- and blue-collar workers was avoided by the study design. Repeated hemodynamic measurements carried out under acute exposure conditions suggested that blood pressure increases were primarily associated with the mean noise exposure, and heart rate increases with the average peak noise exposure. Occupational noise had both transient and sustained effects on workers' systolic blood pressure, indicating that no habituation takes place. In a meta-analysis based on nine studies that met the methodological inclusion criteria, a statistically significant association between occupational noise exposure (range 55–116 dB(A)) and hypertension was recorded, with a relative risk of 1.14 (95% confidence interval (CI) = 1.01–1.29) per 5-dB(A) increase in noise level. Cardiovascular effects of occupational noise were found particularly in workers performing complex tasks. Associations between occupational noise and resting heart rate, stress hormones, and blood lipids were also found, implying a higher health risk for exposed subjects. Cross-sectional studies are difficult to interpret because of possible self-selection (noise-sensitive subjects do not apply for noisy jobs), the healthy worker effect (subjects with health problems move away from noisy workplaces), and confounding factors, including potentially protective factors such as physical activity. Associations between occupational noise and myocardial infarction mortality were found in more recent case-control and cohort studies. Relative risks of myocardial infarction for subjects in the highest exposure groups were consistent and ranged from 1.3 to 1.6, although the results of individual studies were not always statistically significant. The magnitude of the noise effects was larger in subjects not using hearing protection, increased with duration of employment, and depended on the type of noise (intermittent vs. continuous). After 8 years of exposure, a positive association was found between noise and total mortality. It has been emphasized that research in this field should pay more attention to the role of noise perception in the disease outcome, in order to obtain more consistent results in line with the hypothesis. Subjective noise exposure (annoyance, disturbance) may be more closely related to the health outcome than objective exposure (sound level). It has been suggested that indirect effects could occur at noise levels lower than those needed to affect hearing. Noise effects on blood pressure were found to be more pronounced in white-collar workers than in blue-collar workers, presumably because white-collar workers were more disturbed by the noise, even at lower noise levels.
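Summary estimates of this per-5-dB(A) form are conventionally treated as log-linear in the noise level, so they scale multiplicatively with the exposure difference. A minimal sketch using the pooled relative risk of 1.14 cited above (the log-linear extrapolation is an assumption of this illustration, not a result from the meta-analysis):

# Scaling a relative risk reported per 5 dB(A) to other exposure differences,
# assuming a log-linear exposure-response relationship
rr_per_5db = 1.14
for delta_db in (5, 10, 15):
    rr = rr_per_5db ** (delta_db / 5)   # e.g. 1.14**2 ~= 1.30 for +10 dB(A)
    print(f"+{delta_db} dB(A): RR = {rr:.2f}")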

Environmental Noise

Epidemiological studies of the relationship between transportation noise (particularly road traffic and aircraft noise) and cardiovascular effects have been carried out in adults and in children, focusing on mean blood pressure, hypertension, and IHD as cardiovascular endpoints. In general, the evidence of a positive association has increased in recent years. Expert groups in the WHO's normative work on guidelines for environmental noise reviewed the overall evidence for associations between noise exposure and cardiovascular morbidity. While there is evidence that road traffic noise increases the risk of IHD, including myocardial infarction, there is less evidence for such an association with aircraft noise, partly because large-scale cohort studies are missing. However, there is increasing evidence that both road traffic noise and aircraft noise increase the risk of hypertension. Only very few studies of the cardiovascular effects of other environmental noise sources, including rail traffic, are available. Clinical manifestations of cardiovascular disease are not very likely in young people; therefore, blood pressure readings are the major outcome studied in children and adolescents. In adults, manifestations of high blood pressure (hypertension) and IHD (myocardial infarction, angina pectoris, ischemic signs in the ECG, heart failure) are the major outcomes of interest. Classical, systematic, and quantitative reviews published in the past summarized the results of studies carried out at the end of the last century. New studies have appeared in the meantime and were included in a review update. Until 2005, 61 epidemiological studies were recognized as having assessed, either objectively or subjectively, the relationship between transportation noise and cardiovascular endpoints. The studies referred to road traffic noise or (commercial) aircraft noise, with a few referring to military aircraft noise. Most studies were of cross-sectional type (descriptive studies), but observational studies such as case-control and cohort studies (analytic studies) were also available. Confounding factors were not always adequately considered in some older studies. Not many studies provided information on dose–response relationships, because only two exposure categories were considered. Most studies referred to the noise level during the daytime or during the whole day; information on night-time exposure (Lnight,8h), in particular, was seldom available. Groups of experts have assessed the evidence of the relationship between community noise and cardiovascular disease outcomes. Evidence of an association between community noise and cardiovascular endpoints was found particularly for IHD (coded as 410–414 in ICD-9 (http://en.wikipedia.org/wiki/List_of_ICD-9_codes), including myocardial infarction and coronary atherosclerosis) and high blood pressure (hypertension). According to conclusions of the WHO, noise may be detrimental to health in this respect if the noise level (weighted day–night level Ldn) exceeds 65 dB(A). Numerical meta-analyses were carried out to assess exposure–response relationships in quantitative terms.

Road traffic noise

With respect to the association between road traffic noise and hypertension, the picture was not clear until the end of the last century. A well-respected meta-analysis concluded that there was no excess risk in subjects exposed to higher noise levels: a relative risk of 0.95 (95% CI = 0.84–1.08) was calculated per 5-dB(A) increase in daytime noise level (Lday,16h).


Figure 4 Immigration flow from Latin America. Reproduced from Schmunis GA (2007) Epidemiology of Chagas disease in non-endemic countries: The role of international migration. Memorias do Instituto Oswaldo Cruz 102(Suppl. 1): 75–85, with permission.

(20–30 years) resident immigrants. Seroreactive donors living in cities with large Hispanic immigrant populations were more likely to have been born or resided in Mexico or Central America and more likely to have donated blood in the past. These infected immigrants and their children, who may have acquired the infection congenitally, represent a growing reservoir population of more than one million individuals. An estimated 200 000 of these immigrants to the United States have chronic T. cruzi infections. The US Food and Drug Administration approved the first ELISA blood-screening test for use in the United States in December 2006. The majority of donated blood units in the United States are now routinely tested for T. cruzi by blood banks. Repeat reactive donations have been identified in all states, with most in California, Texas, New York, and Florida. Approximately one-third of the repeat reactive donations across the nation are confirmed by RIPA. Protocols for testing and treatment in the United States have been issued by the Centers for Disease Control.

Organ Transplantation

The characteristic lack of obvious symptoms during the acute and indeterminate stages in a potential organ donor is cause for concern for the immunocompromised transplant patient. Chagas disease following organ transplantation has been reported in recipients in Spain, Argentina, the United States, and elsewhere who received bone marrow, kidney, pancreas, heart, or other organs. Transplantation of infected renal grafts into negative recipients has a reported index of transmission of 35%. Acute Chagas disease due to transplant infection has resulted in death; other recipients continue to be monitored for symptoms and progression of the disease. Reactivation of chronic Chagas heart disease may occur following cardiac transplant, resulting in clinical manifestations of fever, skin lesions, and myocarditis, although transplantation offers a potential benefit for patients with serious heart disease due to Chagas. Screening of solid organ donors has been recommended in areas with high numbers of immigrants from T. cruzi-endemic countries.

Laboratory Acquired

Parasitic diseases are receiving increased attention, in part because they pose a potential occupational hazard to clinicians, health care and laboratory workers, and researchers. Infections and, in some instances, death have resulted from laboratory-acquired Chagas disease in South America and the United States. Laboratory-acquired infections may result from accidental exposures, necessitating emphasis on protocols for handling specimens, using appropriate personal safety equipment, and responding to spills and accidents.


Oral Transmission

Oral transmission occurs through the consumption of foods contaminated with triatomines or their feces, or by consumption of raw meat from infected sylvatic hosts. Outbreaks occur in a regional or local area with more severe clinical presentation at younger ages and high death rates. Close monitoring by health agencies is mandatory to prevent recrudescence of the outbreak.

Migration of Diseased Hosts and Vectors

Rural-to-urban migration continues to change the traditional epidemiological pattern of T. cruzi transmission in Latin America. Genomic characterization of the various strains of T. cruzi can be used to identify rates and patterns of dispersal directly related to the epidemiological importance of specific vectors in endemic areas. Human migration and activities that alter the environment continue to influence the rate at which triatomine species disperse and may influence their domestication. Domesticated triatomine species have become dependent on humans for dispersion. Parasite–vector interactions and behavior that may result from migration are not well defined in the environment.

Occurrence in the United States

Six autochthonous cases of Chagas disease have been recognized in the United States since the mid-1950s. The earliest cases were in Texas and California; more recently, cases have been diagnosed in Tennessee and Louisiana. With such low frequency, Chagas disease was generally viewed as an immigrant disease. Many have speculated as to why it has not been recognized with greater frequency in the United States. Postulated factors include potentially lower virulence in North American strains, lower overall vector density, significantly different vector and human habitats reflected in less frequent domestication, longer feed–defecation response times, and a low index of suspicion, compounded by the fact that acute infections produce rather nondescript clinical patterns that will not be recognized by clinicians who are not trypanosomiasis-conscious.

Several triatomine species have been evaluated under laboratory conditions to assess the number of parasites in the first defecation following a blood meal or the blood meal–defecation response. However, such studies may not be indicative of sylvatic or domestic environments, and comparable determinations have not been reported for adults and all instar stages of the various triatomine species. The density of infected triatomines in human dwellings in the United States is much lower than in Latin America, where as many as 1000 infected triatomines have been found in one house. Significant numbers of patients with Chagas disease may be masked among patients with heart disease (chronic stage) or with a fever of uncertain origin (acute stage), and they are not registered by the health system owing to a lack of clinical suspicion.

What cannot be ignored is the endemic presence of infected triatomines across the United States. Biogeographical review of evidence in the literature indicates a largely unrecognized number of individuals who are infected or at risk. The distribution of infected vectors and hosts in Texas (Figure 5) serves as a model that can be replicated in other states to confirm endemic regions and risk. Further, the map highlights where additional detailed field-based research is needed to identify vectors, vector density, rates of infection, and impact on host species, including valid estimates of the prevalence of Chagas disease in humans and other free-ranging or domesticated host species. Historical data in the United States spanning more than nine decades contain significant inherent limitations, including uncertainty of specific vector species identification, lack of specificity of location data, small population sizes of the vector and host species/populations that were selectively tested, lack of genomic identification of the various T. cruzi strains infecting the respective vectors and hosts, the use of a variety of laboratory test methods with varying specificity and selectivity, and the limited number of individual cases and incidents involving humans.

Disease Control through Intervention and Education

Recognition of the etiology and epidemiological consequences of T. cruzi and Chagas disease led to confirmation that the best method for disease control is intervention. Transmission can be interrupted by control of domestic triatomine vectors, but the various endemic environments require optimizing cost-effective strategies. Factors such as the size of the vector colony, degree of domestication, geographic range of a species, population structure, reservoir hosts, and response to ecological disruption and to chemical insecticides must be considered. Vector control programs began in the 1940s in several South American countries. A number of national and international initiatives begun in the 1970s continue to focus on elimination of domestic populations and education among potential human hosts. The Pan American Health Organization (PAHO) supervised the international cooperative efforts of the Southern Cone Initiative, begun in 1990. PAHO effectively set priorities, coordinated eradication efforts, minimized duplication, and certified results.


Figure 5 Biogeographic analysis of distribution of triatomines, infected triatomines, and host species in Texas. Reproduced from Hanford EJ, Zhan FB, Lu YM, and Giordano A (2007) Chagas disease in Texas: Recognizing the significance and implications of evidence in the literature. Social Science and Medicine 65(1): 60–79; Kjos SA, Snowden KF, Craig TM, et al. (2008) Distribution and characterization of canine Chagas disease in Texas. Veterinary Parasitology 152(3–4): 249–256; graphic courtesy of A. Giordano, with permission.

Eradication measures focused on domesticated T. infestans. Houses were fumigated, with the greatest success achieved after the advent of synthetic pyrethroids: a 94% reduction of transmission in Southern Cone countries. However, some areas within these countries still have high infestation and infection rates, and it is unlikely that this level of success can be replicated in Central and North America, where triatomines are peridomestic and sylvatic, as well as domestic. Large-scale screening of blood donors in Latin America began in the 1980s following the emergence of AIDS. By the late 1990s, significant progress, evidenced by a reduction in mortality and morbidity, was recognized in a number of South American countries, and this served as an impetus to apply control efforts in other South and Central American countries, including Mexico.

In Latin America, Chagas disease remains one of the leading causes of death and is the largest parasitic disease burden. Intervention efforts have been remarkably successful in reducing the incidence of vector-transmitted Chagas disease in portions of South America. Current medical treatments consist of chemotherapies, which have very low activity against chronic Chagas disease and may induce a number of toxic side effects. Available drugs are efficacious in the acute stage, and side effects are less severe in children. Therefore, as shown in South America, prevention is paramount and can be successfully achieved through intervention and education. Elimination of Chagas disease as a public health problem through interruption of transmission remains a realistic goal.

The first step is to recognize the existence of this public health problem. Including Chagas disease as a reportable disease in the United States will not only raise awareness of the disease risk, but will also help to provide needed health care and to minimize risk for those not yet infected. In February 2007, the State of Arizona became the first in the United States to make Chagas disease reportable for humans. In Oklahoma, Chagas disease is reportable as an unusual syndrome. Other states, such as Florida in 2007, created an optional infection case report.

With standard protocols established in 1979 for countrywide prevalence studies, and with coordination by the World Health Organization and the PAHO, research has continued along with intervention and control among the Southern Cone countries, the Andean countries, and Central American countries. However, 8–9 million people in Mexico and the Andean and Central American countries are infected with the parasite and between 25 and 100 million remain at risk, emphasizing the need to sustain and extend control strategies; this will require continued financial and political support, as well as a focus on epidemiological surveillance and care of those people already infected. Success of these multinational programs must not be allowed to engender a premature reduction in concern within endemic regions. Continued entomological and parasitological surveillance is needed, as peridomestic or sylvatic vectors may become domesticated, or sylvatic vectors may transmit T. cruzi during interaction with humans during temporary colonization.


Further concerns are associated with the encroachment of human biomes into rural areas. The primary risk of transmission from insect to humans is related to the efficiency with which local vector species can invade and colonize human houses or infect dogs, resulting in a domestic transmission cycle. Infections in wild animals could be transmitted to hunters and trappers while dressing or skinning animals. Increasing global human migration from rural-to-urban and endemic-to-nonendemic regions spreads the risk of transmission. In nonendemic regions, the risk of human-to-human transmission is significant. Education and intervention can be effective by:

• Improving housing through measures such as replacing thatched roofs with corrugated aluminum and plastering over adobe walls
• Sleeping inside screened areas, under a permethrin-impregnated bed net, or in an air-conditioned room
• Making community sensitization an important tool for vigilance and prevention of Chagas disease
• Washing and cooking any food that could be contaminated with insect feces
• Developing and implementing early detection of pathogen introduction to nonendemic areas due to travel and commerce
• Developing and implementing early detection of new domestication of triatomines
• Developing and disseminating vector control strategies
• Using insecticides to kill bugs and reduce the risk of transmission
• Training fumigation brigades on techniques and equipment maintenance
• Remaining aware that blood supplies may not always be screened and blood transfusions may carry a risk of infection
• Disseminating treatments and vaccines
• Sensitizing and training health professionals on Chagas diagnosis and treatment
• Assuring public health services: family planning, immunization, health education

The Future of Research on Chagas Disease

As illustrated in Figure 6, a small number of publications in mainstream journals in the early 1920s began increasing significantly in the 1960s, followed by a resurgence in research since the year 2000. This wealth of laboratory, clinical, epidemiological, socioeconomic, and applied field research can guide the needed research and provide a framework for evaluating future results. Multidisciplinary approaches that integrate research and socioeconomic dimensions should be used to develop efficient models for prioritizing the implementation of intervention strategies. Education and environmental interventions should be employed to reflect the interactive character of the various parameters that offer future research opportunities (Figure 7).

Biogeographic research on epidemiological and ecological factors can improve understanding of prevalence, transmission, impacts, distribution, infection, virulence, and incidence. Modeling the effects of diversity on multi-host disease systems requires a detailed understanding of the underlying mechanisms that control species distribution and abundance. Comparative, quantitative biogeographical research utilizing GIS may reveal empirical patterns of variation within the ecological range that contribute to the cycles of T. cruzi among vector species and host organisms. Furthermore, dynamic changes in niche range boundaries, as well as vector and host species characteristics, must be considered with respect to the potential for human–vector interaction and the associated health risk manifested in infected individuals. The observed patterns may vary with spatial scale and other contributory mechanisms. Geographical tools can be used to delineate ecological, socioeconomic, and cultural dimensions on a variety of scales, from individual residence to community to region, facilitating appropriate ecosystem approaches to intervention. Epidemiological data can be integrated to monitor the disease and form a predictive model that can be used to evaluate the effectiveness of interventions; a simplified sketch of this kind of GIS integration follows.
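As one deliberately simplified illustration of the GIS integration described above, the Python sketch below (using the geopandas library) joins hypothetical triatomine collection points to administrative polygons and computes a per-unit infection proportion of the kind that could feed a risk map. The file names, column names ("t_cruzi_positive", "county_id"), and data are illustrative assumptions, not references to any real dataset.

import geopandas as gpd

# Hypothetical inputs: collection points carrying a boolean 't_cruzi_positive'
# flag, and administrative polygons carrying a 'county_id'. Paths are placeholders.
points = gpd.read_file("triatomine_collections.geojson")
counties = gpd.read_file("counties.geojson")

# Put both layers in the same coordinate reference system before joining.
points = points.to_crs(counties.crs)

# Assign each collection point to the polygon that contains it.
joined = gpd.sjoin(points, counties[["county_id", "geometry"]], predicate="within")

# Per-county proportion of infected collections: a crude surrogate layer for
# mapping where vector-borne transmission risk may concentrate.
risk = joined.groupby("county_id")["t_cruzi_positive"].agg(["mean", "count"])
risk.columns = ["infection_proportion", "n_collections"]
print(risk.sort_values("infection_proportion", ascending=False).head())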

Figure 6 Number of publications about Chagas disease by year from 1921 to 2014, based on search results from ISI databases.

Figure 7 Interactive parameters for Chagas disease that offer opportunities for needed research.

Obvious uncertainties include the consequent interactions of invasive triatomines with native species, as well as the associated strains of T. cruzi and the resulting epidemiological expression of Chagas disease. Developing effective prevention and health-care policies will depend on better understanding of vector and host species, their interactions, range, and rates of infection, as well as the related ecological factors of the components of Chagas disease. Such measures must be able to take into consideration the changing population dynamics of vector and host species and changing human socioeconomics and demographics.

All of Latin America (including Mexico, the Caribbean, and Central and South America) and the southern tier of the United States are endemic for Chagas disease. A growing human reservoir population and international migration of those infected pose a threat to public health across the globe and will cause an increasing economic burden in affected endemic and nonendemic regions. An appropriate methodology for diagnosing those infected must be identified, taking into account the etiology of the disease through acute, indeterminate, and chronic stages. Similarly, testing procedures are needed to assess the blood supply and potential donor organs. Determination of appropriate medical care for immigrants and refugees should be based on specific clinical evaluation that takes into account immigration status and area of origin. Screening of immigrants for Chagas disease should include health history, physical examination, and laboratory testing. Challenges may be associated not only with cultural aspects such as language differences, but also with medical–legal issues such as the status of the immigrant, whether legal or undocumented. Such issues should be part of the necessary discussion on how to cope with a very large migrant population and the recognition of Chagas as an emerging disease.

The following list of research opportunities, reflecting the various integrated parameters depicted in Figure 7, is by no means exhaustive.

Biogeographic and Epidemiological Research

• Implement strategies for integrated sustainable surveillance in areas with diverse epidemiological patterns
• Use risk mapping and stratification of vector transmission to identify priority areas for intervention
• Use realistic and consistent programs for testing of blood and organ donors in endemic and non-endemic regions
• Delineate the extent of morbidity and mortality in host organisms
• Define host characteristics and behaviors that limit disease incidence


Vector Control

• Use new tools and strategies to control vectors in the peridomestic environment based on biological control and exploitation of vector behavior and habitat manipulation
• Conduct field operational research for better planning of control activities, cost-effectiveness of interventions, and assessment of impact of control interventions
• Use new tools and strategies to address threats posed by sylvatic vectors
• Define factors that enhance community participation in vector control
• Evaluate resistance to insecticides

Parasitology and Pathogenesis

• Develop and use genomic information to identify and validate new molecular targets for discovery of drugs and diagnostics
• Reassess the issue of vaccine development
• Research relationships among parasite groups and subgroups and type of disease and congenital transmission
• Study the molecular mechanism of host–parasite interaction that determines disease
• Use new/improved tools for diagnosis of congenital infection or infections in immunocompromised individuals to assess prevalence and incidence for respective parasitic strains
• Use serological or parasitological markers to predict disease prognosis and outcomes
• Improve effective diagnostic tools for all the stages of the disease
• Develop control methods for testing pregnant women and reporting positive infections
• Improve and target education to health services and systems

Therapeutics

• Develop a clear definition of 'cure' to aid development of therapeutics
• Identify and make available potential therapeutic candidates from other indications for clinical evaluation in Chagas patients
• Conduct research on immune responses providing protection from the parasite and the pathologic immune response causing Chagas disease
• Generate evidence for policy and guidelines for treatment of individuals in indeterminate or chronic stages
• Evaluate efficacy and effectiveness of new treatments in preventing or slowing disease progression

Further Reading

Basile, L., Jansa, J.M., Carlier, Y., Salamanca, D.D., Angheben, A., Bartoloni, A., Seixas, J., Van Gool, T., Canavat, E.C., Flores-Chavez, M., Jackson, Y., Chiodini, P.L., Albajar-Vinas, P., 2011. Chagas disease in European countries: The challenge of a surveillance system. Euro Surveillance 16 (37), pii: 19968.
Bern, C., Montgomery, S.P., Herwaldt, B.L., et al., 2007. Evaluation and treatment of Chagas disease in the United States: A systematic review. Journal of the American Medical Association 298 (18), 2171–2181.
Bustamante, J.M., Tarleton, R.L., 2014. Potential new clinical therapies for Chagas disease. Expert Review of Clinical Pharmacology 7 (3), 317–325.
Carlier, Y., Torrico, F., Sosa-Estani, S., Russomando, G., Luquetti, A., Freilij, H., et al., 2011. Congenital Chagas disease: Recommendations for diagnosis, treatment and control of newborns, siblings and pregnant women. PLoS Neglected Tropical Diseases 5 (10), e1250. https://doi.org/10.1371/journal.pntd.0001250.
Castro, J.A., DeMecca, M.M., Bartel, L.C., 2006. Toxic side effects of drugs used to treat Chagas disease (American trypanosomiasis). Human and Experimental Toxicology 25 (8), 471–479.
Cruz-Lopez, L., Malo, E.A., Rojas, J.C., Morgan, E.D., 2001. Chemical ecology of triatomine bugs: Vectors of Chagas disease. Medical and Veterinary Entomology 15, 351–357.
Cruz-Reyes, A., Pickering-Lopez, J.M., 2006. Chagas disease in Mexico: An analysis of geographical distribution during the past 76 years – a review. Memórias do Instituto Oswaldo Cruz 101 (4), 345–354.
Dias, J.C.P., 2007. Globalization, inequity and Chagas disease. Cadernos de Saúde Pública 23 (Suppl. 1), S13–S22.
EuroSurveillance, 2011. Special edition: Chagas disease in Europe. Euro Surveillance 16 (37).
Gascon, J., Vilasanjuan, R., Lucas, A., 2014. The need for global collaboration to tackle the hidden public health crisis of Chagas disease. Expert Review of Anti-Infective Therapy 12 (4), 393–395.
Gorlin, J., Rossmann, S., Robertson, G., et al., 2008. Evaluation of a new Trypanosoma cruzi antibody assay for blood donor screening. Transfusion 48 (3), 532–540.
Gürtler, R.E., Yadon, Z.E., 2015. Eco-bio-social research on community-based approaches for Chagas disease vector control in Latin America. Transactions of the Royal Society of Tropical Medicine and Hygiene 109, 91–98.
Hanford, E.J., Zhan, F.B., Lu, Y.M., Giordano, A., 2007. Chagas disease in Texas: Recognizing the significance and implications of evidence in the literature. Social Science and Medicine 65 (1), 60–79.
Hernández, J., Núñez, I., Bacigalupo, A., Cattan, P.E., 2013. Modeling the spatial distribution of Chagas disease vectors using environmental variables and people's knowledge. International Journal of Health Geographics 12. https://doi.org/10.1186/1476-072X-12-29.
Howard, E.J., Xiong, X., Carlier, Y., Sosa-Estani, S., Buekens, P., 2014. Frequency of the congenital transmission of Trypanosoma cruzi: A systematic review and meta-analysis. British Journal of Obstetrics and Gynaecology 121 (1), 22–33.
http://www.eurosurveillance.org/images/dynamic/ES/V14N01/V14N01.pdf (accessed 8 September 2015).
Jansen, A.M., Xavier, S.C.C., Roque, A.L.R., 2015. The multiple and complex and changeable scenarios of the Trypanosoma cruzi transmission cycle in the sylvatic environment. Acta Tropica. https://doi.org/10.1016/j.actatropica.2015.07.018.
Kjos, S.A., Snowden, K.F., Craig, T.M., et al., 2008. Distribution and characterization of canine Chagas disease in Texas. Veterinary Parasitology 152 (3–4), 249–256.
Messenger, L.A., Miles, M.A., Bern, C., 2015. Between a bug and a hard place: Trypanosoma cruzi genetic diversity and the clinical outcomes of Chagas disease. Expert Review of Anti-Infective Therapy 13 (8), 995–1029.
Montgomery, S.P., Starr, M.C., Cantey, P.T., Edwards, M.S., Meymandi, S.K., 2014. Neglected parasitic infections in the United States: Chagas disease. American Journal of Tropical Medicine and Hygiene 90 (5), 814–818.


Patz, J.A., Daszak, P., Tabor, G.M., et al., 2004. Unhealthy landscapes: Policy recommendations on land use change and infectious disease emergence. Environmental Health Perspectives 112 (10), 1092–1098.
Peterson, A.T., 2006. Ecologic niche modeling and spatial patterns of disease transmission. Emerging Infectious Diseases 12 (12), 1822–1826.
Rocha, M.O.C., Teixeira, M.M., Ribeiro, A.L., 2007. An update on the management of Chagas cardiomyopathy. Expert Review of Anti-Infective Therapy 5 (4), 727–743.
Schmunis, G.A., 2007. Epidemiology of Chagas disease in non-endemic countries: The role of international migration. Memórias do Instituto Oswaldo Cruz 102 (Suppl. 1), 75–85.
Teixeira, A.R.L., Nascimento, R.J., Sturm, N.R., 2006. Evolution and pathology in Chagas disease: A review. Memórias do Instituto Oswaldo Cruz 101 (5), 463–491.

Challenges in Pesticide Risk Communication
H-A Rother, University of Cape Town, Cape Town, South Africa
© 2019 Elsevier B.V. All rights reserved.

Change History: July 2018. H-A Rother updated the reference section and updated the text. This is an update of H.-A. Rother, Challenges in Pesticide Risk Communication. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 566–575.

Abbreviations
DHHS Department of Health and Human Services
EPA United States Environmental Protection Agency
FAO United Nations Food and Agricultural Organization
GHS Globally Harmonized System of Classification and Labelling of Chemicals
HIC High Income Countries
LD50 Lethal dose required to kill 50% of test animals
LMIC Low- and Middle-Income Countries
OSHA Occupational Safety and Health Administration
WHO World Health Organization

Introduction

Pesticides are toxic substances by design and are intentionally used for the control of various "pests" (insects, weeds, diseases, etc., that are in competition with humans). The health risks associated with pesticide use are well documented in numerous research studies and range from acute symptoms of varying severity (e.g., headaches, vomiting, skin rashes, respiratory problems, eye irritations, seizures, coma, death) to various chronic effects (e.g., cancer, asthma, dermatitis, endocrine disruption, birth defects, neurological effects). As all pesticides are toxic and vary in degree of toxicity, end-users require knowledge of a particular pesticide's associated risks to make risk decisions that protect themselves and the environment from harmful exposures and contamination.

Simple enough in concept, communicating pesticide risks to diverse end-users is challenging and contentious. The complexity of transmitting risk concepts is often underestimated and, more importantly, intended risk messages are often misinterpreted. This is particularly the case in Low- and Middle-Income Countries (LMIC), where transnational pesticide companies and governments regulating pesticides are faced with transmitting risk information to semiliterate and illiterate populations. Effective pesticide risk communication is thus vital to developing and implementing pesticide and environmental health policies, regulating pesticides, protecting human health, and preventing environmental contamination. Environmental health professionals play a key role in developing, evaluating, and implementing effective risk communication strategies relevant to the protection of various target audiences.

This article presents a brief general background on the field of risk communication before focusing specifically on the issues relating to communicating risks associated with pesticides. Although challenges may overlap, pesticide risk communication issues and differences between High Income Countries (HIC) and LMIC are highlighted. This is particularly important in light of globalization and the global usage of risk communication strategies (e.g., the GHS). Although the challenges associated with transmitting risk information about pesticides are the focus of this article, the reader is left with recommendations for promoting effective environmental health risk communication generally, as well as with identified areas for future work.

Risk Communication

No matter how accurate it is, risk information may be misperceived or rejected if those who give information are unaware of the complex, interactive nature of risk communication and the various factors affecting the reception of the risk message. (Fessenden-Raden et al., 1987)

Risk communication is the process through which people become informed about hazards with the intention of influencing perceptions and behavioral changes. Understanding this process of transmitting or exchanging information about the likelihood and consequences of adverse events, in this case from exposure to pesticides, is crucial for managing risks in environmental health.



Table 1 Some characteristics of the two languages of risk communication

"Expert" assessment of risk        "Lay/public" assessment of risk
Scientific                         Intuitive/personal experience/hearsay
Probabilistic                      Yes/no
Acceptable risk                    Safety
Changing knowledge                 Is it or isn't it?
Comparative risk                   Events
Population averages                Personal consequences
A death is a death                 It matters how people die

Source: Leiss, W. and Powell, D. (2004). Mad cows and mother's milk: The perils of poor risk communication, 2nd edn. Montreal: McGill-Queen's University Press.

Within the risk communication literature there are three schools of thought on how risk communication can control risk: (1) risk communication as public relations (i.e., educating the public), (2) risk communication as a business strategy (i.e., regulatory compliance, risk sharing, and transferring liability to end-users, as is the case with product labels where the end-user may have to pay a penalty/jail time for not using a product as directed on the label), and (3) risk communication as risk management (i.e., eliciting safety behaviors). Within each of these schools of thought, the objectives and goals of communicating risks vary, overlap, and sometimes even conflict with those of the other schools. Thus, the term risk communication has different connotations and different outcomes for the various risk communication practitioners and participants. For example, the view that risk communication is a business strategy (2) would focus on the ultimate goal of fostering corporate profits, whereas the promotion of human health would be the primary focus of risk communication as a risk management strategy (3). All three strategies are used in communicating risks about pesticides to workers, end-users, and the general public; however, the purpose of the strategy depends on who is communicating and what their underlying goal is in communicating pesticide risks. In this article, a brief overview of the field of risk communication is presented to contextualize the discussion of pesticide risk communication strategies used in HIC and LMIC.

The field of risk communication developed as a result of several interrelated factors, including the legal and moral obligations placed on governments and industries to inform potentially exposed populations of environmental, technological, and health hazards, along with public policy difficulties resulting from social conflicts over risks (e.g., industry vs. community rights in the siting of pesticide factories in poor communities and LMIC). Baruch Fischhoff, a psychologist and researcher in risk decision making, modeled an eight-stage chronology summarizing the development of risk communication. In Fischhoff's model, each stage represents the main strategy that risk communication practitioners viewed as effective at the time; each stage thus transcends the limits of the preceding strategies, building on what came before it. These risk communication stages are as follows:

• Stage 1: Only need to get the numbers right.
• Stage 2: Only need to tell the target audience the numbers.
• Stage 3: Only need to explain what is meant by the numbers.
• Stage 4: Only need to illustrate that the target audience has accepted similar risks in the past.
• Stage 5: Only need to show the target audience that they are getting a good deal.
• Stage 6: Only need to treat the target audience nicely.
• Stage 7: Only need to make the target audience partners.
• Stage 8: All of the above.

Currently, in the literature from HIC, risk communication operates at stage 8, which sees risk communication as a two-way process based on collaboration between the target audience and an agency (often government or industry) in developing the most appropriate communication strategy for the target audience. However, risk communication, and particularly pesticide risk communication, in LMIC such as South Africa appears to be stuck at Fischhoff's stages 1 and 2. That is, risk communication is viewed from the traditional authoritarian top-down (one-way) assumption that the laity (e.g., general public, workers) do not understand or do not have access to technical scientific data and therefore only require the provision of risk information in order to adopt appropriate risk reduction behaviors when exposed to risks.

To understand from what basis risks are communicated and from what basis they are perceived, it is important to have an understanding of the two languages of risk communication (Table 1). The level of risk literacy, that is, the ability to weigh risks and benefits for decision making (often through interpreting statistics and probability), varies both within and between HIC and LMIC. HIC generally expose the general public more to the technical language of risk through regular reporting of scientific studies in the media, which has the potential to increase risk literacy.

Risk Communication Factors

When communicating risks, there are several factors that need to be considered, which influence both how the information is communicated and how it is received. These factors are the target audience, the messenger, the message itself, and the medium for transferring the message. These four factors are briefly discussed in the following text.


Target Audience

Target audiences of a risk communication strategy are not homogeneous, which is problematic when risk communication strategies are developed to cover a broad and general audience. Therefore, understanding the audience and their characteristics (e.g., social and cultural beliefs, language, economic status) is the most important principle for effective risk communication. Target audiences' characteristics, identified in the literature, that need to be defined and taken into account when communicating risks include:

• The individual's or the group's values and frames of reference/world views
• Previous experiences with the risk (e.g., previously poisoned by pesticides?)
• Previous experience with and attitudes toward the organization communicating the risk
• Literacy level
• Levels of formal education/training
• Current level of knowledge about the risk
• Health status of the individual and their family members
• Local conditions that might affect how information is received (e.g., socioeconomic status, political climate)

This list of characteristics can be used in the field of environmental health as a survey guideline for the type of information needed before considering and developing a risk communication strategy.

The Messenger

Messenger characteristics also influence effective risk communication, especially in relation to how far the target audience trusts whoever is communicating the risk information (e.g., a government official, pesticide industry representative, or environmental health professional). Factors influencing a target audience's trust of a messenger include characteristics such as competence and expertise, objectivity, fairness, consistency, goodwill, commitment to a goal, fulfilling responsibilities, honesty, and openness. Environmental and public health specialists and others communicating risks (e.g., industry, governments) need to engage in critical evaluation of pesticide risk communication strategies with the prime goal of protecting human health and the environment.

The Message

Generally, the goal of risk messages is to inform and influence the target audience, either to produce an intended behavior that reduces risk exposures or to alter the target audience's risk perceptions. Risk messages are either "official" or "unofficial" and are expressed through various methods. Official risk messages are statements communicated by "experts," for example, scientists, government officials, and chemical companies' technical staff. Unofficial risk messages are statements communicated by laypersons and the media. Both official and unofficial risk messages are hard to create in ways that are accurate, comprehensible, and not misleading. For example, "expert" messages tend to make general statements rather than providing numerical information or statistics regarding the magnitude of the risk (e.g., "harmful if inhaled" does not specify the quantity that can cause harm). Risk messages also tend to be controversial, as the hazards they depict are themselves controversial. According to the National Academy of Sciences, for effective risk communication to occur, risk messages should:

1. Emphasize information relevant to any practical actions that can be taken;
2. Use clear and plain language;
3. Reflect and respect target audiences and their concerns/worries; and
4. Focus on informing the target audience rather than using persuasion or influencing strategies.

The Medium

Various mediums (tools, strategies, methods) are employed for communicating risks to diverse target audiences. For example, in HIC, risk communicators rely on mediums such as community and public talks/meetings, the Internet, the media, labels on products, and signage in areas where risks may occur; in LMIC, the mediums used are predominately labels and signage (Fig. 1), with occasional use of the media. Pesticide labels are one of the pesticide companies' main risk communication mediums (the Safety Data Sheet being the other), and they rely on technical data and scientific jargon to convey the message that pesticides are "safe" as long as the label information is adhered to. In South Africa, for example, the government and pesticide industry assume that the pesticide label is a viable medium for communicating pesticide risks to all population groups, irrespective of the appropriateness of the medium's characteristics for these groups, that is, the language of the written text, technical language proficiency, and unexplained icons and symbols. That is to say, if a case of pesticide use results in poisoning or environmental contamination, government and industry would presume that risk communication was sufficient because there was a label on the container. Often the end-user is blamed for the poisoning as a result of "misuse," which assumes the label can be read and understood (see the discussion of "misuse" below).

What are the implications of risk communication for the field of environmental health? Risk communication in environmental health generally highlights the public's right-to-know about chemical and industrial hazards (i.e., access to information); as presented here, the "right-to-comprehend" risk information is neglected (i.e., the provision of mechanisms to understand the information). Although substantial attention within the field of environmental health has focused on risk communication, existing models focus on communicating general risk messages to population groups and not on communicating specific exposure or risk data to individuals. As people who are exposed to pesticides are not homogeneous and exposure contexts differ vastly, risk information should ideally be contextual and individually relevant. However, no simple risk message would fit this requirement. The question is then what information should be communicated and how exposure contexts should be addressed by these risk messages.

Fig. 1 Billboard used for risk communication in Lusaka, Zambia.

Pesticide Risk Communication

Communicating the risks associated with various pesticides is of vital importance, since many of these products are highly acutely toxic and cause long-term (chronic) health effects. The risk communication platform for pesticide risks is based on scientific testing of each pesticide to determine the hazard and the potential risks to humans and the environment, as well as to determine acceptable levels of risk. Toxicological and ecotoxicological data are then translated into risk assessments (the evaluation of potential adverse effects) by extrapolating these data (in most cases produced in laboratory animals) to humans. The assumptions made in human extrapolations generally rely on a model of a healthy Caucasian male, whose susceptibility may be very different from that of a person living in a different context. Although every country using pesticides should require that research and risk assessment data be produced within that country, realistically, most LMIC have neither the financial and human resources nor the laboratory capacity to conduct these expensive tests. As a result, for registration purposes most regulatory agencies in LMIC accept the pesticide risk assessment data produced by the parent pesticide company, which is predominately based in a HIC; the data are therefore premised on risk assessments using different populations with different susceptibilities. Risk is a function of hazard and exposure:

R = f(H × E)

What is the implication of inappropriate risk assessment data for pesticide risk communication? In LMIC, the human populations do not resemble healthy Caucasian males, and many suffer from a range of health burdens (immune deficiency diseases, malnutrition, etc.). Furthermore, climatic conditions are vastly different. LMIC often experience hotter climatic conditions, resulting in some pesticides breaking down into more toxic metabolites (e.g., organophosphates). This means that even where populations in LMIC are able to understand the pesticide risk information being communicated, the information may well be inappropriate for protecting their health, given their own health status and the environmental context within which they are using the pesticide.
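To give the function above a concrete, screening-level form, the short Python sketch below computes an exposure estimate and a hazard quotient (exposure divided by a toxicity reference dose), one common way an R = f(H × E) calculation is operationalized in practice. The function names and all numeric values are illustrative assumptions and do not describe any real pesticide.

# Screening-level illustration of risk as a function of hazard and exposure.
# All values are hypothetical; they do not describe any real pesticide.

def average_daily_dose(residue_mg_per_kg_food, intake_kg_per_day, body_weight_kg):
    """Estimated exposure: residue concentration x food intake / body weight (mg/kg bw/day)."""
    return residue_mg_per_kg_food * intake_kg_per_day / body_weight_kg

def hazard_quotient(exposure_mg_per_kg_day, reference_dose_mg_per_kg_day):
    """Hazard quotient: exposure relative to a toxicity reference value.
    HQ > 1 flags a potential concern at the screening level."""
    return exposure_mg_per_kg_day / reference_dose_mg_per_kg_day

dose = average_daily_dose(residue_mg_per_kg_food=0.05, intake_kg_per_day=0.3, body_weight_kg=60)
print(f"exposure = {dose:.5f} mg/kg bw/day")                              # 0.00025
print(f"HQ = {hazard_quotient(dose, reference_dose_mg_per_kg_day=0.001):.2f}")  # 0.25

Note that the reference dose is itself derived from the hazard side (e.g., animal toxicity data plus uncertainty factors), so a quotient of this kind folds both H and E into a single screening number.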

Pesticide Risk Communication Strategies

Pesticide communication strategies are not uniform between HIC and LMIC. The former rely on the media, public and community discussions/meetings, and pesticide labels. In LMIC, predominately the only risk communication tool to which the general public and workers have access is the pesticide label. Whose responsibility is it to communicate pesticide risk information to exposed populations? Is it the pesticide industry, which has a vested interest in presenting information that indicates all pesticides are safe? Or is it the government, which may receive industry financial support or, in the case of LMIC, may not have the capacity to conduct and implement effective risk communication? In the United States, the Department of Health and Human Services (DHHS) and the Environmental Protection Agency (EPA) share the broadest set of responsibilities for determining and communicating health risks to the public. The United States also has the legislated Hazard Communication Standard under the Occupational Safety and Health Administration (OSHA) for work-related risk communication. Agencies responsible for, and professionals concerned with, environmental health issues should be more proactively involved in the critical evaluation of pesticide risk communication strategies, especially in countries where no formal structures for evaluating risk or hazard communication exist.

Pesticide Labels as a Risk Communication Tool

Pesticide manufacturers and regulatory agencies globally rely on pesticide labels to communicate general information, as well as environmental and toxicological risk assessment data, to end-users with the intention of soliciting specified safety behaviors. The label serves a dual function: it provides use and risk assessment information for risk reduction and product efficacy on the one hand, while on the other it functions as a legally binding document. Namely, the end-user is bound to use the pesticide as indicated on the label or be liable for penalties. Comprehension of the risk information is therefore vital not only to protect human health and the environment, but also to protect against liability charges. Regulators, industry officials, researchers, and others tend to blame pesticide poisonings and environmental contamination on end-users "misusing" pesticides. As Rother (2018) highlights, however, for an end-user to "misuse" a pesticide other than as intended on the label, and for the label to be an effective risk communication tool, all of the following factors must be simultaneously fulfilled:

• The label is accessible.
• The label is in an appropriate language.
• The end-user has an adequate literacy level for reading.
• The end-user has an adequate literacy level for comprehension.

The results of toxicological and environmental risk assessments conducted on a particular pesticide are expressed on the label as information on, for example, handling, storage, application methods, waiting periods before reentering sprayed areas, disposal, and poisoning first aid. Pesticide labels are designed by the company producing the pesticide. The labels are designed to meet the standards set by the regulating body of the country and are submitted to this body when a company first applies to register a pesticide. Currently, pesticide labels are not standardized in how they present risk information and vary depending on the countries in which they are used. In many LMIC, pesticide companies are required to produce labels that follow the United Nations Food and Agricultural Organization's (FAO) labeling guidelines under the International Code of Conduct on Pesticide Management. This entails the use of pictograms and color codes for transmitting hazard information, the precautionary measures required, and the toxicity of a pesticide. HIC, however, do not use these.

To illustrate this point, two pesticide labels are presented for the same chemical (i.e., aldicarb) produced by the same company (i.e., Bayer CropScience). (Note: These labels are no longer valid and the product has been discontinued in South Africa. They are used for illustrative purposes only.) Fig. 2 shows the front page of a pesticide label previously used in the United States for the acutely toxic (WHO Class Ia) pesticide aldicarb (trade name Temik). Fig. 3 shows the front label previously used on Temik products in South Africa. The American front label of Temik provides more detailed written risk information, particularly first aid information, than the South African label, which predominately presents protective equipment and warning advice using symbols. Thus both labels present a different take on communicating the same acute toxic risks. What is also interesting to note is that both labels, in written text, put the responsibility for negative health and environmental effects back on the end-user. That is, the American label states, "if you do not understand the label, find someone to explain it to you in detail" (what happens if the person cannot read?), and the South African label states, "Do not misuse this product. It is an offence to do so" (what constitutes misuse? What if the person cannot read?).

Interpreting and Communicating Pesticide Risk Assessment Data

The right-to-know about pesticide risks does not necessarily equal the right-to-comprehend what these risks mean and how to prevent them. In risk communication, emphasis is often placed on the communication process, that is, understanding the target audience, designing messages either to alter risk perceptions or to influence behaviors, developing various strategies for message transmission, and working on developing trust in the messengers. However, little emphasis and attention is placed on how risk messages are comprehended and whether these interpretations (i.e., misinterpretations) are actually increasing environmental health risks. In the case of pesticides, the concepts behind the risk information are complex and prone to misinterpretation. Research has shown that pictograms used on pesticide labels in LMIC to transmit risk assessment data to illiterate populations are not well understood and often lead to hazardous misinterpretations (Fig. 3). A problem is that the definitions of these pictograms are intuitively implied. That is, the meanings/scientific definitions of each pictogram (and of the other risk communication vehicles on the label, e.g., color and risk phrases) are not provided on the label. The assumption is that the pictogram is simple enough in design to be obviously understood. This, research has shown, is not the case.

Fig. 2 Temik pesticide label previously used in the United States.

Communicating the Concept of "Toxicity" and Acute Effects

In 1973, the World Health Organization of the United Nations (WHO) developed a classification system to distinguish between the levels of hazard of each pesticide. This classification system relates only to acute risks to health and does not reflect potential chronic risks from exposure (see the section "Communicating the concept of 'long-term,' chronic health effects"). The WHO classification attempts to distinguish between the hazard levels of each pesticide based on the toxicity of the compound (Table 2). What needs to be remembered is that toxicity testing does not take into account the context in which the pesticide will be used or the current health status of the humans who are exposed. Toxicological risk assessments are intended to deal with these interpretations, as discussed above. More specifically, the WHO hazard classification is based on the acute oral or dermal toxicity (not inhalation toxicity) of a pesticide to rats, as determined by the LD50 in laboratory trials. The LD50 value is a statistical estimate of the number of milligrams of toxicant per kilogram of bodyweight required to kill 50% of a large population of test animals. Communicating pesticide health risks based on the LD50 to a nonscientific target audience is not a simple feat, and it is even more challenging where nonliterate audiences are concerned. Although the WHO does not specify the symbols or risk phrases to list on pesticide labels to show the level of toxicity, general recommendations are made, especially in relation to the most toxic pesticides (e.g., the skull and crossbones symbol is recommended for classes Ia and Ib). The WHO hazard classification system is currently being revised to incorporate a system attempting to harmonize the classification and labeling of chemicals globally (see the section "An initiative to harmonize chemical hazard classification and communication").

Nevertheless, the FAO Code of Conduct recommends the use of color bands on pesticide labels to illustrate the acute toxicity of the active ingredient and the formulation based on the WHO's hazard classification of pesticides (Fig. 3 and Table 2). Table 2 presents the toxicity color codes used in South Africa. Although the FAO Guidelines on Good Labelling Practice for Pesticides specify colors to use with each WHO hazard class, some countries interpret these colors differently (e.g., purple instead of red or orange instead of yellow). This is particularly problematic for countries such as Zambia, which import pesticides from South Africa and Zimbabwe, as each of these countries uses different colors for the four hazard classes. Linking colors to toxicity is quite arbitrary, and current research has shown that end-users rely on their social and cultural frames of reference to interpret what these colors mean. These interpretations are often not as scientifically intended and may not afford protection from potential exposure risks in different cultural settings. The concept of "toxicity" is not easy to explain to populations with limited or no scientific background. Using color to denote acute toxicity may therefore, in the absence of effective risk communication, serve more as a means to protect the industry from liability than to communicate the potential acute pesticide risks. Training is required, but it is unrealistic to expect training to cover all end-users.

Fig. 3 Temik pesticide label previously used in South Africa.

Table 2 WHO acute hazard classification for pesticides and South African color codes

WHO acute toxicity class           Hazard of active ingredient
Class Ia                           Extremely hazardous
Class Ib                           Highly hazardous
Class II                           Moderately hazardous
Class III                          Slightly hazardous
Class IV or U (unclassified)       Less hazardous

(The South African FAO toxicity color codes appear in the original table as color bands for each class and cannot be reproduced in text.)
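Because the LD50 is a statistical estimate rather than a directly observed quantity, a brief sketch may help. The Python example below assumes a logistic dose–mortality model on log10(dose) with hypothetical parameters (a real bioassay would instead fit the curve to observed dose–mortality data, e.g., by probit or logistic regression); the LD50 is simply the dose at which the fitted curve predicts 50% mortality.

import math

# Hypothetical logistic dose-mortality model on log10(dose):
#   p(dose) = 1 / (1 + exp(-(a + b * log10(dose))))
# The intercept a and slope b below are illustrative assumptions only.
a, b = -4.0, 2.0

def mortality_fraction(dose_mg_per_kg):
    """Predicted fraction of test animals killed at a given oral dose (mg/kg bodyweight)."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log10(dose_mg_per_kg))))

# At the LD50 the predicted mortality is 0.5, i.e. a + b * log10(LD50) = 0, so:
ld50 = 10 ** (-a / b)

print(f"LD50 = {ld50:.0f} mg/kg bodyweight")                   # 100 mg/kg under these parameters
print(f"mortality at LD50 = {mortality_fraction(ld50):.2f}")   # 0.50 by construction

The closed-form step (LD50 = 10^(-a/b)) follows directly from setting the model's linear predictor to zero, which is why the LD50 inherits the statistical uncertainty of the fitted parameters.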

An Initiative to Harmonize Chemical Hazard Classification and Communication

In 2002, the United Nations agreed on a voluntary international system for classifying and labeling all chemicals, including pesticides. The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) was meant to be implemented globally by 2008, but to date only a handful of countries have implemented the system. The GHS provides a framework for identifying and communicating chemical hazards with the intention of reducing human health risks, reducing environmental contamination, and removing barriers to trade in chemicals. It advocates the use of nine pictograms as risk communication vehicles for common and risky chemical hazards, with the view to promoting global recognition of these (Fig. 4). Preliminary research findings indicated considerable confusion in understanding many of these pictograms, resulting in frequent misinterpretation. The aim of the GHS is to promote continuity in chemical risk communication tools, particularly in light of continued globalization and trade in chemicals. The FAO pictograms currently used on pesticide labels in LMIC will continue to be used alongside the GHS symbols where no equivalent GHS symbol is available.

Communicating the Concept of "Long-Term," Chronic Health Effects

The GHS presents the first attempt to develop a hazard classification system for the chronic effects associated with pesticide exposures. The system has also designed a risk communication pictogram to represent, without words, the concept of chronic hazard (Fig. 4). The concept of an exposure causing an effect many years from now is difficult to grasp for nonscientific populations, and especially for poor populations worried about daily survival. This particular pictogram is prone to misinterpretation by its very design, which draws more attention to respiratory ailments, heart problems, and, in some countries, spiritual enlightenment.

Comprehension Issues

Too little research and evaluation is currently conducted, before implementation, to assess whether target audiences actually understand the risk information being communicated as scientifically intended. Government officials, researchers, and other risk communicators need to ensure that Fischhoff's stage 8 promotes more participatory research and evaluation of risk communication strategies, such as pictogram designs, the phrasing of risk messages, and finding other visual means for communicating risk, especially for LMIC target audiences. For example, the GHS pictograms were not tested for comprehensibility before the adoption of the system, nor was extensive research conducted with illiterate populations on the comprehension of the FAO pictograms when they were developed in the 1980s. Recent research has shown that farmers and farm workers in LMIC are predominately unable to interpret the FAO pesticide pictograms as scientifically intended. Some of the interpretations given for these pictograms were critical confusions, implying that the interpretations could lead to even more hazardous exposures. Similar findings have resulted from research in South Africa on the GHS symbols.

Pesticide Risk Perceptions Risk perception refers to people’s beliefs, attitudes, judgments, and feelings toward risk, and incorporates the wider social and cultural values, as well as the outlook, that people adopt toward hazards. Perception is a significant concern for risk communication. Risk perception research has provided risk communication researchers with insights into people’s attitudes, beliefs, and interpretations of risk. However, the trend in the risk communication literature is to focus mostly on how to use risk communication mechanisms to control, manipulate, and change perceptions in order to achieve a desired precautionary behavior, rather than to use risk perceptions as a starting point for adapting communication strategies that promote better understanding of the risk information. What is important to take into account is the role that risk perceptions play in interpretations of risk communication strategies and, more specifically, in perceptions of the symbols, pictograms, and color codes used as risk communication vehicles.

Fig. 4 GHS hazard pictograms.

Can Pesticide Risk Communication be Context Neutral? Pesticide toxicity data are produced in laboratory environments, and it is in this context that risk information is generated; for example, the precautions pesticide users need to take (e.g., wear gloves, wear a respirator, harmful if swallowed) to prevent possible negative health effects. However, once a pesticide leaves the laboratory, the context in which it is used is no longer controlled and pristine. The question, then, is whether risks identified in a laboratory can be adequately identified and appropriately extrapolated to human use and exposure contexts. In many LMIC, personal protective equipment (PPE) is not available, pesticide containers are reused (e.g., for food and water), and pesticides intended for agricultural use are decanted and sold in unlabeled containers by vendors in informal markets for domestic control of pests. Furthermore, current strategies do not present risk assessment information relevant (1) to protecting children from exposure vulnerabilities (e.g., neurodevelopmental effects) or (2) to protecting pregnant or lactating women farm workers from risks (e.g., birth defects, transmission of residues from hand to breast to baby). Thus the provision of pesticide safety information cannot be context neutral. The challenge is how to produce more context-relevant risk information and, ultimately, appropriate risk communication tools. Otherwise, the question arises as to the purpose of the safety information: is it to protect industry from liability or to protect the end user? One suggestion for laboratory-based research on pesticides is to include simulated contexts found in LMIC or among migrant farm labor populations in HIC.

Challenges in and Recommendations for Effective Pesticide Risk Communication This article has illuminated some of the many challenges faced in communicating scientifically identified pesticide risks to diverse population groups. To make pesticide risk communication an effective endeavor, particularly for LMIC, many of these challenges will need to be addressed by environmental health professionals and students, industry, policy makers, and others. Challenges to be addressed include:

• Pesticide risk communication strategies tend to be static, particularly in LMIC. Climate change challenges risk communication strategies to become less static and to provide information that protects against unforeseen risks arising from climate change.
• Developing risk communication mechanisms in HIC is currently more participatory and interactive for those who have access to the Internet. The challenge is to promote participatory and problem-solving risk communication strategies in LMIC, where the means to participate are limited and not actively fostered.
• Current pesticide risk communication strategies (e.g., pictograms) are not gender specific or targeted at children. The challenge is to design relevant strategies.
• Means of communicating risks to vendors selling street pesticides in the informal sector must also be addressed.
• An enormous challenge for those involved in risk communication is to promote the concept of the right-to-comprehend risk information (i.e., provision of mechanisms to understand risk information) rather than just the right-to-know (i.e., provision of or access to information only). For example, one way to foster comprehension of existing risk communication strategies would be to include pesticide pictograms and color codes in the curriculum of schoolchildren and to incorporate risk communication into tertiary degree programs (e.g., in environmental health fields). New and progressive risk communication strategies are needed that not only focus on communicating risks to semiliterate and illiterate populations (the right-to-know), but also aid in the understanding of this information as intended (the right-to-comprehend).

The challenge for the field of environmental health is to design, research, and implement innovative and appropriate strategies with the view to reducing pesticide health effects and environmental contamination.

See also: Cancer Risk Assessment and Communication; Decision Making Under Uncertainty: Trade-Offs Between Environmental Health and Other Risks; Pesticides: Human Health Effects.

Further Reading
Bennett, P., Calman, K., 1999. Risk communication and public health. Oxford University Press, Britain.
Dalvie, M.A., Rother, H.A., London, L., 2014. Chemical hazard communication comprehensibility in South Africa: Safety implications for the adoption of the globally harmonized system of classification and labeling of chemicals. Safety Science 61, 51–58.
FAO/WHO, 2015. Guidelines on good labelling practice for pesticides. FAO/WHO.
Fischhoff, B., 1995. Risk perception and risk communication unplugged: Twenty years of process. Risk Analysis 15, 137–145.
Fischhoff, B., Kadvany, J., 2011. Risk: A very short introduction. Oxford University Press, Oxford.
Ibitayo, O.O., 2006. Egyptian farmers’ attitudes and behaviors regarding agricultural pesticides: Implications for pesticide risk communication. Risk Analysis 26 (4), 989–995.
Leiss, W., Powell, D., 2004. Mad cows and mother’s milk: The perils of poor risk communication, 2nd edn. McGill-Queen’s University Press, Montreal.
Morgan, M.G., Fischhoff, B., Bostrom, A., Atman, C.J., 2001. Risk communication: A mental models approach. Cambridge University Press.
Quandt, S.A., Doran, A.M., Rao, P., et al., 2004. Reporting pesticide assessment results to farmworker families: Development, implementation, and evaluation of a risk communication strategy. Environmental Health Perspectives 112 (5), 636–642.
Rother, H.-A., 2005. Researching pesticide risk communication efficacy for South African farm workers. Occupational Health Southern Africa 11 (3), 20–26.
Rother, H.-A., 2005. Risk perception, risk communication, and the effectiveness of pesticide labels in communicating hazards to South African farm workers. PhD thesis, Department of Sociology, Michigan State University, Michigan.
Rother, H.-A., 2008. South African farm workers’ interpretation of risk assessment data expressed as pictograms on pesticide labels. Environmental Research 108 (3), 419–427.
Rother, H.-A., 2018. Pesticide labels: Protecting liability or health? Unpacking “misuse” of pesticides. Current Opinion in Environmental Science and Health 4, 10–15.
Rother, H.-A., London, L., 2008. Classification and labeling of chemicals: New Globally Harmonized System (GHS). Encyclopedia of Pest Management 1 (1), 1–6.


Thompson, P.B., 2012. Ethics and risk communication. Science Communication 34 (5), 618–641.
Whitford, F., Feinberg, R., Mysz, A., et al., 2001. Pesticides and risk communication: Interaction and dialogue with the public. PPP-52. Department of Botany and Plant Pathology, Purdue University Cooperative Extension Service, Indiana.

Relevant Websites
http://www.atsdr.cdc.gov – Agency for Toxic Substances and Disease Registry, Department of Health and Human Services, USA. Health risk communication, health education and risk communication strategies, risk perceptions and pesticides.
http://www.cdc.gov – Centers for Disease Control and Prevention, Department of Health and Human Services, USA. Pesticides and risk communication.
http://www.fao.org/agriculture/crops/thematic-sitemap/theme/pests/code/en/ – United Nations Food and Agriculture Organization, International Code of Conduct on Pesticide Management and relevant technical guidelines.
http://www.hsph.harvard.edu/ccpe/programs/RCC.html#who – Harvard School of Public Health, Center for Continuing Professional Education, USA. Risk communication training for health professionals and policy makers.

Chemically-Induced Respiratory Toxicities
P-G Forkert, Queen’s University, Kingston, ON, Canada
© 2019 Elsevier B.V. All rights reserved.
Change History: October 2018. Jerome Nriagu updated the references. This is an update of P.-G. Forkert, Chemically-Induced Respiratory Toxicities, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 576–586.

Abbreviations
DNA Deoxyribonucleic acid
EC Ethyl carbamate
NADPH Reduced nicotinamide adenine dinucleotide phosphate
RNA Ribonucleic acid
VC Vinyl carbamate

Introduction It is well recognized that the lung, which receives exposure via both the circulatory and inhalational routes, is highly susceptible to toxicities induced by xenobiotic (foreign) compounds. Most of the data on chemically induced toxicities have been derived from studies of the liver, which have affirmed that metabolism of chemicals, or metabolic activation, is mediated by drug-metabolizing enzymes, including the cytochrome P-450 system, to yield reactive metabolites capable of binding covalently to tissue constituents including proteins, lipids, and nucleic acids (deoxyribonucleic acid (DNA), ribonucleic acid (RNA)).

Covalent binding is usually determined by using a radiolabeled chemical or drug, and subsequently measuring the amounts bound to proteins or other cellular constituents after extensive extraction with agents such as solvents. Immunochemical techniques have also been used to determine covalent binding by measuring the amounts of protein-bound adducts. An immunochemical approach has been developed to characterize the protein adducts formed in the liver from the analgesic, acetaminophen. Interestingly, the protein adducts were detected in the serum of mice that were given toxic doses of acetaminophen. Similar protein adducts were also detected in the serum of patients who had overdosed on acetaminophen and developed hepatotoxicity. Hence, covalent binding is a parameter that serves as a convenient index of the formation of, and the exposure of a tissue to, reactive metabolites generated from potentially cytotoxic chemicals or drugs.

The formation and covalent binding of reactive metabolites to cellular proteins, which are critical for maintaining normal cellular function, appear to be key events in the development of chemically induced toxicities. The pulmonary toxicities caused by a number of chemicals, including 4-ipomeanol, 3-methylindole, naphthalene, 1,1-dichloroethylene, and trichloroethylene, have been ascribed to cytochrome P-450-dependent formation of reactive metabolites that bind covalently to proteins. Moreover, the metabolism of these compounds is believed to take place within the bronchiolar Clara cells, leading to the selective necrosis of this cell population. The preferential damage to the Clara cells is supported by data showing the localization of cytochrome P-450 enzymes within this cell type in high concentrations.

Covalent binding of chemical metabolites to proteins may also lead to the formation of chemical– or metabolite–protein adducts that act as immunogens, resulting in immune responses or hypersensitivity reactions, formation of immune complexes, or tissue toxicities. The formation of chemical–protein adducts may be associated with some autoimmune diseases. A well-recognized hypersensitivity reaction is produced by the anesthetic drug, halothane; hepatocytes that contain halothane neoantigens are susceptible to attack by lymphocytes. In contrast to the liver, data regarding chemical or drug-related immune responses in the lung are limited.

Approximately 90 years have elapsed since the first experimental tumors were produced by exposure to chemicals. Since then, extensive data have accumulated for a wide variety of chemicals with the potential to induce carcinogenesis. The available data have led to the development of a paradigm that a chemical carcinogen or its metabolite must react in a specific manner with critical cellular constituents to produce a neoplasm. On contact, potential chemical carcinogens are metabolized to compounds that interact with tissue components. In a majority of cases, chemical metabolism leads to the formation of metabolites that are noncarcinogenic. However, in certain cases, the by-product is more carcinogenic than the parent compound, and it may react with or bind to cellular constituents, including proteins and nucleic acids, to initiate carcinogenesis. Early studies focused on binding to protein as a basic process leading to carcinogenesis. However, subsequent investigations have focused on binding of metabolites to DNA as a critical event in the development of chemical carcinogenesis. The importance of metabolite–DNA binding was first suggested by studies that found a correlation between the carcinogenicity of several polycyclic hydrocarbons in mouse skin and the covalent binding of these hydrocarbons to DNA.


Further studies have shown that reactive metabolites of benzo[a]pyrene bound to DNA and generated DNA adducts, resulting ultimately in the formation of lung tumors.

Lung toxicities may also be mediated by a mechanism termed “oxidative stress.” In this mechanism, metabolic activation produces reactive oxygen metabolites including the superoxide anion radical, hydrogen peroxide, the hydroxyl radical, and singlet oxygen. Each of these reactive oxygen species may act as an oxidizing agent and is capable of contributing to oxidative stress. Hydrogen peroxide may be formed from dismutation of the superoxide anion, and the hydroxyl radical may be produced by interaction of the superoxide anion or hydrogen peroxide with iron ions. Reactive oxygen species may also be formed from redox cycling of compounds such as paraquat. Redox cycling is a process whereby a compound undergoes enzymatic reduction, at the expense of reduced nicotinamide adenine dinucleotide phosphate (NADPH), to an intermediate such as a free radical that is then oxidized by molecular oxygen to yield the superoxide anion, thereby regenerating the parent compound. The reactive oxygen species can react with cellular constituents to produce toxic effects or cell death. Some chemicals that produce reactive oxygen species can also simultaneously produce reactive metabolites; in such cases, the relative contributions of oxidative stress and reactive metabolite formation to tissue toxicities are difficult to determine.

Later in this article, we present data relating to our efforts and those of others to identify specific events implicated in the bronchiolar toxicity of 1,1-dichloroethylene, the carcinogenicity of vinyl carbamate (VC), and the pulmonary fibrosis induced by paraquat. These compounds are used as surrogates for compounds that elicit their toxicities via mechanisms involving metabolic activation (dichloroethylene, VC) and oxidative stress (paraquat).
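The oxidative stress chemistry outlined above can be summarized in three standard free-radical reactions; the iron-dependent steps are the classical Fenton chemistry implied by the phrase “interaction of the superoxide anion or hydrogen peroxide with iron ions”:

\begin{align*}
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} && \text{(dismutation of superoxide)}\\
\mathrm{Fe^{3+}} + \mathrm{O_2^{\bullet-}} &\longrightarrow \mathrm{Fe^{2+}} + \mathrm{O_2} && \text{(reduction of iron by superoxide)}\\
\mathrm{Fe^{2+}} + \mathrm{H_2O_2} &\longrightarrow \mathrm{Fe^{3+}} + \mathrm{OH^-} + {}^{\bullet}\mathrm{OH} && \text{(Fenton reaction: hydroxyl radical formation)}
\end{align*}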

Expression of Cytochrome P-450 Enzymes in the Respiratory Tract The cytochrome P-450 monooxygenases are a superfamily of enzymes that catalyze the oxidation of a wide variety of xenobiotic compounds, including environmental contaminants. Cytochrome P-450 is embedded in the membranes of the endoplasmic reticulum, a cellular organelle that, when isolated, is collectively referred to as microsomes. Although microsomal cytochrome P-450 in the lung amounts to only approximately 10% of the amount in liver microsomes, the concentration of P-450 enzymes within a few lung cell types renders those cells susceptible to the adverse effects of chemicals that undergo metabolic activation. This preferential localization is believed to underlie the cell selectivity of chemically induced toxicities in the lung. Therefore, the mechanisms involved in chemically induced lung cytotoxicities are linked in part to the localization of the different forms of cytochrome P-450 in the various lung cell types. The distribution and localization of the major P-450 isoforms in the lungs of various species, including the human, are summarized in Table 1.

The nomenclature for the cytochrome P-450 superfamily is genetically based. The name of a P-450 gene includes the root symbol “CYP,” denoting cytochrome P-450; an Arabic number designating the P-450 family; a letter indicating the subfamily, when two or more subfamilies are known to exist within that family; and an Arabic numeral representing the individual gene. For example, CYP2E1 denotes family 2, subfamily E, gene 1. The gene product, such as a protein, is not italicized.

CYP1A1 is a P-450 enzyme that is involved in the metabolism of various polycyclic aromatic compounds. It may be absent or present at minimal levels in the lungs of rodents, but is highly induced after treatment with agents such as 3-methylcholanthrene or 2,3,7,8-tetrachlorodibenzo-p-dioxin (dioxin). In rabbits and rats, CYP1A1 is induced in the Clara, type II, and endothelial cells. However, in mice, this P-450 is induced in the type II and endothelial cells, but not in the Clara cells, indicating a species difference. In humans, CYP1A1 is highly induced by cigarette smoke.

The CYP2A subfamily has several members, including CYP2A5, CYP2A10, CYP2A11, and CYP2A13. CYP2A5 has been localized in the olfactory mucosa. CYP2A10 and CYP2A11 (formerly Nma) have been identified in both the olfactory and respiratory epithelia of the nasal mucosa. CYP2A13, which is predominantly expressed in the lung, is found in the bronchus and trachea.

Table 1 Localization of cytochrome P-450 enzymes in the respiratory tract

Enzyme | Species | Location
CYP1A1 | Rabbit | Clara cells, type II cells, endothelial cells, and macrophages
CYP1A1 | Rat | Clara cells, type II cells, and endothelial cells
CYP1A1 | Mice | Type II cells and endothelial cells
CYP1A1 | Human | Bronchiole and type II cells
CYP2A5 | Mice | Olfactory mucosa
CYP2A10/11 | Rabbit | Nasal mucosa
CYP2A13 | Human | Trachea and bronchus
CYP2B1 | Mice and rat | Clara cells, type I and type II cells
CYP2B4 | Rabbit | Clara cells, type II cells
CYP2E1 | Mice | Clara cells
CYP2F2 | Mice | Clara cells
CYP2S1 | Human | Nasal mucosa, bronchi, and bronchiole
CYP4B1 | Rat and rabbit | Clara cells and type II cells


The CYP2A enzymes are all implicated in the metabolism of aflatoxin B1 and the tobacco-specific carcinogen, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone. CYP2A13 is also highly efficient in the metabolism of nicotine.

Members of the CYP2B subfamily, including CYP2B1 and CYP2B4, are major P-450s in the lung. CYP2B1 is constitutively expressed in the Clara cells as well as the type I and type II cells, and is not induced by phenobarbital, a prototypic inducer of this P-450 in the liver. CYP2B1 is responsible for the metabolism of compounds such as butylated hydroxytoluene, O,O,S-trimethylphosphorothioate, and methylcyclopentadienyl manganese tricarbonyl. The toxicities are manifested in the type I and type II cells as well as in the Clara cells, which is consistent with the localization of CYP2B1 in these cell types.

CYP2E1 has been detected in the lungs of the rabbit, rat, hamster, and human. CYP2E1 is present in rabbit nasal mucosa, and is induced twofold by ethanol and sixfold by acetone. In mice, CYP2E1 is predominantly localized in the bronchiolar Clara cells, with minimal expression in the type II cells. The Clara cell necrosis induced by the environmental contaminants dichloroethylene and trichloroethylene is ascribed to in situ metabolic activation by CYP2E1 within the Clara cells.

The CYP2F subfamily is distinct for its lack of gene diversity, with only a single member expressed in each of the species examined: human (2F1), mouse (2F2), and goat (2F3). In murine lung, CYP2F2 is preferentially expressed in the Clara cells. The CYP2F enzymes have a major role in the metabolism of naphthalene, 3-methylindole, and dichloroethylene. All of these compounds induce Clara cell cytotoxicities, a pattern consistent with the contention that metabolic activation takes place in situ within the Clara cells.

CYP2S1 has recently been identified and localized in human lung. Expression of CYP2S1 in the lung is highest in the epithelia of the nasal cavities, bronchi, and bronchioles. In the human lung cell line A549, dioxin induces the CYP2S1 mRNA by about twofold. This P-450 is also implicated in the metabolism of naphthalene.

CYP4B1 is highly expressed in the Clara and type II cells of both the rat and rabbit. In rat lung, CYP4B1 plays a major role in the metabolic activation of the Clara cell toxicant, 4-ipomeanol. However, in rabbit lung, 4-ipomeanol, which also elicits Clara cell injury, is metabolized by CYP2B4 and CYP4B1. These species differences in 4-ipomeanol metabolism are underscored by findings showing an absence of lung toxicities in quail and chickens, which lack Clara cells. In the context of mechanisms, covalent binding of 4-ipomeanol in isolated Clara cells is 10-fold higher than in type II cells. Although the type II cells contain CYP4B1, the lack of toxicity in this cell type has been attributed to a more balanced content of enzymes catalyzing activation and detoxication. In contrast, the metabolic balance in the Clara cells is believed to be in favor of activation.

In summary, diverse forms of P-450 enzymes are localized to the greatest extent in the Clara cells, a characteristic that contributes to the sensitivity of this cell type to toxicities induced by xenobiotic chemicals. The nasal mucosa is also enriched in its content of cytochrome P-450 enzymes. However, further studies are required to fully characterize the effects of inhaled chemical exposures in this region of the respiratory tract.

Glutathione and Glutathione S-Transferases Glutathione is a tripeptide (γ-glutamylcysteinylglycine) that has an important role in the detoxication of reactive metabolites. Conjugation of the metabolites with glutathione usually yields a product with decreased reactivity, thereby inhibiting covalent binding to cellular constituents. The conjugation of metabolites with glutathione can occur nonenzymatically or be catalyzed by the glutathione S-transferases. Therefore, the susceptibility of cells and tissues to toxicities mediated by reactive metabolites depends in part on the availability of both glutathione and the transferases for conjugation reactions and hence detoxication. The distribution of glutathione and the transferases is of interest because of the cell selectivity of the toxicities found in the lung.

Histochemical labeling has been used to localize glutathione in the ciliated and Clara cells of the bronchiolar epithelium as well as in the type II cells (Fig. 1A). However, glutathione labeling is most pronounced in the Clara cells. Since studies involving toxic agents have shown that glutathione depletion occurs in conjunction with cell injury, it has been postulated that the susceptibility of the Clara cells to chemical toxicities is due to an excess of metabolite formation relative to the availability of glutathione for conjugation.

The glutathione S-transferases are a family of multifunctional proteins that have an important role in defending cells against potentially toxic compounds by catalyzing the conjugation of reactive metabolites to glutathione. However, detoxication can also be mediated through a nonenzymatic reaction involving binding of the transferases to the metabolites. In some cases, transferase activity produces a conjugated product that is more reactive than the parent compound or the metabolite; in this case, conjugation is associated with toxication rather than detoxication. The transferases are composed of two subunits and exist as either homo- or heterodimeric proteins. According to their primary structures, they are categorized into five separate families, designated the class alpha, mu, pi, sigma, or theta. On the basis of decreasing electrophoretic mobilities, three protein bands are resolved and are designated Ya, Yb, or Yc. Subsequent studies have shown that the Ya and Yc bands represent the class alpha, whereas the Yb band represents the class mu.

Several studies have examined the distribution and localization of the transferases in the lung. Transferase activities have been found to be considerably higher in isolated Clara cells than in isolated type II cells in both the mouse and rat when 1-chloro-2,4-dinitrobenzene is used as a substrate. Comparative studies have found transferase activities in human and rat lung to be comparable, but both were lower than those in hamster and mouse lung. Immunohistochemical studies in murine lung showed that the Ya (alpha), Yp (pi), and Yb1 (mu) subunits were all localized in the bronchiolar Clara cells and alveolar type II cells. In addition, Ya was localized in the alveolar type I and endothelial cells. Parallel studies that used in situ hybridization and quantitative image analysis demonstrated good agreement between the relative amounts of transferase protein and mRNA transcripts detected. Treatment of mice with 2(3)-tert-butyl-4-hydroxyanisole induced Ya and Yp in the bronchiolar epithelium. Taken together, the localization of the cytochrome P-450 and transferase enzymes, as well as glutathione, within the same lung cell populations is likely to provide optimal conditions for the detoxication of reactive metabolites formed from potential pneumotoxicants.

Fig. 1 Histochemical localization of glutathione in lungs of (A) control mice and (B) mice treated with 1,1-dichloroethylene (125 mg kg⁻¹, ip), showing diminished labeling for glutathione in the bronchiolar epithelium, including Clara cells, and in parenchymal cells (stain used, Mercury Orange).

Susceptibility of Lung Cells to Chemically Induced Cytotoxicities Of the 40 distinct cell types identified in the lung, the Clara cell is the most frequent target for chemically induced toxicities. Information regarding the identification and distribution of the Clara cells is relevant for identifying the sites along an airway where the toxicities are likely to be manifested. The Clara cell is characterized by the presence of a protruding apex (Fig. 2A). It contains an indented nucleus, numerous pleomorphic mitochondria, and an abundant smooth endoplasmic reticulum (Fig. 3). Electron-dense, membrane-bound secretory granules are unique characteristics of the Clara cells and are located in the apical cytoplasm close to the plasma membrane. The Clara cell 10-kDa protein (CC10), a secretory protein, has been immunolocalized to these granules.

The Clara cells, along with the ciliated cells, form the epithelial lining of the bronchioles, which are part of the distal conducting airway system. The Clara cells are localized predominantly in the terminal and respiratory bronchioles. The terminal bronchiole is that portion of the conducting airway that is lined by a continuous layer of epithelium, whereas the respiratory bronchiole represents a transitional area between the conducting portion of the lung and the respiratory portion where gas exchange occurs. The respiratory bronchiole has a conducting airway interspersed with alveoli, so that the epithelium appears to be discontinuous. More recent studies of the human lung revealed that the Clara cell, as identified by the presence of CC10, is virtually absent in the proximal airways and is restricted to the terminal and respiratory bronchioles.

As the terminal and respiratory bronchioles are common sites of chemically induced lesions, the identity of the progenitor cell in epithelial cell renewal is of interest. Studies in experimental animals, mostly rodents, indicated that, under both steady-state and pathologic conditions, cell renewal in the bronchiolar region is accomplished by proliferation of the Clara cells, which are stem cells giving rise to both nonciliated and ciliated cells. Hence, the Clara cell has a major role in maintaining the integrity of the bronchiolar epithelium.

The enhanced susceptibility of the Clara cells to chemically induced toxicities has been ascribed to the localization within this cell type of high concentrations of cytochrome P-450 enzymes (Table 1), which contribute to the metabolic activation of a wide variety of chemicals. For a number of chemicals, metabolic activation by cytochrome P-450 is believed to take place in situ within the Clara cells. The oxidation of these agents yields metabolites that are highly reactive and capable of binding to critical tissue constituents at the site of formation, leading to necrosis of the Clara cells. Alternatively, metabolic activation of chemicals within the Clara cells may produce reactive metabolites that bind to DNA, leading to the formation of mutations and the initiation of carcinogenicity. Thus, the Clara cell is likely to be a cell of origin of lung tumors induced by chemicals that are metabolically activated within this cell type. Altogether, the susceptibility of the Clara cells is linked to the high concentrations as well as the diversity of cytochrome P-450 enzymes that reside in this cell population.

Fig. 2 Bronchiolar epithelium in the lungs of (A) control mice and (B) mice treated with 1,1-dichloroethylene (100 mg kg⁻¹, ip). The Clara cell in control mice possesses a protruding apex, whereas the ciliated cell is cuboidal and is characterized by cilia. The Clara cells are vacuolated after exposure to 1,1-dichloroethylene, whereas the ciliated cells are unaffected.

Fig. 3 Nonciliated Clara cell of the bronchiolar epithelium of control murine lung. The Clara cell has a basally located indented nucleus, numerous pleomorphic mitochondria, and dense secretory granules (indicated by arrows). The smooth endoplasmic reticulum is abundant, and networks are located throughout the cytoplasm (indicated by an asterisk). Permission from Microscopy Research and Technique.

Table 2 Chemical compounds that cause Clara cell necrosis in various animal species

Chemical | Species
Bromobenzene | Mouse and rat
Bromotrichloromethane | Rat
Carbon tetrachloride | Mouse and hamster
1,1-Dichloroethylene | Mouse
4-Ipomeanol | Mouse, rat, hamster, guinea pig, and rabbit
3-Methylfuran | Mouse, hamster, and rat
3-Methylindole | Mouse
2-Methylnaphthalene | Mouse
Naphthalene | Mouse
1-Nitronaphthalene | Rat
Perilla ketone | Mouse and rat
Trichloroethylene | Mouse

Some of the chemical compounds that have been reported to cause Clara cell damage are shown in Table 2 and include those classified as aromatic hydrocarbons (bromobenzene, 2-methylnaphthalene, naphthalene, and 1-nitronaphthalene), chlorinated hydrocarbons (bromotrichloromethane, carbon tetrachloride, 1,1-dichloroethylene, and trichloroethylene), and furans (4-ipomeanol and 3-methylfuran). Although this inventory is by no means complete, the chemicals shown underscore the variety of compounds capable of eliciting Clara cell necrosis.

The isozyme-selective metabolism of these compounds by cytochrome P-450 enzymes has been identified for some of the lung toxicants that have been well characterized: 4-ipomeanol, 3-methylindole, naphthalene, 1,1-dichloroethylene, and trichloroethylene (Table 3). These data underscore the overlapping substrate specificities of P-450 enzymes that are involved in the metabolism of a specific compound. It is, however, a more complex issue to determine the relative extents to which individual P-450 enzymes are involved in the metabolism of a specific compound. In this regard, it should be emphasized that assignments may change as additional P-450 enzymes and their substrates are more fully characterized. All the P-450 enzymes implicated in the metabolism of these chemicals are localized in the Clara cells, which is consistent with the contention that xenobiotic metabolism takes place to the greatest extent in this cell type.

Although the Clara cells are a major target population for a wide variety of compounds, other lung cells are also susceptible to chemically induced toxicities. These include the endothelial, type I, and type II cells (Table 4). Exposure to bleomycin, butylated hydroxytoluene, or cyclophosphamide results in damage to the alveolar type I and endothelial cells. The mechanisms by which the toxicities of these compounds are mediated are associated with the formation of reactive oxygen species, leading to oxidative stress. However, monocrotaline, a member of the pyrrolizidine family of compounds, damages only the endothelial cells. The mechanism of monocrotaline lung toxicity is believed to be its metabolism within the liver to a pyrrolic derivative that is subsequently transported to the lung. In the case of the herbicide paraquat, toxicities are manifested in both the type I and type II cells.

Normally, repair of the epithelium subsequent to type I cell necrosis is accomplished by proliferation and differentiation of the type II cell into the type I cell. Since the type II cell is a progenitor cell, damage to type II cells produces a more severe lesion than if only the type I cell is involved. Furthermore, reparative processes may not proceed normally if a chemical, such as bleomycin, is still present during the critical time of type II cell division and differentiation. Cell division occurs but differentiation may be disturbed, resulting in abnormal alveolar epithelial forms. Renewal of the endothelial cells after injury is achieved by proliferation of other endothelial cells. Altogether, these findings suggest that the specific lung cells that sustain toxicities depend, in part, on the mechanisms involved. In addition, the long-term consequences depend on the identities of the specific cells involved. Later in this article, the pulmonary lesion and the mechanism relating to paraquat toxicity are discussed in greater detail.

Table 3 Isozyme-selective metabolism of chemical compounds by cytochrome P-450

Chemical | P-450 enzyme | Species
1,1-Dichloroethylene | CYP2E1 | Mouse
1,1-Dichloroethylene | CYP2F2 | Mouse
1,1-Dichloroethylene | CYP2F3 | Goat
1,1-Dichloroethylene | CYP2F4 | Rat
4-Ipomeanol | CYP4B1 | Rat
4-Ipomeanol | CYP2B4 | Rabbit
4-Ipomeanol | CYP4B1 | Rabbit
3-Methylindole | CYP2A6 | Human
3-Methylindole | CYP2F1 | Human
3-Methylindole | CYP2F3 | Goat
Naphthalene | CYP2F2 | Mouse
Naphthalene | CYP2F1 | Human
Naphthalene | CYP2A13 | Human
Trichloroethylene | CYP2E1 | Mice
Trichloroethylene | CYP2F2 | Mice
Trichloroethylene | CYP2B1 | Mice

Table 4 Compounds that cause necrosis of endothelial, alveolar type I and type II cells (+, cell type damaged; −, cell type not damaged)

Chemical | Species | Type I cell | Type II cell | Endothelial cell
Bleomycin | Mice | + | − | +
Butylated hydroxytoluene | Mouse | + | − | +
Paraquat | Rat | + | + | −
Monocrotaline | Rat | − | − | +
Cyclophosphamide | Rat | + | − | +


Metabolic Activation of 1,1-Dichloroethylene and Clara Cell Necrosis 1,1-Dichloroethylene, also known as vinylidene chloride, is used as an intermediate in the manufacture of plastics. Studies in our laboratory have used dichloroethylene as a model to investigate the mechanisms that mediate lung toxicity, and to identify the specific events that occur in the period between exposure and the manifestation of cytotoxicity. Treatment of mice with dichloroethylene produced Clara cell necrosis (Figs. 4 and 5). Adverse effects were not observed in other lung cells, including the ciliated, endothelial, type I, or type II cells, suggesting that the Clara cells are the primary targets of dichloroethylene in the lung.

The Clara cell injury induced by dichloroethylene is associated with cytochrome P-450-dependent metabolic activation to a reactive metabolite. Dose-dependent increases in covalent binding to protein occurred in parallel with decreases in glutathione levels (Fig. 1B), and correlated with the severity of Clara cell damage. These findings suggested that binding of a dichloroethylene metabolite to proteins mediates the cytotoxic response, and further that glutathione conjugation represents a detoxication mechanism. Conjugation of the metabolite with glutathione was achieved nonenzymatically and did not require the participation of the glutathione transferases. Subsequent studies identified the dichloroethylene epoxide as the ultimate toxic species. The epoxide was also formed and conjugated with glutathione in incubated human lung microsomes. Two cytochrome P-450 isoforms, CYP2E1 and CYP2F2, were identified as being responsible for the metabolism of dichloroethylene. Furthermore, in vivo studies demonstrated that when mice were pretreated with 5-phenyl-1-pentyne to inhibit CYP2E1 and CYP2F2, epoxide formation was abrogated and the Clara cells were protected from damage. Altogether, the available data demonstrated that the Clara cell cytotoxicity induced by dichloroethylene is associated with its metabolic activation by CYP2E1 and CYP2F2, resulting in the formation of the epoxide, conjugation with glutathione, and covalent binding to protein when glutathione is depleted. The findings further suggested that the metabolic balance in the Clara cells is in favor of toxication rather than detoxication.

The cell selectivity of the toxic lesion induced by dichloroethylene raises the question of whether the Clara cell damage is mediated by chemical metabolism in situ within this cell type. In early studies, we measured the covalent binding of [14C]-dichloroethylene in isolated cell fractions enriched with Clara cells and compared the levels with those in fractions enriched with alveolar type II cells.

Fig. 4 Bronchiolar epithelium in the lung of a mouse treated with 1,1-dichloroethylene (125 mg kg⁻¹, ip). The endoplasmic reticulum in the Clara cell is dilated, resulting in extensive vacuolization. An adjacent ciliated cell remains unaffected by exposure to 1,1-dichloroethylene. Permission from Microscopy Research and Technique.


Fig. 5 Scanning electron photomicrograph of the mucosal surface of the bronchiolar epithelium in murine lung. (A) In control lung, Clara cells with protruding apices are numerous and are distributed uniformly; interspersed among the Clara cells are the ciliated cells. (B) The number of Clara cells is reduced in the lungs of mice treated with DCE (225 mg kg⁻¹, ip) due to exfoliation of damaged Clara cells. The remaining Clara cells appear swollen. Permission from Journal of Pathology.

Covalent binding in the Clara cell fraction was about fourfold that in the type II cell fraction, while minimal levels were detected in the mixed lung cell fraction. These results suggested that metabolic activation of dichloroethylene takes place to the greatest extent in the Clara cells. This concept is supported by findings that CYP2E1 and CYP2F2, which are believed to mediate the metabolism of dichloroethylene, are both preferentially localized within the Clara cells.

However, it was of interest to obtain more direct evidence that the dichloroethylene epoxide is generated within the Clara cells. As an initial step toward this end, we developed a polyclonal antibody specific for the dichloroethylene epoxide–glutathione conjugate. Using this antibody, immunohistochemical studies in dichloroethylene-treated mice revealed specific labeling in the bronchiolar epithelium, with preferential localization within the Clara cells. Studies were also carried out on lung tissue from mice that were pretreated with the garlic derivative, diallyl sulfone, to inhibit CYP2E1 and CYP2F2. The combined treatment with diallyl sulfone and dichloroethylene produced diminished labeling in the Clara cells and protected against Clara cell damage. These findings have validated our working hypothesis that the selective Clara cell necrosis induced by dichloroethylene is mediated by metabolic activation and formation of the epoxide within the target Clara cells.
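The metabolic fate of dichloroethylene described in this section can be sketched as follows; this is a schematic summary of the branch points discussed in the text, not a balanced reaction mechanism:

\begin{align*}
\text{DCE} &\xrightarrow{\;\mathrm{CYP2E1},\,\mathrm{CYP2F2}\;} \text{DCE epoxide}\\
\text{DCE epoxide} + \mathrm{GSH} &\longrightarrow \text{epoxide–glutathione conjugate} && \text{(nonenzymatic detoxication)}\\
\text{DCE epoxide} + \text{cellular protein} &\longrightarrow \text{covalent protein adducts} && \text{(when GSH is depleted; leads to Clara cell necrosis)}
\end{align*}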

Metabolic Activation of Vinyl Carbamate: Formation of DNA Adducts, Mutations, and Lung Tumors Vinyl carbamate is a food carcinogen that has been used as a model by our laboratory and those of others to investigate the metabolic events that lead to the development of lung tumors. Vinyl carbamate is a metabolite of ethyl carbamate (EC; urethane), a chemical formed during fermentation and found in alcoholic beverages and fermented food products. Between 1950 and 1975, EC was used as a co-solvent for analgesic and sedative drugs in Japan. It has been estimated that the total dose of EC administered to a patient weighing 60 kg was approximately 0.6–3.0 g. This 25-year period was one during which millions of humans were administered “the largest doses of a pure carcinogen that is on record” (Miller). Ethyl carbamate has also been used as an antineoplastic agent for the treatment of chronic leukemia and multiple myeloma. Today, human exposures occur inadvertently via the consumption of fermented food products and alcoholic beverages as well as through tobacco use. A question has been raised regarding the potential carcinogenic risk associated with long-term, or perhaps lifetime, exposure to low levels of the carbamate compounds. In this regard, regulatory agencies in Canada and the United States have set limits on the concentrations of EC in wines and distilled spirits.

In 1943, it was first reported that EC produces adenomas in the lungs of mice. Lung tumors developed rapidly and were seen approximately 2–6 months after exposure. Similar tumors were induced by VC; however, it is a more potent carcinogen than EC, and has been found to generate lung tumors in numbers 20- to 50-fold those induced by EC. The lung adenomas are manifested as either solid or papillary tumors. Solid tumors arise in alveolar septa and proliferate to produce a spherical, compact mass of cells with morphological characteristics of type II cells (Fig. 6A). Papillary tumors arise in bronchioles, exhibit an open tubular configuration, and appear to be formed as a result of extensive and uncontrolled proliferation of columnar epithelial cells with features characteristic of Clara cells (Fig. 6B).


Fig. 6 Lung adenomas in Strain A/J mice treated with ethyl carbamate (1 mg g⁻¹ body weight). Mice were sacrificed 16 weeks after carcinogen treatment. The adenomas were manifested as (A) solid or (B) papillary tumors.

Growth of solid tumors is restricted and regression may occur, whereas papillary tumors continue to grow, are larger in size, and are more likely to progress to carcinomas.

Cytochrome P-450-dependent oxidation of VC yields an epoxide, a metabolite postulated to be the ultimate mutagenic and carcinogenic species. More recent studies confirmed the involvement of CYP2E1 in VC metabolism in murine and human lung. In vitro studies demonstrated that the VC epoxide reacted with DNA to form adducts, including two guanine adducts, 7-(2′-oxoethyl)deoxyguanosine and N2,3-ethenodeoxyguanosine, and an adenine adduct, 1,N6-ethenodeoxyadenosine. Other studies showed the formation of ethenodeoxyadenosine and 3,N4-ethenodeoxycytidine in the lungs of mice treated with VC. In addition, correlations were found between CYP2E1 levels and the magnitudes of formation of DNA adducts and lung tumors. In more direct in vitro experiments, 1,N6-ethenodeoxyadenosine was generated in reactions of the VC epoxide with DNA (2′-deoxyadenosine) (Fig. 7). These findings confirmed that the development of lung tumors is linked in part to the oxidation of VC by CYP2E1, leading to the formation of an epoxide that forms adducts with DNA.

Recent studies have investigated point mutations produced by VC in the lung, using Big Blue transgenic mice and cII as a target gene. These transgenic mice harbor prokaryotic shuttle vectors with the bacterial lacI and phage cII as reporter genes. Sequencing of the cII gene revealed that spontaneous mutations were formed in the lungs of control mice; the most common spontaneous mutations in the cII gene are G:C → A:T transitions (58%) and G:C → T:A transversions (19%). In the mice treated with VC, the major mutations generated in the lung are G:C → A:T (26%) and A:T → G:C (29%) transitions and A:T → T:A transversions (29%). The kinds of mutations produced may be related to the specific etheno DNA adducts generated by a carcinogenic compound. The DNA adducts produced in vivo by VC in the lung are ethenodeoxyadenosine and ethenodeoxycytidine. The A:T → G:C transitions and A:T → T:A transversions produced by VC are believed to be associated with base changes induced by ethenodeoxyadenosine, whereas the G:C → A:T transitions generated by VC are associated with the ethenodeoxycytidine adduct. These findings are consistent with the theory that ethenodeoxyadenosine and ethenodeoxycytidine cause mispairing during DNA replication.
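The metabolic sequence depicted in Fig. 7 can also be written compactly; the leaving groups follow the original scheme, in which the carbamate moiety is eliminated as the etheno adduct forms:

\begin{equation*}
\mathrm{EC} \xrightarrow{\;\text{P-450}\;} \mathrm{VC} \xrightarrow{\;\text{P-450}\;} \mathrm{VCO} \xrightarrow{\;2'\text{-deoxyadenosine}\;} \text{1,}N^6\text{-ethenodeoxyadenosine} + \mathrm{H_2O} + \mathrm{NH_3} + \mathrm{CO_2}
\end{equation*}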

Oxidative Stress: Studies With Paraquat Paraquat, a widely used herbicide, serves as a prototypic model for lung damage induced through a mechanism involving oxidative stress. Paraquat produces swelling of alveolar type I cells early in the cytotoxic response. The injury is precipitous, as the type I cells have extensive cytoplasmic processes that cover a large surface area, estimated to be approximately 93% of the epithelial surface.

Fig. 7 Proposed scheme of metabolism of EC to VC and vinyl carbamate epoxide (VCO), and the formation of 1,N6-ethenodeoxyadenosine from reaction of VCO with 2′-deoxyadenosine.

Soon after, the type II alveolar cells, which account for the remaining 7% of the epithelial surface, sustain damage with loss of the contents (surfactant) of the lamellar bodies. Later in the destructive response, there is endothelial cell necrosis, and it is believed that this damage favors the development of an alveolitis characterized by edema and infiltration of the interstitial and alveolar spaces with inflammatory cells. It has been suggested that destruction of the surfactant-producing type II cells leads to greater surface tension within the alveoli, which then draws fluid from the capillaries, thus producing edema. The destructive phase is followed by a proliferative phase in which an extensive fibrosis develops that is rapid in onset. The alveolar spaces are invaded by immature and mature fibroblasts that deposit collagen and ground substance. Interstitial fibrosis is also observed, but it is the alveolar fibrosis that is more deleterious, owing to obliteration of the alveolar air spaces and obstruction of gas exchange. One mechanism proposed for the pulmonary fibrosis is that it is associated with perturbations in re-epithelialization subsequent to epithelial damage. This mechanism is plausible in the case of paraquat, since the type I cells are damaged and normal repair of the alveolar epithelium is compromised by the concomitant destruction of the progenitor type II cells.

Paraquat is selectively taken up in the lung, but this accumulation is neither associated with covalent binding, nor have any significant metabolites of paraquat been identified. However, there is general agreement that redox cycling is involved in the pulmonary toxicity of paraquat. Under aerobic conditions, the addition of paraquat to incubations containing lung microsomes and NADPH results in marked increases in NADPH oxidation and oxygen uptake. Under anaerobic conditions in the presence of lung microsomes, paraquat is reduced by NADPH to a free radical species, whereas under aerobic conditions, the oxidation of NADPH is due to the cyclic reduction and reoxidation of paraquat. As a result of these reactions, NADPH is consumed, resulting in a decrease in the NADPH/NADP⁺ ratio. This reduction is also due, in part, to the utilization of NADPH as a cofactor for glutathione reductase when oxidized glutathione is regenerated back to reduced glutathione. It has been suggested that glutathione oxidation is mediated through its role as a substrate in the reduction of cellular hydrogen peroxide via glutathione peroxidase, which is consistent with evidence of paraquat-dependent formation of hydrogen peroxide in lung microsomes. However, other studies suggested that hydrogen peroxide is produced from a reactive oxygen species initially formed from the reaction of oxygen with the paraquat radical. It has been demonstrated that the superoxide anion radical, a short-lived oxygen species, is formed by the reduction of oxygen by the paraquat radical, and it has subsequently been confirmed that superoxide is generated from paraquat in lung microsomal incubations. The dismutation of superoxide by the enzyme superoxide dismutase can produce hydrogen peroxide, which can also be produced by reduction of superoxide by the paraquat radical.
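The redox cycle just described can be condensed into the following reactions, in which PQ2+ denotes the paraquat dication and PQ•+ the paraquat radical cation; the first step is catalyzed by microsomal NADPH-dependent reductases:

\begin{align*}
\mathrm{NADPH} + 2\,\mathrm{PQ^{2+}} &\longrightarrow \mathrm{NADP^+} + \mathrm{H^+} + 2\,\mathrm{PQ^{\bullet+}} && \text{(enzymatic one-electron reduction)}\\
\mathrm{PQ^{\bullet+}} + \mathrm{O_2} &\longrightarrow \mathrm{PQ^{2+}} + \mathrm{O_2^{\bullet-}} && \text{(reoxidation; superoxide formation)}\\
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\xrightarrow{\;\mathrm{SOD}\;} \mathrm{H_2O_2} + \mathrm{O_2} && \text{(dismutation)}\\
\mathrm{H_2O_2} + 2\,\mathrm{GSH} &\xrightarrow{\;\text{glutathione peroxidase}\;} \mathrm{GSSG} + 2\,\mathrm{H_2O} && \text{(peroxide removal)}\\
\mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+} &\xrightarrow{\;\text{glutathione reductase}\;} 2\,\mathrm{GSH} + \mathrm{NADP^+} && \text{(GSH regeneration; further NADPH consumption)}
\end{align*}

Repeated cycling thus consumes NADPH and oxygen continuously while the paraquat pool is conserved, which accounts for the marked increases in NADPH oxidation and oxygen uptake observed in microsomal incubations.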
The available evidence thus indicated that several potentially toxic species could be generated from oxygen during the process of reduction and reoxidation of paraquat.

Lipid peroxidation and membrane damage have been proposed as a possible mechanism for the toxicity of paraquat. In this regard, paraquat has been shown to stimulate lipid peroxidation in vitro and in vivo. However, other reports have indicated that paraquat may not generate lipid peroxidation.


These conflicting findings suggested that more definitive and consistent evidence is required to demonstrate lipid peroxidation as a mechanism of paraquat toxicity. In addition to lipids, paraquat has been reported to cause modification of proteins and DNA damage.

In summary, the available evidence indicated that redox cycling of paraquat in lung cells leads to oxidative stress, consumption of NADPH, lipid peroxidation, and protein and DNA modifications. However, it is not clear which specific events mediate the pulmonary toxicity of paraquat. It is possible that the mechanism involves several processes acting independently or in concert with one another.

Conclusion and Comments We have discussed the mechanisms implicated in the toxicities and carcinogenicities induced in the lung as a result of chemical exposures. In the studies described herein, the compounds dichloroethylene and VC have been used as surrogates for chemical exposures. Both are small molecules with double bonds that readily undergo cytochrome P-450-dependent oxidation to produce epoxides. The epoxides of dichloroethylene and VC are short-lived, highly reactive, and bind to cellular constituents including proteins and nucleic acids. The dichloroethylene epoxide targets cellular proteins at the site of formation and causes acute Clara cell necrosis. In contrast, the VC epoxide interacts with DNA and sets into motion a cascade of events, including the formation of DNA adducts and mutations, that leads subsequently to the formation of lung tumors. Hence, dichloroethylene and VC require metabolic activation to exert lung toxicity and carcinogenicity, respectively. Moreover, the extent to which these effects are manifested depends, in part, on the capacities of the activating enzymes to convert the chemicals into their ultimate reactive species. This paradigm is supported by studies in inbred strains of mice showing that the severities of bronchiolar necrosis correlated with the levels of CYP2E1 in the lungs of dichloroethylene-treated mice.

These differential effects are relevant in the context of chemical exposures of humans, where considerable variability in the expression of xenobiotic-metabolizing enzymes is found in the general population. This variability is due, in part, to the presence of genetic polymorphisms that result in differing enzyme activities. Genetic polymorphisms are linked to disease susceptibility and have been identified for P-450 enzymes including CYP1A1 and CYP2E1. More recently, an interethnic genetic polymorphism for CYP2F1 has been identified, and it has been postulated that this polymorphism may be associated with lung cancer development. Extensive investigations in this research area are currently taking place, with the anticipated possibility of being able to predict relative risks in various population groups.

See also: Air Pollution and Development of Children’s Pulmonary Function; Air Pollution and Lung Cancer Risks; Effect of Air Pollution on Human Health; Hazardous (Organic) Air Pollutants; Long-Term Effects of Particulate Air Pollution on Human Health; PM2.5 Sources and Their Effects on Human Health in China: Case Report; Sulfur Oxides: Sources, Exposures and Health Effects.

Further Reading
Barber, N.A., Ganti, A.K., 2011. Pulmonary toxicities from targeted therapies: A review. Targeted Oncology 6 (4), 235–243.
Boczkowski, J., Lanone, S., 2012. Respiratory toxicities of nanomaterials: A focus on carbon nanotubes. Advanced Drug Delivery Reviews 64 (15), 1694–1699.
Dinis-Oliveira, R.J., Duarte, J.A., Sánchez-Navarro, A., et al., 2008. Paraquat poisonings: Mechanisms of lung toxicity, clinical features, and treatment. Critical Reviews in Toxicology 38, 13–71.
El-Gharabawy, R.M., El-Maddah, E.I., Oreby, M.M., Salem, H.S., Ramadan, M.O., 2013. Immunotoxicity and pulmonary toxicity induced by paints in Egyptian painters. Journal of Immunotoxicology 10 (3), 270–278.
Forkert, P.G., 2001. Mechanisms of 1,1-dichloroethylene-induced cytotoxicity in lung and liver. Drug Metabolism Reviews 33, 49–80.
Forkert, P.G., D’Costa, D., El-Mestrah, M., 1999. Expression and inducibility of alpha, pi, and mu glutathione S-transferase protein and mRNA in murine lung. American Journal of Respiratory Cell and Molecular Biology 20, 143–152.
Gram, T.E. (Ed.), 1993. Metabolic activation and toxicity of chemical agents to lung tissue and cells. Pergamon Press, London.
Gram, T.E., 1997. Chemically reactive intermediates and pulmonary xenobiotic toxicity. Pharmacological Reviews 49, 297–341.
Hinson, J.A., Roberts, D.W., 1992. Role of covalent and noncovalent interactions in cell toxicity: Effects on proteins. Annual Review of Pharmacology and Toxicology 32, 471–510.
Levy, J.I., Diez, D., Dou, Y., Barr, C.D., Dominici, F., 2012. A meta-analysis and multisite time-series analysis of the differential toxicity of major fine particulate matter constituents. American Journal of Epidemiology 175 (11), 1091–1099.
Miller, E.C., Miller, J.A., 1966. Mechanisms of chemical carcinogenesis: Nature of proximate carcinogens and interactions with macromolecules. Pharmacological Reviews 18, 805–838.
Miller, J.A., Miller, E.C., 1983. The metabolic activation and nucleic acid adducts of naturally-occurring carcinogens: Recent results with ethyl carbamate and the spice flavors safrole and estragole. British Journal of Cancer 48, 1–15.
Nelson, S.D., Pearson, P.G., 1990. Covalent and noncovalent interactions in acute lethal cell injury caused by chemicals. Annual Review of Pharmacology and Toxicology 30, 169–195.
Nelson, D.R., Koymans, L., Kamataki, T., et al., 1996. P450 superfamily: Update on new sequences, gene mapping, accession numbers and nomenclature. Pharmacogenetics 6, 1–42.
Pohl, L.R., Satoh, H., Christ, D.D., 1988. The immunologic and metabolic basis of drug hypersensitivities. Annual Review of Pharmacology and Toxicology 28, 367–387.
Roth, R., 1997. Toxicology of the respiratory system. Pergamon Press, New York.


Teuwen, L.A., Van den Mooter, T., Dirix, L., 2015. Management of pulmonary toxicity associated with targeted anticancer therapies. Expert Opinion on Drug Metabolism & Toxicology 11 (11), 1695–1707.
Witschi, H., Nettesheim, P. (Eds.), 1982. Mechanisms in respiratory toxicology, vol. 2. CRC Press, Boca Raton, FL.
Wogan, G.N., Hecht, S.S., Felton, J.S., Conney, A.H., Loeb, L.A., 2004. Environmental and chemical carcinogenesis. Seminars in Cancer Biology 14, 473–486.
Zhang, J.Y., Wang, Y., Prakash, C., 2006. Xenobiotic-metabolizing enzymes in human lung. Current Drug Metabolism 7, 939–948.

Relevant Websites
http://drnelson.utmem.edu/nelsonhomepage.html – Updates on cytochrome P450.

Children’s Environmental Health: General Overview
LR Goldman, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
© 2011 Elsevier B.V. All rights reserved.

Abbreviations
CDC Centers for Disease Control and Prevention
DDT dichlorodiphenyltrichloroethane
DNA deoxyribonucleic acid
PCB polychlorinated biphenyl
SIDS sudden infant death syndrome
WHO World Health Organization

Introduction

Children’s susceptibilities to environmental exposures differ from adults’ and may result in more or less risk at various stages of their development. Although it is appropriate to be concerned about environmental exposures, it is wise to keep concerns in perspective. A concern for children’s environmental health is not new; however, it has recently received increased attention. Future research will be needed to determine more precisely how environmental agents contribute to or cause illness, to identify which children are most susceptible, and to develop approaches to protect today’s children and future generations.

Windows of Vulnerability

The intricate, gradual maturation of the fetus and infant organ systems – especially the nervous, immune, and endocrine systems – results in ‘windows of vulnerability’ during which development can be disrupted, with the possibility of serious, irreversible effects over a lifetime. During such windows, cells are growing rapidly and differentiating to form new structures, organs are forming, and delicate processes, such as the formation of neural connections, are occurring. The developing brain is particularly vulnerable to disruption.

For the fetus, exposure to environmental contaminants occurs when contaminants pass from the mother to the baby via the placenta. The placenta plays a major role in determining which substances reach the fetus, and at what levels. It serves as a barrier to some toxic substances, but others pass through; when the fetus is unable to metabolically inactivate or excrete these, the result is higher levels of toxicants in the fetus than in the mother. Adverse outcomes associated with prenatal environmental exposures can occur at birth (e.g., miscarriage, birth defects, low birth weight, and preterm birth), during childhood (e.g., cancer, asthma, and autism), or during adulthood (e.g., cardiovascular disease).

Public Health Consequences

In 1948, the World Health Organization (WHO) defined health as ‘not merely the absence of disease or disability but a complete state of physical, mental, and social well-being.’ Many children do not experience such a state of well-being. Their potential for healthy development is compromised at birth; they contract chronic or life-threatening diseases in early childhood; or they struggle with physical, cognitive, and behavioral disabilities.

Developmental Disabilities

The Centers for Disease Control and Prevention (CDC) estimates that 17% of US children have developmental disabilities, with learning disabilities most prevalent (6.5%); severe developmental disabilities occur in 2% of children. According to the Committee on Developmental Toxicology of the National Research Council, 3% of all developmental defects are attributable to exposure to toxic chemicals and physical agents (environmental factors), and 25% of developmental defects may be due to a combination of genetic and environmental factors. A number of persistent pollutants – lead, polychlorinated biphenyls (PCBs), dioxins, and methyl mercury (a form of mercury) – have been linked with developmental neurotoxicity (impairment of the development of the brain or nervous system) that results in developmental disabilities in children. Others, such as certain pesticides, are known from animal studies to be neurotoxicants, and are therefore highly likely to cause similar effects in children. When exposed to neurotoxicants during rapid brain and nervous system development, children may experience one or more developmental disorders.

Physical Growth

Environmental exposures affect many other aspects of children’s capacities to grow and thrive. Effects of such exposures include:

• reduced growth (associated with exposures to lead and environmental tobacco smoke);
• reduced lung volume and lung function growth (associated with air pollution); and
• delayed puberty (associated with exposure to lead in childhood).

Fetal growth restriction and premature birth pose major threats to health and development, and are associated with elevated risks of birth defects, childhood cancer, and asthma. These conditions pose significant public health problems, and their incidence is on the rise. A significant proportion of infant deaths occur among babies born preterm; preterm birth is also associated with substantial neonatal morbidity, a high risk of long-term neurodevelopmental deficits, and low academic performance.

The risk of health problems increases over the lifetimes of babies born preterm or growth restricted. They are more likely to be ill as newborns and to die from sudden infant death syndrome (SIDS). Preterm babies suffer from numerous complications involving injury to, or immaturity of, vital organs such as the brain and kidneys. Children with low birth weight are more likely to have chronic health problems later in life, including obesity, diabetes, and cardiovascular disease; whether or how environmental exposures might be involved is unknown. Other likely consequences of preterm birth and restricted growth include developmental disabilities and decrements in neurobehavioral functioning.

Numerous factors are associated with fetal growth restriction, including maternal smoking and drinking, poverty, nutrition, access to and quality of prenatal care, infections, and social stress. Examples of exposure effects on preterm birth and low birth weight are:

• Even at low levels, exposures to mercury, lead, and other environmental agents may affect birth weight.
• Infants whose mothers ate fish from Lake Michigan (contaminated with PCBs and other xenobiotics) during pregnancy were somewhat smaller for gestational age, were born earlier in pregnancy, had smaller head circumferences, and were of lower birth weight. A second study of babies born to Michigan fish eaters reported similar trends, but these were not statistically significant. A Dutch longitudinal study of children found that infants with higher umbilical cord blood PCB levels had lower birth weights as well. Similar associations between PCB exposures and altered fetal growth and development have been reported in some studies but not in others.
• A study of women who work in child care settings with exposures to wood preservative chemicals (pentachlorophenol, lindane, polychlorinated dibenzo-p-dioxins, and dibenzofurans in indoor air) found that women with higher levels of such exposures gave birth to babies with significant reductions in birth weight and length for gestational age.

Childhood Cancer

The relationship of environmental exposures to childhood cancers is an area of high interest and concern, although challenging to study because illness usually follows exposure by several years.

Birth Defects

Birth defects are the leading cause of mortality between birth and 1 year of age, and many possible associations with environmental exposures have been reported in the literature. Approximately 10% of birth defects are estimated to be purely environmental in origin (including nutritional and infectious factors), but many more involve environmental exposure in combination with genetic factors.

Asthma

The evidence for environmental associations with childhood asthma is very strong, particularly for indoor air pollution containing certain allergens (dust mites, molds, cockroaches, and pets) and irritants (environmental tobacco smoke and poorly vented stoves). Outdoor air pollution also plays a role in bringing on asthma attacks, and recent data indicate that it may cause the onset of asthma as well.

Adult Disease

The fetal origin of adult disease, known as the Barker hypothesis, is attracting growing interest among researchers and clinicians. Because children have more years of potential life, they face a greater risk than adults of developing exposure-related diseases with long latency periods, such as cancer. More specifically, the links between fetal exposure and reproductive abnormalities, including infertility, are becoming clearer. Recognition of the importance of preconception parental exposures is increasing as well.

Another possible mechanism for induction of adult disease is via exposure to carcinogens in the prenatal through newborn period. A small number of animal cancer toxicology studies comparing childhood and adult carcinogenesis indicate that adult carcinogens may cause higher incidences of cancer when exposure occurs in utero and in early childhood. A number of factors may combine to produce the greater susceptibility of the fetus and newborn to carcinogens. Although health care professionals’ concern about exposure to carcinogens, particularly prenatally and to newborns, is increasing, US regulatory policies continue to stipulate that bioassays to assess carcinogens use only mature animals.

Endocrine Disruption

Exposure to endocrine disruptors in childhood illustrates the heightened susceptibility of the very young to environmental exposures. Endocrine disruptors are either natural or synthetic substances originating outside the human body that disrupt some aspect of the endocrine system related to hormone levels or functions or both within an individual. The endocrine system is involved in, among other developmental events:

• development and function of reproductive organs (prenatal through adult reproductive years);
• development and function of the nervous system and the brain (prenatal through adult reproductive years);
• susceptibility to organ malformations (prenatal organogenesis);
• maturation of the immune system (late prenatal life and early infancy); and
• growth spurts (prenatal third trimester and adolescence).

Genetics and Epigenetics

The rapidly advancing science of genetics is shedding new light on the environment’s role in human health. Genotoxic substances, that is, those causing genetic mutations, can result in mutations that occur de novo (for the first time) in a fetus or that can be inherited, that is, passed from either parent to the child. Such mutations can result in miscarriages or fetal death and, if not fatal, can cause birth defects, cancer, and other adverse outcomes.

Gene expression – whether a particular gene is ‘turned on’ or ‘turned off’ – can be altered by environmental exposures. Alterations in gene expression are called epigenetic changes. Although the genetic material itself (the DNA (deoxyribonucleic acid) sequence) remains unchanged, the function of a particular gene can be nullified by epigenetic changes. Experimental studies show that such epigenetic changes can be passed from animal parents to their offspring.

Conclusions

It is known that children often are more susceptible and more exposed than adults to certain environmental risks. Much of the current concern is based on knowledge of biology and on disturbing findings from animal research, rather than on direct observation of health effects on children and young people as they encounter exposures in their environments. Over the next few years, the state of knowledge should improve. A wealth of new scientific data is emerging worldwide, both on human biological changes from prenatal life through adolescence and on long-range impacts of environmental exposures. The many critical studies now underway to evaluate childhood environmental exposures should be extended to encompass the fetus and the infant, as well as the child. Large longitudinal studies, such as the National Children’s Study in the United States and similar studies internationally, may inform the associations between early childhood exposures – to chemicals, air pollutants, pathogens, nutrients, pharmacologic agents, injury, and stress – and subsequent child health outcomes and disabilities. Such large, prospective cohort studies should enable scientists to evaluate the complex gene–environment interactions that must be understood to prevent exposure-related disease in children over a lifetime and to ensure their well-being.

Well-designed and well-supported research can inform the decisions and guide the actions of policy makers, health professionals, parents, and others who care for children, as well as the scientists who design future toxicity testing strategies. Meanwhile, government regulations and the activities of nongovernmental groups will continue to reduce environmental emissions of lead, mercury, PCBs, dichlorodiphenyltrichloroethane (DDT), and other substances that are potentially harmful to children. Increased research and understanding of the role of environmental exposures in health can enable society to take further action to reduce risks to children.

Poor children, in particular, are likely to be the most vulnerable. Worldwide, more than 200 million children aged 5–17 work, of whom more than 120 million work in hazardous occupations. These numbers are declining over time, but these children are likely to be the most at risk. Fortunately, efforts are underway internationally to eliminate hazardous forms of child labor. In sum, the potential to reduce or eliminate some of the factors involved in pediatric environmental health risks is very encouraging.

See also: Estimating Environmental Health Costs: Valuation of Children’s Health Impacts.

Further Reading

Anderson, L.M., 2004. Introduction and overview. Perinatal carcinogenesis: Growing a node for epidemiology, risk management, and animal studies. Toxicology and Applied Pharmacology 199 (2), 85–90.
Anway, M.D., Cupp, A.S., Uzumcu, M., Skinner, M.K., 2005. Epigenetic transgenerational actions of endocrine disruptors and male fertility. Science 308 (5727), 1466–1469.
Barker, D.J., Eriksson, J.G., Forsen, T., Osmond, C., 2002. Fetal origins of adult disease: Strength of effects and biological basis. International Journal of Epidemiology 31 (6), 1235–1239.
Bearer, C.F., 1995. How are children different from adults? Environmental Health Perspectives 103 (supplement 6), 7–12.
Branum, A.M., Collman, G.W., Correa, A., et al., 2003. The National Children’s Study of environmental effects on child health and development. Environmental Health Perspectives 111 (4), 642–646.
Brauer, M., Hoek, G., Smit, H.A., et al., 2007. Air pollution and development of asthma, allergy and infections in a birth cohort. The European Respiratory Journal 29 (5), 879–888.
Brent, R.L., 2004. Environmental causes of human congenital malformations: The pediatrician’s role in dealing with these complex clinical problems caused by a multiplicity of environmental and genetic factors. Pediatrics 113 (Supplement 4), 957–968.
Eskenazi, B., Marks, A.R., Bradman, A., et al., 2006. In utero exposure to dichlorodiphenyltrichloroethane (DDT) and dichlorodiphenyldichloroethylene (DDE) and neurodevelopment among young Mexican American children. Pediatrics 118 (1), 233–241.
Gauderman, W.J., Avol, E., Gilliland, F., et al., 2004. The effect of air pollution on lung development from 10 to 18 years of age. The New England Journal of Medicine 351 (11), 1057–1067.
Grandjean, P., Harari, R., Barr, D.B., Debes, F., 2006. Pesticide exposure and stunting as independent predictors of neurobehavioral deficits in Ecuadorian school children. Pediatrics 117 (3), e546–e556.
Jacobson, J.L., Jacobson, S.W., Humphrey, H.E., 1990. Effects of in utero exposure to polychlorinated biphenyls and related contaminants on cognitive functioning in young children. The Journal of Pediatrics 116 (1), 38–45.
Karmaus, W., Wolf, N., 1995. Reduced birthweight and length in the offspring of females exposed to PCDFs, PCP, and lindane. Environmental Health Perspectives 103 (12), 1120–1125.
Lanphear, B.P., Aligne, C.A., Auinger, P., Weitzman, M., Byrd, R.S., 2001. Residential exposures associated with asthma in US children. Pediatrics 107 (3), 505–511.
McConnell, R., Berhane, K., Gilliland, F., et al., 2002. Asthma in exercising children exposed to ozone: A cohort study. Lancet 359 (9304), 386–391.
National Research Council, Committee on Developmental Toxicology, 2000. Scientific Frontiers in Developmental Toxicology and Risk Assessment. National Academies Press, Washington, DC.
Olshan, A.F., Anderson, L., Roman, E., et al., 2000. Workshop to identify critical windows of exposure for children’s health: Cancer work group summary. Environmental Health Perspectives 108 (supplement 3), 595–597.
Patandin, S., Koopman-Esseboom, C., de Ridder, M.A., Weisglas-Kuperus, N., Sauer, P.J., 1998. Effects of environmental exposure to polychlorinated biphenyls and dioxins on birth size and growth in Dutch children. Pediatric Research 44 (4), 538–545.
Rauh, V.A., Garfinkel, R., Perera, F.P., et al., 2006. Impact of prenatal chlorpyrifos exposure on neurodevelopment in the first 3 years of life among inner-city children. Pediatrics 118 (6), e1845–e1859.
Ribas-Fito, N., Torrent, M., Carrizo, D., et al., 2006. In utero exposure to background concentrations of DDT and cognitive functioning among preschoolers. American Journal of Epidemiology 164 (10), 955–962.
Rojas-Martinez, R., Perez-Padilla, R., Olaiz-Fernandez, G., et al., 2007. Lung function growth in children with long-term exposure to air pollutants in Mexico City. American Journal of Respiratory and Critical Care Medicine 176 (4), 377–384.
Rylander, L., Stromberg, U., Dyremark, E., Ostman, C., Nilsson-Ehle, P., Hagmar, L., 1998. Polychlorinated biphenyls in blood plasma among Swedish female fish consumers in relation to low birth weight. American Journal of Epidemiology 147 (5), 493–502.
Selevan, S.G., Kimmel, C.A., Mendola, P., 2000. Identifying critical windows of exposure for children’s health. Environmental Health Perspectives 108 (supplement 3), 451–455.
Torres-Sanchez, L., Rothenberg, S.J., Schnaas, L., et al., 2007. In utero p,p′-DDE exposure and infant neurodevelopment: A perinatal cohort in Mexico. Environmental Health Perspectives 115 (3), 435–439.
US Environmental Protection Agency, 2005. Supplemental Guidance for Assessing Cancer Susceptibility from Early-Life Exposure to Carcinogens. Risk Assessment Forum, Washington, DC. EPA/630/R-03/003F.
Vartiainen, T., Jaakkola, J.J., Saarikoski, S., Tuomisto, J., 1998. Birth weight and sex of children and the correlation to the body burden of PCDDs/PCDFs and PCBs of the mother. Environmental Health Perspectives 106 (2), 61–66.

Relevant Websites

http://www.aap.org/visit/cmte16.htm – American Academy of Pediatrics.
http://www.cape.ca/children/resources.html – Canadian Children’s Environmental Health Project.
http://www.cehn.org/ – Children’s Environmental Health Network.
http://www.ilo.org/ipec/index.htm – International Labour Organization (ILO), International Programme on the Elimination of Child Labour.
http://www.nyo.unep.org/ceht.htm – United Nations Environment Program, Children’s Environmental Health.
http://www.aoec.org/PEHSU.htm – US Association of Occupational and Environmental Clinics, Pediatric Environmental Health Specialty Unit Network.
http://yosemite.epa.gov/ochp/ochpWeb.nsf/content/homepage.htm – US Environmental Protection Agency.
http://nationalchildrensstudy.gov/ – US National Children’s Study.
http://www.niehs.nih.gov/health/topics/population/children/index.cfm – US National Institute for Environmental Health Sciences.
http://www.niehs.nih.gov/research/supported/centers/prevention/ – US National Institute for Environmental Health Sciences, Centers for Children’s Environmental Health & Disease Prevention Research.
http://www.who.int/heca/en/index.html – WHO, Healthy Environments for Children Alliance.

Children’s Environmental Health in Developing Countries*
J Pronczuk†, M-N Bruné, and F Gore, World Health Organization, Geneva, Switzerland
© 2011 Elsevier B.V. All rights reserved.

*The authors are staff members of the World Health Organization. The authors alone are responsible for the views expressed in this publication, and they do not necessarily represent the decisions or policies of the World Health Organization.
†Deceased.

Abbreviations
BPA bisphenol A
DALY disability-adjusted life year
EDC endocrine disrupting chemical
PAH polycyclic aromatic hydrocarbon
PCB polychlorinated biphenyl
POP persistent organic pollutant
UV ultraviolet
WHO World Health Organization

Introduction

Threats to human health from environmental conditions are increasing worldwide, and protecting the health of children from these threats poses a tremendous challenge. Worldwide, the burden of environmental disease is much higher for children than for adults: about a quarter of the global burden of disease can be attributed to environmental factors, but children under five years of age bear over 40% of this burden. Although the term ‘children’ is used to cover all age-groups from birth to age 19, strict World Health Organization (WHO) terminology refers to ‘newborns’ (1–28 days), ‘infants’ (up to 12 months), ‘children’ (from 1 up to 10 years), and ‘adolescents’ (10–19 years).

All children, in industrialized and developing countries alike, are at risk of exposure to unsafe environments and are uniquely vulnerable to them. However, the type and magnitude of exposure, as well as the health consequences, vary greatly according to the social and economic conditions of the country and geographic area in which the child lives. Contaminants in air, water, and food as well as disease vectors, toxic chemicals, and ultraviolet (UV) radiation represent a threat to children’s health everywhere. However, they contribute to a significantly high proportion of overall child morbidity and mortality in developing countries, where the impact of environmental risk factors is magnified by poverty, malnutrition, and infectious diseases. In addition, stressful situations such as those resulting from social conflict, natural disaster, or life in degraded settings contribute to disease in children. Uncontrolled industrialization and anarchic urbanization contribute to creating hazardous living environments for children, their families, and communities.

In the most affluent countries, where traditional environmental threats and infectious diseases have largely been controlled, the major diseases confronting children now are chronic and disabling conditions termed the ‘new pediatric morbidity.’ It is reported that asthma mortality has doubled; leukemia and brain cancer have increased in incidence; neurodevelopmental dysfunction is widespread; and the incidence of hypospadias has doubled. Concern is currently raised by ‘new chemicals’ present in household products, cosmetics, and toys, by the impact of some new technologies, and by the observation of new epidemiological trends in pediatric diseases. Low-level chronic exposures to some chemicals present in children’s environments, such as phthalates, brominated compounds, bisphenol A (BPA), and polychlorinated biphenyls (PCBs), are a cause of serious concern, as they may potentially alter the neurological, endocrine, reproductive, and immune systems, affecting growth and development.

The type and importance of environmental threats to children vary according to a number of factors, including the economic status of the population. Low-income population groups continue to be exposed to the ‘traditional’ risk factors, some of which existed before industrialization, such as contaminated water and food, indoor air pollution, lack of sanitation, and vector-borne diseases. As the income of a population group increases, a series of ‘modern’ environmental threats are introduced; for example, widespread use, storage, and transportation of chemicals and pollution resulting from road traffic and industrialization may predominate.
In addition, a series of ‘emerging’ threats are being recognized, as new synthetic substances, persistent organic pollutants (POPs), nanoparticles, some types of radiation, and climate change are linked to negative environmental health effects. The existence of such global threats adds complexity to the consideration of children’s environmental health in different parts of the world.

Nowadays, the ‘modern’ hazards recognized mainly in the industrialized countries may also be present in developing countries and countries in transition, compounding the effects of the existing ‘traditional’ hazards. Children represent approximately 40% of the world population, yet 100% of the future of humanity. A large majority of the world’s children are therefore living under adverse environmental conditions that will have an impact on their future health and well-being, as well as the future health and well-being of humanity. Environmental risk factors can contribute to more than one-third of the disease burden in children – a major portion of disease that could be prevented. Environmental factors contribute to 36% of all deaths and 34% of the overall disease burden among children aged 0–14.

The environmental causes or triggers of childhood diseases are perceived differently in the more and less affluent countries. This provides a partial explanation as to why children’s environmental health issues are addressed so differently in diverse parts of the world. In developing areas, where dirty water and indoor air pollution are a ‘given’ in the community, their adverse effects on child health may remain unrecognized or be considered unavoidable. In more affluent and informed communities, problems such as learning disabilities and developmental disorders are growing causes of concern and may be attributed to environmental causes even when scientific evidence is inconclusive or absent. The different levels of knowledge and information in communities result in different abilities to identify and address environmental problems and take appropriate actions.

Public health problems are perpetuated when environmental risk factors are not fully recognized and only a ‘curative’ approach is applied, instead of considering prevention. For example, antibiotics are indicated for childhood respiratory infections or oral rehydration for diarrheal diseases, but poor or no advice is provided on the use of safe fuels, access to safe drinking water, and improved sanitation. In the case of asthma, the most common chronic disease in children, it is quite frequent to have children treated in a hospital and then sent back to an environment with indoor allergens (e.g., dust mites in bedding, carpets and stuffed furniture, air pollution, and pet dander), outdoor allergens (e.g., pollen and mold), tobacco smoke, and chemical irritants that may continue to trigger or aggravate the condition. In affluent countries, asthma morbidity is understood to be closely linked with poverty, ethnic minority group status, and residence in an inner-city environment, as marginalized populations are disproportionately exposed to irritants in the air.

There are large regional differences in the environmental contribution to various disease conditions, due in part to differences in environmental exposures and access to health care. For example, it has been estimated that 25% of all deaths in developing regions are attributable to environmental causes, compared with only 17% in developed regions. The analysis of how different diseases are caused or influenced by environmental risk factors shows that diarrheal disease, lower respiratory infections, malaria, and perinatal conditions are the ones having the greatest impact (Figure 1). This environmentally mediated disease burden is much higher in the developing world than in developed countries: the infant death rate from environmental causes is 12 times higher in developing than in developed countries.
It has been estimated that nearly 1.5 million deaths per year globally could be attributed to unsafe water, sanitation, and hygiene; 9 out of 10 of these deaths occur in children, nearly all in developing countries.

Figure 1 Deaths attributable to the environment in children 0–14 years old: diarrheal diseases, 36%; lower respiratory infections, 19%; other causes, 13%; malaria, 11%; childhood-cluster diseases, 7%; perinatal conditions, 6%; drownings, 3%; malnutrition, 2%; road traffic accidents, 2%; poisonings, 1%. Based on data from Prüss-Ustün A and Corvalan C (2006) Preventing disease through healthy environments: Towards an estimate of the environmental burden of disease. Geneva: World Health Organization. Available at: http://www.who.int/quantifying_ehimpacts/publications/preventingdisease/en/index.html
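For readers who want to work with the Figure 1 breakdown directly, the shares can be captured in a few lines of code. The sketch below (Python) is illustrative only: the percentage shares come from Figure 1, but TOTAL_DEATHS is a hypothetical placeholder, not a WHO estimate.

# Illustrative use of the Figure 1 shares. Only the percentages are from
# Figure 1; TOTAL_DEATHS is a hypothetical placeholder, not a WHO figure.
FIGURE_1_SHARES = {
    "Diarrheal diseases": 0.36,
    "Lower respiratory infections": 0.19,
    "Other": 0.13,
    "Malaria": 0.11,
    "Childhood-cluster diseases": 0.07,
    "Perinatal conditions": 0.06,
    "Drownings": 0.03,
    "Malnutrition": 0.02,
    "Road traffic accidents": 0.02,
    "Poisonings": 0.01,
}

TOTAL_DEATHS = 1_000_000  # hypothetical total, chosen only for illustration

# A complete breakdown of a pie chart should sum to 100%.
assert abs(sum(FIGURE_1_SHARES.values()) - 1.0) < 1e-9

for cause, share in FIGURE_1_SHARES.items():
    print(f"{cause:30s} {share:4.0%}  ~{share * TOTAL_DEATHS:,.0f} deaths")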


These estimates are considered to be conservative, as evidence is not clear for many diseases, and the linkages between environmental hazard and disease outcome are complex. As already stated, the diseases with the largest absolute burden attributable to environmental factors include diarrheal diseases, lower respiratory infections, and malaria. Approximately 88% of diarrheal disease in adults and children is attributable to unsafe drinking water and poor sanitation and hygiene. Lower respiratory infections are associated with indoor air pollution produced by the burning of solid fuels and possibly with secondhand tobacco smoke as well as outdoor air pollution. In developed countries, an estimated 20% of lower respiratory infections are attributable to environmental causes, and this rises to 42% in developing countries. Approximately 41% of all lower respiratory infection DALYs (disability-adjusted life years) and deaths in children can be attributed to the environment.

Malaria causes nearly 800 000 deaths a year, and approximately 80% occur in young African children. Infants are especially vulnerable to malaria from approximately 3 months of age, when immunity acquired from the mother starts to decrease.

Every day more than 2000 children die from a preventable injury. Most of these deaths occur in low- and middle-income countries, where the environments are particularly unsafe. In rural areas, injuries are related mainly to farming activities, pesticide poisoning, and drowning. In urban areas, most injuries are traffic related, or linked to electrical appliances, falls, or poisonings resulting from household chemicals and pharmaceuticals ingested by small children. Many of those who survive these injuries suffer lifelong disabling health consequences.
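The attribution percentages quoted above are population attributable fractions (PAFs). As a reminder (these are standard epidemiological definitions, not formulas stated in this article), for a single dichotomous risk factor with exposure prevalence p and relative risk RR, and for the DALY measure in its simplest unweighted form:

\[ \mathrm{PAF} = \frac{p\,(RR - 1)}{p\,(RR - 1) + 1}, \qquad \text{attributable burden} = \mathrm{PAF} \times \text{total burden} \]

\[ \mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}, \qquad \mathrm{YLL} = N \times L, \qquad \mathrm{YLD} = I \times DW \times L' \]

where N is the number of deaths, L the standard life expectancy at the age of death, I the number of incident cases, DW the disability weight (between 0 and 1), and L′ the average duration of disability. Read this way, the statement that 88% of diarrheal disease is attributable to unsafe water, sanitation, and hygiene means that 88% of the diarrheal burden would be expected to disappear if those exposures were removed.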

Why Children in Developing Countries are Particularly Vulnerable

All children are especially susceptible to environmental risk factors, but those living in developing countries are affected in a disproportionate manner. Children in developing countries lose eight times more healthy life years, per capita, from environmentally caused diseases than their counterparts in developed countries. In some of the poorest areas, the disparity is far greater: the number of healthy life years lost as a result of childhood lower respiratory infections is 800 times greater per capita; 25 times greater for road traffic injuries; and 140 times greater for diarrheal diseases. To address the special vulnerability of children living in developing countries, it is important first to understand why children are particularly susceptible to environmental risk factors and then to analyze the factors that increase their vulnerability in developing areas. Poverty and malnutrition compound the adverse effects of toxicants and contribute to the potential for injuries and environmentally related diseases. Under these conditions, children are not only overexposed but may also lack the normal protective mechanisms and physiological responses.
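The ‘eight times more healthy life years lost per capita’ comparison above is a ratio of per-capita DALY rates. A minimal sketch of the computation follows (Python); the populations and DALY totals below are hypothetical, chosen only so that the ratio comes out to 8.

# Hypothetical inputs (NOT WHO data) illustrating a per-capita
# healthy-life-years-lost comparison between two regions.
def dalys_per_capita(total_dalys: float, population: float) -> float:
    """Healthy life years lost per person in a region."""
    return total_dalys / population

developing = dalys_per_capita(total_dalys=50_000_000, population=500_000_000)
developed = dalys_per_capita(total_dalys=1_250_000, population=100_000_000)

print(f"developing: {developing:.4f} DALYs per capita")  # 0.1000
print(f"developed:  {developed:.4f} DALYs per capita")   # 0.0125
print(f"ratio:      {developing / developed:.1f}x")      # 8.0x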

Children’s Unique Vulnerability

Children’s special susceptibility during different life stages is linked to their dynamic growth and development and to a number of physiological, metabolic, and behavioral characteristics. The growth, development, and maturation of different organs, systems, and functions from conception to adolescence are at risk of being disrupted by exposure to environmental risk factors. For example, anatomical and physiological maturation, metabolic and functional processes, and toxicokinetics and toxicodynamics may be altered by exposure to certain toxicants in the environment. During the developmental stage-specific periods of susceptibility in children (‘critical windows of exposure’ or ‘critical windows of development’), highly dynamic and relevant processes take place at the molecular, cellular, and organ system level, and certain exposures may have a severe impact. For example, the respiratory system develops very actively from the gestational period up to approximately eight years of age, and then more slowly into adolescence. During that entire period, and especially up to eight years of age, the lungs are very vulnerable to external agents such as tobacco smoke, particulate matter, and indoor air pollution that might affect their structure and function. As different organ systems mature at different rates, the same dose of an agent during different periods of development can have very diverse consequences.

Some examples of health effects resulting from developmental exposures include those observed prenatally and at birth (e.g., miscarriage, stillbirth, low birth weight, and birth defects), in young children (e.g., infant mortality, asthma, and neurobehavioral and immune impairment), and in adolescents (e.g., precocious or delayed puberty). Some of the outcomes of environmental exposures may be irreversible and persist throughout life. There may also be a long latency period between exposure and effects, with some outcomes not apparent until later in life. Emerging evidence suggests that an increased risk of certain diseases in adults (e.g., heart disease, hypertension, diabetes, and cancer) can be related to malnutrition in utero. In developing areas, pregnant women may be malnourished and also exposed to a myriad of environmental pollutants – both factors having a negative impact on fetal and child growth and development. Increasing recognition is being given to epidemiological studies that link low birth weight with cardiovascular disease, type II diabetes, and obesity later in life. Other types of environmental exposure early in life, such as exposure to lead and mercury during fetal life, can alter neurodevelopment. As the blood–brain barrier is not fully developed during the first 36 months of life, toxicants such as lead readily cross into the central nervous system. As genes regulate vital mechanisms, including cellular growth and development and the metabolism of environmental contaminants, the consideration of genetics and epigenetics is crucial to understanding the etiology of environmentally induced diseases in children throughout the world.

596

Children’s Environmental Health in Developing Countries

Pathways of Exposure

In children, the pathways of exposure are also quite different from those in adults. A unique exposure route during gestation is transplacental, especially for compounds of low molecular weight that can readily reach the embryo or fetus. Toxicants may pass through the placenta and have an effect on the unborn child. A dramatic example of transplacental exposure is Minamata disease, an alkyl mercury poisoning that occurred in the 1950s causing ‘cerebral palsy’ in children born to mothers who consumed seafood polluted by industrial waste; the developing brains were affected by alkyl mercury through transplacental exposure.

Another unique source of exposure in children is via breast milk when mothers are heavily exposed to environmental pollutants. Many environmental chemicals (particularly those that are lipophilic) pass into breast milk, but morbidity from such exposures is extremely rare. The milk of other mammals, such as cows, often used as the basis for infant formula, is also subject to environmental contamination and may contain higher levels of some pollutants than human milk. The condition of human milk is thus an important indication of the level of environmental contamination in the world the infant is entering, but breast milk remains the food of first choice for any infant of a healthy mother.

Inhalation, ingestion, and dermal contact are important pathways of exposure in children. Children’s inhalation rate is high, as is their surface-to-bodyweight ratio, which may favor increased exposures. Children consume more food and beverages per kilogram of bodyweight than adults, and their diets tend to be less varied during the early stages of life. Some of children’s behaviors, such as playing on the ground and putting their hands in their mouths, can result in high exposure to contaminants in air, soil, and objects. If soil is heavily contaminated, for example, near waste sites or heavily industrialized areas, the potential for exposure to pollutants is very high. A typical example is high lead exposure in children living around smelters or near mining areas that have released lead and other metals into the soil and air. All the physiological and behavioral factors mentioned earlier contribute to children’s special susceptibility and are influenced by the adverse conditions under which children live in poor areas and degraded environments.

Poverty and Malnutrition

Although children from all socioeconomic backgrounds are vulnerable to environmental hazards, poor children suffer the most. This is due not only to the characteristics of the environments in which they live, where risks of adverse exposures are high, but also to their reduced capacity to deal with toxicants and infectious agents. Poor children are at disproportionate risk of exposure to environmental hazards and tend to be malnourished, neglected, and unprotected from hazards. In addition, they tend to have poor or no access to health care services. Even in the most affluent countries, one in six children lives below the poverty line, mainly in urban centers.

Poorly nourished children are more prone to retarded growth, infectious disease, impaired physical and intellectual development, and low productivity as adults. Many biological, social, political, and economic factors can contribute to malnutrition, which may also be determined by environmental conditions; climate change is gaining universal recognition as an emerging risk for food shortage. Malnutrition is a major contributor to children’s mortality and morbidity: globally, over 35% of all deaths in children under five years are associated with malnutrition. Chronic undernutrition during the first 2–3 years of life may result in delayed growth and learning disabilities. As underweight children have an impaired immune system, they are more prone to infections and less able to cope with disease. The effects of poverty and malnutrition on children’s health are highlighted by the Millennium Development Goals, agreed on by all UN member states in 2000, which include the target of halving poverty and hunger by 2015.

Poor nutrition is associated with deficiencies in important micronutrients such as vitamins A and B, iodine, iron, zinc, and folate. Deficiencies occur when there is no access to micronutrient-rich foods such as fruits, vegetables, animal products, and fortified foods, because they are too expensive to buy or are locally unavailable – as is the case in degraded environments. Micronutrient deficiencies increase the general risk of infectious illness and of dying from diarrhea, measles, malaria, and pneumonia, and are among the 10 leading causes of disease in the world today. Iron deficiency is the most common and widespread nutritional disorder globally. It affects a large number of children and women in developing countries, but it is also significantly prevalent in industrialized countries. Approximately 2 billion people – over 30% of the world’s population – are anemic, many due to iron deficiency, and in resource-poor areas this is frequently exacerbated by infectious diseases that are linked to environmental factors, such as malaria, hookworm infestation, and schistosomiasis. Iron deficiency has been associated with increased susceptibility to lead exposure, a major environmental health problem in the developing world. Another relevant nutritional disorder is iodine deficiency, which, through its effects on the developing brain, has condemned millions of people to a life of few prospects and continued underdevelopment. On a worldwide basis, iodine deficiency is the single most important preventable cause of brain damage, and it can be prevented by the addition of a small, constant amount of iodine to salt.
At the other end of the nutrition scale, obesity in children is a mounting health threat, mainly in developed countries but increasingly also in developing countries. The epidemic of childhood obesity, which is likely to have an impact on future health status in adulthood, may be linked to a series of environmental risk factors.


Social, Cultural, Demographic, and Lifestyle Factors

Social, cultural, demographic, and lifestyle factors play significant roles in influencing the exposure of children to environmental threats and consequently their health. They can determine children’s dietary habits and have an impact on nutrition and on the type and extent of exposure to chemicals or microbiological contaminants present in food. An example of a particularly susceptible subpopulation of children are those (e.g., in indigenous groups) who rely for subsistence on marine mammals and fish that are heavily contaminated with POPs or heavy metals. In many African countries where children’s diets are based on maize and groundnuts, contamination with aflatoxins (fungal metabolites) may cause acute toxic episodes and is also associated with an increased risk of developing liver cancer, impaired immune function, growth impairment, and malnutrition, depending on the duration and level of exposure.

Cultural factors that may determine or influence environmental exposures in children include the use of traditional medicines (some of which may contain heavy metals), practices and behaviors such as the indoor burning of ‘evil-chasing’ incense or candles, and the use of unsafe toys that may cause injury or poisoning (e.g., lead-containing paint in toys). Key lifestyle factors within the family, such as alcohol consumption and tobacco smoking, will also influence children’s exposures within their settings.

In many developing countries, there is a lack of, or inadequate, legislation to address the special vulnerability and needs of children with respect to environmental health. In countries that do have legislation, there may be a lack of capacity to enforce, monitor, and evaluate the effectiveness of measures. As a result, children may not be sufficiently protected by the existing legislative and regulatory systems.

Children involved in child labor may be exposed to poisoning, injury, and other environmentally related effects. Many children work in the agricultural sector and in industries using pesticides, cleaners, and solvents, which expose them to the risk of poisoning. For example, children involved in the matches and fireworks industry mix chemicals and use flammable products, becoming exposed to hazardous chemicals and also to the possibility of fires and explosions that lead to burns, injuries, and death. According to the International Labour Organization, in 2008 there were 306 million working children aged 5–17 years, of whom 115 million were involved in hazardous work (which accounts for the majority of the worst forms of child labor). Most working children (69%) are involved in agriculture (compared with 9% in industry), where they may be acutely and chronically exposed to pesticides. Child labor is closely linked to poverty, lack of education, poor health, and gender inequalities, which prevail in developing countries. The conditions under which children live and work in rural settings may determine the extent and nature of exposure to pesticides, injuries, bites, and stings. In agricultural areas, for example, children may be exposed to pesticides because they live close to areas that are frequently sprayed, because their parents bring the products into the home on their clothes and shoes, and as a result of their involvement in rural work, either helping their parents or working directly as young workers or in child labor.

Main Environmental Risk Factors in Developing Areas

In developing countries, the main environmentally related diseases in children continue to be linked to indoor and outdoor air pollution, lack of access to safe water and sanitation, vectors of disease, exposure to hazardous chemicals, and injuries. However, major challenges emerge as poor countries industrialize and children become exposed to new environmental threats commonly associated with the developed world, which creates an additional environmental burden of disease. The ‘new’ emerging concerns include, for example, exposure to POPs and other toxic substances that persist in the environment, as well as global climate change. Children in developing countries are therefore exposed to a double burden of disease resulting from the traditional and modern environmental threats, whose effects are exacerbated by poverty, illiteracy, neglect, and malnutrition. In everyday life, exposure to environmental risk factors occurs in combination with these social conditions.

Water, Sanitation, and Hygiene

Unsafe water, lack of sanitation, and poor hygiene remain a major cause of child death in developing countries. Unsafe water is the main cause of diarrhea, whereas lack of sanitation and hygiene increases the chances of disease. This results in high mortality in children less than five years of age, as well as disease and malnutrition that contribute to high social and public health costs. In addition, lost days of schooling have negative consequences for children’s education (especially for girls, who after menarche may not attend school if there are no toilets). When children are constantly ill and cannot go to school, they become a burden to the family and the community.

In developing regions, the percentage of the population served by adequate sanitation and drinking water increased between 1990 and 2004. Although access to water and sanitation has increased in the past decade, the world’s population has also grown, and as a result there are still large numbers of people without access to these basic services: some 2.6 billion people do not use improved sanitation. Effective low-cost interventions to reduce diarrhea morbidity and mortality are available and can be implemented at the household and community levels.


Vector-Borne Diseases

Vector-borne diseases such as malaria, dengue fever, leishmaniasis, Japanese encephalitis, and others cause approximately one million deaths per year in children, and a high proportion of this burden of disease falls on children under five years of age. Poor water management, global climate change, and environmental and societal changes contribute to the spread of vector-borne diseases.

Malaria is a major global health problem: about half of the world’s population, mostly those living in the world’s poorest countries, are at risk of malaria. The disease, caused by Plasmodium parasites transmitted via the bites of infected female Anopheles mosquitoes, is an especially serious problem in Africa, where one in every five (20%) childhood deaths is due to malaria. An African child has on average between 1.6 and 5.4 episodes of malaria fever each year, and every 30 seconds a child dies from malaria. The disease is exacerbated by poor water management and storage, inadequate housing, deforestation, and loss of biodiversity. For children of all ages, malaria contributes to high mortality and morbidity; in older children, it adds significantly to low educational achievement. Most malaria cases and deaths occur in sub-Saharan Africa, but other areas such as Asia, Latin America, the Middle East, and parts of Europe are also affected.

Schistosomiasis (or bilharziasis) is a waterborne parasitic disease caused by trematode flatworms (genus Schistosoma) whose larvae are released by freshwater snails and penetrate the skin. It affects children and adolescents mainly in Africa, with more than 80% of infected people living in sub-Saharan countries. It is linked to lack of hygiene and to playing and swimming in contaminated waters. Endemic in 74 developing countries, it can lead to a debilitating infection with severe damage to the liver or bladder over many years, and can result in premature death.

Japanese encephalitis is a potentially severe viral disease spread by infected mosquitoes around rice-growing irrigated areas in South and Southeast Asia. Dengue is the most common mosquito-borne viral disease of humans, and in recent years it has become a major international public health concern. The geographical spread of both the vectors (mainly Aedes aegypti) and the viruses has led to the global resurgence of epidemic dengue fever and the emergence of dengue hemorrhagic fever, a leading cause of hospitalization and death among children in several countries. The mosquito breeds in and around human dwellings and is adapted to urban settings. A rapid rise in urban populations is bringing ever-greater numbers of people into contact with this vector, especially in areas that are favorable for mosquito breeding (e.g., where household water storage is common and where solid waste disposal services are inadequate). In children, the infection can develop into dengue hemorrhagic fever or dengue shock syndrome, with high levels of mortality; most dengue deaths occur in children.
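The ‘every 30 seconds’ figure for malaria can be sanity-checked by simple unit conversion, as in the sketch below (Python). This is a rough consistency check only: it yields roughly one million child deaths per year, which differs from the separate per-year total quoted earlier in this article, likely because the two statistics derive from different estimates.

# Rough sanity check of "every 30 seconds, a child dies from malaria".
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds
deaths_per_year = SECONDS_PER_YEAR / 30   # one death per 30-second interval
print(f"~{deaths_per_year:,.0f} child malaria deaths per year")  # ~1,051,920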

Indoor and Outdoor Air Pollution

In developing countries, indoor air pollution resulting from cooking and heating with solid fuels such as dung, wood, crop waste, or coal on open fires or stoves without chimneys is a major environmental health problem. Biomass smoke increases the risk of acute lower respiratory infections in childhood, particularly pneumonia. Almost half of the burden of disease from lower respiratory infections in children under five years is attributable to indoor smoke from solid fuel use. The significance of indoor air pollution in the context of public health varies according to a country’s level of development: in high-mortality developing countries, indoor air pollution is responsible for up to 3.7% of the burden of disease, but it is not among the top 10 risk factors in industrialized countries. Indoor smoke contains soot and dust particles in addition to carbon monoxide and other toxicants that affect the function and development of the lungs. Exposure is particularly high in children (and women), as they spend considerable amounts of time indoors and may often be carried on their mother’s back or be kept close to the warm hearth. Effective interventions are available, such as improved stoves and better ventilation. Other indoor air pollutants of concern in both developing and industrialized countries include secondhand tobacco smoke, volatile organic compounds released by household furnishings or products at room temperature, and biological agents (e.g., animal dander, dust mites, cockroach parts, and molds), all of which may contribute to respiratory disease in children.

Outdoor air pollution is a serious problem in cities throughout the world, in particular in the megacities of developing countries. Outdoor air pollutants include a wide variety of particulate matter and chemicals (e.g., sulfur compounds, carbon monoxide, lead, nitrogen oxides, and ozone), all of which may have an impact on children’s health. Special interest has been raised by polycyclic aromatic hydrocarbons (PAHs) and benzene, carcinogenic compounds found at relatively high levels in urban air pollution from motor vehicle emissions in some cities, including Asian megacities. Children living and attending schools in inner cities may be exposed to these chemicals and be at risk of disease due to genotoxic substances in urban air.

Ultraviolet Radiation

Exposure to solar UV radiation has important public health implications. Sustained ozone depletion and enhanced levels of UV radiation on Earth will aggravate the effects of UV on the skin, eyes, and immune system, and children are at especially high risk of suffering damage from exposure to UV radiation. Many developing countries are located close to the equator; hence, children in these regions are exposed to very high levels of UV radiation, often without appropriate protection from the sun.

Ionizing Radiation

Ionizing radiation is a known carcinogen, and children are particularly vulnerable to it, especially if exposed early in life. Natural background radiation is by far the largest source of exposure for the world’s population, followed by the medical use of X-rays and radiopharmaceuticals and exposure from atmospheric testing of nuclear weapons and radioactive incidents. Irradiation during childhood increases the risk of thyroid cancer (as reported among those who were young children and adolescents at the time of the nuclear power plant incident at Chernobyl in Ukraine in 1986 and lived in the most contaminated areas). The fact that children are involved in scavenging may expose them to radioactive substances, as in the case of two children who found a cylinder full of a luminescent powder in Goiânia, Brazil (September 1987), and believed it was ‘carnival glitter.’ The glowing powder was cesium-137, which caused radiation sickness in dozens of people and four deaths. Exposure to high levels of ionizing radiation in childhood may also occur in the medical domain, in the context of diagnostic and therapeutic procedures that may lead to high cumulative doses over time. Ionizing radiation has been linked to an increased risk of leukemia, breast cancer, and thyroid cancer, and fetal exposure has been associated with severe mental retardation.

Unintentional Injuries

Children and adolescents are more vulnerable to injury if they live under unsafe conditions and in degraded environments. Injuries are commonly classified by ‘intentionality’: most road traffic injuries, poisonings, falls, fire and burn injuries, and drownings are unintentional, whereas interpersonal violence, suicide, and war are intentional. Young children suffer mostly from poisoning, drowning, burns, and maltreatment by caregivers. Older children and adolescents are more prone to road traffic accidents, interpersonal violence, and sports injuries. Unintentional physical injuries that may be related to household or community environmental hazards kill more than 855 000 children under the age of 18 every year. Worldwide, road traffic crashes and drowning are the most common causes of injury deaths among children, followed by burns and falls. Children from poor families are disproportionately affected by injuries: the vast majority of unintentional childhood injury deaths occur in low- and middle-income countries. Injuries occur mostly at home or while at play in unsafe settings. Within high-income countries, there is also a strong socioeconomic gradient of child and adolescent injury, with children from poor families considerably more likely to sustain an injury than their more affluent counterparts.

Chemical Hazards

The production, transportation, use, and disposal of toxic chemicals pose a potentially significant environmental threat to the health of children. Increased industrialization, new urbanization trends, and intensified agriculture, along with growing patterns of unsustainable consumption and environmental degradation, are contributing to the release of large amounts of toxic substances into the air, water, and soil. Many children in developing countries are exposed to dangerous chemicals in the context of their work: applying pesticides in the fields, manufacturing fireworks or matches, deconstructing computers and mobile phones, or recycling the lead from car batteries. Work involving chemical exposures is among the worst forms of child labor recognized by the International Programme on the Elimination of Child Labour/International Labour Organization (IPEC/ILO). Some naturally occurring chemicals, such as arsenic and fluoride in groundwater and aflatoxins in food, may cause chronic low-level exposure and poisoning in children. Poisoning is estimated to cause 45 000 deaths each year in the age-group 0–20 years.

A wide range of chemicals can affect children’s health; those of particular concern include heavy metals (especially lead and mercury, and the metalloid arsenic), pesticides, air contaminants, and POPs. Heavy metals, lipophilic POPs, and certain other toxicants can pass through the placenta and also pass into breast milk (the main source of food for neonates) and are known to interfere with the normal growth and development of children. As infants are weaned from breast milk, they may become exposed to contaminants present in formula, drinking water, and solid foods. In addition, the mouthing and play behaviors of infants can lead to ingestion of toxic chemicals that accumulate on surfaces (e.g., toys) or in soil. Young children may be exposed to chemicals in solid food (e.g., pesticides and POPs), in air (e.g., particulate matter, carbon monoxide, and PAHs), in water (e.g., arsenic), and through dermal exposure (e.g., lead in soil).

Children can be exposed to a wide variety of pesticides, especially if they live in rural areas and are involved in agricultural work. Organophosphorus and organochlorine pesticides are in general those most commonly used and a frequent cause of acute toxic exposure. However, the effects of low-level chronic exposures are also a cause of concern, as experimental studies suggest neurobehavioral, reproductive, and other effects in developing organisms. Play in childhood is an important aspect of well-being and growing up. Chemicals in toys or in playground equipment (e.g., wood treated with the pesticide chromated copper arsenate) may represent a risk. Toys should be safe, age appropriate, and nontoxic in use, and misuse should be anticipated.

Emerging Children's Environmental Health Issues

A number of recently recognized environmental risk factors include those linked to climate change, ozone depletion, nanoparticles, and endocrine disrupting chemicals (EDCs). Climate change is a global issue, but developing countries are more vulnerable to its effects, and children will bear most of the burden of disease. Poverty limits adaptive responses to climate change, and increasing global temperatures will affect the levels and seasonal patterns of both man-made and natural airborne particles (e.g., plant pollens) that can trigger asthma. Children will experience both the direct and indirect effects of climate change, including increases in certain infectious and vector-borne diseases (such as malaria and dengue), exposure to air pollution and thermal stress, and the consequences of extreme weather events and disasters. Children do not adapt to extreme temperatures as effectively as adults and have less capacity to survive natural catastrophes. Many developing areas already have fragile climates, with scarce food and clean water, and may lack the infrastructure and technologies to cope with climate change.

Taking Action to Protect Children's Environmental Health

The challenges of providing safe and healthy environments for children are enormous in developing countries. Recognition of the magnitude of the environmentally related burden of disease in children, of their unique and special susceptibility, and of the fact that children represent the main human capital of their countries may be the starting points for triggering or strengthening action. Global and regional actions are important to promote the environmental health of children, but local actions at the national and community levels are more immediate and can be expected to have a direct impact on the child, family, and community. Key areas for action include, after assessing the main public health problems and their environmental causes, planning and implementing communication, education and training, advocacy, and research.

As children's environmental health is shaped by a large number of factors, coordination and cooperation among many different sectors within a country are required. Governmental agencies dealing with health, environment, and child welfare, as well as with energy, industry, and agriculture, are called to play a role, along with nongovernmental organizations and community groups, including parents, teachers, and communities. Interaction among different sectors may be difficult to achieve but becomes effective once the problem is recognized and goals for action are set. The preparation of national profiles on the status of children's environmental health as a joint multisectoral activity has enabled a number of countries to identify the key problems and address them in a coordinated manner. Informed communities and parents who are aware of the impact of environmental risk factors on their children's health are more able and motivated to improve the living conditions of children and to request the support of their leaders and authorities. Health care professionals trained in the recognition, management, and prevention of environmentally related diseases can identify knowledge gaps and research needs. They can become advocates for children's environmental health, providing advice to caregivers and the sound evidence that authorities require to implement protective measures. Informed government officials and health and environmental authorities can update or prepare new policies to protect children's health from environmental risks. When all these sectors are informed and involved, further public awareness is raised and actions to improve environmental health become more effective.

All levels of society can advocate for healthier environments for children and influence local and national political agendas. Although in some countries governments may lead the efforts, in others they are prompted by public demand. This is a major challenge in developing countries, where health priorities may lie elsewhere than environmental concerns. Training the health sector in the environmental origins of childhood disease is essential to bring about change. Even in the most advanced countries, health care providers may receive little information on environmental matters. It is crucial to supply health care providers (physicians, pediatricians, nurses, midwives, and other health professionals) 'in the front line' of child and adolescent health with up-to-date information on how to recognize, assess, manage, and, above all, prevent environmentally related diseases in children.
To inform the public, different media can be utilized according to local availability, with the press, television, and radio being particularly powerful tools for reaching communities in developing countries. 'Do not hide, do not scare' is the principle to apply when communicating about environmental threats to parents and caregivers: the information has to be clear and concise, avoiding alarmism. Increased awareness and a better understanding of the interactions between environmental exposures, nutrition, infectious diseases, and genetic predisposition are required to protect children and promote public health. Collaborative research efforts among scientists from developing and industrialized countries are critical for addressing health problems in their national and global contexts and for enhancing the sharing of experience and knowledge. The results of research studies can be used to plan and implement prevention and remediation strategies and to put in place evidence-based public health policies at the country level. These collaborative activities also result in technology transfer, capacity building, sharing of efforts, and the building up of a network of trained scientists. In particular, long-term cohort studies on environmental influences on children's health are comprehensive and use innovative approaches for identifying and assessing the effects of a broad range of environmental factors on children's health, covering the main developmental periods. Such studies demonstrated, for instance, the long-term effects of pollutants such as lead, mercury, PCBs, and pesticides, which are especially dangerous for children and their developing nervous systems. Successful implementation of such complex studies in developing countries is possible but requires innovative approaches leading to the production of both short-term and long-term results.

A number of international commitments, such as the 2002 Bangkok Statement, the 2005 Buenos Aires Declaration, and the 2009 Busan Pledge for Action, prompted the international community to improve the environments and health of children, prioritizing the areas for action. The theme for World Health Day 2003, 'Healthy environments for children,' and its slogan 'Shape the future of life' enabled the WHO to catalyze action in a large number of developed and developing countries.


International commitments resulting from the World Summit on Sustainable Development (Johannesburg, 2002), the Special Session on Children of the United Nations General Assembly (2002), meetings of the International Forum on Chemical Safety (2003, 2004), the Fourth European Ministerial Conference on Environment and Health (2004), and, more recently, the Strategic Approach to International Chemicals Management highlight the need to take action to protect children from certain environmental threats. In the long term, the activities developed on children's health and the environment will make an essential contribution to the achievement of Millennium Development Goals 4 (reduce child mortality) and 7 (ensure environmental sustainability), adopted in 2000 by the United Nations. The right to a healthy environment is a universal right of children and adolescents according to the United Nations Declaration of the Rights of the Child. These commitments call for action to promote the recognition, assessment, and study of environmental factors that have an impact on the health and development of children. In response to the many challenges identified, the Department of Public Health and Environment at the WHO, in collaboration with relevant partners, promotes a number of activities regarding children's health and the environment, including awareness-raising and training activities, the use of indicators, collaborative international research, and the promotion of successful prevention and education 'models' to provide healthier settings for children, their families, and communities. Implementing these activities and turning efforts into action will have a considerable impact on reducing the burden of disease affecting children in developing countries and throughout the world.

Further Reading

Gordon, B., Mackay, R., Rehfuess, E., 2004. Inheriting the World: The Atlas of Children's Health and the Environment. World Health Organization, Geneva.
Guidotti, T.L., Gitterman, B.A., 2007. Global pediatric environmental health. Pediatric Clinics of North America 54, 335–350.
Prüss-Üstün, A., Corvalán, C., 2006. Preventing Disease Through Healthy Environments: Towards an Estimate of the Environmental Burden of Disease. World Health Organization, Geneva.
The Lancet, 2005. Neonatal Survival Series. Elsevier, March 2005.
UNICEF, 2005. Childhood Under Threat: The State of the World's Children 2005. www.unicef.org/sowc05/english/sowc05.pdf; statistics at http://www.unicef.org/sowc05/english/statistics.html.
United Nations Population Division, 2002. World Urbanization Prospects: The 2001 Revision. http://www.un.org/esa/population/publications/wup2001/wup2001dh.pdf (accessed 8 June 2009).
United Nations, 2000. Millennium Development Goals. http://www.un.org/millenniumgoals (accessed 8 June 2009).
WHO, 2002. Healthy Environments for Children: Initiating an Alliance for Action. World Health Organization, Geneva.
WHO, 2002. Healthy Environments for Children: An Alliance to Shape the Future of Life. WHO/SDE/PHE/02.05. World Health Organization, Geneva.
WHO, 2002. The Bangkok Statement. http://www.who.int/docstore/peh/ceh/Bangkok/bangkstatement.htm (accessed 8 June 2009).
WHO, 2004. From Theory to Action: Implementing the WSSD Global Initiative on Children's Environmental Health Indicators. http://www.who.int/ceh/publications/924159188_9/en/index.html (accessed 8 June 2009).
WHO, 2004. The Physical School Environment: An Essential Component of a Health-Promoting School. http://www.who.int/school_youth_health/media/en/physical_sch_environment_v2.pdf (accessed 8 June 2009).
WHO, 2005. Buenos Aires Declaration. http://www.who.int/ceh/news/pastevents/buenosairesdeclareng.pdf (accessed 8 June 2009).
WHO, 2005. The Lancet Neonatal Survival Series. http://www.who.int/child_adolescent_health/documents/lancet_neonatal_survival/en/index.html (accessed 10 June 2010).
WHO, 2005. The Environment and Health for Children and Their Mothers. Fact Sheet WHO/284. World Health Organization, Geneva.
WHO, 2006. Preventing Disease Through Healthy Environments. Exposure to Mercury: A Major Public Health Concern. http://www.who.int/phe/news/Mercury-flyer.pdf (accessed 8 June 2009).
WHO, 2006. Fuel for Life: Household Energy and Health. World Health Organization, Geneva.
WHO, 2006. Air Quality Guidelines: Global Update 2005. http://www.who.int/phe/health_topics/outdoorair_aqg/en/index.html (accessed 8 June 2009).
WHO, 2007. Climate and Health Fact Sheet. http://www.who.int/mediacentre/factsheets/fs266/en/index.html (accessed 8 June 2009).
WHO, 2008. Protecting Health from Climate Change: World Health Day 2008. World Health Organization, Geneva.
WHO. Information Series on School Health. http://www.who.int/school_youth_health/resources/information_series/en/index.html (accessed 8 June 2009).
WHO, INFOSAN, 2008. Food Safety and Nutrition During Pregnancy and Infant Feeding. INFOSAN Information Note 3/2008, 30 April.
WHO, INFOSAN, 2008. Nanotechnology. INFOSAN Information Note 1/2008, 7 February.

Relevant Websites

http://www.ilo.org/ipec/index.htm. International Labour Organization.
http://www.unicef.org/. United Nations Children's Fund (UNICEF).
http://www.unep.org/. United Nations Environment Programme (UNEP).
http://www.who.int/child_adolescent_health/en/. WHO, Child and Adolescent Health and Development.
http://www.who.int/ceh/. WHO, Children's Environmental Health.
http://www.who.int/globalchange/climate/en/. WHO, Climate Change and Human Health.
http://www.who.int/peh-emf/en/. WHO, Electromagnetic Fields.
http://www.who.int/fch/en/index.html. WHO, Family and Community Health.
http://www.who.int/nutgrowthdb/en/. WHO, Global Database on Child Growth and Malnutrition.
http://www.who.int/globalchange/en/index.html. WHO, Global Environmental Change.
http://www.who.int/heli/en/. WHO, Health and Environment Linkages Initiative.
http://www.who.int/heca/en/. WHO, Healthy Environments for Children Alliance.
http://www.who.int/indoorair/en/index.html. WHO, Indoor Air Pollution.
http://www.who.int/ifcs/en/. WHO, Intergovernmental Forum on Chemical Safety.
http://www.who.int/ipcs/en/. WHO, International Programme on Chemical Safety (IPCS).
http://www.who.int/phe/en/index.html. WHO, Public Health and Environment.
http://www.who.int/quantifying_ehimpacts/en/. WHO, Quantifying Environmental Health Impact Assessments.
http://www.who.int/school_youth_health/en/. WHO, School Health and Youth Health Promotion.
http://www.who.int/ceh/indicators/globinit/en/index.html. WHO, The Global Initiative on Children's Environmental Health Indicators.
http://www.who.int/uv/en/. WHO, Ultraviolet Radiation.

Children's Exposure to Environmental Agents

J Moya, US Environmental Protection Agency, National Center for Environmental Assessment, Washington, DC, United States
LR Goldman, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States

© 2011 Elsevier B.V. All rights reserved.

Abbreviations

DDT dichloro-diphenyl-trichloroethane
METS metabolic equivalents of work
NHANES National Health and Nutrition Examination Survey
PCBs polychlorinated biphenyls

Introduction

A child's overall environment helps lay the foundation for his/her health over a lifetime, yet the environment contains many substances that are potentially harmful to children's health and development. These substances include chemical agents (e.g., metals, pesticides, and consumer products), microbial agents (e.g., bacteria and viruses), and physical agents (e.g., ultraviolet light, X-rays, and radon). Children's exposures to these substances are suspected in a multitude of health concerns, including impaired growth and physical development, birth defects, and childhood cancers, as well as other chronic, sometimes lifelong, conditions such as asthma and cognitive and behavioral problems. In addition, childhood exposures can result in adult-onset disorders, including impaired fertility, cardiovascular disease, neurodegenerative disease, and cancer.

In the context of human exposures, the process of a substance entering the body can be described in two steps: (1) the contact with the substance (exposure, or potential dose) and (2) the amount of the substance crossing the outer boundary of the body (absorbed dose). Risk to an individual or population can be represented as a continuum from source to exposure to dose to effect, as shown in Figure 1. The exposure–dose–effect continuum depicts the trajectory of a chemical or an agent from its source to an effect. The chemical or agent can be transformed and transported through the environment via air, water, soil, dust, and diet.

Figure 1 The exposure–dose–effect continuum. The schematic traces a chemical, physical, or microbial stressor from source/stressor formation (magnitude, duration, timing), through transport/transformation (dispersion, kinetics, thermodynamics, distributions, meteorology) and environmental characterization (air, water, diet, soil and dust), to exposure (pathway, route, duration, frequency, magnitude; activity pattern), dose (absorbed, internal, target, biologically effective), early biological effects (molecular, biochemical, cellular, organ, organism), altered structure/function (e.g., edema, arrhythmia, enzymuria, necrosis), and disease (e.g., cancer, asthma, infertility), evaluated for individuals, communities, and populations (statistical profile, reference population, susceptible individuals and populations, population distributions).


Once a chemical or an agent is released into the environment, it can be transformed and transported via air, water, soil, dust, and diet; its movement depends on the characteristics of the chemical or agent and of the environmental setting. Children can come in contact with the chemical through inhalation, ingestion, or skin/eye contact. The child's physiology, behavior, and activity patterns, as well as the concentration of the chemical, determine the magnitude, frequency, and duration of the exposure. Once the chemical crosses an absorption barrier (i.e., skin, lungs, eyes, gastrointestinal tract, or placenta), the exposure becomes an absorbed dose. When the chemical or its metabolites interact with a target tissue, it becomes a target tissue dose, which may lead to an adverse health outcome.

Exposure

Exposure is defined as the contact of an organism with a chemical, physical, or microbial agent, quantified as the amount available at the exchange boundaries of the organism and available for absorption. Children can be exposed via dietary ingestion of contaminated food and beverages (including human milk), nondietary oral intake from age-related behaviors such as soil ingestion and mouthing objects, inhalation of contaminated indoor or outdoor air, and dermal contact with contaminated water, soil, or surfaces. The fetal life stage is also recognized as a period of susceptibility as a result of maternal exposures to chemicals. The 'exposure duration' is the length of time of contact with the contaminant, which varies with a number of factors such as the length of time a person lives in an area, frequency of bathing, and time spent indoors versus outdoors. 'Contaminant concentration' is the concentration of the contaminant in the medium (air, food, soil, etc.) contacting the body and has units of mass per volume (e.g., mg l⁻¹) or mass per mass (e.g., mg g⁻¹).

Dose

The dose is affected by the characteristics of the chemical of concern, the concentrations contacted by the potential receptors, the receptors' behaviors and characteristics, and the characteristics of the environmental setting. The 'potential dose' can be calculated as the product of the chemical concentration, the 'intake rate,' and the 'exposure duration.' Intake rate refers to the rate of inhalation, ingestion, or dermal contact, depending on the route of exposure. For ingestion, the intake rate is simply the amount of food containing the contaminant of interest that the child ingests during some specific time period (units of mass per time). For inhalation, the intake rate is the rate at which contaminated air is inhaled. Factors that affect dermal exposure include skin surface area, estimates of the amount of soil that adheres to the skin, and the concentration of the chemical in contact with the skin. The 'exposure duration' is the length of time of contaminant contact. The exposure becomes an 'absorbed dose' when it crosses an absorption barrier (skin, lung tissue, eye, gastrointestinal tract, or placenta).
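As a worked illustration of the dose relation just described, the short Python sketch below multiplies concentration, intake rate, and exposure duration, and normalizes to body weight; the function name, the body-weight normalization, and all example values are illustrative assumptions, not figures from this article.

```python
def potential_dose(concentration_mg_per_kg: float,
                   intake_rate_kg_per_day: float,
                   exposure_duration_days: float,
                   body_weight_kg: float) -> float:
    """Potential dose as the product of contaminant concentration,
    intake rate, and exposure duration, normalized here to body
    weight (a common, but here assumed, convention)."""
    total_mg = (concentration_mg_per_kg
                * intake_rate_kg_per_day
                * exposure_duration_days)
    return total_mg / body_weight_kg

# Hypothetical example: a 12 kg toddler eating 0.2 kg/day of food
# containing 0.05 mg/kg of a contaminant for 30 days.
print(potential_dose(0.05, 0.2, 30, 12.0))  # 0.025 mg per kg body weight
```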

Exposure Factors

Exposure factors are specific metrics important for characterizing exposures to environmental contaminants via various routes, and they are used as input parameters in the dose equation. Exposure factors relevant to childhood exposures, methods for their estimation, and child-specific issues that need to be considered when assessing exposure are summarized in Table 1.

Children have greater water intake rates per unit of body weight: per capita consumption of water for children can be up to three times higher than that for adults. Young infants may be particularly at risk, since their only source of nutrition may be reconstituted infant formula; infant formula and the water used to prepare it can contain contaminants, including pathogens if water quality is poor and refrigeration is unavailable.

During normal exploration of their environment, children can be exposed to various contaminants. Young children touch floors, surfaces, and objects such as toys and engage in hand-to-mouth and object-to-mouth behaviors. This behavior can result in exposures to contaminants present in soil and dust. Chemicals such as pesticides can be tracked into the house and accumulate in carpets, where young children spend a significant amount of time. Research shows that children living in agricultural areas may experience higher exposures to pesticides than other children; they may also play in nearby fields or be exposed via consumption of contaminated human milk from a farmworker mother. Moreover, some children may experience higher levels of soil intake: the recurrent ingestion of unusually high levels of soil is known as soil pica.

The inhalation rates of children differ from those of adults because of their size, physiology, and activity patterns. The resting metabolic rate and the rate of oxygen consumption per unit of body weight are higher in infants and young children.

Table 1 Exposure factors: methods of estimation and considerations for children's exposures

Water intake
Method of estimation: water intake is measured using diary studies and memory recall.
Considerations:
• Water is ingested directly as a beverage, indirectly from foods and drinks made with water (e.g., infant formula), or incidentally during swimming.
• Children drink more water per unit of body weight than adults; per capita consumption of water is higher for children (< 21 years of age) than for adults.

Mouthing behavior
Method of estimation: mouthing behavior is measured using real-time hand recording, videotaping, and survey questionnaires.
Considerations:
• Young children mouth fingers, hands, and objects as they explore their environment.
• Useful metrics include the frequency of mouthing behavior (e.g., contacts per unit of time) and the duration of specific mouthing events.
• Children may have different mouthing behaviors outdoors than indoors.

Soil ingestion
Method of estimation: soil ingestion is estimated by measuring certain biomarkers in environmental media (i.e., soil, dust, feces, urine, food, and medicines) and conducting a mass balance calculation; examples of biomarkers include aluminum, silicon, titanium, barium, and zirconium.
Considerations:
• Children may inadvertently ingest soil and dust because they play close to the ground, tend to mouth objects and their hands, and may eat food that has been dropped on the floor.
• Some children may eat soil intentionally (also known as soil pica).

Inhalation rate
Method of estimation: inhalation rates are estimated by measuring the disappearance rates of oral doses of doubly labeled water (²H₂O and H₂¹⁸O), oxygen consumption associated with energy expenditures determined using food consumption data, or oxygen consumption associated with energy expenditures determined using metabolic equivalents of work (METS) data.
Considerations:
• Infants have a higher resting metabolic rate and oxygen consumption per unit of body weight than adults.
• Age-related differences in physiology affect the inhaled dose and the deposition of particles in the lungs.
• Because children play on the floor or are close to the ground because of their height, their breathing zone is lower than that of an adult, and they may experience higher concentrations of volatile chemicals present near the ground.

Body surface area
Method of estimation: coating, triangulation, and surface integration are used as direct measurements of body surface area; other methods include equations based on other body dimensions (i.e., height and weight).
Considerations:
• Children crawl, roll, or sit on surfaces treated with chemicals and are more likely to wear less clothing than adults.
• Children who wear diapers may be exposed for long periods of time to chemical components of lotions or other products.
• Children have a higher body surface area per unit of body weight than adults; the surface area-to-body weight ratio of newborn infants is more than twice that of adults.
• Skin characteristics vary with age: the stratum corneum of neonates is not fully developed and absorption of chemicals may be higher; preterm infants may be particularly vulnerable; damaged or diseased skin areas (e.g., diaper rash) may be more permeable.

Soil adherence
Method of estimation: soil loadings are measured from individuals participating in various activities by washing before and after the monitored activities; experiments are also conducted in which individuals press their hands into a pan containing soil; individuals have also been video-imaged under long-wave ultraviolet light before and after soil contact to assess the presence of soil on various body parts.
Considerations:
• Soil adherence varies with activities; children are more likely to engage in activities (e.g., sports) that put them in contact with soil and dust.

Body weight
Method of estimation: body weight is directly measured using a scale; in some surveys, body weight is self-reported by the child or parent; fetal body weight is estimated using ultrasound technology.
Considerations:
• Consider body weights for the target population and age group of interest.
• Prenatal exposures may result in low birth weight.
• Obesity in children has an impact on their overall health, and fetal exposures to certain environmental chemicals can predispose the child to obesity.

Intake of foods
Method of estimation: intake of foods is measured using food diaries or memory recall.
Considerations:
• Children eat more of certain foods per unit of body weight than adults.
• Young children's diets may be limited.
• Pesticides, soil additives, or fertilizers applied to crops or gardens may result in contamination of food products.

Human milk intake
Method of estimation: human milk intake is measured by weighing the infant before and after feeding and adjusting for insensible water loss (i.e., evaporation); the difference between the before- and after-feeding measurements gives the total amount of milk consumed.
Considerations:
• Maternal exposures to environmental chemicals can result in the transfer of those contaminants to human milk (e.g., pesticides and polychlorinated biphenyls (PCBs)).
• Human milk may be the only source of food for newborns.

Consumer products
Method of estimation: data on consumer products used by children are limited; most data come from household surveys.

Activity factors
Method of estimation: data on activity patterns are primarily obtained from recall or diary surveys; they are likely to be short term in nature.

Chronic Obstructive Pulmonary Disease

… the risk among smokers of >25 cigarettes per day being >20 times that of nonsmokers. The adverse effect of smoking on the rate of decline in FEV1 has also been well documented.

Indoor and Outdoor Air Pollution

COPD can be caused and aggravated by exposure to ambient pollution. The main pollutants involved are sulfur dioxide, oxides of nitrogen, and ambient particles. In a heavily polluted area, COPD has been associated with an excess decline in FEV1 above that attributable to smoking and other potential confounders. The magnitude of the effect of heavy pollution on the decline in FEV1 in a study by Bascom et al. was 24 ml per year, slightly less than the 33 ml per year attributed to heavy smoking. This study suggests that the effect of ambient pollution in COPD is likely to be of a broadly similar order of magnitude to that of smoking. There is consistent evidence that indoor air pollution from biomass fuel, burned for cooking and heating in poorly vented dwellings, is a major risk factor for COPD.

Occupation

Up to 19% of COPD in smokers may be caused by occupational exposure. Increased risk of COPD has been described in relation to a number of occupations typically involving exposure to dust and fumes, including effects from coal dust, silica, cadmium, animal feeds, and solvent exposure.

Genes

The pathogenesis of COPD involves a complex interplay between genetic factors and exposure to environmental stimuli. α-1-Antitrypsin deficiency is an inherited disorder due to mutations in the SERPINA1 gene causing deficiency of the serine protease α-1-antitrypsin. Patients with this genetic defect develop severe emphysema as a result of uninhibited neutrophil elastase, an enzyme released into the lung tissue during inflammation. This deficiency is uncommon and explains only a small proportion of cases of COPD (1–3%). Oxidative stress response genes and the transcription factors involved in their regulation are differentially expressed in the bronchial epithelium of subjects with COPD compared with healthy smokers, indicating that oxidative stress defense responses are amplified in patients with COPD. Other genetic alterations, such as interleukin (IL)-13 gene polymorphisms, single nucleotide polymorphisms in the ADAM33 gene, and MMP1 gene polymorphisms, may be associated with an increased risk of COPD.

Diagnosis

COPD is characterized by slowly progressive development of airflow limitation that is poorly reversible, in contrast to asthma, where there is variable airflow obstruction that is usually reversible spontaneously or with treatment. Other chronic lung conditions that result in irreversible airflow obstruction include bronchiectasis, cystic fibrosis, sarcoidosis, and tuberculosis.


A patient with COPD often suffers from dyspnea that worsens over time, is usually worse with exercise, and is persistent. The patient describes an 'increased effort to breathe,' 'heaviness,' 'air hunger,' or 'gasping' and may also have chronic cough that may be intermittent and may be unproductive. There is typically a history of exposure to risk factors, including tobacco smoke, occupational dusts, or chemicals.

COPD includes chronic bronchitis and obstructive bronchiolitis, with fibrosis and obstruction of the small airways, as well as emphysema, with enlargement of airspaces and destruction of lung parenchyma, loss of elasticity, and closure of small airways. Most patients with COPD have a combination of chronic bronchitis and emphysema. For clinical purposes, COPD is defined on a spirometric basis as a disease state characterized by airflow limitation that is not fully reversible, with an FEV1 to forced vital capacity (FVC) ratio <0.7.


Combined Transportation Noise Exposure in Residential Areas

When the sound levels of both sources are high (> 55 dBA, Lden), the so-called railway bonus disappears. Two recent studies from Sweden and Korea confirmed this finding: total annoyance increased when the sound levels of both sources were beyond 55 dBA. In dominant situations, the annoyance from the single source is typically (but not always) higher than the total annoyance. Some studies found less railway annoyance (Fig. 2) when substantial road traffic noise (> 60 dBA) is present, and higher annoyance when the noise is below 50 dBA.

Fig. 1 Estimated relationship between sound level and total annoyance (proportion moderately, very, and extremely annoyed). Left panel: proportion annoyed by railway noise versus the source sound level LAeq, 24 h (45–60 dB), for equal road and rail exposure and for railway noise only. Right panel: proportion annoyed by the total traffic sound environment versus the total railway and road traffic sound level LAeq, 24 h, tot (50–70 dB), for persons equally exposed to railway and road traffic noise (upper curve) and for those exposed to one dominant noise source. Evi Öhrström, presentation at Internoise 2007, Istanbul.

Fig. 2 Exposure–annoyance curves for railway noise (proportion highly annoyed versus railway Lden, 40–90 dBA) by three levels of additional motorway (mw) noise exposure (32.2–50.0, 50.0–60.0, and 60.0–74.5 dBA, Lden), adjusted to main road exposure; receiver points beyond 300 m of the rail track. Heimann, D., de Franceschi, M., Emeis, S., Lercher, P., Seibert, P. (eds.) (2007). ALPNAP Comprehensive Report. Università degli Studi di Trento, Dipartimento di Ingegneria Civile e Ambientale, Trento, Italy.

This finding is compatible both with experimental work showing an asymmetry in the capabilities of maskers (road traffic sound is more difficult to mask than railway sound) and with field studies in which higher source annoyance is found when the background level is low. A major drawback of these studies is that neither the amount of vibration nor other relevant nonacoustic factors were considered in the analyses.

Recently, the combination of road traffic and tramway noise has been the subject of new research. With increasing demand for public transportation, tram operations have been extended into night and early-morning hours that are critical for sleep. A study in Belgrade found higher nighttime annoyance in the presence of public transportation (Paunovic). The combination of buses and trams at night was the most annoying, independent of noise level. This indicates the importance of disturbing sound qualities not covered by the A-weighting scheme (low-frequency noise, LFN), plus the accompanying vibrations from these sources (see below). The neglected role of non-A-weighted factors (after Schomer et al., 2013) becomes especially relevant with tramways. Acceleration, deceleration, and curve squeal are perceptually dominant features that can be assessed only with specialized acoustic indicators (TETC: total energy of the tonal components; sputtering and nasal indices) and/or psychoacoustic indicators (roughness, fluctuation strength, sharpness), as laboratory studies in Lyon (at ENTPE) and an in-depth field assessment in the city of Graz have demonstrated. In the ENTPE studies, specific acoustic indices (based on earlier work on road and tramway noise) were used and showed that road traffic and tramway partial annoyance responses influenced each other. Nevertheless, the strongest component model best predicted total annoyance, with its known limitation in accounting for synergistic or antagonistic effects.

The current WHO evidence update (2017) on noise annoyance contains a meta-analysis of the effects of combined exposure in its supplement. Only two studies with combined rail and road exposure were included, and significant effects of combined rail-road exposure were found. For a 10 dB increase of combined noise (LAeq, 24 h), a French study showed an OR of 2.5 for higher annoyance; for the Tyrol study, an OR of 1.5 was calculated. In the Tyrol study, a slight masking effect at higher road traffic exposure levels decreased the mean observed effect. Overall, the results of the various studies are not unequivocal, and some complex, level-dependent effect modifications have been observed, which need to be interpreted in the specific context of each study area. The most important reasons for inconsistent results are:

• The limited validity of total annoyance reporting.
• Neglect of different spatial patterns of exposure: annoyance may differ in situations where both sources affect the same building side compared with different sides (front and rear).
• No consideration of contextual factors (a third source present or not, topography, meteorology, different building shapes, sizes, and codes).
• Differing statistical power of assessment methods: dichotomous approaches (dominant/nondominant situations) versus categorical approaches (consideration of three or four exposure levels or of differences between the sources).

Considering the many combinations of possible reasons, it is not surprising that results seem inconsistent overall. To gain deeper insight, relevant studies could be reanalyzed in the light of the reasons outlined above.


Specific Findings: Aircraft With Road or Rail Traffic

The combination of road and aircraft noise is less frequently analyzed. Some older experimental studies consistently found that aircraft noise heard within a road traffic noise environment is less annoying. A Swiss–Austrian analysis challenged this common wisdom: it found evidence for a complex, level-dependent effect modification. Between 45 and 60 dBA of aircraft noise exposure, those with less road traffic noise scored higher on annoyance than those with more road traffic noise. At higher aircraft noise exposure (> 65 dBA), the groups with the highest and the lowest road traffic exposure exhibited the highest aircraft noise annoyance scores. It seems that when aircraft is the dominating source, intermediate levels of road traffic noise (45–60 dBA) are better tolerated than high or low exposure. When the focus is shifted to road traffic annoyance, it becomes evident that between 50 and 60 dBA, aircraft noise exposure does not exhibit a differential effect, whereas higher aircraft noise (> 55 dBA) increases road traffic annoyance at lower levels of road traffic noise and vice versa. This is also confirmed by the dominance analysis: when aircraft noise is the dominating source, road traffic annoyance is higher below 50 dBA of road traffic noise exposure and lower at higher road noise levels. However, more data from a broader range of urban contexts would be needed to draw firm conclusions about this frequent noise source combination around large cities.

A recent study in two Asian cities (Ho Chi Minh City, Hanoi) investigated a different noise scenario. Uniquely, road traffic noise was the dominating source (average 71 dB LAeq vs. 51 dB LAeq for aircraft at the nine investigated sites). Because of the high proportion of powered two-wheelers and old trucks, the road traffic composition is quite different from that of western urban areas. The exposure-response curve for road traffic started around 70 dB LAeq, while the one for aircraft ended at that scale point. It is therefore no surprise that the dominant source model fits best; the annoyance equivalents model cannot even be applied. Because of this extreme exposure situation (the source levels do not even overlap), the data are not suited to developing models further.

Much better suited are two other studies, from South Korea and Germany. The Korean study compared two single-source samples (second source < 10 dBA) with two combined-exposure samples (nondominant < 3 dBA vs. road dominant > 5 dBA). Between 55 and 75 dBA, the total annoyance from combined noise exceeded that from single sources, the more so the higher the levels of both sources. The total annoyance with equally noisy sources was higher than in the dominant road noise sample. The larger NORAH study did not sample a priori but had a large enough sample to study combinations. In this study, the most annoying source, aircraft, showed the greatest impact on total noise annoyance. The perceptual dominance of the aircraft exposure on total annoyance remained even in a situation with equal exposure levels from road traffic. Thus road traffic, ranked as the second most annoying source, did not contribute significantly to total noise annoyance (the noise range was limited to 60 dBA, LAeq, 24 h). Notably, this finding is incompatible with the annoyance equivalents model and supports the dominant source hypothesis for this specific combination. A similar result was obtained for the aircraft-rail combination.
In both the equally loud and the rail-dominant combinations, total annoyance was lower than in the aircraft-dominant combination. This finding was confirmed in a multiple regression analysis adjusted for sociodemographic factors and additionally considering attitudinal factors. Generalization is, however, limited by the lower exposure range (47.5–60 dBA, LAeq, 24 h for all combinations) and the smaller dominance criterion (> 2.5 dBA). The current WHO evidence update (2017) analyzed five comparable studies (N = 1949): the road + aircraft combinations showed larger effects on noise annoyance than any of the road + rail or road + industry combinations.

Specific Findings: Aircraft, Railway, and Road Traffic Noise

Three-source combinations have only recently been investigated. In Montreal, a phone-based sample (N = 4336) found 20.1%, 13.0%, and 6.1% annoyed by road traffic, airplane, and train noise, respectively. Noise exposure was assigned by a land use regression (LUR) model, which fit well for road traffic noise. However, the single relationships of aircraft and train noise with annoyance were not significant, which suggests a problem with the LUR model used. Nevertheless, total noise exposure exhibited a significant exposure-response increase in the proportion of highly annoyed people (prevalence proportion ratio: 1.04 (1.02–1.06) per 1 dBA Lden, from 45 to 75 Lden). The increase was smaller than for road traffic alone (1.10; 95% CI: 1.07–1.13). The results were weighted and adjusted for age, sex, and education. In the same Montreal sample, sleep disturbance ("Was your sleep disturbed by noise in the past 4 weeks?"; 12.8%) was also studied. In a distance-based analysis, sleep disturbance prevalence increased for those exposed to both rail and road noise, but not for those exposed to both road and airplanes (Fig. 3). The increase in sleep disturbance was significant with both Lnight and distance to transportation noise sources. Uniquely, an association between Lnight (LUR noise model) and sleep disturbance caused by outdoor environmental noise (all neighborhood sources, including transportation) was also found. While this "overall" noise estimate provided by the LUR model may be an advantage, a limitation is its likely underestimation of the single-source effects from aircraft and trains. Sleep disturbance was highest for the total noise estimate.

A laboratory study by the DLR in Germany gave more detailed insight into the effects of multiple transportation noise exposure on sleep and cardiovascular function. While the cardiac arousals did not habituate across nights, the effects on objective sleep parameters were modest and source dependent. Road traffic noise had the strongest impact on sleep structure and continuity, while air and rail traffic noise showed the strongest effects on subjective parameters. Awakenings and arousals increased slightly in three- and two-source nights compared with control or single-source exposure. The authors warn against extrapolating the findings to the general population, as only short-term effects in a selected healthy group with moderate traffic density were studied.


Fig. 3 Marginal proportions of sleep disturbance by transportation noise according to proximity to single and combined sources of transportation noise: airplanes (1000 m from NEF25 or in NEF25), roads (100 m from an artery or highway) and railways (150 m from a railway line or main line of a railroad shunting yard). Perron, S., Plante, C., Ragettli, M., Kaiser, D., Goudreau, S. and Smargiassi, A. (2016). Sleep disturbance from road traffic, railways, airplanes and from Total environmental noise levels in Montreal. International Journal of Environmental Research and Public Health 13(8), 809.

The ALPNAP study (N = 1641), conducted along a European transit traffic route through the alpine part of Austria, studied a different three-source combination (highway, main road, and railway); see the section "Noise and Total Exposure."

Transportation Noise Combined With Special Acoustic Features

Traffic and Impulse Noise

A seminal European multinational study addressed the question of whether impulse noise exhibits a moderating effect on annoyance from traffic noise. In laboratory studies, impulsive noise and road traffic noise were paired in all combinations of levels at 35, 45, 55, and 65 dBA, and subjects were asked to make separate judgments of impulse, traffic, and total annoyance. Significantly higher annoyance was found in low-background situations (< 50 dBA) at noise levels between 35 and 60 dBA compared with a high noise background (> 50 dBA), providing strong support for a level-dependent correction. The source-specific ratings of annoyance suggested a penalty of 10 dBA in low-noise environments, which gradually disappears up to 80 dBA. The total annoyance ratings did not support this finding (recall the problems with total annoyance ratings). In contrast, the pooled field studies from four countries did reveal higher adjustment factors for impulse noise over the full noise level range (11 dBA). Lower adjustment factors were found in France, where the road traffic annoyance relation was stronger than in the other countries (a possible context effect). What became clearer from these carefully designed studies is that experiments are powerful for relative comparisons between sound sources but not so well suited for absolute comparisons with real-life noise situations, especially at higher noise levels (compare also the similar discrepancies seen in the studies on noise and vibration).
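The level-dependent correction described above can be expressed as a simple rating-level adjustment. The sketch below is a hypothetical illustration of that idea: it fades a 10 dBA impulse penalty out linearly at higher ambient levels. The breakpoints echo the figures quoted in this section, but the linear shape and exact breakpoints are assumptions, not a published curve.

```python
def impulse_rating_level(l_aeq_dba: float,
                         full_penalty_db: float = 10.0,
                         fade_start_dba: float = 50.0,
                         fade_end_dba: float = 80.0) -> float:
    """Rated level = measured level + impulse penalty.

    The penalty is applied in full below fade_start_dba and fades
    linearly to zero at fade_end_dba, mimicking the level-dependent
    correction reported in the laboratory studies (about 10 dBA in
    low-noise environments, gradually disappearing up to 80 dBA)."""
    if l_aeq_dba <= fade_start_dba:
        penalty = full_penalty_db
    elif l_aeq_dba >= fade_end_dba:
        penalty = 0.0
    else:
        fraction = (fade_end_dba - l_aeq_dba) / (fade_end_dba - fade_start_dba)
        penalty = full_penalty_db * fraction
    return l_aeq_dba + penalty

print(impulse_rating_level(45.0))  # 55.0: full 10 dBA penalty
print(impulse_rating_level(65.0))  # 70.0: penalty reduced to 5 dBA
```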

Traffic and Low-Frequency Noise

Buses, trucks, tramways, and underground rail systems, but also ordinary road traffic, emit structure-borne sound and low-frequency noise (LFN) into buildings. The ubiquitous presence of LFN is largely neglected, although studies in the Netherlands, Taiwan, and Italy show the contrary. There is general agreement that the dBA underestimates the annoyance potency of frequencies below 200 Hz, because LFN penetrates walls and windows more easily; higher frequencies are more effectively attenuated than lower ones. In impact assessments, possible underestimations may therefore occur and should be screened for. The difference between the C- and the A-weighted level quickly gives an indication (using DIN 45680; a screening sketch follows below).

Older Scandinavian studies conducted in typical LFN environments illustrate the problem. Combinations of noise from road traffic and from ventilation, heating, and air-conditioning systems are becoming increasingly prevalent in urban areas. Such systems, often positioned on the quiet side of a building or building block, typically contain dominant portions of low frequencies (20–200 Hz). These studies observed higher annoyance scores in areas combining road traffic noise at the outer facade with low-frequency noise from such installations in the quieter backyards. Further reported effects concerned the cortisol response upon awakening, sleep disturbance, and tiredness in the morning.
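As a rough illustration of the C-minus-A screening just mentioned, the sketch below computes A- and C-weighted overall levels from octave-band levels using the standard IEC 61672 octave-band corrections and flags possible LFN when the difference exceeds a threshold. The 20 dB threshold is a common rule of thumb, not a value taken from DIN 45680, and the example band levels are invented.

```python
import math

# Standard A- and C-weighting corrections (dB) at octave-band
# center frequencies (Hz), per IEC 61672.
A_WEIGHT = {31.5: -39.4, 63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}
C_WEIGHT = {31.5: -3.0, 63: -0.8, 125: -0.2, 250: 0.0, 500: 0.0,
            1000: 0.0, 2000: -0.2, 4000: -0.8, 8000: -3.0}

def weighted_level(band_levels_db, weights):
    """Energetically sum octave-band levels after applying a weighting."""
    total = sum(10 ** ((band_levels_db[f] + weights[f]) / 10)
                for f in band_levels_db)
    return 10 * math.log10(total)

def lfn_screen(band_levels_db, threshold_db=20.0):
    """Flag possible low-frequency noise if LC - LA exceeds the threshold."""
    lc = weighted_level(band_levels_db, C_WEIGHT)
    la = weighted_level(band_levels_db, A_WEIGHT)
    return lc - la, (lc - la) > threshold_db

# Invented spectrum dominated by low frequencies (e.g., an HVAC source):
bands = {31.5: 75, 63: 72, 125: 65, 250: 55, 500: 48,
         1000: 42, 2000: 38, 4000: 30, 8000: 25}
print(lfn_screen(bands))  # a large LC - LA difference flags LFN
```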


A small German study among children (N = 56) exposed to high levels of lorry noise at night (indoor levels Lmax = 33–52 dBA, corresponding to 55–78 dBC) found this low-frequency exposure significantly associated with effects on concentration, memory, and sleep, and also with higher cortisol secretion when exposure took place in the first half of the night. The reported higher annoyance with public transportation in inner-city areas of Belgrade was also associated with the operation of buses and tramways in the shoulder hours of the day. On main roads, LFN from trucks and diesel cars is often highly intrusive, since houses are situated closer to the road. The annoyance response for a main road in the ALPNAP study (N = 1641) was the highest among the exposure-response curves of the latest WHO evidence review. A midsize study (N = 820) from Taiwan found an increasing risk of hypertension specifically for exposure to road traffic noise at 63 and 125 Hz, even at relatively low levels (people exposed to ≥ 51 dB at 125 Hz had an OR of 4.65; 95% CI = 1.46–14.83). Overall, earlier studies were often small in size, and different LFN exposure indicators were used. Few studies analyze LFN separately in their health results, owing to the larger effort needed on the exposure side. Thus, it is difficult to judge whether the observed effects can be attributed to the specific LFN components or to the combined exposure. Note that low-frequency noise components below 50 Hz increase the likelihood of vibrations in buildings (see later). This issue is also related to wind turbine noise, which is not covered in this article.

Combined Sound Sources From Industry and Transportation

Industrial noises entering residential areas can be highly heterogeneous because of the different activities involved. They are typically perceived as intermittent or fluctuating at random, irregular enough to attract attention, and often contain a mix of tonal and impulsive characteristics and low-frequency noise. In general, such mixtures have been shown to elicit higher annoyance responses already at lower ambient noise levels. The only comprehensive field study (N = 1875), at 11 locations in the Netherlands, provides summary exposure-response curves. Among the studied industrial activities, shunting yards elicit higher annoyance at the same sound levels than other industrial activities; the unpredictable shunting operations, combined with vibrations, may be responsible.

An in-depth field study in France (N = 99) investigated a steady, continuous industrial noise running all day and night (21–47 dBA; 27–52 Lden) combined with road traffic noise (41–68 dBA by day; 34–61 Lnight). Several psychophysical and perceptual models were evaluated, and personal, attitudinal, and contextual factors were collected by interview. Importantly, the industrial site had a good image, and only 8% of respondents were employed there. Nevertheless, 27% of the people surveyed found both noises equally annoying, and 57% found road noise more annoying. Not surprisingly, the energy summation model was a poor performer (R² = 0.05). More surprisingly, the strongest component model performed similarly (R² = 0.92) to a more differentiated mixed model that included both sources plus a source-difference measure. In the mixed model, all terms contributed about equally to the annoyance response. Although both models predicted annoyance equally well, the mixed model provided more information: its difference term indicated the importance of relevant amplitude fluctuations (mainly at night, when road traffic does not mask the industrial noise). Among the main nonacoustic factors of importance (tested only by correlation) were fear (of accident) and the unexpected character of the industrial noise; noise sensitivity was weakly correlated. For this study, the WHO evidence update (2017) on noise annoyance calculated a significant effect on annoyance (OR of 2.3) per 10 dB increase of the combined noise (LAeq, 24 h).

The group conducted follow-up research in the laboratory, again investigating road-industry sound combinations with variation of temporal and spectral features. Zwicker's loudness N performed best in characterizing the perceptual nature of the sources. The analysis of specific and total annoyance responses revealed the coexistence of synergistic and strongest-component effects between the combined noises. The size of these components varies with the type of combination experienced (different vehicle pass-by noises combined with industrial variations). The best-performing model here was the vector summation model, followed by the mixed model. The conclusion: to assess synergistic or antagonistic (inhibitory) effects, perceptual models are to be favored; among those, the vector summation and mixed models seem more appropriate than the strongest component model, which by construction cannot account for interaction.
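The vector summation idea referred to above is often written in the combined-annoyance literature as a vector addition of the partial annoyances with an interaction angle. The sketch below uses that general form, contrasted with the strongest component model; the functional form and the example angle are assumptions for illustration, not the fitted model from the studies cited here.

```python
import math

def vector_sum_annoyance(a1: float, a2: float, alpha_deg: float = 70.0) -> float:
    """Total annoyance as a vector addition of two partial annoyances.

    alpha_deg is an empirically fitted interaction angle: 90 degrees
    reduces to a Pythagorean sum, smaller angles model synergy, larger
    angles model inhibition. The default here is illustrative only."""
    alpha = math.radians(alpha_deg)
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(alpha))

def strongest_component(a1: float, a2: float) -> float:
    """Strongest component model: total annoyance equals the larger
    partial annoyance; it cannot represent interaction between sources."""
    return max(a1, a2)

# Invented partial annoyance ratings (e.g., on a 0-10 scale):
print(vector_sum_annoyance(5.0, 4.0))  # > 5: allows a synergistic total
print(strongest_component(5.0, 4.0))   # 5: capped at the dominant source
```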

Models for Combined Noise Sources (Multisource Situation)

Since prospective assessments are needed in planning, a number of models have been proposed to allow the impact assessment of several noise sources in practice. They can be classified as either sound-level oriented (psychophysical models) or effect oriented (perceptual models). Most psychophysical models are only adjusted variants of the most basic one, the energy summation model, which simply sums the equivalent continuous sound levels of the single sources to assess their relative contributions in a given environment. Energy summation is still the most common practice, to which regulators and practitioners routinely resort, although many reservations and critiques have been raised over the years and have led to adjustments. Some examples are given in the following list; a short numerical sketch of the two basic summation rules follows the list:

• Pressure summation model: this leads to 6 dB higher levels when two equal overlapping sounds are summed, instead of only 3 dB.
• Summation and inhibition model: this applies a correction for absolute differences in the component-noise levels.
• Level-dependent correction model: this is the only model that accounts for slope differences.


• Dominant source model: this acknowledges the result of field studies that total annoyance is often equal to or less than the annoyance by the dominant source. However, it does not account for changes in the nondominant source or for the level difference to the dominant source. Importantly, it does not account for possible interactive components of the sources, which occur more often than one would think.
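The numerical sketch announced above contrasts energy (power) summation with pressure summation for two sources. It is a minimal illustration of the two formulas, with invented example levels.

```python
import math

def energy_sum(levels_db):
    """Energy (power) summation: two equal sources add +3 dB."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

def pressure_sum(levels_db):
    """Pressure summation: two equal, fully correlated sources add +6 dB."""
    return 20 * math.log10(sum(10 ** (level / 20) for level in levels_db))

print(energy_sum([60, 60]))    # ~63.0 dB
print(pressure_sum([60, 60]))  # ~66.0 dB
```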

Generally, problems with simple energy summation are specifically expected under the following conditions:

• Noises with different time patterns of occurrence (rail vs. highway noise).
• Noises with different exposure patterns: for example, aircraft from above versus road traffic (only on one side of the house).
• Noises with different frequency spectra (road noise vs. helicopter noise).
• Noises with special frequency characteristics (tonal and low frequency) or accompanying extra exposures (road: air pollution; rail: vibration).

Since these noise characteristics are known to elicit different annoyance responses, some of the models have a built-in option to adjust for annoyance potency. The so-called annoyance equivalents model mimics the use of toxic equivalents in air pollution risk assessment. Note that it also uses the principle of energy summation; however, instead of summing the sound energy from the sources directly, the noise from each source is first transformed into the equally annoying sound energy level of a reference source, and these transformed levels are then summed. A schematic graphical display of the transformation procedure is provided in Fig. 4.

Step 1. Given two sources LA and LB, LA is chosen as reference (road noise is preferred as the reference in general practice).
Step 2. LB is transformed into the equally annoying level of A (LB').
Step 3. LA and LB' are summed on an energy basis, resulting in L.
Step 4. The total annoyance is obtained from the exposure–annoyance relationship evaluated at L.

This procedure is based on a proven theorem that holds when five conditions are met, of which independence is the most critical one. A prerequisite for this procedure is the availability of valid exposure–annoyance functions for the individual noise sources that fit the exposure situation to be assessed. Updated exposure-response information is available for aircraft, rail, and road traffic from the WHO evidence reviews (2017); only limited information is available for industrial noise. Noises with specific spectral characteristics or impulsive components remain a major problem, and neither the occurrence of masking nor variation in the spatial pattern of exposure (all sources on both sides, or only one side, of the home) is solved either. Optimally, the annoyance relationships should be linear and have nearly equal slopes; then the choice of the reference source does not matter so much.
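The four steps above translate directly into a small computation. The sketch below uses hypothetical linear exposure-annoyance functions (all coefficients invented; in practice one would use validated curves, e.g., from the WHO reviews) to show the transformation and summation.

```python
import math

# Hypothetical linear exposure-annoyance functions (invented coefficients).
def annoyance_road(lden: float) -> float:      # reference source A
    return max(0.0, 2.0 * (lden - 40.0))

def annoyance_rail(lden: float) -> float:      # second source B
    return max(0.0, 1.6 * (lden - 40.0))

def rail_as_equally_annoying_road_level(lden_rail: float) -> float:
    # Step 2: the road level that is as annoying as the given rail level
    # (analytic inverse of the linear road function above).
    return 40.0 + annoyance_rail(lden_rail) / 2.0

def total_annoyance(lden_road: float, lden_rail: float) -> float:
    lden_rail_eq = rail_as_equally_annoying_road_level(lden_rail)  # Step 2
    # Step 3: energetic summation of the reference level and the
    # transformed (equally annoying) level.
    lden_total = 10.0 * math.log10(10 ** (lden_road / 10)
                                   + 10 ** (lden_rail_eq / 10))
    # Step 4: read total annoyance off the reference source's curve.
    return annoyance_road(lden_total)

print(round(total_annoyance(60.0, 60.0), 1))  # ~42.9 on this invented scale
```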

Fig. 4 The annoyance equivalents model. Modified after a PowerPoint presentation by Schomer, P. (2005). Assessing multi-source noise environments with an "equally annoying" exposure summation model. ASA meeting; and after Miedema, H. M. E. (2004). Relationship between exposure to multiple noise sources and noise annoyance. The Journal of the Acoustical Society of America 116(2), 949–957.


4. The ANSI method allows correction of each single event for the background sound in which the event is perceived; thus, quiet events may be excluded from the sum.

This method implemented several features supported by theory and practical experience over time:

• The use of the weighted SEL is generally recommended when there are distinct events in the noise, as with aircraft or railway noise.
• Accounting for the higher annoyance caused by special sound characteristics.
• The possible correction for background sound, compatible with the notice-event research from laboratory, field, and simulation studies.
• In addition, loudness weighting (ISO 532 B, Zwicker) or loudness-level weighting (ISO 226) could be used instead of A-weighting.

An acknowledged disadvantage is the larger burden of measurement and computation. Advanced alternatives are the recently developed perceptual models (at ENTPE in Lyon), which base their model input on specific acoustic indices derived from acoustic feature analysis of recorded sounds in laboratory experiments. Both mixed and vector summation models showed good performance (see the previous section on sound combinations with industrial noise). An evaluation against the old European Union exposure–response relationships has been conducted in a recent field assessment. It is time to make these more elaborate methods mandatory for cumulative noise assessment in environmental health impact studies.
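To make the summation logic above concrete, the following minimal Python sketch implements plain energy summation, the annoyance-equivalents transformation (Steps 1–3), and an event-based day–night level of the kind the ANSI procedure builds on. The 5 dB transformation offset, the traffic counts, and the SEL values are hypothetical illustrations, not published values; real applications derive the transformation from source-specific exposure–annoyance functions, and the full ANSI S12.9/4 procedure additionally applies the character adjustments described above, which are omitted here.

```python
import math

def energy_sum(levels_db):
    """Energy (logarithmic) summation of sound levels in dB."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

def annoyance_equivalents_total(l_road, l_rail, rail_offset_db=5.0):
    """Steps 1-3: transform the rail level into an 'equally annoying'
    road-traffic level (LB -> LB'), then energy-sum with the road
    reference. The fixed offset is a hypothetical stand-in for the
    curve-based transformation."""
    return energy_sum([l_road, l_rail - rail_offset_db])

def dnl_from_events(events, night_penalty_db=10.0):
    """Day-night level on an average day from single-event SELs.
    `events` is a list of (sel_db, is_night) tuples; night events
    receive the standard 10 dB penalty before energy summation."""
    total_exposure = sum(
        10.0 ** ((sel + (night_penalty_db if night else 0.0)) / 10.0)
        for sel, night in events
    )
    return 10.0 * math.log10(total_exposure / 86400.0)  # 86,400 s per day

# Hypothetical example: 62 dB road noise combined with 65 dB rail noise
print(round(annoyance_equivalents_total(62.0, 65.0), 1))  # ~64.1 dB
# ... and 120 daytime plus 20 night-time train pass-bys at SEL 85 dB
events = [(85.0, False)] * 120 + [(85.0, True)] * 20
print(round(dnl_from_events(events), 1))  # ~60.7 dB DNL
```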

Combined Exposure to Noise and Vibration

This form of combined exposure is mainly relevant in the vicinity of rail tracks, rail tunnels, or main roads with heavy traffic (houses built close to the source). However, when ground conditions are favorable, vibrations can propagate over several hundred meters. These vibrations enter buildings and lead to vibration of the floors and rattle (e.g., in cupboards). Vibration due to low-frequency aircraft noise can also occur within the runway sidelines of airport neighborhoods. Note that the pathway to possible health effects of vibration runs via the somatosensory system. Vibrations and the associated low-frequency noise are also more difficult to locate and to cope with. Furthermore, the brain attempts to integrate auditory and vibration stimuli via the process of cross-modal perception, which can lead to overstimulation but also to inhibition effects. This applies to possible infrasound components of the combined exposure as well.

Experimental Studies

In experimental settings, most studies found some evidence of an interaction between the effects of noise and vibration but also showed that the effect modification was complex and did not always improve predictions of annoyance. This may partly be related to the decrease in annoyance when the noise exposure gets higher (masking hypothesis or perception threshold shift). This led to the conclusion that a more accurate prediction of annoyance may be obtained from a relation involving a summation of the effects of both stimuli. The observed subjective dB equivalence from such studies ranged from 6 to 10 dB when vibration was experienced simultaneously with noise. A recent study with high-speed trains revealed considerably greater annoyance caused by combined noise and vibration compared with noise alone; however, vibration did not influence ratings of noise annoyance. Studies (N = 12, N = 24) from the Gothenburg sleep laboratory additionally investigated the effect on sleep quality and cardiovascular parameters. While increasing vibration amplitude led to decreased sleep quality ratings, the noise ratings were not affected, which indicates that participants perceptually distinguish vibration from noise. The high-vibration (high = 0.072 mm/s² RMS, unweighted) and noise condition was also associated with a higher heart rate and an effect on the latency of the response. Seventy-nine percent of participants showed an average increase of at least 3 bpm per train. No difference was found with respect to noise sensitivity or gender. Additional effects were reported in the EEG recordings: sleep macrostructure was most affected in high-vibration nights with 36 events, with increased wakefulness (P < .05), reduced continuous slow-wave sleep (P < .05), earlier awakenings (P < .05), and an overall increase in sleep stage changes (P < .05). The limitations of the laboratory studies are (1) the short duration of exposure and (2) the machine-generated vibration, which cannot yet fully mimic the 3D exposure from a vibrating building that people experience in their homes.

Field Studies

Only a handful of study groups have investigated this exposure combination with adequate sample sizes. A few of these studies actually measured vibration levels in the house, mostly for a smaller study subset; other studies relied on subjective assessment of perceived vibration.

Railway Noise and Vibration

The majority of studies investigated railway noise and vibration. The results cannot easily be generalized.


At the receiver level

The noise component is mostly rated as the larger problem in terms of annoyance, and perceived health effects are more often attributed to it. In contrast, only a small portion of the exposed believe that they can protect themselves from the vibration component, whereas the majority are confident they can protect themselves against the noise. Interestingly, more people think they could get used to the vibrations but not to the noise.

At the research level

A carefully conducted smaller German study with vibration measurements exhibits a trend toward a stronger effect of vibration experience on annoyance at lower levels of noise exposure (below 60 dBA): the dBA-equivalent effect at 50 dBA is around 15 dBA and shrinks to 3 dBA at 65 dBA sound exposure. In contrast, the larger Swedish and Austrian population studies (Figs. 5 and 6) did not replicate smaller effects on annoyance at higher noise levels. Both studies observed a continuous effect in terms of the proportion highly annoyed, corresponding to an equivalent of 10–15 dBA due to the additional perceived vibration. Early Japanese studies also did not observe larger effects of vibration at lower noise levels; rather, the differences in annoyance reached significance only at higher levels. The additional effect of vibration on noise annoyance was around 5–6 dBA. A subsequent path analysis of the same data revealed strong direct effects of vibration annoyance on noise annoyance (railway > road) and smaller indirect contributions of the measured vibration level, which provides support for the use of subjective assessments of vibration. The later Japanese studies were more interested in the differences in reported annoyance between high-speed (Shinkansen) and conventional railway and road traffic. At the same noise level, measured vibration levels were highest near high-speed rail, followed by conventional rail and road traffic. This may explain why no rail bonus is observed in Japan, where annoyance due to conventional railway and road traffic is about the same. Another study used structural equation modeling to analyze potential interactions between vibration and noise for residents living within 100 m of the Shinkansen railway line. The strongest effect modification was found for those living within 40 m of the rail track; the effect ranged from 5 to 10 dBA equivalents. Road traffic noise was found to exert an inhibitory effect on railway annoyance, whereas rail noise did not modify road traffic annoyance. In an earlier study, road traffic annoyance was inhibited, but only at higher rail noise levels. This finding was more prominent in the field study than in a comparable laboratory study; the correlation between level and response was much stronger in laboratory studies (more level dependent). A Swedish research project (TVANE) concluded in 2012 that a 5–7 dB lower noise level is needed in areas where railways cause strong ground-borne vibrations or where a large number of train pass-bys (>400) occur. A new analysis from this project (2017) compared one area without vibration (N = 521) with two areas (N = 341) exposed to vibration (0.20 and 0.38 mm/s). The exposure–response relationship is shown in Fig. 7; it indicates an effect size of about 7 dBA at 60 dBA. A regression analysis further supports that noise annoyance is influenced by the presence of vibration, whereas annoyance by vibration is not influenced as much by the noise level. In Japan (2017), data from six socio-acoustic surveys conducted over 20 years were subjected to a secondary analysis of the community annoyance response associated with Shinkansen railway noise and vibration. The combined effect of noise and vibration exposure on annoyance was confirmed and corresponded to a 10 dB equivalent at noise levels beyond 48 dBA.
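The "dBA equivalent" figures quoted in this section express a vertical annoyance difference as a horizontal shift between exposure–annoyance curves: how many extra decibels of noise alone would be needed to produce the annoyance observed with noise plus vibration. A minimal sketch of that computation, using hypothetical monotone curves rather than any published data:

```python
import numpy as np

def dba_equivalent(level, pct_ha_noise_only, pct_ha_with_vibration, levels):
    """Horizontal shift (dB) between two exposure-annoyance curves at a
    given level. Curves are %HA arrays sampled on the `levels` grid and
    assumed monotone increasing."""
    target = np.interp(level, levels, pct_ha_with_vibration)
    # noise-only level producing the same %HA, minus the actual level
    return np.interp(target, pct_ha_noise_only, levels) - level

# Hypothetical linear curves over 45-75 dBA:
levels = np.arange(45.0, 76.0)
noise_only = 2.0 + 1.5 * (levels - 45.0)   # %HA, noise alone
with_vibration = noise_only + 10.0         # vibration shifts %HA upward
print(round(dba_equivalent(55.0, noise_only, with_vibration, levels), 1))
# -> about 6.7 dB at 55 dBA in this made-up example
```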

Summary

While in recent laboratory studies an effect of vibration on annoyance ratings is often not observed, field studies consistently show a vibration effect (5–15 dB equivalents). Surveys using subjective perception questions observe larger effects than studies that measure vibration in a subsample and calculate exposure for the rest of the sample. Why?

Fig. 5 Proportion annoyed (rather/highly) due to railway noise (LAmax, dBA) in areas with (Partille) and without (Lund) additional vibration exposure. Graph based on Table 2 from Öhrström, E. (1997). Journal of Sound and Vibration 205, 555–560.


Fig. 6 Exposure–annoyance curves for railway noise by degree of annoyance from perceived vibration exposure. Heimann, D., de Franceschi, M., Emeis, S., Lercher, P. and Seibert, P. (eds). (2007). ALPNAP comprehensive report, Trento, Italy: Università degli Studi di Trento, Dipartimento di Ingegneria Civile e Ambientale.

Fig. 7 Percentage of people annoyed by railway traffic noise in Area 1 (no vibration) and Area 2 (with vibration), compared with the EU standard curve (Miedema and Oudshoorn, 2001). Ögren, M., Gidlöf-Gunnarsson, A., Smith, M., Gustavsson, S. and Persson Waye, K. (2017). Comparison of annoyance from railway noise and railway vibration. International Journal of Environmental Research and Public Health 14(7), 805.

Measurements can underestimate perceived exposure for two reasons: first, measurements are often conducted outdoors, and indoor vibration varies depending on building type; second, even when measured indoors, vibration can vary substantially within rooms and between rooms and floors. A detailed exposure assessment study in the city of Graz (2016) supports this conclusion: by applying psychoacoustic indicators and measuring vibration in the homes, an underestimation of the combined effects on annoyance and sleep could be demonstrated in a health impact assessment (HIA) of public complaints in which individual guideline values were not exceeded.

Aircraft Noise and Vibration/Rattle

Only a few studies are available on this prevalent noise combination around airports. American investigators described a relationship between an event-based metric of low-frequency aircraft noise and annoyance due to perceived rattle and vibration. The range of levels within which the likelihood of rattle due to measured low-frequency noise increases rapidly was found to lie between 75 and 80 dB (C-weighted). The percentage noticing rattle, however, already doubled between 62.5 and 72.5 dB (C-weighted). Helicopter noise-induced vibration and rattle has also been shown to increase community annoyance.

Combined Exposure to Noise and Air Pollution

Although noise and air pollution show strong associations along road traffic routes, only a few research groups had addressed this combination in major field studies before 2005. In the meantime, more research groups have analyzed not only annoyance but also more severe health effects.


Studies during the past decade have shown that the correlation between noise and air pollution varies widely, depending on a multitude of factors. Even in large cities such as London, substantial spatial differences are found not only for noise but also for air pollution. Therefore, air pollution exposure needs to be assessed at the same fine GIS resolution at which noise exposure is usually modeled (a grid on the order of 10 m). Otherwise, misclassification will bias estimates toward the null when both exposures are entered in a regression model, and interactions will tend to be overlooked.
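As a minimal sketch of the modeling point above, the snippet below fits mutually adjusted main effects plus a multiplicative interaction term for noise and NO2 on synthetic data (all numbers hypothetical). With correlated, error-prone exposures, this interaction coefficient is exactly what is most easily obscured.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic address-level data; in practice both exposures would come
# from models resolved on the same fine GIS grid.
rng = np.random.default_rng(1)
n = 2000
noise = rng.normal(58.0, 6.0, n)                              # Lden, dBA
no2 = 30.0 + 0.6 * (noise - 58.0) + rng.normal(0.0, 8.0, n)   # ug/m3, correlated
annoyance = 0.03 * noise + 0.02 * no2 + rng.normal(0.0, 1.0, n)
df = pd.DataFrame({"annoyance": annoyance, "noise": noise, "no2": no2})

# 'noise * no2' expands to noise + no2 + noise:no2 (the interaction term)
model = smf.ols("annoyance ~ noise * no2", data=df).fit()
print(model.params)
print("interaction p-value:", model.pvalues["noise:no2"])
```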

Effects on Annoyance

The research group at TOI in Oslo has analyzed the relationship between noise and air pollution annoyance in more detail with a three-step procedure and applied various sensitivity tests. By omitting sensitive persons, they also tested against bias due to negative affectivity. Further adjustments for other sensitivities, socio-demographic variables, and mobility did not change the result: both noise and air pollution annoyance depend in a highly significant way on both the NO2 indicator and the noise indicator (Fig. 8). Based on their work, they recommended integrating air and noise pollution modeling to obtain more robust exposure–effect relationships in transportation assessments and to avoid the misleading results of relying on separate models. The Tyrol studies have repeatedly (1992, 1995, 1999, 2007) demonstrated the mutual dependence of the annoyance response on this exposure combination. Overall, a shift in the response toward higher annoyance has been consistently observed over 15 years, exhibiting a strong effect across all noise exposure levels (Fig. 9).

Effects on Other Health Outcomes

In the framework of the ENNAH project, a collaborative review (2012) of cardiovascular effects of combined exposure to noise and outdoor air pollution was presented. Eleven studies were eligible.

Fig. 8 Estimated probabilities of people being highly annoyed with exhaust/odor right outside their apartment by NO2 levels; separate curves for people differing in their degree of noise annoyance (left panel). Estimated probabilities of people being highly annoyed with road traffic noise by 24-h equivalent sound pressure levels; separate curves for people with different degrees of annoyance with exhaust/odor (right panel). N = 2990. Klaeboe, R. et al. (2000). Atmospheric Environment 34, 4727–4736.

Fig. 9 Noise exposure–annoyance curves for motorway noise by degree of annoyance from perceived particles/soot exposure (left) and traffic exhaust (right). Heimann, D., de Franceschi, M., Emeis, S., Lercher, P. and Seibert, P. (eds.). (2007). ALPNAP comprehensive report, Trento, Italy: Università degli Studi di Trento, Dipartimento di Ingegneria Civile e Ambientale.


Most studies only tested mutual adjustment of either noise or air pollution. Only two studies tested for interaction. One study lacked power due to a high correlation between noise and air pollution; the other, a large US study, did not find any indication of interaction between noise and black carbon levels with respect to coronary mortality. Conclusion: noise and air pollution probably exert independent effects on cardiovascular health. Later studies between 2012 and 2017 also show mostly small relationships of both exposures with a broad variety of cardiovascular and metabolic outcomes, among them studies of disease precursors (e.g., TAC). Studies still face substantial uncertainties in exposure assessment, and, not surprisingly, "subjective" exposure estimates in a German study (2017) show relationships of similar size compared with "objective" noise and air pollution measurements. Hitherto, no proof of interaction has been observed among studies with cardiovascular and metabolic outcomes; however, small additive effects between noise and air pollution cannot be excluded, as the power to detect multiplicative interactions is usually low. Exposure measurement error appears to be a particular methodological challenge in studies of this kind. Only when the noise exposure is measured validly (improved noise indicators are needed) and with small error, and is weakly correlated with air pollution, will there be a reasonable chance to better disentangle this relationship. Other studies addressed lower-level health outcomes (health status, health-related quality of life (HRQoL), cognition); few formally assessed interaction. An HRQoL study from New Zealand (2015) showed that air pollution annoyance predicts greater variability in the physical HRQoL domain, while noise annoyance predicts greater variability in the psychological, social, and environmental domains. A Canadian study using subjective assessments observed significant negative effects of noise annoyance on both the mental and physical health factors of the SF-12, with a significant covariance between noise annoyance and odor annoyance. In addition, a significant effect of psychological responses to cumulative exposures on HRQoL was found. Data from the German Socio-Economic Panel were used to follow up persons with good health status in 2009 (N = 6544) and to examine the risk of poor health in 2011 for participants with perceived job insecurity and combined noise and air pollution annoyance in 2009. For both studied risk factors a significant relation was found (RR = 1.4). In a joint risk factor analysis, persons with both risks (higher job insecurity and higher combined annoyance) in 2009 had a higher risk of being in poor health after 2 years (RR = 1.95, 1.49–2.55). Adjustments were made for age, gender, sociodemographic and economic position, and health-related behavior in a multivariable model using Poisson regression. Detailed analyses of effect modification by noise and air pollution on cognition were conducted in the large population-based Heinz Nixdorf Recall cohort study (N = 4086). In fully adjusted models, high noise exposure was negatively associated with a global cognitive score. A categorical approach revealed stronger negative associations in participants with double exposure compared with the effect estimates for each single exposure (Fig. 10).
Thus, while at the lower health levels independence of noise and air pollution effects is likely to be the major route, additive effects may occur for specific health outcomes when a combination of noise and air pollution or other stressors is at work. Exposure errors may still blur the true picture. The relation of combined air and noise pollution to respiratory diseases is not yet sufficiently explored. Although a series of large ecological studies in Spain (2001, 2006, 2016) and a cross-sectional analysis of a cohort of children in Southern California from 2017 suggest an effect of noise not only on the cardiovascular but also on the respiratory system, evidence of interaction was not provided. Two German studies (2003, 2005) among children (age 5–12 years) based on pediatric office data are still unique.

Fig. 10 Association between air pollution/noise and global cognitive score using indicator variables with Lden (threshold 60 dBA) and air pollution (AP) dichotomized at the median, compared with the group with both exposures low. Models adjusted for age, sex, SES, alcohol consumption, smoking status, ETS, any regular physical activity, and BMI. Group A (reference group): participants with low AP and low noise exposure; Group B: participants with low AP but high noise exposure; Group C: participants with high AP but low noise exposure; Group D: participants with high AP and high noise level (>60 dB Lden). Tzivian, L., Jokisch, M., Winkler, A., Weimar, C., Hennig, F., Sugiri, D., Soppa, V.J., Dragano, N., Erbel, R., Jöckel, K.-H., Moebus, S. and Hoffmann, B. (2017). Associations of long-term exposure to air pollution and road traffic noise with cognitive function: An analysis of effect measure modification. Environment International 103, 30–38.


Fig. 11 Number of physician contacts due to bronchitis per year by traffic emission class (increasing combined exposure to noise and air pollution). Ising H. et al. (2005). Somnologie 9(2), 105–110.

This approach used a 3-grade traffic exposure index to estimate combined effects of noise and air pollution. In the mid-sized cross-sectional study (N = 401), significantly higher odds ratios in the highest traffic exposure category were found for diagnoses of asthma, bronchitis, and atopic dermatitis. Results were adjusted for age, sex, education, persons per household, pet keeping, and environmental tobacco smoke exposure. In a small subsample (N = 68), children's physician contacts over the past 5 years were extracted retrospectively from office records. The rate of contacts due to chronic bronchitis showed a dose–response increase with the 3-grade traffic exposure index (Fig. 11).

Summary

The role of combined noise and air pollution exposure in annoyance is supported by large population studies and should find adequate consideration in environmental health impact assessment (EHIA) and planning due to its relevant effect size. The interaction seems plausible at the sensory level, as the two exposures trigger different biological systems and their combined afferent input to the brain may induce higher awareness and consequently higher annoyance in exposed people. Whether there are further additive effects at the level of other health outcomes is currently suggested by only a minority of the larger number of studies published in the past 5 years. A true multiplicative interaction has not yet been observed. Both pollutants are only small predictors of the more severe outcomes. Without prospective designs, larger samples, and less exposure misclassification, the chance of detecting interactions between small predictors is low.

Noise and Total Exposure in Different Contexts

Even when physical and chemical factors such as vibration and air pollution are included, the explained variance of the observed annoyance/health reaction to noise is still small. Personal, situational, and environmental constraints on coping with the total exposure package have further to be taken into account. Noisy (urban) environments frequently coexist with crowding, substandard housing, poverty, and pollution. A study in the Netherlands found a high clustering of other environmental risks with noise exposure at local levels, whereas risks due to radiation or chemical substances were more evenly distributed across the country. American studies (see Table 2) have shown that multiple environmental stressor exposure (including noise) is more prevalent in a poverty sample of children than in a middle-income sample.

Table 2  Prevalence of stressor exposure in children living in poverty versus middle income

Stressor domain      Poverty exposed (%)   Middle income exposed (%)   χ² test
Density              16                    7                           4.60*
Noise                32                    21                          4.53*
Housing problems     24                    3                           25.70**
Family turmoil       45                    12                          35.20**
Family separation    45                    14                          30.47**
Violence             73                    49                          16.90**

Note: Percentage exposed ≥1 SD above the mean, except for violence, where any exposure was counted. *P < .05; **P < .01. Evans, G.W. and English, K. (2002). Child Development 73(4), 1238–1248.
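For readers who want to reproduce the kind of test reported in Table 2, a minimal sketch of a 2 × 2 chi-square comparison follows. The counts assume hypothetical group sizes of 100; the published chi-square values were computed from the study's actual group sizes, so the numbers will not match exactly.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts mirroring the Noise row of Table 2
# (exposed vs. not exposed, in the poverty and middle-income groups):
table = [[32, 68],   # poverty group, assuming n = 100
         [21, 79]]   # middle-income group, assuming n = 100
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))  # test statistic and p-value
```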


Furthermore, it is well known that the presence of multiple stressors reduces the effectiveness of coping efforts, which depends on the expenditure of coping resources in the presence of other stressors. During the past decade, research into the importance of restoration capabilities (quiet building sides, nearby green areas) has made clear that such disadvantaged areas with multiple stressor experience often lack such restoration options. Thus, the conditions of the immediate residential and the wider neighborhood environment are potentially relevant modifiers of any given noise effect and need to be adequately considered in epidemiological studies. However, it is rather difficult to disentangle the various contributing factors (e.g., compare the science story of lead and social class). Selected examples shall point to the public health importance and the further need to study these complex relationships involving noise exposure. In a complex research framework, the seminal longitudinal Alameda County study reported ambient noise exposure to be associated with a loss of physical function in older adults. Other cross-sectional studies point in the same direction; however, most lack a noise exposure assessment and are of ecological design. In the meantime, more studies have followed a multiexposure research design including noise, air pollution, and other residential stressors. Still, few studies have included positive environmental features related to restoration options, such as green space and neighborhood amenities, for which mounting health evidence has accumulated over the past decade. A Swedish study (N = 385) investigated four city neighborhoods near potential sources of environmental stressors (a sewage treatment plant, a harbor, roads and railway tracks, a train and bus station) and a reference area with only local traffic. Instruments included an annoyance index based on 15 environmental stressors, general health (one item), life satisfaction (five items), perceived stress in daily life, dwelling and residential satisfaction (one item), perceived restoration possibility (one item), and place attachment (five items). Annoyance by noise and vibration was highest, followed by odors and light. A structural equation model (SEM) showed that levels of annoyance rise in the presence of multiple environmental stressors targeting multiple sensory systems (noise, vibration, odors), even when each individual stressor does not exceed typical threshold values. A multinational effort focused on mixed transportation noise exposure and health status (anxiety and somatic health subscales from the General Health Questionnaire (GHQ)) in a suburban and rural area (N = 572). Both objective noise and air pollution measurements and perceived annoyance from noise, air pollution, and vibrations were entered in a SEM (Fig. 12). Moreover, restorative quality and residential satisfaction were entered as potential mediators. The SEM adjusted for individual environmental sensitivity, age, gender, education, and housing quality. The direct effects accounted for relatively large amounts of explained variance in residential satisfaction (R² = 0.48) and self-rated health (R² = 0.21). Analyzing the indirect effects from perceived traffic disturbances via the observed constrained restorative quality of the home revealed significant mediation pathways for both residential satisfaction (b = .086, P < .01) and self-rated health (b = .032, P = .03).

Fig. 12 Structural equation model showing the estimated effects of perceived traffic-related exposures on health and residential satisfaction. The standardized regression coefficient is given for each path; statistically significant regression paths (P < .05) are shown in bold. Calculations for the direct, indirect, and total effects based on these coefficients are given in the panels below the path model. Explained proportions of variance (R²) are given for the key variables in the model, together with associations with control variables. Only those control variables with significant relationships (P < .05) to the given variable are displayed. To enhance readability, manifest indicators for latent constructs and error terms are omitted. For all latent variables, the internal consistency of the self-report item sets exceeded alpha = 0.70. von Lindern, E., Hartig, T. and Lercher, P. (2016). Traffic-related exposures, constrained restoration, and health in the residential context. Health & Place 39, 92–100.


This means that the restorative quality of the home contributes beyond the effects of the traffic stressors and indicates that impaired restoration is an important mediator on the pathway to adverse health effects of transportation noise and its accompanying environmental stressors. To explore the benefits of living in quieter areas, a New Zealand study (N = 823) drew a differentiated sample from a noisy city, a quiet city, a quiet rural, and a noisy rural area. The authors provided annoyance–response (HRQoL) functions based on multivariate analysis of variance (MANOVA) with dichotomized annoyance ratings and the four WHOQoL-BREF domains (physical, psychological, social, and environmental). Adjustments were made for socio-demographics, noise sensitivity, and neighborhood annoyances (14 items). In general, transportation annoyance was higher than neighborhood annoyance across areas and showed larger decrements in all HRQoL functions. In the specific analysis by location, the noisy rural area showed the highest annoyance due to neighborhood noise but similarly low proportions of transportation annoyance as the other rural area. This result fits with concepts in environmental psychology, where the notion of congruence between the person and the environment is put forward: although residents were little annoyed by transportation noise, the 24-h presence of an industrial noise source in the neighborhood (a wind turbine) does not fit the expectations of living in a rural area. The introduced mechanical sound masks the more natural and quieter soundscape of a rural area (especially at night) and imposes environmental constraints on a person's ability to use the existing resources for restoration. However, due to the lower transportation annoyance, the effect on overall HRQoL is lower. This example also illustrates the inherent limitation of an approach using standard noise indicators, as the lower level of the wind turbine noise would have masked this deviating relationship in the noisy rural area. Eventually, it suggests a positive effect of a relatively quiet area on several dimensions of HRQoL. Another multisource study (main road, highway, railway) used noise exposure–response curves from multiple regression models of different complexity, adding personal, nonacoustic, and contextual variables (N = 1641). While models including classical noise indicators (Lden) and personal variables revealed steeper exposure–response curves, the explained variance (R²) was much lower (0.05–0.13) compared with the complex models (up to 0.58). In the complex models, the exposure–response in overall annoyance was much flatter: total annoyance is already high at lower levels, and the contribution of the acoustic indicators becomes smaller compared with the nonacoustic and contextual predictors. The amount of necessary coping actions, the additional perception of air pollution and vibration, the overall rating of the soundscape, and negative emotional feelings toward the traffic load (anger) turned out to be the most important contributors. The perceptually dominant single source was the main road, and it kept its importance in the multisource analyses as well. An indicator of sound emergence was used and showed slightly better prediction than the Lden indicator.
Thus, even though the analysis was constrained by using pure energy summation of the three sources, it points to the importance of a multisensory assessment of transportation noise for planning. No multiplicative interaction was found, but relevant additive effects were observed. Furthermore, the large model contribution of reported dissatisfaction with the general soundscape indicates that the total sound environment (see also the Montreal results on sleep) needs to be considered when noise abatement is implemented (e.g., in EU noise action plans). All recent examples employing multisource analysis in a multisensory context show that an approach that does not include the multifaceted exposure mixture in the context of transportation noise will fail to assess the true personal, community, and environmental effects and will miss facts relevant for interventions and for planning sustainable residential environments. The quality of the overall soundscape, the availability of quiet sides, good-quality courtyards, and nearby green spaces need to be integrated into city and land-use planning. The single causal role of noise in total exposure studies is rather difficult to establish and will only be possible to demonstrate with selective interventions in prospective studies.

See also: Monetary Valuation of Health Impacts From Noise.

Further Reading

Basner, M., Müller, U., Elmenhorst, E.-M., 2011. Single and combined effects of air, road, and rail traffic noise on sleep and recuperation. Sleep 34 (1), 11–23.
Botteldooren, D., Verkeyn, A., 2002. Fuzzy models for accumulation of reported community noise annoyance from combined sources. The Journal of the Acoustical Society of America 112 (4), 1496–1508.
Chang, T.-Y., Beelen, R., Li, S.-F., Chen, T.-I., Lin, Y.-J., Bao, B.-Y., et al., 2014. Road traffic noise frequency and prevalent hypertension in Taichung, Taiwan: A cross-sectional study. Environmental Health 13 (1), 37.
Cik, M., Lienhart, M., Lercher, P., 2016. Analysis of psychoacoustic and vibration-related parameters to track the reasons for health complaints after the introduction of new tramways. Applied Sciences 6 (12), 398.
Franklin, M., Fruin, S., 2017. The role of traffic noise on the association between air pollution and children's lung function. Environmental Research 157, 153–159.
Guski, R., Schreckenberg, D., Schuemer, R., 2017. WHO environmental noise guidelines for the European region: A systematic review on environmental noise and annoyance. International Journal of Environmental Research and Public Health 14 (12), 1539.
Kang, J., Schulte-Fortkamp, B., 2015. Soundscape and the built environment. CRC Press, London, pp. 1–310.
Klæboe, R., Engelien, E., Steinnes, M., 2006. Context sensitive noise impact mapping. Applied Acoustics 67 (7), 620–642.
Klein, A., Marquis-Favre, C., Champelovier, P., 2017. Assessment of annoyance due to urban road traffic noise combined with tramway noise. The Journal of the Acoustical Society of America 141 (1), 231–242.
Lercher, P., van Kamp, I., von Lindern, E., Botteldooren, D., 2015. Perceived soundscapes and health-related quality of life, context, restoration, and personal characteristics. In: Kang, J., Schulte-Fortkamp, B. (Eds.), Soundscape and the built environment. CRC Press, London, pp. 89–131.


Lercher, P., De Coensel, B., Dekonink, L., Botteldooren, D., 2017. Community response to multiple sound sources: Integrating acoustic and contextual approaches in the analysis. International Journal of Environmental Research and Public Health 14 (6), 663.
Leroux, T., Klaeboe, R., 2012. Combined exposures: An update from the international commission on biological effects of noise. Noise & Health 14 (61), 313–314.
Marquis-Favre, C., Morel, J., 2015. A simulated environment experiment on annoyance due to combined road traffic and industrial noises. International Journal of Environmental Research and Public Health 12 (7), 8413–8433.
Miedema, H.M.E., 2004. Relationship between exposure to multiple noise sources and noise annoyance. The Journal of the Acoustical Society of America 116 (2), 949–957.
Miedema, H.M.E., Oudshoorn, C.G.M., 2001. Annoyance from transportation noise: Relationships with exposure metrics DNL and DENL and their confidence intervals. Environmental Health Perspectives 109, 409–416.
Miedema, H.M., Vos, H., 2004. Noise annoyance from stationary sources: Relationships with exposure metric day–evening–night level (DENL) and their confidence intervals. The Journal of the Acoustical Society of America 116 (1), 334–343.
Nguyen, T.L., Nguyen, H.Q., Yano, T., Nishimura, T., Sato, T., Morihara, T., et al., 2012. Comparison of models to predict annoyance from combined noise in Ho Chi Minh City and Hanoi. Applied Acoustics 73 (9), 952–959.
Nilsson, M.E., 2001. Perception of traffic sounds in combination. Archives of the Center for Sensory Research 6, 1–117.
Nilsson, M., Bengtsson, J., Klaeboe, R., 2014. Environmental methods for transport noise reduction. CRC Press, Boca Raton, FL, pp. 1–294.
Ögren, M., Gidlöf-Gunnarsson, A., Smith, M., Gustavsson, S., Persson Waye, K., 2017. Comparison of annoyance from railway noise and railway vibration. International Journal of Environmental Research and Public Health 14 (7), 805.
Oiamo, T.H., Baxter, J., Grgicak-Mannion, A., Xu, X., Luginaah, I.N., 2015. Place effects on noise annoyance: Cumulative exposures, odour annoyance and noise sensitivity as mediators of environmental context. Atmospheric Environment 116, 183–193.
Perron, S., Plante, C., Ragettli, M., Kaiser, D., Goudreau, S., Smargiassi, A., 2016. Sleep disturbance from road traffic, railways, airplanes and from total environmental noise levels in Montreal. International Journal of Environmental Research and Public Health 13 (8), 809.
Pierrette, M., Marquis-Favre, C., Morel, J., Rioux, L., Vallet, M., Viollon, S., et al., 2012. Noise annoyance from industrial and road traffic combined noises: A survey and a total annoyance model comparison. Journal of Environmental Psychology 32 (2), 178–186.
Rice, C.G., 1996. Human response effects of impulse noise. Journal of Sound and Vibration 190 (3), 525–543.
Riedel, N., Loerbroks, A., Bolte, G., Li, J., 2017. Do perceived job insecurity and annoyance due to air and noise pollution predict incident self-rated poor health? A prospective analysis of independent and joint associations using a German national representative cohort study. BMJ Open 7 (1), e012815.
Schomer, P., Mestre, V., Schulte-Fortkamp, B., Boyle, J., 2013. Respondents' answers to community attitudinal surveys represent impressions of soundscapes and not merely reactions to the physical noise. Journal of the Acoustical Society of America 134, 767–772.
Shepherd, D., Dirks, K., Welch, D., McBride, D., Landon, J., 2016. The covariance between air pollution annoyance and noise annoyance, and its relationship with health-related quality of life. International Journal of Environmental Research and Public Health 13 (8), 792.
Tenailleau, Q.M., Bernard, N., Pujol, S., Houot, H., Joly, D., Mauny, F., 2015. Assessing residential exposure to urban noise using environmental models: Does the size of the local living neighborhood matter? Journal of Exposure Science and Environmental Epidemiology 25 (1), 89–96.
Trollé, A., Marquis-Favre, C., Parizet, É., 2015. Perception and annoyance due to vibrations in dwellings generated from ground transportation: A review. Journal of Low Frequency Noise, Vibration and Active Control 34 (4), 413–458.
Tzivian, L., Jokisch, M., Winkler, A., Weimar, C., Hennig, F., Sugiri, D., Soppa, V.J., Dragano, N., Erbel, R., Jöckel, K.-H., Moebus, S., Hoffmann, B., 2017. Associations of long-term exposure to air pollution and road traffic noise with cognitive function: An analysis of effect measure modification. Environment International 103, 30–38.
von Lindern, E., Hartig, T., Lercher, P., 2016. Traffic-related exposures, constrained restoration, and health in the residential context. Health & Place 39, 92–100.
Vos, J., 1992. Annoyance caused by simultaneous impulse, road-traffic, and aircraft sounds: A quantitative model. The Journal of the Acoustical Society of America 91 (6), 3330–3345.
Wothge, J., Belke, C., Möhler, U., Guski, R., Schreckenberg, D., 2017. The combined effects of aircraft and road traffic noise and aircraft and railway noise on noise annoyance: An analysis in the context of the joint research initiative NORAH. International Journal of Environmental Research and Public Health 14 (8), 871.
Yokoshima, S., Morihara, T., Sato, T., Yano, T., 2017. Combined effects of high-speed railway noise and ground vibrations on annoyance. International Journal of Environmental Research and Public Health 14 (8), 845.

Relevant Websites

http://www.mdpi.com/journal/ijerph/special_issues/WHO_reviews
http://www.mdpi.com/journal/ijerph/special_issues/quality-of-life
http://www.mdpi.com/journal/ijerph/special_issues/environmental-exposures
http://www.mdpi.com/journal/applsci/special_issues/vibration_control
http://www.qside.eu
http://www.fp7sonorus.eu/ (go to the final report)
http://alpnap.i-med.ac.at/

Community Outdoor Air Quality: Sources, Exposure Agents and Health Outcomes
DG Shendell, UMDNJ-School of Public Health, Piscataway, NJ, United States
© 2019 Elsevier B.V. All rights reserved.

Glossary
PM0.1  Ultrafine particles.
PM10  Respirable particles.
PM2.5  Fine particles.
PM2.5–10  Coarse particle fraction of respirable particles.
PMx  Particulate matter or particles, suspended in air.
T  Air temperature, in degrees Fahrenheit (°F) or degrees Celsius (°C).

Nomenclature
AAQS  Ambient (outdoor) air quality standards
ATSDR  Agency for Toxic Substances and Disease Registry
CalEPA  California Environmental Protection Agency
CARB  California Air Resources Board
LDCs  Less developed countries
NCEH  National Center for Environmental Health
OEHHA  Office of Environmental Health Hazard Assessment
RELs  Reference exposure levels (cancer, chronic noncancer)
RH  Relative humidity, expressed as a percentage (%)
SVOCs  Semivolatile organic compounds; can be attached to PMx
T  Air temperature, in degrees Fahrenheit (°F) or degrees Celsius (°C)
TRI  Toxic Release Inventory, data publicly available via USEPA
USCDC  United States Centers for Disease Control and Prevention
USEPA  United States Environmental Protection Agency
VOC  Volatile organic compound
WHO  World Health Organization

Introduction

Worldwide, people live in urban, suburban, and rural communities with varying levels of development, geographic size, and population density. People across age groups, gender, and race/ethnicity spend time in various outdoor (ambient) and indoor microenvironments to clean, cook, eat, live, learn, play, pray, provide care, sleep, research, study, travel, work, worship, and so on. Thus, from an environmental health perspective, it is important to have a fundamental understanding of the potential exposure agents (hazards) of concern outdoors, natural and man-made, and the various factors that may make a person or community more, or less, susceptible and vulnerable. Hazards may be biologic, chemical, ergonomic, physical, psychosocial, or radiological in nature. An identified hazard, or mixture of agents, poses risks of adverse acute and chronic human and ecological health effects. Readers of this article will gain an overview of community outdoor air and environmental quality, including sources of pollution-emitting agents/hazards; exposure assessment; potential adverse health outcomes of concern, including the incidence, prevalence, and severity of symptoms of specific diseases; subpopulations known or thought to be more susceptible and vulnerable to environmental and outdoor occupational exposures, as well as health disparities; and how outdoor air quality potentially influences indoor air and environmental quality (and vice versa).

Change History: April 2019. DG Shendell updated Further Reading section. This is an update of D.G. Shendell, Community Outdoor Air Quality: Sources, Exposure Agents and Health Outcomes, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier 2011, Pages 791–805.

Encyclopedia of Environmental Health, 2nd edition, Volume 1

https://doi.org/10.1016/B978-0-12-409548-9.11824-X


Background Concepts: Exposure Science

The concept of exposure is actual contact between a person (or animal, plant, fish, etc.) and an agent (pollutant or hazard), or the vector of a known agent released or emitted by a source, in one or more environmental media (after the agent reaches the media from sources via transport pathways), through one or more defined routes, at or over a defined period of time, in some geographic area, that is, a defined location or microenvironment (space) in the community (environmental) or at a workplace (occupational).

Exposure Agents

There are four main categories of exposure agents with respect to outdoor air quality; agents are classified as carcinogenic or noncarcinogenic. In occupational settings, ergonomics may be a subset of physical agents. Also, in community (environmental) and occupational (workplace) settings, another category of agents, psychosocial stress, is of increasing interest to public health research, policy, programs, and practice.

Biologic

There are numerous examples, from natural sources both indoors and outdoors, including:

• Bacteria, including metabolic products and cell-wall components;
• Mold (fungi, mildew, and spores); and
• Pollen from various trees, flowers, and plants.

Please note that some species have human (laboratory-engineered) sources. Please also note that biologic sources may create toxins or chemical toxicants through metabolic processes, for example, microbial volatile organic compounds (MVOCs). It should also be noted that the presence of these agents (that is, above the analytic method detection limit) and the levels at which they are found (that is, the measured concentrations per unit of dust (gram) or air (cubic meter) sampled) depend on several factors, including:

• The various sources of these agents both indoors and outdoors;
• The design and construction (materials selected, etc.) of interior spaces, which can affect the presence and sources of the food and moisture these agents require to live and grow;
• The presence of natural and mechanical ventilation with particle filtration; and
• The possible "sinks" like carpets.

Chemical

Chemical agents can be generally classified as organic and nonorganic.

a. Organic (contains carbon and hydrogen atoms), with or without metals. There are numerous examples, and most have natural and human sources both indoors and outdoors:
   i. Organic chemicals, including many volatile toxic air contaminants (TACs) or hazardous air pollutants (HAPs), among them the volatile organic compounds (VOCs); and
   ii. Persistent organic chemicals, referred to by named categories that each represent many different congeners of configurations of atoms, for example, polychlorinated biphenyls and dioxins.
b. Nonorganic (does not contain carbon), with nitrogen, sulfur, hydrogen, oxygen, and metals. There are numerous examples, with outdoor natural and human sources and human-activity-related sources indoors (including outdoor sources brought inside):
   i. Metals, including the heavy metals and transition elements of the periodic table. Metals can exist in elemental form as solids, liquids, and vapors (gases), can attach to or be adsorbed on particles, and can occur in salt complexes with water. Metals include arsenic, cadmium, chromium, lead, magnesium, and mercury.
   ii. Particulate matter, or particles, which exist in varying size ranges (PMx) and usually are not perfectly spherical; their size refers to their aerodynamic diameter in micrometers or microns (µm), determined under a microscope or, more commonly at present, counted/measured by laser-based technology. The size fractions most commonly referred to by scientists, engineers, and policy makers, from local to global geographic scales over varying time frames, are as follows (a small classification sketch follows this subsection):
      - Total suspended particles (TSPs), generally ≤15 µm;
      - Respirable particles (PM10);
      - Coarse fraction of respirable particles (PM2.5–10);
      - Fine particles (PM2.5), which are primary or secondary emissions in origin, the latter due to the formation of acid aerosol particles from nitric acid (HNO3) and sulfuric acid (H2SO4); and
      - Ultrafine particles (PM0.1), including nanoparticles, which may be designed as perfect spheres but still may vary in composition and thus in their potential toxicity (research in this area is still nascent).
   iii. Nitrogen oxides (NOx), including nitric oxide (NO), nitrogen dioxide (NO2), and nitrous oxide (N2O);
   iv. Sulfur oxides (SOx), including sulfur dioxide (SO2);


   v. Carbon dioxide (CO2); and
   vi. Carbon monoxide (CO).

It should be noted that the presence of these agents (that is, samples above the analytic method detection limit) and the levels at which they are found (that is, the measured concentrations per unit of air (cubic meter) sampled) depend on their various sources both indoors and outdoors, as well as on the design of interior spaces, the presence of natural and mechanical ventilation with particle filtration (plus, possibly, activated carbon (charcoal) to remove VOCs), and possible "sinks" like carpets.
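As referenced in the particulate matter item above, here is a small sketch of the nested size-fraction naming convention; the cut-points follow the PMx definitions listed in this subsection.

```python
def finest_pm_class(d_um):
    """Return the finest standard PMx fraction that contains a particle
    of the given aerodynamic diameter (micrometers); the fractions are
    nested (PM0.1 within PM2.5 within PM10 within TSP)."""
    if d_um <= 0.1:
        return "PM0.1 (ultrafine)"
    if d_um <= 2.5:
        return "PM2.5 (fine)"
    if d_um <= 10.0:
        return "PM10 (coarse fraction PM2.5-10)"
    return "TSP only (larger than PM10)"

print([finest_pm_class(d) for d in (0.05, 1.0, 5.0, 20.0)])
```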

Physical

There are numerous examples, and most have natural and human sources both indoors and outdoors.

a. Asbestos (six naturally occurring types, mined in modern times), especially in fibrous (friable) rather than bound (nonfriable) forms;
b. Light (natural or artificial, including fluorescent and incandescent), including ultraviolet (UV) rays such as UVA, UVB, and UVC;
c. Noise (frequency, in hertz (Hz), and loudness, in A-weighted decibels (dBA)); and
d. Weather (meteorology) parameters like temperature, relative humidity, and wind (speed and direction).

Radiological

There are numerous examples, and most have natural and human sources (the latter engineered in the laboratory for research and development, energy resources, or consumer products used in various outdoor and indoor microenvironments) located both indoors and outdoors.

a. Electromagnetic fields and other forms of nonionizing radiation;
b. Ionizing radiation like X-rays;
c. Nuclear power waste fuel rods; and
d. Radon gas, which is emitted from a natural source into outdoor air and infiltrates indoor microenvironments like homes and schools via three major pathways:
   i. Through cracks in a building's envelope where walls meet;
   ii. Through open windows (natural ventilation); and
   iii. Through cracks in a building's foundation (soil-to-concrete, etc.).

Also, radon is emitted from its source as a function of three main factors:
   i. The particular location's underlying geology (rock formations);
   ii. The slope of the land built on; and
   iii. The type, condition, and age of the building foundation.

Environmental Media

Although attention in this article is focused on ambient (outdoor) air quality, outdoor air quality can impact most other known environmental media:

a. Indoor air and environmental quality;
b. Surface water resources (which then may impact groundwater aquifers), through rainout and washout as well as acidification by increased levels of carbon dioxide;
c. Soils in rural, agricultural, and urban/suburban areas;
d. Sediments near surface water resources like rivers, streams, and so on;
e. Dust on fields, construction and demolition sites, and asphalt or concrete surfaces including roads and door entryways (Figs. 1–3); and
f. Food, in particular agricultural products/crops like fruits and vegetables.

It is also important to note that various bodily fluids can be sampled and analyzed for identified biomarkers as one way to characterize and quantify personal exposure to outdoor air pollution. Worldwide, this research is a common component of clinical research and of environmental and occupational epidemiology focused on air pollution. In the United States, the Centers for Disease Control and Prevention (USCDC) has led a national effort, through the National Health and Nutrition Examination Survey, to update the National Report on Human Exposure to Environmental Chemicals. Biomarkers are potentially obtained from blood (including from the mother's placenta to estimate prenatal exposure of the fetus late in the third trimester), urine, breast milk, saliva, sweat, tears, toenails, fingernails, hair, feces, and bile. With respect to outdoor air pollution, including tobacco smoking outdoors or near (adjacent to) operable doors and windows leading indoors, the first three listed are most commonly used.


Fig. 1 Many categories of natural and anthropogenic (human) outdoor air pollution sources are represented in this photograph taken near the community of Baldwin Park, CA, in eastern Los Angeles county.

Fig. 2 Unpaved roads in smaller cities and rural areas of less developed countries like Guatemala lead to resuspended dust (coarse particles) on top of mobile source emissions of fine particles and various gaseous air pollutants.

Fig. 3 Several health and safety issues for pedestrians and children attending school near paved and dirty roadways in urban Nigeria are prevalent; susceptibility and vulnerability factors are clearly evident.


Pathways Including Fate and Transport (Agents From Sources to Microenvironments)

The process by which an agent or a mixture of agents travels from one location to another, or from the source(s) of emissions to a specific microenvironment like the outdoor air adjacent to a home or school, is called an exposure pathway. In general, pathways are determined by a combination of factors including an agent's chemical and physical properties, weather (temperature, humidity, wind speed and direction, and amount of sunlight versus cloud cover), topography, and altitude (relative to sea level). There are three categories or types of processes:

1. Chemical processes specifically relevant to outdoor air quality:
   a. Absorption (into) and adsorption (attachment to);
   b. Ionization;
   c. Oxidation/reduction reactions ("redox" reactions in chemistry terminology); and
   d. Photolysis, direct or indirect/sensitizing (when light hits something, there is then a reaction with the target agent(s)).
2. Biologic processes specifically relevant to outdoor air quality:
   a. Bioaccumulation/bioconcentration; and
   b. Biodegradation, with or without sunlight.
3. Physical processes specifically relevant to outdoor air quality:
   a. Volatilization (to air) upon mixing; and
   b. Sorption or attachment to sediment or soil particles that are then resuspended when dry and windy conditions persist.

Routes of Exposure (Agents Contact Targets)

There are three primary routes of exposure relevant to humans during typical daily life: inhalation through the mouth or nose, generally as a function of the level of activity; dermal, through the skin (whether intact or with open cuts) as well as the eyes (ocular); and ingestion of solid and liquid foods. Three other highly relevant routes of exposure for human beings are transplacental, from a mother to the developing fetus; through breast milk, from a mother to an infant/toddler until some age (unless the child is exclusively fed using water- or milk-based formulae); and intravenous, for prescription or illicit drugs. Other routes of exposure used in toxicology, that is, in toxicity studies on laboratory animals (e.g., mice and rats), are subcutaneous (under a skin fold), intraperitoneal (into the abdominal cavity), and intramuscular (into a muscle).

Time and Space Considerations, Including Human Time–Location–Activity Patterns

Issues concerning outdoor air quality, exposure and human health, or environmental quality (aesthetics, ecological health, and visibility) must be examined in both space and time. Time includes the frequency and duration of exposure; together with quantity, these attributes define dose (a worked dose example follows the list below).

• Acute, or one-time, exposures;
• Chronic exposures, which are continuous, intermittent (think of a daily commute to work or school), or episodic (think of a volcano eruption); and
• Internal dose (via pharmacokinetics) versus biologically active dose (via pharmacodynamics).
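As a worked illustration of how concentration, frequency, and duration combine into dose, the sketch below uses the potential average daily dose formulation that is standard in exposure assessment; the commuter scenario numbers are hypothetical.

```python
def average_daily_dose(c, ir, et, ef, ed, bw, at):
    """Potential average daily dose (mg per kg body weight per day).

    c  : pollutant concentration in air (mg/m3)
    ir : inhalation rate (m3/h), activity dependent
    et : exposure time (h/day)
    ef : exposure frequency (days/year)
    ed : exposure duration (years)
    bw : body weight (kg)
    at : averaging time (days); ed * 365 for noncancer assessments
    """
    return (c * ir * et * ef * ed) / (bw * at)

# Hypothetical near-road commuter: 0.05 mg/m3, 1 h/day, 250 days/year, 10 years
add = average_daily_dose(c=0.05, ir=0.8, et=1, ef=250, ed=10, bw=70, at=10 * 365)
print(f"{add:.2e} mg/kg-day")  # ~3.9e-04
```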

Space is typically thought of in terms of geographic scale, with these terms:

• Local (town or city, small county);
• Regional (multiple counties);
• State;
• Region (multiple counties in a state, or multiple states, like the mid-Atlantic and New England);
• National;
• International; and
• Global.

The concepts of time and space help explain a portion of inter- (between) and intra- (within) individual variability in exposure and dose at any geographic level during a defined time period. In general, based on data from previous studies on human time–location–activity patterns:

• Data differed by age group more than by gender;
• Race/ethnicity differences have not yet been adequately explored;
• 24-h, 48-h, or weekly diaries at 10–30 min resolution were used (such diary entries feed the exposure calculation sketched below).
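Diary data of this kind feed directly into the standard time-weighted average exposure calculation, E = Σ(Cᵢ·tᵢ)/Σ(tᵢ), where Cᵢ is the pollutant concentration in microenvironment i and tᵢ is the time spent there. The following minimal Python sketch illustrates the calculation; the diary entries and concentration values are hypothetical, not measured data.

```python
# Time-weighted average (TWA) exposure from a time-location-activity diary.
# All concentrations (ug/m3) and durations (hours) below are hypothetical.

diary = [
    # (microenvironment, hours spent, PM2.5 concentration in ug/m3)
    ("home, indoors",       14.0,  8.0),
    ("commute, in vehicle",  1.5, 35.0),
    ("office, indoors",      8.0, 10.0),
    ("outdoors, near road",  0.5, 35.0),
]

total_time = sum(hours for _, hours, _ in diary)
twa_exposure = sum(hours * conc for _, hours, conc in diary) / total_time

print(f"Total time accounted for: {total_time:.1f} h")
print(f"24-h time-weighted average PM2.5 exposure: {twa_exposure:.1f} ug/m3")
```

The same calculation, applied person by person with diary resolution of 10–30 min, is what makes between- and within-individual variability in exposure visible at any geographic level.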

Outdoor Air Quality in Communities

Sources of Pollution (Pollutant Emissions)

In general, there are two broad categories or general classifications of sources of pollution, that is, emissions of one pollutant or a mixture of pollutants: natural and anthropogenic (human/man-made). Also, a specific pollutant or exposure agent, or class of pollutants, based on physical and chemical properties, may have one or more known, identifiable sources outdoors as well as indoors. In this article, which focuses on outdoor (ambient) air quality, three air pollution source categories are defined; one category has several subcategories. These categories and subcategories, with natural and human examples, are discussed next. Also, see Figs. 1–4.

1. A point source usually involves a combustion or mechanical process. Human examples: a power plant with a smoke stack, an industrial facility, or a cement factory. Natural examples: an erupting volcano or a large coniferous or deciduous tree.
2. An area source releases known quantities of pollutant(s) within a defined geographic area, but the individual source locations (i.e., point sources) where the pollutant(s) are produced are hard to identify; therefore, the defined space is considered one source. Human examples: a large oil refinery or a port complex operated by several companies. Natural examples: a large agricultural field or a concentrated animal feeding operation; a forest with various coniferous and deciduous trees; a garden with various plants, flowers, and trees.
3. A mobile source operates on different types of fuel, including diesel (low or high sulfur content), unleaded gasoline, compressed natural gas, and liquefied natural gas, and blends including biodiesel and ethanol.
   a. On-road. Human examples: automobiles/cars, buses (single- or double-decker), mopeds, motorcycles (various types of engines), recreational vehicles (RVs), sport utility vehicles (SUVs), light duty trucks (e.g., pick-up trucks and all-terrain vehicles), and heavy duty trucks (e.g., 18-wheelers). Note: an entire highway, freeway, motorway, or main primary road through an urban or suburban area may be called a "mobile line source."
   b. Off-road.
      i. On-land. Human examples: construction, demolition, and farming equipment, including tractors with digging, drilling, or moving apparatus, and trains whose locomotives have internal combustion engines.
      ii. On-water. Human examples: aircraft carriers; barges; boats with on-board motors used for fishing, recreation, or competitive racing including sailing; cargo vessels; cruise ships; ferries used for passenger transportation or sightseeing tours; submarines; and tugboats.
      iii. In-air. Human examples: airplanes for cargo and passengers, and helicopters.

Fig. 4 Various mobile sources are prevalent on relatively narrow paved and unpaved streets in urban areas in less developed countries like Guatemala.


Please note that these are selected examples to depict and emphasize scientific concepts; other examples pertinent to industrialized and less developed countries (LDCs) exist or may be developed, and globally, across examples, the names used to identify air pollution sources may vary.

There are also some other important things to note in order to understand the concepts involved with identifying and characterizing pollution sources. First, "source emissions" are not the same as "pollutant emissions," that is, the specifically identified agent or agents of exposure (hazards with risks to health) with typically known, research-based physical and chemical properties. The physical and chemical properties determine whether, at a given temperature and pressure, the agent is present as a solid, a liquid (including aerosolized), or a gas, among other forms. In other words, the two are not necessarily equal qualitatively or quantitatively. Source emissions are typically described by measurements in pounds or tons emitted per defined time period (daily or annually) into the environment. In these cases, the "sources" are used as surrogates or indicators of the true pollutant(s) emitted. Emissions from sources can be modeled on computers to calculate estimated pollutant exposure if other data are available to scientists and engineers (a simple dispersion sketch follows below). For example, there is a need for data on meteorology (weather) and topography (land descriptions), the distances between the population at risk and the identified sources (any category/subcategory), the form in which the agent is present, and whether the agent is safely or improperly stored, and where. Thus, in summary, the following relationship may be considered true:

Source emissions → Pollutant concentrations → Human/ecological exposure → Internal dose
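As an illustration of the kind of computer modeling referred to above, the following sketch implements the classic steady-state Gaussian plume equation for a single point source under a constant wind. It is a minimal teaching example, not a regulatory model; the stack parameters are hypothetical, and the dispersion coefficients are a rough power-law fit assumed here for neutral (Pasquill class D) conditions.

```python
import math

def gaussian_plume(q_g_s, u_m_s, h_m, x_m, y_m, z_m=1.5):
    """Concentration (g/m3) downwind of a continuous point source.

    Classic steady-state Gaussian plume with ground reflection:
      C = Q / (2*pi*u*sy*sz) * exp(-y^2 / (2*sy^2))
          * [exp(-(z-H)^2 / (2*sz^2)) + exp(-(z+H)^2 / (2*sz^2))]
    q_g_s: emission rate (g/s); u_m_s: wind speed (m/s);
    h_m: effective stack height (m); x_m, y_m, z_m: receptor
    position (m) downwind, crosswind, and above ground.
    """
    # Power-law dispersion coefficients, roughly neutral (class D)
    # rural conditions; assumed purely for illustration.
    sy = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sz = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    lateral = math.exp(-y_m**2 / (2 * sy**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2 * sz**2)) +
                math.exp(-(z_m + h_m)**2 / (2 * sz**2)))
    return q_g_s / (2 * math.pi * u_m_s * sy * sz) * lateral * vertical

# Hypothetical source: 100 g/s of SO2 from a 50 m stack, 4 m/s wind.
for x in (500, 1000, 2000, 5000):
    c_ugm3 = gaussian_plume(100.0, 4.0, 50.0, x, 0.0) * 1e6
    print(f"{x:5d} m downwind: {c_ugm3:8.1f} ug/m3 at breathing height")
```

Running the loop over several downwind distances shows the plume centerline concentration rising to a maximum and then decaying, which is why receptor distance from a source matters so much in exposure estimation.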

Ambient Air Quality Standards for Criteria Pollutants

Ambient (outdoor) air quality standards (AAQS) have been passed in many industrialized countries, but not in every country, as well as in some developing countries with rapidly growing industries, cities, and economies (e.g., China and India) and by the World Health Organization (WHO). In addition, in the United States, where primary and secondary AAQS are defined in Title I of the Federal Clean Air Act Amendments of 1990, individual states may have more stringent standards than the federal (national) standards for the same time interval and for different time intervals. The best example is the State of California. California AAQS, administered by the California Air Resources Board (CARB), historically have been equal to or more stringent than the US and WHO standards. This progressive situation exists, in part, because primary AAQS in California are subject to more frequent review by agencies and policy makers based on advances in environmental science, engineering technology, and human-health-related research.

There are primary AAQS for selected individual pollutants (exposure agents and hazards) and secondary AAQS for selected individual or groups of pollutants. Primary and secondary AAQS differ based on their emphasis on protecting human health versus ecological health and environmental quality, respectively. Primary AAQS are based, in theory, on the best available science from human-health-related research conducted in the laboratory (toxicology, controlled environmental chamber exposure studies, etc.), in the clinic, and in the community (epidemiology, clinical panel studies, exposure assessments, etc.). The political process may sometimes delay the ability of new science to regularly update primary AAQS at defined intervals, due to the roles and interests of multiple stakeholders. Secondary AAQS include an emphasis on aesthetics, visibility, and resource degradation and erosion.

The six pollutants typically considered within the primary AAQS are listed below in alphabetical order. They were defined, including the common acronyms used, and discussed with respect to their sources in the subsection "Chemical".

1. Carbon monoxide (CO);
2. Lead (Pb2+);
3. Nitrogen dioxide (NO2, both directly emitted and formed by chemistry in the lower atmosphere, or troposphere, from reactions involving nitric oxide (NO));
4. Particulate matter or particles, in one or more size ranges (e.g., TSP, PM10, and PM2.5), with PM2.5 both directly emitted and formed by aqueous chemistry involving nitrogen oxides and sulfur oxides in the troposphere;
5. Ozone (O3); and
6. Sulfur dioxide (SO2).

Primary and secondary AAQS are defined for specific time intervals and with specific measurement units for specific statistics. The most common time intervals are 1 h, 8 h, 24 h (daily), and annual. The most common measurement units are parts per million (ppm), parts per billion (ppb), micrograms per cubic meter (µg m⁻³), and milligrams per cubic meter (mg m⁻³). The most common statistics are the arithmetic mean or average over a specified time period (e.g., an integrated average), the maximum measured air concentration, and a specified percentile (e.g., the 95th or 98th) based on the data distribution; a brief computational illustration follows after the next paragraph.
These terms are used by government agencies like CARB and the United States Environmental Protection Agency (USEPA), and their public (universities) and private sector (consultants) collaborators, in reference to data collected at routine central site monitoring stations (see Figs. 5–7). Data are collected by validated active (sample quantities of air at calibrated, defined flow rates are pumped) or passive (diffusion-based) methods. The various pieces of sampling equipment are operated based on standardized methods to acquire data to help the USEPA, CARB, and WHO, among others, to monitor and enforce the primary AAQS at local (city or county or regions defined by multiple cities and counties), state, provincial, and national levels. Please refer to the resources listed in the sections “Further reading” and “Relevant websites” for more details. They include access to tables listing the most current primary and secondary AAQS.
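To make the statistics concrete, the sketch below computes an annual mean and a 98th percentile of 24-h averages from hourly monitoring data, and converts a gaseous concentration from ppb to µg m⁻³ using the molar volume of 24.45 L mol⁻¹ at 25°C and 1 atm. The data are synthetic, and the 35 µg m⁻³ comparison level is used purely for illustration, not as an actual standard.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# One year of synthetic hourly PM2.5 data (ug/m3), illustration only.
hours = pd.date_range("2023-01-01", periods=365 * 24, freq="h")
pm25 = pd.Series(rng.gamma(shape=2.0, scale=8.0, size=hours.size), index=hours)

daily_means = pm25.resample("D").mean()    # 24-h integrated averages
annual_mean = daily_means.mean()           # annual arithmetic mean
p98_daily = daily_means.quantile(0.98)     # 98th percentile of daily values

print(f"Annual mean: {annual_mean:.1f} ug/m3")
print(f"98th percentile of 24-h means: {p98_daily:.1f} ug/m3")
print("Exceeds 35 ug/m3 comparison level:", p98_daily > 35)

def ppb_to_ugm3(ppb, molar_mass_g_mol, molar_volume_l=24.45):
    """Convert a gas mixing ratio (ppb) to ug/m3 at 25 C and 1 atm."""
    return ppb * molar_mass_g_mol / molar_volume_l

print(f"100 ppb O3 = {ppb_to_ugm3(100, 48.0):.0f} ug/m3")  # O3: 48 g/mol
```

The ppb-to-mass conversion matters in practice because gaseous standards may be stated in either unit system, and the conversion factor shifts with temperature and pressure.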


Fig. 5 Estimates of human exposure to outdoor air pollutants that improve on data from central site monitoring stations may be achieved for specific subpopulation groups through research conducted outside homes or closer to identified sources.

Fig. 6 Estimates of human exposure to outdoor air pollutants that improve on data from central site monitoring stations may be achieved for specific subpopulation groups, including school-aged children from ethnic minorities as depicted here, through research conducted with personal air monitoring. This example used active and passive samplers within a noninvasive, least intrusive, least disruptive protocol.


Fig. 7 An example of how to conduct personal air monitoring focused on pedestrian and street vendor exposures to daytime outdoor fine particle and carbon monoxide air pollution in urban areas like Guatemala City, Guatemala.

Hazardous Air Pollutants or Toxic Air Contaminants (Air Toxics)

There are no AAQS related to HAPs (sometimes termed TACs, air toxics, or air toxicants, as in the State of California), but there are certain regulatory provisions concerning them. In the United States, Title III of the Federal Clean Air Act Amendments of 1990 is focused only on officially listed HAPs. Additionally, in the United States, due to the 1986 Emergency Planning and Community Right-to-Know Act, the Toxic Release Inventory (TRI) was established. Briefly, publicly available TRI data represent self-reported quantities of emissions (over a defined threshold) from identified point and area human sources in a community into environmental media (outdoor air, etc.) on an annual basis, by HAP or by group (class) of chemical compounds.

In the State of California, Proposition 65 has focused on TACs known to be potentially harmful to human health as carcinogenic (cancer-causing), developmental, and reproductive toxicants. Approximately 200 TACs have been listed, including, most recently, diesel exhaust particles and environmental tobacco smoke. Also, in the State of California, the California Environmental Protection Agency (CalEPA), through the Office of Environmental Health Hazard Assessment (OEHHA), has developed science-based cancer reference exposure levels (RELs) for acute and chronic exposures and noncancer chronic RELs for outdoor environments. CalEPA/OEHHA conducted literature reviews and finalized decisions based on the best available science from toxicology and environmental and occupational epidemiology. RELs cover many of the listed TACs; note that many of these TACs were also listed as HAPs by the USEPA.

Four emerging concerns related to air toxics in industrialized countries, in countries in economic and industrial transition like China and India, and in LDCs of the Americas, Africa, and Asia are worth noting. First, medical and electronic waste incinerators (assuming reuse and recycling measures were first conducted as part of best management practices) may be point sources of TACs and particulate matter in a community. The second is sometimes referred to by the term "hot spots," due to the potential or known disproportionate impacts of air pollution on specific communities adjacent to or downwind (on average) of identified sources, which may be in part responsible for suspected or documented health disparities. The third is related to "ship breaking," or decommissioning activities for old cruise, cargo, and military ships, involving asbestos fibers and a mixture of chemicals attached to particles emitted during the mostly incomplete combustion of the metal, steel, and wood the ships were built from. The fourth, in particular in megacities and rapidly growing smaller cities subject to urban sprawl (growth outward instead of upward with tall buildings), involves unregulated open burning of household or community-level trash piles of various sizes along roads and behind or in front of homes (Figs. 8 and 9). These trash piles include potentially recyclable plastics used in beverage bottles and food containers, batteries, unused medicines, and cleaning compounds, among others. Thus, particles, gases, metals, and dioxins will be released. This public health issue affects multiple environmental media and reflects strains on public services beyond potable water and sanitation.

Fig. 8 Poor community sanitation and dilapidated housing may exist along major paved or dirt roadways, including frequent instances of open trash pile burning. These situations are due, in part, to urban sprawl and built environment issues in less developed countries such as Nigeria, as depicted here.

Fig. 9 Poor community sanitation and dilapidated housing include open trash piles burning behind homes. These situations are due, in part, to urban sprawl and built environment issues in less developed countries such as Nigeria, as depicted here.

Microenvironments of Interest

There are specific microenvironments of concern with regard to environmental and occupational ambient (outdoor and semi-enclosed) air pollution epidemiology. These are nonoccupational, occupational, institutional and medical locations, and public parks:

• Near freeways/primary roadways, which are mobile line source locations with heavy traffic comprised of various vehicles;
• Adjacent to railroad tracks, given diesel locomotive emissions;
• Adjacent to, or downwind a relatively short distance from, a point source or an area source.

The following locations are examples of nonoccupational microenvironments of interest:

• Private homes for a single family or multiple families (condominiums or townhouses);
• Rental apartments (with a property manager and a landlord);
• Section 8 public housing in the United States, which is a type of apartment for low-income populations, typically urban minorities, for which the government subsidizes rent and certain services.

Public and private school facilities, including day care/pre-K and K-12, are "transitional" microenvironments of interest: they may be viewed as occupational (adult teachers/professors, principals, physical education instructors, coaches, administrators, nurses and clinic assistants, custodians, etc.) or nonoccupational (students), depending on the target of interest based on age. Also, schools may contain "special/other" settings for specific programs. The following locations are examples of other occupational microenvironments of interest, because in certain job classifications workers primarily work outdoors, or they may take breaks and eat meals outdoors in adjacent locations:

• Commercial office buildings and call centers;
• Shopping malls, bars, and restaurants;
• Factories;
• Various industries, like cement plants, mining, and ports and goods movement activities, both on- and off-road.

The following locations are examples of institutional microenvironments of interest:

• Prisons, given the outdoor and semi-enclosed areas carefully fenced off and guarded;
• Active military bases and training sites, on- and off-land (think of aircraft carriers).

It should be noted that whether public (government subsidized) or private, each of these has varying levels of security, which may affect the exposure to the hazard(s) and thus the risk implications. Religious buildings may be considered institutions, occupational, or nonoccupational settings given the diversity of religions and the variety of design and construction observed across and within religions across countries and levels. Finally, the following locations are examples of medical microenvironments of interest:

• Assisted living facilities, nursing homes, hospice care facilities for short-term stays, and neighborhood wellness centers serving seniors (older adults) and patients with chronic or terminal illnesses;
• Community health centers, clinics, and hospitals with outdoor rehabilitation/recreation, walking, and visiting areas.

Basics of Natural and Built Environment Factors for Community Air Quality

Environmental factors affecting community outdoor air quality, potentially either positively (i.e., helping improve it) or negatively (i.e., elevating air concentrations of pollutants), are called natural and built (or physical) environmental factors. Previously, in the subsection "Sources of pollution (pollutant emissions)," natural and anthropogenic sources of pollution (hazards or agents of exposure) were discussed. Other articles of this encyclopedia cover the fate and transport of pollutants due to their physical and chemical properties. Thus, only selected basic concepts and a few examples are presented here.

Natural factors include geography (i.e., opposite seasons in the northern versus southern hemisphere and proximity to the equator), topography (i.e., slope or incline of the land, geology and present ground cover, and altitude) (Figs. 10 and 11), and atmospheric stratification, including inversions and weather (i.e., temperature (T), relative humidity (RH%), and wind speed and direction). There are different technical classifications of atmospheric inversions. Briefly, since warmer air typically rises, an atmospheric inversion may occur when a layer of warmer air aloft overlies cooler air near the ground; this restricts upward air movement and traps pollutants relatively closer to the ground than under normal circumstances. This occurs, for example, during winter months in valleys, negatively affecting community outdoor air quality in cities worldwide (a minimal sketch of how an inversion can be identified from a temperature profile follows below).

Built or physical environment factors are increasingly discussed in relation to not only community outdoor air quality but also other public health issues, such as opportunities for physical activity like walking and bicycling, availability of nutritious foods like fruits and vegetables, safety through prevention of unintentional pedestrian injuries, and urbanization and population growth, for example, urban sprawl. Built environment factors of interest are those related to mobile sources of combustion pollution; relatively cleaner public mass transportation options (rail and buses); and sidewalks, crosswalks, and clearly defined lanes and adjacent off-road paths for pedestrians (walking and running) and bicyclists.
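To make the inversion concept concrete, here is a minimal sketch that flags inversion layers in a vertical temperature profile, that is, layers where temperature increases with height instead of decreasing. The profile values are hypothetical.

```python
# Detect temperature inversion layers in a vertical profile.
# Heights (m above ground) and temperatures (deg C) are hypothetical.
heights = [0, 100, 200, 300, 400, 500]
temps_c = [2.0, 3.5, 5.0, 4.0, 2.5, 1.0]  # warms with height up to ~200 m

# Under normal conditions temperature falls with altitude; a layer where
# it rises instead marks an inversion that suppresses vertical mixing.
levels = list(zip(heights, temps_c))
for (z0, t0), (z1, t1) in zip(levels, levels[1:]):
    if t1 > t0:
        print(f"Inversion layer: {z0}-{z1} m ({t0:.1f} -> {t1:.1f} deg C)")
```

In this toy profile the lowest 200 m form an inversion "lid," which is the situation that traps wintertime valley pollution described above.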

Outdoor Air Quality Can Impact Indoor Air and Environmental Quality in a Community

In a community, whether urban/suburban, rural, or agricultural, outdoor air quality can negatively impact indoor air and environmental quality. First, natural ventilation through completely or partially open doors and windows, for short or extended durations, allows pollutants (gas-phase, aerosolized, or attached to particles suspended in the air) to enter homes, offices, schools, and child care facilities, among other places; a simple mass-balance view of this infiltration appears in the sketch after the figure captions below. Second, mechanical ventilation with particle filtration may reduce indoor levels of larger, coarse particles, including pollen from flowers, plants, and trees, as well as ozone from outside, but can introduce fine particles and gas-phase pollutants not destroyed on surfaces of the mechanical ventilation unit and distribution system. Third, the sun and prevailing weather conditions (T, RH%, and clouds) influence the amount of natural daylight and glare available and the thermal comfort of indoor living, learning, and working spaces, depending on the color of exterior building materials: white and lighter colors reflect incident sunlight and heat, whereas black and darker colors absorb heat from incident sunlight.

Fig. 10 This school bus traveling in a neighborhood on the outskirts of Guatemala City, Guatemala, has increased emissions due to the slope/incline of the steep hill.

Fig. 11 These vehicles traveling in Ibadan, Nigeria, have increased emissions due to the slope/incline of the hill and the unpaved state of the outer lanes on the side of the original road. Note: Nigeria does not yet have rigorous standards for emission controls on cars, minivans used as local transport vehicles in urban areas, trucks, and so on, so emissions of numerous "criteria" and toxic outdoor air pollutants are relatively higher. Also, there is not yet a routine central site outdoor/ambient monitoring system.
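The infiltration of outdoor pollutants described above is often approximated with a single-compartment steady-state mass balance, C_in = P·λ·C_out / (λ + k), where λ is the air exchange rate (h⁻¹), P the penetration efficiency, and k the indoor deposition/removal rate (h⁻¹). The sketch below applies it to PM2.5 under two ventilation scenarios; all parameter values are assumptions chosen for illustration, not measurements.

```python
def steady_state_indoor(c_out, lam, p=1.0, k=0.0):
    """Single-compartment steady-state indoor concentration from
    outdoor infiltration only (no indoor sources):
        C_in = P * lam * C_out / (lam + k)
    c_out: outdoor concentration (ug/m3); lam: air exchange rate (1/h);
    p: penetration efficiency (0-1); k: deposition/removal rate (1/h).
    """
    return p * lam * c_out / (lam + k)

c_out = 35.0  # hypothetical outdoor PM2.5, ug/m3

# Windows open: high air exchange, little filtration of entering air.
print(f"Open windows:  {steady_state_indoor(c_out, lam=2.0, p=1.0, k=0.2):.1f} ug/m3")
# Closed building with mechanical ventilation and particle filtration.
print(f"Closed+filter: {steady_state_indoor(c_out, lam=0.5, p=0.6, k=0.2):.1f} ug/m3")
```

The two scenarios show why the same outdoor air quality can translate into very different indoor exposures depending on ventilation mode and filtration.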

Indoor Air Pollution Can Impact Outdoor Air Quality in a Community

Conversely, whether in urban/suburban, rural, or agricultural settings, and in particular the latter in LDCs, sources of indoor air pollution due to heating and cooking activities throughout the day can negatively impact community, and potentially regional, outdoor air quality. The fuels used for heating and cooking year-round in LDCs, whether indoors in a single living space or in semi-enclosed structures adjacent to sleeping areas, are typically dirty and subject to incomplete combustion (Fig. 12). They include coal with high sulfur content, wood, charcoal made locally from such wood, animal dung/manure, and crop residues. The potential impacts on community outdoor air quality in modern suburban and rural areas of industrialized nations are relatively higher during the heating season, that is, the late fall, winter, and early spring months, due to wood-burning fireplaces and outdoor wood boilers. This is why in North America, and in particular in the United States, as in the State of California, wood-burning ordinances have recently been passed at local (city/town) and regional (county or multicounty) levels to try to reduce levels of wintertime particulate matter outdoors, especially during conditions favoring morning or evening atmospheric inversions.


Fig. 12 The crude/dirty sources of fuel used indoors for cooking and heating in less developed countries, and wintertime heating in industrialized nations (firewood), may also be used adjacent to homes outdoors or in semi-enclosed attached areas in urban Nigeria, as depicted here, producing copious amounts of smoke (particles) and air toxics.

Selected Health Outcomes of Public Concern Due to Community Air Quality

Human exposure to outdoor air pollutants in a community, in the various outdoor and partially enclosed microenvironments, may potentially lead to adverse health outcomes. Adverse health outcomes are mortality, or are described by various measures of morbidity differentiated by their level of severity. In general, in a population of interest, morbidity exceeds mortality. Regarding morbidity, there will be a higher incidence and prevalence of clinical symptoms of a disease (as reported on patient surveys and by interview or self-report) than of either absenteeism from work or school or medication purchases. These morbidity-related measures will, in general, be more prevalent than outpatient clinic visits, which are generally more prevalent than emergency room/department visits; those visits are, in general, more prevalent than hospitalizations for one or more nights.

There may also be levels of severity of symptoms; asthma is a good example to illustrate this point. An individual with physician-diagnosed asthma is usually classified, based on the frequency and severity of symptoms on a daily, weekly, or monthly basis, as having mild persistent, moderate persistent, or severe persistent asthma. The four primary symptoms reported are chest tightness, cough, shortness of breath, and wheezing. The goal of proper, individually tailored asthma management is to assist patients, through clinical (controller and reliever or emergency/rescue inhaler medications), behavioral, and environmental modifications plus help from family members, to minimize the disease's impact on the activities of daily living of the patient and his or her family.

Susceptibility and Vulnerability Factors

There are many personal attributes and other factors that may make individuals in a community more susceptible, that is, at increased risk from air pollution (one or more agents of exposure) and thus from the potential adverse health effects subsequent to exposure. In addition, within a defined susceptible population subgroup, there may be individuals who are relatively more vulnerable due to their personal attributes and other factors. These susceptibility and vulnerability factors have been identified worldwide by copious public health and medical research, including environmental and occupational air pollution epidemiology. Susceptibility and vulnerability factors related to personal attributes (Fig. 4) are as follows:

• Age:
  ◦ For children, by age subgroups: under 1 year (infants), under 5 years, 5–12, 13–17, and 18 or over (young adults);
  ◦ For adults, ages 18–55 and over 55 (and then, by 5–10-year intervals).
• Gender:
  ◦ If a woman, pregnancy status or pre- versus postmenopause.
• Genetics, that is, inherited traits and family history;
• Medication use, over the counter or by prescription(s);
• Metabolism;
• Nutritional habits, that is, daily diet and supplemental vitamins and minerals;
• Race and ethnicity;
• Underlying health status (acute and chronic conditions);
• Use of alcohol, tobacco products, and illicit/illegal drugs.

Other susceptibility and vulnerability factors are as follows:

• Exposure to secondhand or environmental (passive or involuntary) smoke;
• Other health-related behaviors, for example, physical activity of varying intensity;
• Socioeconomic status indicators (education, income, or occupation/industry of job), and demographic variables of the individual and of the mother, father, siblings, and so forth;
• Psychosocial and physical stress indicators;
• Weather, including seasonal differences.

Cardiovascular and Respiratory Health

Outdoor air pollution has been associated, through basic science, clinical, epidemiological, toxicological, and environmental science and engineering research over several decades, with many different measurable indicators of adverse acute and chronic cardiovascular (heart-related) and respiratory (lung-related) health outcomes. Asthma, a chronic disease characterized by airway inflammation and bronchoconstriction after exposure to various airborne triggers and irritants, was briefly described at the beginning of this section. Other examples of adverse respiratory health outcomes associated with air pollution exposure include acute respiratory infections, bronchitis, chronic obstructive pulmonary disease, and lung cancer. Examples of adverse cardiovascular health outcomes of concern associated with air pollution exposure include measures of blood pressure in specific arteries and ventricles, hardening of the arteries (atherosclerosis), heart attacks, heart rate variability, ischemia, and myocardial infarctions.

Adverse Birth Outcomes

Measures of estimated prenatal exposure of pregnant women (either throughout the conception and pregnancy period or during a specific trimester) to outdoor air pollution have also been associated through research with several adverse birth outcomes. These include various birth defects, low and very low birth weights, preterm births, and spontaneous abortions. Specific pollutants suspected as the primary causes include the six criteria air pollutants subject to primary AAQS in the United States, as well as polycyclic aromatic hydrocarbons and the mixture of gaseous and particle-phase pollutants comprising environmental tobacco smoke.

Neurotoxicity Including Learning and Developmental Disabilities

Airborne lead, one of the criteria air pollutants subject to primary AAQS in the United States, has been conclusively associated with adverse acute and chronic cognitive, developmental, and learning outcomes among children and adults, depending on the magnitude of the exposure; internal dose is typically measured by a biomarker, the blood lead level. In recent years, research has advanced to suggest adverse effects at lower levels of exposure to lead. Therefore, even for noncancer outcomes, there may be no threshold or safe level of exposure to lead. Also, in recent years, due to the increasing prevalence of autism spectrum disorders, research has begun on possible gene–environment interactions, including exposure to outdoor air pollutants.

Cancer

Basic science, clinical, epidemiological (environmental and occupational), and toxicology research over several decades has established or suggested associations between specific outdoor air pollutants and specific types of cancer. The most well-known examples include exposure to benzene and multiple forms of leukemia, a blood cancer; exposure to nonionizing electromagnetic fields and brain cancer; and exposure to radon gas and lung cancer.

Community Noise

Outdoor noise, a physical exposure agent, is unwanted sound in a community and is emitted by many air pollution sources. Noise has been associated in epidemiological research with hearing loss and impairment, adverse cognitive and learning effects, and hypertension.

Conclusion

Given that over half of the world's population is characterized as urbanized, and that the rapidly growing populations of LDCs desire the goods and amenities of industrialized nations, there is an increasing prevalence of examples of every type of human source category of community outdoor (ambient) air pollution discussed in this article. Emissions are likely to be both greater in quantity and relatively more toxic due to a lack of, or inadequate, monitoring and enforcement of regulations concerning source emission controls and emission reporting. The adverse implications for human and ecological health, and for environmental quality, can be acute or chronic in nature.


See also: Air Pollution Episodes; Air Quality Legislation; Community Noise; Disease Burden: DALYs May be the Answer, but What is the Question?; Neighbourhood 'Social Infrastructure' for Health; Prioritizing Community Environmental and Health Needs: Novel Approaches and Methods.

Further Reading

Anenberg, S.C., West, J.J., Fiore, A.M., Jaffe, D.A., Prather, M.J., Bergmann, D., Cuvelier, K., Dentener, F.J., Duncan, B.N., Gauss, M., Hess, P., Jonson, J.E., Lupu, A., MacKenzie, I.A., Marmer, E., Park, R.J., Sanderson, M.G., Schultz, M., Shindell, D.T., Szopa, S., Vivanco, M.G., Wild, O., Zeng, G., 2009. Intercontinental impacts of ozone pollution on human mortality. Environmental Science and Technology 43, 6482–6487.
Anenberg, S.C., West, J.J., Yu, H., Chin, M., Schulz, M., Bergmann, D., Bey, I., Bian, H., Diehl, T., Fiore, A., Hess, P., Marmer, E., Montanaro, V., Park, R., Shindell, D., Takemura, T., Dentener, F., 2014. Impacts of intercontinental transport of anthropogenic fine particulate matter on human mortality. Air Quality, Atmosphere and Health 7 (3), 369–379.
Boothe, V.E., Shendell, D.G., 2008. Potential health effects associated with residential proximity to freeways and primary roads: Review of scientific literature 1999–2006. Journal of Environmental Health 70 (8), 33–41.
Committee of the Environmental and Occupational Health Assembly of the American Thoracic Society, 1996. Health effects of outdoor air pollution. American Journal of Respiratory and Critical Care Medicine 153 (1), 3–50.
Dockery, D.W., Pope 3rd, C.A., Xu, X., et al., 1993. An association between air pollution and mortality in six U.S. cities. The New England Journal of Medicine 329 (24), 1753–1759.
Doherty, R.M., Wild, O., Shindell, D.T., Zeng, G., Collins, W.J., MacKenzie, I.A., Fiore, A.M., Stevenson, D.S., Dentener, F.J., Schultz, M.G., Hess, P., Derwent, R.G., Keating, T.J., 2013. Impacts of climate change on surface ozone and intercontinental ozone pollution: A multi-model study. Journal of Geophysical Research 118, 3744–3763. https://doi.org/10.1002/jgrd.50266.
Fanning, E.W., Froines, J.R., Utell, M.J., et al., 2009. Particulate matter (PM) research centers (1999–2005) and the role of interdisciplinary center-based research. Environmental Health Perspectives 117 (2), 167–174.
Finlayson-Pitts, B.J., Pitts Jr., J.N., 2000. Chemistry of the upper and lower atmosphere: Theory, experiments, and applications, 1st edn. Academic Press, San Diego, CA.
Friis, R.H., 2007. Air quality. In: Riegelman, R. (Ed.), Essentials of environmental health, 1st edn. Jones and Bartlett Publishers, Sudbury, MA. ch. 10.
Frumkin, H. (Ed.), 2005. Environmental health: From global to local, 1st edn. Jossey-Bass, San Francisco, CA. chs. 2–4, 13–17, 22.
Gusev, A., MacLeod, M., Bartlett, P., 2012. Intercontinental transport of persistent organic pollutants: A review of key findings and recommendations of the task force on hemispheric transport of air pollutants and directions for future research. Atmospheric Pollution Research 3, 463–465.
Hemispheric Transport of Air Pollution, 2010. UNECE, Geneva. An assessment by the task force of the UNECE-LRTAP convention for the four priority pollutants of global concern: ozone and its precursors, PM, POPs, and mercury. The four full reports can be found at: http://www.htap.org.
Kim, J.J., American Academy of Pediatrics Committee on Environmental Health, 2004. Ambient air pollution: Health hazards to children. Pediatrics 114 (6), 1699–1707.
Lippman, M. (Ed.), 1992. Environmental toxicants: Human exposures and their health effects, 1st edn. Van Nostrand Reinhold, New York. chs. 1–6, 8, 10, 12, 14, 16–18, 20–22.
Lippman, M., Cohen, B.S., Schlesinger, R.B. (Eds.), 2003. Environmental health science: Recognition, evaluation, and control of chemical and physical health hazards, 1st edn. Oxford University Press, New York. chs. 2 and 12.
Maxwell, N.I., 2009. Understanding environmental health: How we live in the world, 1st edn. Jones and Bartlett, Sudbury, MA. chs. 2, 3 (3.3 only) and 4.
Moore, G.S. (Ed.), 2002. Living with the earth: Concepts in environmental health science, 2nd edn. CRC Press, Boca Raton, FL. ch. 10.
Nadakavukaren, A., 2006. Our global environment: A health perspective, 6th edn. Waveland Press, Inc., Long Grove, IL. chs. 10–12.
Rao, S., Mathur, R., Hogrefe, C., Keating, T., Dentener, F., Galmarini, S., 2012. Path forward for the air quality model evaluation international initiative (AQMEII). EM Magazine (Air and Waste Management Association) 7, 38–41.
Schwela, D., 2000. Air pollution and health in urban areas. Reviews on Environmental Health 15 (1–2), 13–42.
Seinfeld, J.H., Pandis, S.N., 1998. Atmospheric chemistry and physics: From air pollution to climate change. Wiley, New York.

Relevant Websites

http://216.185.112.5/presenter.jhtml?identifier=4419 – American Heart Association. Air Pollution, Heart Disease and Stroke.
http://www.lungusa.org/healthy-air/outdoor/ – American Lung Association. Outdoor Air Quality.
http://www.arb.ca.gov/homepage.htm – California Air Resources Board.
http://www.ktl.fi/expolis/ – EXPOLIS Project, On-Line Library (six European urban areas).
http://www.aqmd.gov/Default.htm – South Coast Air Quality Management District, California.
http://www.atsdr.cdc.gov/general/theair.html – United States Centers for Disease Control and Prevention, Agency for Toxic Substances and Disease Registry (USCDC/ATSDR). Air and Air Pollution.
http://www.epa.gov/ebtpages/airairquality.html – United States Environmental Protection Agency (USEPA). Air Quality (Outdoor/Ambient).
http://www.cdc.gov/nceh/airpollution/default.htm – USCDC, National Center for Environmental Health (NCEH), Environmental Hazards and Health Effects Program. Air Pollution and Respiratory Health.
http://www.epa.gov/iaq/ – USEPA. Indoor Air Quality.
http://www.who.int/topics/air_pollution/en/ – World Health Organization. Air Pollution.

Complex Air Pollution in China Lijian Han and Weiqi Zhou, State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing, China © 2019 Elsevier B.V. All rights reserved.

Chinese Understanding of Air Pollution: From Ancient Times to the Present

China has a long history in both Asia and the world, and it is home to one of the oldest civilizations in human history. Chinese characters illustrate details of the world through pictograms, compound ideograms, simple ideograms, and so on, and they offer a direct view of how air pollution was understood when the words were created in ancient times and passed down to the present. One of the most typical examples is the character for "haze" in oracle bone script and in its present form (Fig. 1), which indicates that ancient China understood haze as a combination of wind, snow, and soil (or dust). Further understanding of haze can be obtained from historical records. The History of the Yuan Dynasty recorded that strong wind and heavy haze occurred in the capital city, and that the emperor prayed to the gods to solve the problem. Another typical record, in the poem "Praying for Snow in Ganzhou," illustrated that heavy dust or haze was caused by dry ground surfaces and strong wind. Such records shaped the initial understanding of the Chinese people. Now, with the aid of modern technology, we know that it is fine particulate matter (PM2.5) that makes haze occur. PM2.5 comprises pollutant particles with an aerodynamic diameter < 2.5 µm, about 1/30th the diameter of a human hair. PM2.5 can have a significant impact on public health, as evidenced by the significant association between exposure to PM2.5 and premature death from heart or lung disease. The sources of PM2.5 include both natural events, for example, Asian dust, and emissions from intensive human activities under stable meteorological conditions, particularly in urban areas.

China's Development and Air Pollution

The development of modern China, the People's Republic of China, started in 1949. In the early stage from 1949 to 1979, before China established the famous "Reform and Opening-up" (ROU) policy, China paid strong attention to industrial development but limited population movement, and almost no attention was paid to environmental protection. Fortunately, human activities were not yet intensive enough to significantly influence the environment, including air quality. From the end of the 1970s to the end of the 1990s, the ROU policy started to take effect, but the country was still developing at a moderate pace. Although this development brought some negative impacts on the environment, they remained local in scale and attracted very limited attention from the authorities and the public. After 2000, China started its remarkable economic development, which introduced intensive human activities that emitted massive and complex air pollutants without strict pollution-control actions, especially at the level of local practices. Thus, heavy air pollution events have occurred frequently in China, especially in areas with high densities of population, industries, and cities, and in their surrounding regions as a whole.

Overview of China's Air Pollution in the World

The global average PM2.5 concentration increased from 14.9 µg/m³ in 2000 to 17.4 µg/m³ in 2015, while China's PM2.5 increased from 22.5 µg/m³ in 2000 to 31.9 µg/m³ in 2015. Although China's PM2.5 is not the highest in the world (Fig. 2), the polluted area is mainly located in the densely populated areas of East and Central China; a similar pattern can also be observed in India. Analysis of large cities at the global scale gives a clue to the differences between China and other countries. In general, large cities in developed countries had better air quality than those in developing countries. In developed countries, no large city had a PM2.5 concentration higher than interim target-1 of the World Health Organization (WHO) (annual mean PM2.5 concentration not higher than 35 µg/m³; IT-1), but in developing countries, more than 30% of large cities had PM2.5 concentrations exceeding the IT-1 of the WHO. Large cities in the United States and Europe represent the developed world's air quality, while large cities in China and India represent the developing world's. In particular, large cities in China ranked among the most severely polluted of the world's large cities: only around 20% of large cities in China had annual mean PM2.5 concentrations below the IT-1 of the WHO, and no large city in China had a PM2.5 concentration within the WHO Air Quality Guideline (annual mean PM2.5 concentration not higher than 10 µg/m³; AQG) (Fig. 3).

Fig. 1 Chinese characters for haze in oracle bone script (left) and present form (right).

Fig. 2 Global PM2.5 concentration in 2015.

Fig. 3 PM2.5 concentrations of large cities with both population size larger than 0.75 million and urban area more than 100 km², 1998–2012. From Fig. 2 in Han, L., Zhou, W., Pickett, S. T., Li, W., Li, L. (2016a). An optimum city size? The scaling relationship for urban population and fine particulate (PM2.5) concentration. Environmental Pollution 208, 96–101.

Current Status, Changes, and Potential Impact of China's PM2.5 Pollution

The initial understanding of China's air pollution came with the aid of satellite remotely sensed PM2.5 concentrations after 2000. PM2.5 concentrations were higher in East and Central China, as well as in the desert areas of Xinjiang province, in both 2000 and 2014. Areas with PM2.5 concentrations > 70 µg/m³ were found only rarely in 2000, in Hebei and Xinjiang provinces, but were found over large areas of Hebei, Shandong, and Henan provinces and the desert areas of Xinjiang in 2014 (Fig. 4).

Fig. 4 PM2.5 concentrations in 2000 (A) and 2014 (B), and the changes (C) between them. From Fig. 3 in Han, L., Zhou, W., Li, W., and Qian, Y. (2018c). Urbanization strategy and environmental changes: An insight with relationship between population change and fine particulate pollution. Science of the Total Environment 642, 789–799.

The areas with stronger PM2.5 concentration increases were mainly found in two belts: the Beijing–Hunan belt, including Beijing, Tianjin, Hebei, Shandong, Henan, Anhui, Jiangsu, Hubei, Jiangxi, and Hunan provinces; and the Xinjiang–Gansu belt, including Xinjiang, Qinghai, and Gansu provinces. In 2000, 26% of China's territory was found to have PM2.5 concentrations > 35 µg/m³, increasing to 31% in 2014. In 2014, 67%, 55%, 30%, and 23% of the areas of Central, East, Northeast, and West China, respectively, had PM2.5 concentrations > 35 µg/m³. PM2.5 pollution in urban areas was heavy, as evidenced by PM2.5 concentrations > 35 µg/m³ across urban areas of different population densities. The average urban PM2.5 concentration was higher, and increased more rapidly, in high-density populated urban areas (LG), while it was lower and increased more slowly in low-density populated urban areas (MW) from 2000 to 2014. Similar phenomena were also observed in East, Northeast, Central, and West China. Moreover, cities in East China had the highest PM2.5 concentrations in both 2000 and 2014, while West China had the lowest values.

The increase in population and the spread of PM2.5 pollution led to an increase in population exposure to pollution, particularly in urban areas. A population of 723 million was exposed to PM2.5 pollution in 2014, an increase of 105 million from 2000. The urban population exposure was 460 million in 2014, an increase of 103 million from 2000, while the rural population exposure was 263 million in 2014, an increase of only 2 million over 2000. Urban population exposure mainly occurred in LG areas, where 271 million urban residents were exposed to PM2.5 pollution, an increase of 52 million from 2000. Moreover, 102 and 87 million urban residents in moderate-density populated urban areas (MG) and MW areas, respectively, were exposed to PM2.5 pollution, with increases of 24 and 27 million from 2000. The population exposure in LG areas was mainly found in East China, where a population of 132 million was exposed to PM2.5 pollution in 2014, an increase of 37 million from 2000. In addition, 69, 51, and 19 million LG urban residents in Central, West, and Northeast China were exposed to PM2.5 pollution in 2014, with increases of 5, 8, and 1 million from 2000, respectively. The population exposure in MG areas was mainly found in Central China, where 43 million residents were exposed to PM2.5 pollution in 2014, an increase of 13 million from 2000. Moreover, 39, 17, and 4 million MG urban residents in East, West, and Northeast China were exposed to PM2.5 pollution in 2014, with increases of 7, 5, and 0 million from 2000, respectively. The population exposure in MW areas was mainly found in East China, where 36 million residents were exposed to PM2.5 pollution in 2014, an increase of 16 million from 2000. In addition, 31, 14, and 6 million MW urban residents were exposed to PM2.5 pollution in Central, West, and Northeast China in 2014, with increases of 8, 2, and 2 million from 2000, respectively.

Major cities contributed a large portion of the urban population exposure in China. Twenty-seven percent of the urban population exposure occurred in the major cities in 2014, while the portion was 23% in 2000.


Thirty-six percent of the LG urban population exposure was found in the major cities in 2014, while the portion was 29% in 2000. The increase of urban population exposure in major cities accounted for 40% of the total increase of urban population exposure in China from 2000 to 2014; in particular, the increase in LG urban population exposure in major cities contributed 65% of the total increase of LG urban population exposure in China from 2000 to 2014.

Among the total population exposed to PM2.5, susceptible populations were also exposed to heavy PM2.5 pollution. In 2010, only 1% of the susceptible population lived within the AQG of the WHO, while 7%, 9%, and 14% lived within interim target-3 (annual mean PM2.5 concentration not higher than 15 µg/m³; IT-3), interim target-2 (annual mean PM2.5 concentration not higher than 25 µg/m³; IT-2), and IT-1 of the WHO, respectively. Meanwhile, 69% of the susceptible population was exposed to heavy PM2.5 pollution: 20%, 18%, 16%, 12%, and 3% were exposed to 35–50 µg/m³, 50–65 µg/m³, 65–80 µg/m³, 80–95 µg/m³, and more than 95 µg/m³, respectively.
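Exposure estimates of this kind are typically derived by overlaying gridded population counts on gridded PM2.5 concentrations and summing the population in cells above a threshold. The following minimal sketch uses a toy 3×3 grid with hypothetical values; real analyses use national raster datasets, often at roughly 1 km resolution.

```python
import numpy as np

# Toy gridded data: population (people per cell) and annual mean
# PM2.5 (ug/m3). All values are hypothetical, for illustration only.
population = np.array([[120_000,  80_000,  5_000],
                       [300_000,  60_000,  2_000],
                       [ 90_000,  10_000,  1_000]])
pm25 = np.array([[42.0, 38.0, 12.0],
                 [55.0, 30.0,  9.0],
                 [36.0, 18.0,  8.0]])

IT1 = 35.0  # WHO interim target-1, annual mean, ug/m3

exposed = population[pm25 > IT1].sum()                   # people above IT-1
pw_mean = (population * pm25).sum() / population.sum()   # population-weighted mean

print(f"Population exposed above IT-1: {exposed:,}")
print(f"Population-weighted PM2.5:     {pw_mean:.1f} ug/m3")
```

The population-weighted mean is a useful companion statistic because it captures the tendency, described above, for pollution and people to be concentrated in the same places.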

More Than PM2.5: Multicontaminant Air Pollution in China

There are, generally, three ways to quantify air contaminant concentrations: modeling with complex inputs and computer systems, mapping with satellite remote sensing imagery, and monitoring with ground-based networks. Modeling provides the most rapid way to forecast air quality dynamics but relies on very detailed data inputs and complex computer systems. Satellite remote sensing provides one of the most convenient ways of mapping the spatial pattern of air contaminants (e.g., PM2.5, NOx, SO2); however, its uncertainty still needs further evaluation. Compared to modeling and remote sensing, ground monitoring stations, at human height and in densely populated areas, provide the most accurate results while also permitting high temporal resolution. Many high-income countries have established air quality monitoring networks, and serious air quality degradation in some middle-income countries has stimulated the creation of monitoring networks in major cities. Such networks provide an efficient way of understanding urban air contaminant composition and of evaluating its potential impact on public health. China has operated an urban air quality monitoring network since 2012 under the newly upgraded Ambient Air Quality Standards (GB3095-2012). That network provides hourly records of PM2.5, coarse particulate matter (PM10), NO2, SO2, and O3. With the aid of that dataset, multicontaminant air pollution and its potential impact on the population of China can be understood.

At the annual scale for 2014, combinations of three pollutants (PM2.5, PM10, and/or NO2) were examined. A total of 56 cities (36% of the analyzed cities), with 142 million urban residents (51% of the total analyzed urban population), were exposed to three-contaminant mixtures consisting of PM2.5, PM10, and NO2. For two-contaminant mixtures, all cities were exposed to PMx pollution, and 56 cities with 142 million urban residents were exposed to PM2.5 and NO2, or PM10 and NO2, pollution. Cities with annual multicontaminant exposures to (1) PM2.5, PM10, and NO2, (2) PM2.5 and NO2, and (3) PM10 and NO2 were mainly found in East China, specifically in Hebei, Henan, Shandong, Jiangsu, and Zhejiang Provinces and in the megacities of Beijing, Guangzhou, Tianjin, and Shenzhen.

At the diurnal (daily) scale throughout 2014, combinations of up to five pollutants were examined; a minimal day-counting sketch follows the list.

1. Five-contaminant pollution: Only two cities, Dongying and Linyi in Shandong province, were exposed to five-contaminant mixtures on 3%–4% of days in the year. Weifang and Zibo in Shandong province were exposed to five-contaminant pollution on 2%–3% of days. Jining in Shandong province, Wuhan in Hubei province, and Jiayuguan and Jinchang in Gansu province were exposed to five-contaminant mixtures on 1%–2% of days. Other cities had five-contaminant pollution on < 1% of days, or not at all.
2. Four-contaminant pollution: More attention should be given to four-contaminant mixtures of PM2.5, PM10, SO2, and O3, or of PM2.5, PM10, NO2, and SO2, because cities were exposed to these combinations on up to 30% or 20% of days in the year, respectively. Cities with high frequencies of diurnal four-contaminant mixtures of PM2.5, PM10, SO2, and O3 were observed in Shandong province, while cities with high frequencies of diurnal four-contaminant mixtures of PM2.5, PM10, NO2, and SO2 were mainly observed in Shandong and Hebei Provinces. Other four-contaminant mixtures were rare: major Chinese cities were observed to have diurnal four-contaminant mixtures consisting of PM2.5, PM10, NO2, and O3, of PM2.5, O3, NO2, and SO2, or of PM10, O3, NO2, and SO2 on < 5% of days in the year.
3. Three-contaminant pollution: Strong attention should be given to three-contaminant mixtures consisting of PM2.5, PM10, and SO2, because 110 cities with 173 million residents were exposed to this pollution mixture on more than 40% of days annually. Those cities were mainly found in East and Central China, with particularly heavy occurrences in Hebei, Henan, Shandong, and Shanxi Provinces. Attention should also be given to three-contaminant mixtures (1) of PM2.5, PM10, and O3, (2) of PM2.5, O3, and SO2, (3) of PM2.5, PM10, and NO2, (4) of PM2.5, SO2, and NO2, (5) of PM10, SO2, and NO2, or (6) of PM10, O3, and SO2, because cities were exposed to such combinations on up to 30% or 20% of days in the year. However, major Chinese cities had three-contaminant mixtures of PM2.5, O3, and NO2, of PM10, SO2, and NO2, or of NO2, O3, and SO2 on < 5% of days annually.
4. Two-contaminant pollution: Strong attention should be given to two-contaminant mixtures of PM2.5 and PM10, PM2.5 and SO2, or PM10 and SO2, because 145 cities with 268 million urban residents, 116 cities with 184 million urban residents, and 111 cities with 175 million urban residents, respectively, were exposed to these pollution combinations on more than 40% of days in the year. Those cities were mainly observed in provinces in eastern China: Hebei, Henan, Shandong, and Shanxi.
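Counting multicontaminant days of this kind reduces to flagging, for each city-day, which pollutants exceed their daily limit values and tallying the days on which a given combination co-occurs. A minimal pandas sketch follows; the daily limit values shown are the GB3095-2012 Grade II 24-h limits to the best of our reading (PM2.5: 75, PM10: 150, SO2: 150, NO2: 80 µg/m³; O3 omitted for brevity), and the concentration data are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic daily-mean concentrations (ug/m3) for one city, one year.
days = pd.date_range("2014-01-01", periods=365, freq="D")
df = pd.DataFrame({
    "PM2.5": rng.gamma(3.0, 30.0, days.size),
    "PM10":  rng.gamma(3.0, 55.0, days.size),
    "SO2":   rng.gamma(2.0, 40.0, days.size),
    "NO2":   rng.gamma(3.0, 18.0, days.size),
}, index=days)

# Daily limit values (ug/m3), GB3095-2012 Grade II, illustration only.
limits = {"PM2.5": 75.0, "PM10": 150.0, "SO2": 150.0, "NO2": 80.0}

exceed = df.gt(pd.Series(limits))      # True where a pollutant exceeds its limit
n_pollutants = exceed.sum(axis=1)      # number of contaminants exceeding per day

three_plus = (n_pollutants >= 3).sum()
pm_pm_so2 = (exceed["PM2.5"] & exceed["PM10"] & exceed["SO2"]).sum()

print(f"Days with >= 3 contaminants over limits: {three_plus} ({three_plus/365:.0%})")
print(f"Days with the PM2.5 + PM10 + SO2 mixture: {pm_pm_so2}")
```

Applying the same tally across all monitored cities, and then attaching each city's population, yields exactly the kind of "X cities with Y million residents exposed on more than 40% of days" statements reported above.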


Trade-off Between Urbanization and Air Pollution

Nine types of relationships between population and PM2.5 concentration were obtained (Fig. 5A) to better understand the trade-off between urbanization and air pollution. A total of 42% of China's territory, mainly in the rural areas of East and Central China, showed population decrease with PM2.5 concentration increase. Nineteen percent of China's territory, in the remote areas of West China, showed no change in population with an increase in PM2.5 concentration. Eleven percent of China's territory, mainly the urban areas of East and Central China, showed population increase with PM2.5 concentration increase. Another 11% of China's territory, in the north of the Loess Plateau, Mt. Dahingganling, Fujian, Guangxi, and Hainan, showed population decrease with PM2.5 concentration decrease.

Fig. 5 Relationships between changes of population and PM2.5 concentration at the pixel level (A), LG (B), MG (C), MW (D), and rural areas (E). From Fig. 6 in Han, L., Zhou, W., Li, W., and Qian, Y. (2018c). Urbanization strategy and environmental changes: An insight with relationship between population change and fine particulate pollution. Science of the Total Environment 642, 789–799.


An increase in both population and PM2.5 concentration was the major type found in urban areas. A total of 128 prefectures were found to have increases in both LG area and PM2.5 concentration, in West, Central, and Northeast China, as well as in the megacities (e.g., Beijing, Shanghai, Guangzhou) of East China (Fig. 5B). A total of 132 prefectures were found to have increases in both MG area and PM2.5 concentration (Fig. 5C), and a total of 150 prefectures were found to have increases in both MW area and PM2.5 concentration (Fig. 5D). A total of 76 prefectures were found to have increasing PM2.5 concentration but decreasing LG area, particularly in East China's Hebei, Shandong, Jiangsu, Henan, and Anhui provinces. A total of 54 prefectures were found to have increasing PM2.5 concentration but decreasing MG area, in parts of Shandong, Anhui, and Jiangxi provinces. Moreover, a total of 42 prefectures were found to have increasing PM2.5 concentration but decreasing MW area, particularly in the south of China. Very limited numbers of prefectures were found to have decreases in both LG/MG/MW area and PM2.5 concentration. In addition, some prefectures on the Loess Plateau were found to have LG/MG/MW increases but PM2.5 decreases.

Population decrease with PM2.5 concentration increase was the major type found in rural areas (Fig. 5E). A total of 136 prefectures were found to have rural areas showing population decrease along with an increase in PM2.5 concentration. Only 30 prefectures were found to have rural areas with increases in both population and PM2.5 concentration, in south Anhui province and south Jiangxi province.

An inverse "U-shape" relationship was obtained between urban population size and the frequency of diurnal multicontaminant air pollution. Cities with population sizes larger than 10 million generally had a low frequency of diurnal multicontaminant air pollution. The diurnal frequency of five-contaminant air pollution in a year showed no significant difference among cities with population sizes of < 10 million, but was significantly higher than that in cities with population sizes larger than 10 million. The diurnal frequency of four-contaminant air pollution in a year showed no significant difference among cities with population sizes of 0.5–10 million, but was significantly higher than that in cities with populations < 0.5 million or larger than 10 million. The diurnal frequency of three-contaminant air pollution in a year was higher in cities with population sizes of 0.5–10 million, but lower in cities with populations < 0.5 million or larger than 10 million. The diurnal frequency of two-contaminant air pollution in a year showed no significant difference among cities with populations < 10 million, and was higher than that in cities with populations larger than 10 million. A significant inverse "U-shape" relationship was obtained between urban population size and the frequency of diurnal three-contaminant air pollution in a year, while relatively weak inverse "U-shape" relationships were observed between urban population size and the frequencies of diurnal three- or two-contaminant air pollution.

Actions on Improving Air Quality in China

Asian dust has been considered the major natural source of PM2.5 across northern China; however, dust-event frequency has decreased significantly over the past decades and is mainly confined to drylands. Anthropogenic activity, by contrast, has been intense: during 2000–2010, around 70% of the country's energy came from coal, and the number of vehicles increased rapidly, contributing a large share of PM2.5 emissions in cities. At the same time, China's rapid urbanization paid insufficient attention to urban green space, which can remove a significant part of PM2.5 through leaf absorption and facilitated deposition but was not well developed. Besides the increase in emissions and the decrease in mitigation capacity, the lack of effective environmental protection actions is another reason for environmental degeneration, leading to the eruption of severe environmental problems in recent years.

China has built up a comprehensive environmental legal system since the establishment of its first environmental protection law in 1979. However, not all environmental laws have been successfully enforced by China's local governments, which, as the major executors of central government policies, also need to satisfy the increasing demand for local economic development. "Keep only one eye open," a well-known expression for paying less attention to environmental pollution, vividly illustrates the lack of local environmental protection actions. After several decades of such practice, China's environmental problems have become increasingly obvious. The central government has recently recognized the severity of environmental degeneration and therefore released the "Ecological Civilization" concept, which calls for restructuring the economy to achieve harmony between humans and nature and between production and consumption. Furthermore, the latest version of China's environmental law, after a long period of revision, came into force on New Year's Day in 2015. However, problems remain with effective local implementation and public scrutiny, which can either make or break China's "Ecological Civilization" environmental dream. In particular, on the way to accomplishing the "New-type Urbanization Plan," confronting the accumulated environmental problems and putting forward effective environmental protection actions is essential to urban residents' well-being, and would suggest a better strategy for other developing countries on a path toward rapid urbanization.

See also: Air Pollution Episodes; Air Quality Legislation.


Further Reading

Bai, X., Shi, P., Liu, Y., 2014. Realizing China's urban dream. Nature 509, 158–160.
Boys, B., Martin, R., van Donkelaar, A., et al., 2014. Fifteen-year global time series of satellite-derived fine particulate matter. Environmental Science & Technology 48, 11109–11118.
Chan, C., Yao, X., 2008. Air pollution in mega cities in China. Atmospheric Environment 42, 1–42.
Han, L., 2018. Relationship between urbanization and urban air quality: An insight on fine particulate dynamics in China. Progress in Geography 37, 1011–1021 (in Chinese with English abstract).
Han, L., Zhou, W., Li, W., Li, L., 2014. Impact of urbanization level on urban air quality: A case of fine particles (PM2.5) in Chinese cities. Environmental Pollution 194, 163–170.
Han, L., Zhou, W., Li, W., 2015a. Increasing impact of urban fine particles (PM2.5) on areas surrounding Chinese cities. Scientific Reports 5, 12467.
Han, L., Zhou, W., Li, W., 2015b. City as a major source area of fine particulate (PM2.5) in China. Environmental Pollution 206, 183–187.
Han, L., Zhou, W., Pickett, S.T., Li, W., Li, L., 2016a. An optimum city size? The scaling relationship for urban population and fine particulate (PM2.5) concentration. Environmental Pollution 208, 96–108.
Han, L., Zhou, W., Li, W., 2016b. Fine particulate (PM2.5) dynamics during rapid urbanization in Beijing, 1973–2013. Scientific Reports 6, 23604.
Han, L., Zhou, W., Pickett, S.T., Li, W., Qian, Y., 2018b. Multicontaminant air pollution in Chinese cities. Bulletin of the World Health Organization 96, 233–242E.
Han, L., Zhou, W., Li, W., Qian, Y., 2018c. Urbanization strategy and environmental changes: An insight with relationship between population change and fine particulate pollution. Science of the Total Environment 642, 789–799.
Peng, J., Chen, S., Lu, H., et al., 2016. Spatiotemporal patterns of remotely sensed PM2.5 concentration in China from 1999 to 2011. Remote Sensing of Environment 174, 109–121.

Relevant Websites

http://www.cnemc.cn (China National Environmental Monitoring Centre)
http://106.37.208.233:20035/ (National urban air quality monitoring online platform)
https://ourworldindata.org/ (Our World in Data)

Connecting Environmental Stress to Cancer Cell Biology Through the Neuroendocrine Response

A Melhem and S Conzen, University of Chicago, Chicago, IL, United States

© 2019 Elsevier B.V. All rights reserved.

Abbreviations

ACTH Adrenocorticotropic hormone
ADRB1/2 Human beta-adrenergic receptor 1/2
ANS Autonomic nervous system
CRH Corticotropin-releasing hormone
CSF-1 Colony stimulating factor 1
DHEA Dehydroepiandrosterone
GC Glucocorticoid
GR Glucocorticoid receptor
GREs Glucocorticoid responsive elements
HPA axis Hypothalamic-pituitary-adrenal axis
IgA Immunoglobulin A
IL Interleukin
MHPG 3-Methoxy-4-hydroxyphenylglycol
MKP-1 MAP kinase phosphatase-1
MMP Matrix metalloproteinase
NK cells Natural killer cells
SGK-1 Serum and glucocorticoid regulated kinase-1
SGRMs Selective glucocorticoid receptor modulators
siRNA Small interfering RNA
TNF-α Tumor necrosis factor-alpha
VEGF Vascular endothelial growth factor

Introduction

An individual's exposure and response to social stressors have long been implicated in chronic disease risk. The potential mechanisms underlying the link between stressors and cancer susceptibility have been the subject of long-standing research by both social and basic scientists. Despite their continued efforts, the exact impact that psychological stressors have on cancer susceptibility and progression is still poorly understood. However, knowing the mechanisms connecting social stress and cancer biology could lead to major changes in public health initiatives and cancer treatment. Epidemiological evidence taken together with biological models of cancer development has shed some light on how chronic stress may contribute to human cancer. Following exposure to stress, both the endocrine system and the autonomic nervous system (ANS) are activated. This neuroendocrine response can mediate signaling pathways and gene expression changes at the cellular level that have the potential to alter tumor biology (Fig. 1). For example, in preclinical models of both breast and ovarian cancer, there are growing data linking environmental stressors with an ensuing neuroendocrine response and ultimately molecular changes in tumor tissues. These changes include inhibition of tumor cell apoptosis and increases in cell proliferation, tumor invasion, and angiogenesis.

Defining Stress

Stress can be broadly defined as an individual's perception of a noxious stressor and the subsequent activation of the central and peripheral nervous systems to generate a defensive/adaptive response. Stress can be perceived as positive if transient and successfully overcome.

Change History: October 2018. The section editor Oladele A. Ogunseitan updated the references. This is an update of A. Melhem, S. Conzen, Connecting Environmental Stress to Cancer Cell Biology Through the Neuroendocrine Response, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 822–827.


Fig. 1 Stress elicits a neuroendocrine response via the hypothalamic-pituitary-adrenal (HPA) axis and the autonomic nervous system (ANS), resulting in increased systemic glucocorticoids and catecholamines that may enhance cancer growth. Note: CRH, corticotropin-releasing hormone; ACTH, adrenocorticotropic hormone.

In this situation, the stressor leads to adaptation and increased self-esteem. However, toxic stress involves continued chaotic stressors where the individual perceives a lack of control over his or her circumstances. Initially, the response to stressors involves adaptive neuroendocrine changes that enhance certain functions (e.g., alertness and cardiac output); eventually, however, toxic stress impairs many of these adaptive processes (e.g., through decreased cognition and immune dysfunction), leading to a potentially pathological state. Stressors can be highly variable in their type and frequency. For instance, day-to-day challenges (e.g., work commitments and traffic) can be unrelenting and lead to chronic stress, whereas more acute and unexpected stressors (e.g., loss of a family member) may be handled more successfully. Adding to the complexity is the fact that exposure to psychological and physical stressors often varies considerably throughout a single lifetime, and individuals differ widely in the way they cope with these stressors. Coping mechanisms, in turn, depend on an individual's genetic makeup, prior experiences, and the presence of buffers such as an adequate social support system. Despite all these variables, chronic exposure to stressors or a maladaptive stress response is clearly associated with a higher relative risk of chronic disease and even death. The imbalance that sometimes results between environmental stress and an individual's ability to cope has been coined "allostatic overload," differentiating this situation from the protective and healthy response to stress that is termed "allostasis" (Fig. 2).

Epidemiological Data

Chronically stressed individuals may share phenotypic similarities with patients with the obesity-associated endocrine disorder known as the "metabolic syndrome." These similarities include insulin resistance, hypertension, visceral obesity, and dyslipidemia. Interestingly, the metabolic syndrome is also associated with an increased risk of developing certain cancers. Despite many similarities between the chronically stressed phenotype and the cancer-associated metabolic syndrome, epidemiological data do not consistently demonstrate an increased risk of cancer in patients exposed to chronic stress. One reason for this discrepancy is likely the wide variety of methods that have been used to measure stress. To date, there are very few well-validated biomarkers of chronic stress, which has led to a significant reliance on subjective data. A large prospective study of 6848 adults living in Alameda County, California, in the 1960s attempted to measure stress objectively by assessing social isolation, adjusting for confounding variables such as age, smoking, health at baseline, alcohol consumption, and household income. Follow-up of this study revealed that social isolation, particularly in women, increased the risk of mortality from cancer. Social connections in men did not show such an association, suggesting that social isolation may have different physiological effects depending on gender. A role for stress in women's health has been studied in breast cancer. This interest dates back to 200 CE, when Galen noted that "melancholic" women were much more susceptible to cancer than "sanguine" women. More recently, a study analyzing life events and incidence of cancer in a cohort of 10,808 Finnish women revealed a twofold increased risk of breast cancer following major life stressors such as loss of a spouse. In contrast, a Danish study revealed that self-reported stress in women was inversely correlated with the development of breast cancer at long-term follow-up. A meta-analysis of studies done between 1966 and 2002 showed a modest association between death of a spouse and breast cancer risk. Most other studies have found a weak association between


Fig. 2 "Allostasis" is the protective and healthy process through which the body responds to stressors, leading to homeostasis. "Allostatic overload" is caused by repeated exposure to stressors or a maladaptive stress response, leading to disease.

psychosocial factors and breast cancer incidence. Overall, the evidence is stronger for tumor progression than for cancer incidence, and the strongest predictors are highly stressful life events. However, most studies have been limited by a lack of methodological rigor, small sample sizes, and a paucity of prospective data. Also confounding are the behavioral effects of chronic stress, such as smoking, alcohol abuse, obesity, and lack of exercise, all of which are linked to cancer susceptibility.

The Physiological Response to Stress

Despite conflicting epidemiologic data, fairly consistent biologic data exist on how stress may lead to cancer susceptibility. Following exposure to acute or chronic stress, both the ANS and the hypothalamic-pituitary-adrenal (HPA) axis are activated (Fig. 1). This leads to an increase in their respective mediators, catecholamines (norepinephrine/epinephrine) and glucocorticoids (cortisol). Catecholamines are released from the adrenal medulla and the sympathetic neurons, producing the "fight-or-flight" stress response. They act by activating alpha- and beta-adrenergic receptors, enabling an adaptive response to acute stress and ultimately the organism's survival. Catecholamine levels can be elevated in both acutely and chronically stressed individuals. Cortisol, the active stress response steroid hormone in humans, is produced by the adrenal cortex in response to stimulation from the HPA axis. Cortisol acts on tissues by binding to and activating the glucocorticoid receptor (GR). The activated GR can in turn repress


or induce genes, either through direct binding of the receptor to DNA at glucocorticoid responsive elements (GREs) or through indirect binding via interactions with other transcription factors. Other potential, albeit less established, GR-mediated effects occur through its nongenomic actions, which lead to interactions with signaling pathways and changes in cell physiology independently of transcription. Ultimately, an acute increase in blood cortisol levels mediates a broad stress response through the ubiquitously expressed GR. The final downstream effects of the activated GR are mainly reflected in changes in cellular metabolism, immune function, and survival. In addition to responding to stress, cortisol levels show normal daily variations following a circadian rhythm and are typically high in the morning and low in the evening. Chronically stressed individuals have elevated basal levels of cortisol, an altered circadian cortisol rhythm, and heightened cortisol secretion in response to acute stressors. Following prolonged stress/stimulation, the HPA axis can become "exhausted," resulting in a net decrease in cortisol output. This alteration in the levels of both catecholamines and glucocorticoids may disrupt multiple physiologic processes involved in tumorigenesis. In human subjects, disrupted cortisol release can lead to altered immunity and changes in fat distribution, both of which have been linked to cancer risk. Changes in the immune system such as depressed natural killer (NK) cell cytotoxicity, reduced lymphocyte proliferation, and altered T-cell responses to antigen presentation ultimately lead to decreased immune surveillance. This in turn can promote the progression of virally mediated cancers such as cervical cancer, lymphomas, and a subset of skin cancers. In addition to changes in the immune system, visceral adiposity occurs following prolonged exposure to elevated glucocorticoids. Visceral adiposity is a known risk factor for the development of malignancies such as breast, endometrial, colon, pancreatic, and gastric cancers. This may be mediated in part by the deleterious physiological consequences that excess visceral fat has on insulin sensitivity, estrogen levels, and proinflammatory cytokines such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α). A role for adipocyte-secreted hormones in cancer progression has also been proposed. Finally, chronic stress might lead to cancer through long-term alterations in the levels of other hormones such as dopamine, prolactin, estrogens, and oxytocin, all of which play a role in cancer biology.

Preclinical Models of Stress and Cancer

Animal models have provided evidence that repeated exposure to psychological stressors can lead to the development of metabolic disturbances (e.g., increased cortisol and insulin resistance) and ultimately affect the natural history of a genetically predisposed disease (including cancer). For example, Sprague–Dawley rats have a genetic predisposition to mammary gland tumor formation. McClintock et al. showed that chronic social isolation of female Sprague–Dawley rats is associated with an increase in the corticosterone response to acute stressors as well as with an earlier onset of palpable spontaneous mammary gland tumors. Glucocorticoid administration may also enhance the growth and metastatic potential of tumors. This has been demonstrated by xenograft experiments in which human tumors are transplanted into nude mice. Xenografted mice treated with dexamethasone (a synthetic glucocorticoid) or vehicle were followed for primary tumor growth as well as for the number of metastatic sites. When compared to control mice, dexamethasone-treated mice showed an increase in both primary tumor size and the number of tumor metastases. Similarly, animal models have provided support for a role of catecholamines in cancer. For example, using a human ovarian carcinoma xenograft model in nude mice, Sood et al. found that either daily immobilization or social isolation enhanced tumor growth and increased angiogenesis. This effect could be blocked by small interfering RNA (siRNA) targeting the gene encoding the human beta-adrenergic receptor 2 (ADRB2). Administration of beta-adrenergic blockers also reversed the growth effect, raising the possibility of using beta-blockers in cancer prevention and treatment.

Stress and Epithelial Tumor Biology

As previously stated, both glucocorticoids and catecholamines exert their effects through activation of ligand-specific receptors. The GR (glucocorticoid-specific) and the ADRB1 and ADRB2 (catecholamine-specific) receptors are expressed in both normal and malignant tissue, and their stimulation may be growth-promoting and linked to tumorigenesis. However, the cellular mechanisms altering tumor cell growth that result from increased glucocorticoid and catecholamine levels are only beginning to be uncovered. For glucocorticoids, recent evidence suggests that GR activation inhibits apoptotic cell death in ovarian and breast cancer cells subjected to environmental stressors such as serum deprivation or chemotherapy treatment. In fact, the tumorigenic effect of increased cortisol exposure is thought to be mediated in part by a decrease in proapoptotic signaling and an increase in antiapoptotic signaling pathways. An increase in proliferative rates has also been noted following hydrocortisone treatment of certain cell lines derived from metastatic breast cancer. In vitro experiments also provide evidence for a role of glucocorticoids in increasing tumor cell invasion, as demonstrated using cell adhesion assays of breast cancer cells following exposure to dexamethasone. Specifically, this effect is thought to be mediated by upregulation of the proto-oncogene c-fms, which encodes the tyrosine kinase receptor for macrophage colony stimulating factor (CSF-1); c-fms expression has been linked to increased invasiveness of breast carcinomas. Recently, glucocorticoids have also been shown to decrease the expression of DNA repair genes such as BRCA1, as demonstrated in mammary epithelial cell lines in which hydrocortisone downregulates BRCA1 expression. In addition to breast cancer, the in vitro growth-promoting effects of glucocorticoids have been reported in numerous nonlymphoid tumor


Fig. 3 Gene–environment interactions in cancer biology may be defined by interactions between environmental stress, the individual's neuroendocrine response, and tumor biology.

types such as ovarian cancer, fibrosarcoma, glioblastoma, hepatoma, non-small cell lung cancer, follicular thyroid cancer, and prostate cancer. This is in sharp contrast to the proapoptotic effects of glucocorticoids in hematological malignancies. The role of stress-induced catecholamines in tumor growth is also likely to be important. Angiogenesis, a requirement for tumor growth, is thought to be mediated by upregulation of vascular endothelial growth factor (VEGF) secreted by tumor cells. Such VEGF upregulation has been observed in ovarian cancer cells following stimulation of the beta-adrenergic receptor by norepinephrine. Stromal invasiveness of ovarian cancer cells has also been shown to increase following exposure to catecholamines. The results of cotreating ovarian cancer cells with catecholamines and matrix metalloproteinase (MMP) inhibitors suggest that MMPs play a critical role in mediating the effects of catecholamines on the ability of ovarian cancer cells to invade the extracellular matrix. Taken together, these results suggest that the effects of adrenaline (epinephrine) may play an important role in both tumor-promoting angiogenesis and tumor invasiveness. In summary, recent data examining the physiologic mediators of stress, glucocorticoids and catecholamines, suggest a defined series of biological connections between the social environment, the neuroendocrine stress response, and tumor growth (Fig. 3).

Prevention and Therapeutic Implications

The data presented earlier implicate stress and the ensuing individual neuroendocrine response in increased cancer susceptibility. To prevent the detrimental effects of prolonged HPA axis activation, one might consider using GR antagonists. However, these antagonists may have potentially harmful effects such as hypotension or heightened inflammation. To circumvent this, novel selective glucocorticoid receptor modulators (SGRMs) would have to be developed. These would ideally preserve essential mineralocorticoid and immunological functions while lacking the prosurvival effects that glucocorticoids have on tumor cells. Furthermore, screening for SGRMs that do not induce expression of prosurvival genes commonly found downstream of the GR response may be another useful approach. Two examples of important GR target genes that encode potent antiapoptotic proteins are the genes for serum and glucocorticoid regulated kinase-1 (SGK-1) and MAP kinase phosphatase-1 (MKP-1), both of which have been shown to be required for GR-induced cell survival in vitro. Conversely, inhibiting the most upstream mediator of cortisol release, corticotropin-releasing hormone (CRH), has also been proposed in an effort to dampen the neuroendocrine response in psychiatric disorders. On the adrenergic side, the use of selective beta-blockers to reduce the potentially cancer-promoting effects of chronic stressors is also appealing. Pharmacologically targeting both effectors of the body's stress response is therefore an attractive strategy. Other potential pharmacologic interventions such as antidepressants and anxiolytics may also be considered. However, judging their efficacy in controlling stress is complex and relies on subjective data. Similarly, evaluating the efficacy of psychosocial interventions in controlling stress may be challenging in the absence of objective measures of stress. Hence, it is not surprising that studies examining cancer incidence and progression following psychosocial interventions have shown conflicting results. Clearly, more objective methods of assessing stress are needed. These would provide more reliable information on the effects of stress and of therapeutic interventions. Such information could ultimately lead to stronger incentives for government policies encouraging healthy lifestyles and improved social support.

Challenges and Future Directions

Improving our understanding of the role of chronic social stress in cancer susceptibility faces many challenges, in part because measuring social stress requires expertise that is outside the usual scope of most cancer biologists and epidemiologists. Therefore, one of the major challenges facing this field is increasing the interdisciplinary dialogue between social, clinical, and basic scientists. For example, if a model of chronic stress leading to increased neuroendocrine reactivity could be applied to robust models of

Table 1 Stress biomarkers

Target | Upstream biomarkers | Downstream biomarkers | Challenge protocols
HPA axis | CRH: cerebrospinal fluid (CSF); ACTH: serum | Cortisol: CSF, saliva, serum, 24-h urine, diurnal cortisol; glucocorticoid receptors: in white blood cells (WBC); dehydroepiandrosterone (DHEA): saliva, serum; IL-6, IL-1β: serum; TNF-α: serum; immunoglobulin A (IgA): saliva | CRH challenge: measure adrenocorticotropic hormone (ACTH) and cortisol after 1 h; dexamethasone challenge: measure cortisol after 8 h
ANS | – | Epinephrine/norepinephrine: serum, urine; α-amylase: saliva; chromogranin A: saliva, serum; 3-methoxy-4-hydroxyphenylglycol (MHPG): serum, urine; metanephrines: serum, urine; vanillylmandelic acid: serum, urine | –

human cancer, then cancer-promoting pathways might be uncovered. Such a discovery could reveal important signaling pathways connecting the neuroendocrine response to cancer etiology and progression. Similarly, it is anticipated that the identification of validated "stress" biomarkers will result from the transdisciplinary interactions that are evolving between cancer biologists, physicians, and social scientists. Until now, most measurements of stress have relied on assessment tools that measure an individual's perceived stress in the absence of correlative biomarkers. Some of these studies have led to conflicting results in assessing the relationship between stress and cancer. The markers currently used most frequently include measurements of salivary cortisol and serum IL-6 (Table 1). These markers (which are typically elevated under conditions of stress) likely do not reflect the full spectrum of relevant pathways activated downstream of chronic stress-induced glucocorticoid and catecholamine signaling. Researchers are now challenged to find biomarkers that will provide additional physiological evidence of ANS and HPA axis dysregulation. New biomarkers, and the resulting improvement in the assessment of neuroendocrine activity, should lead to a better understanding of the connection between psychosocial stressors and cancer susceptibility. At the genomic level, identification of pathways involved in the neuroendocrine stress response and tumorigenesis is expected to reveal specific gene expression signatures in tumors following environmental stressors. One of the most interesting and difficult questions that must be tackled is why individuals may have different phenotypic or gene expression responses to similar environmental stressors. The answer is likely to reflect a combination of an individual's genetic makeup, prior life experiences, and the resulting differences in the neuroendocrine response. Defining the interactions between the social environment, neuroendocrine activity, and gene expression is an exciting challenge for transdisciplinary researchers. Discoveries resulting from these types of studies are likely to provide new insights into gene–environment interactions that were previously suspected but poorly understood.

See also: Cancer Risk Assessment and Communication; Cancer and the Environment: Mechanisms of Environmental Carcinogenesis; Environmental Carcinogens and Regulation; Gene–Environment Interactions and Childhood Cancer.

Further Reading

Antoni, M.H., Lutgendorf, S.K., Cole, S.W., et al., 2006. The influence of bio-behavioral factors on tumor biology: Pathways and mechanisms. Nature Reviews Cancer 6 (3), 240–248.
Cacioppo, J.T., Berntson, G.G., Malarkey, W.B., et al., 1998. Autonomic, neuroendocrine, and immune responses to psychological stress: The reactivity hypothesis. Annals of the New York Academy of Sciences 840, 664–673.
Cacioppo, J.T., 2002. Social neuroscience: Understanding the pieces fosters understanding the whole and vice versa. American Psychologist 57 (11), 819–831.
Chandola, T., Brunner, E., Marmot, M., 2006. Chronic stress at work and the metabolic syndrome: Prospective study. British Medical Journal 332 (7540), 521–525.
Duijts, S.F., Zeegers, M.P., Borne, B.V., 2003. The association between stressful life events and breast cancer risk: A meta-analysis. International Journal of Cancer 107 (6), 1023–1029.
Hawkley, L.C., Cacioppo, J.T., 2004. Stress and the aging immune system. Brain, Behavior, and Immunity 18 (2), 114–119.
McClintock, M.K., Conzen, S.D., Gehlert, S., Masi, C., Olopade, O., 2005. Mammary cancer and social interactions: Identifying multiple environments that regulate gene expression throughout the life span. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences 60 (Spec. No. 1), 32–41.
McEwen, B.S., 2007. Physiology and neurobiology of stress and adaptation: Central role of the brain. Physiological Reviews 87 (3), 873–904.
Miller, A.H., Ancoli-Israel, S., Bower, J.E., Capuron, L., Irwin, M.R., 2008. Neuroendocrine-immune mechanisms of behavioral comorbidities in patients with cancer. Journal of Clinical Oncology 26 (6), 971–982.
Nielsen, N.R., Zhang, Z.F., Kristensen, T.S., Netterstrøm, B., Schnohr, P., Grønbaek, M., 2005. Self reported stress and risk of breast cancer: Prospective cohort study. British Medical Journal 331 (7516), 548.
Reynolds, P., Kaplan, G.A., 1990. Social connections and risk for cancer: Prospective evidence from the Alameda County study. Behavioral Medicine 16 (3), 101–110.


Spiegel, D., Giese-Davis, J., 2003. Depression and cancer: Mechanisms and disease progression. Biological Psychiatry 54 (3), 269–282.
Thaker, P.H., Sood, A.K., 2008. Neuroendocrine influences on cancer biology. Seminars in Cancer Biology 18 (3), 164–170.
Thaker, P.H., Lutgendorf, S.K., Sood, A.K., 2007. The neuroendocrine impact of chronic stress on cancer. Cell Cycle 6 (4), 430–433.
Wu, W., Chaudhuri, S., Brickley, D.R., Pang, D., Karrison, T., Conzen, S.D., 2004. Microarray analysis reveals glucocorticoid-regulated survival genes that are associated with inhibition of apoptosis in breast epithelial cells. Cancer Research 64 (5), 1757–1764.

Contamination of Soil and Vegetation With Developing Forms of Parasites

Jasmin Omeragic and Teufik Goletic, Veterinary Faculty of the University of Sarajevo, Sarajevo, Bosnia and Herzegovina

© 2019 Elsevier B.V. All rights reserved.

Introduction

Of the total number of Earth's species, a large proportion (nearly 25%) lives in the soil, sometimes in astonishing numbers; estimates of the number of bacterial species per gram of soil, for example, range from 2000 to several million. The vast majority of soil-inhabiting organisms are not a threat to human health. Soil organisms play an essential role in the ecosystem through various interactions, such as the nutrient cycles that maintain soil fertility and water filtration, and they are the source of useful compounds such as antibiotics, most of which have been isolated from soil organisms. Although people have isolated and used many important soil components to help fight disease, the soil also contains microorganisms capable of causing human and animal disease, whose pathogenic traits developed through competition with other organisms for water, food, and space. They act either as opportunistic pathogens in susceptible individuals or as obligate pathogens that must infect people to complete their life cycles.

Exposure to infectious microorganisms from soil has been known for centuries, in different cultures and parts of the world, with geophagia (the practice of eating soil) being one of the ways of ingesting various microorganisms, well documented in both animals and humans. Today, it is known that geophagia is widespread in the animal world, having been observed in more than 200 different animal species, and it has certainly not been observed in animals only. The link between soil and human health has been recognized for thousands of years. Hippocrates (460–380 BC) mentions geophagia in his writings, and the Mesopotamians and ancient Egyptians treated wounds with mud and ate soil as a remedy for various diseases, in particular intestinal diseases. Some Indigenous peoples in America used soil as a spice, and geophagia was a common practice in Europe until the 19th century. Certain societies, like the Tiv tribe of Nigeria, still practice consuming soil in the early stages of pregnancy, partly due to cultural norms and partly due to nutritional need. Geophagia used to be widespread throughout the world: as many as 25% of schoolchildren in some areas of the US reported consuming soil during World War Two. Toxocariasis in humans is most prevalent in children and is associated with geophagia. In the United Kingdom, 50% of patients with clinical toxocariasis had never owned dogs or been in direct contact with them, indicating that, among other things, soil contamination plays an important role in the transmission of parasites and human infection.

Although soil-transmitted infection is not a new phenomenon, interest in understanding its mechanisms is increasing. Open questions remain concerning the human activities that increase the risk of diseases transmitted in soil, the role of animals in soil contamination, and the measures that can be taken to control soil pathogens. In that regard, and especially given the increasing resistance of pathogens to the diverse range of currently available drugs, the adage that prevention is better than cure probably best reflects a sustainable approach to addressing the problem of soil-transmitted diseases.
It should be noted that globalization, international food markets, tourism, the spread of national cuisines, population growth, an increase in the number of susceptible people owing to aging, malnutrition, and other chronic diseases, more people eating in restaurants, street food vendors who do not always respect food safety rules, and better diagnostic methods are some of the factors that have contributed to the increased number of established diagnoses of this type of disease and, consequently, to the increased interest in their research. For example, the risk of human infection with parasites in Europe was in the past generally considered limited to distant geographical areas owing to conditions unfavorable for the survival of parasites on this continent (climate, vectors, hosts), but these barriers are slowly falling. In general, infectious organisms that can cause disease come from five major phylogenetic groups: viruses, bacteria, fungi, protozoa, and helminths. Soil properties, vegetation, and climate conditions are the most important determinants of the abundance and diversity of the organisms present in soil. Generally, soil microbial populations are more abundant in surface horizons than in deeper horizons. Specifically, the distribution of helminths in soils fluctuates greatly with season, climate, and the amount of organic matter in soil. Helminths are normally more abundant in warm, moist soils with plentiful organic material. If conditions are favorable, the upper 10–15 cm of the soil profile will house most of the helminths, and they may move vertically in the soil profile in response to seasonal weather changes. Maximum survival times of viruses, bacteria, protozoa, and helminths in soil and on plant surfaces are given in Table 1.

Simple Classification Scheme of Soil-Borne Pathogens

Based on the usual habitat of soil-borne pathogens, Jeffery and van der Putten in 2011 established a classification that divides the pathogens spreading through soil into two groups: euedaphic pathogenic organisms (EPOs), which are true soil organisms (soil inhabitants) with soil as their usual habitat (most bacterial and fungal pathogens), and soil-transmitted pathogens (STPs), for which the soil is not the natural habitat but in which they can persist for extended periods. Unlike EPOs, STPs must infect a host to complete their life cycle. Of course, this division, like many other classifications, has its limitations, as there is probably some overlap between the two groups, which will undoubtedly be the subject of further research. In that sense,


Table 1 Maximum survival times of pathogens in soils and on plant surfaces

Pathogen group | Soil: absolute maximum | Soil: common maximum | Plants: absolute maximum | Plants: common maximum
Viruses | 6 months | 3 months | 2 months | 1 month
Bacteria | 1 year | 2 months | 6 months | 1 month
Protozoa | 10 days | 2 days | 5 days | 2 days
Helminths | 7 years | 2 years | 5 months | 1 month

Adapted from Gerba, C.P., Smith, J.E. (2005). Sources of pathogenic microorganisms and their fate during land application of wastes. Journal of Environmental Quality 34, 42–48.

a good example is Strongyloides stercoralis, which, on the basis of its close relationship with soil, could belong to both the EPO and the STP group. EPOs and STPs, and their various interactions, are very important for the health of people and the environment alike. More often, this relationship benefits the ecosystem, as with the role of EPOs in the decomposition of organic matter, with consequent impacts on soil fertility, erodibility, and stability, a reduced effect of wind erosion, and improved water-holding capacity. Conversely, STPs probably provide fewer ecosystem services while in the soil, because it is not necessarily their "habitual" habitat, but they can be a source of food for other organisms and thereby increase diversity. Sometimes, however, this relationship can be harmful and have a negative impact on the food chain, for example when nematodes in the soil reduce both potato yield and tuber quality, which, especially in unstable economies, can cause worrisome fluctuations in staple food availability. When it comes to parasites of humans, Strongyloides stercoralis is part of the first group (EPOs), whereas Echinococcus multilocularis, Ascaris lumbricoides, Ancylostoma duodenale, Enterobius vermicularis, Trichuris trichiura, Entamoeba histolytica, Balantidium coli, Cryptosporidium parvum, Cyclospora cayetanensis, Giardia duodenalis, Isospora belli, Toxoplasma gondii, and others are part of the second group (STPs). The STP group also includes important species for which animals are the final hosts, such as Taeniidae (Taenia spp., Echinococcus spp.), Ascaridae (Ascaris spp.), Toxocaridae (Toxocara spp.), Rhabditidae (Strongyloides spp.), Ancylostomatidae (Ancylostoma spp., Uncinaria stenocephala), Trichuridae (Trichuris spp.), Capillaridae (Capillaria spp.), and others.

Burden of Soil-Borne Parasitic Diseases

Infectious diseases are one of the major causes of human suffering and mortality, responsible for an estimated 15 million deaths worldwide each year, almost a quarter of the total annual number of deaths, and this number is expected to increase. Soil-borne pathogens are important contributors to those numbers. It is estimated that, globally, more than 1.4 billion people are infected with at least one soil-transmitted helminth species (STH). The major groups of parasitic helminths include nematohelminths (roundworms) and platyhelminths (flatworms), the latter subdivided into cestodes (tapeworms) and trematodes (flukes). Soil-transmitted helminths (STHs), colloquially known as geohelminths, are intestinal nematodes, part of whose development takes place outside the body, in the soil. Infection occurs through contact with parasite eggs or larvae that thrive in warm, moist, contaminated soil. The most important STHs at a global level, which contribute greatly to the overall disease burden of humankind, include roundworms (Ascaris lumbricoides), whipworms (Trichuris trichiura), blood-feeding hookworms (Ancylostoma duodenale and Necator americanus), and threadworm (Strongyloides stercoralis).

The burden of infections and soil-borne parasitic diseases can be attributed primarily to limited access to clean water and sanitation, with concomitant low standards of hygiene and lack of hygienic behavior, as well as to other poverty-related issues such as crowded living conditions combined with lack of access to health care, poor fecal disposal systems, wide dispersion of parasites within human communities, poor socioeconomic status, and inadequate education. Together with environmental conditions favoring infection within human and animal populations (ambient and surface temperature and humidity), these factors can create stable transmission areas. Worldwide in 2010, an estimated 5.3 billion people, including 1.0 billion school-aged children, lived in stable transmission areas of at least one STH species, with 69% of these individuals living in Asia. A further 143 million (31.1 million school-aged children) lived in areas of unstable transmission for at least one STH. Data on their prevalence and the estimated number of infections worldwide, although impressive and disturbing, are not surprising: Ascaris lumbricoides is estimated to infect 1.47 billion people (nearly 20% of the world's human population), hookworms 1.28 billion, Trichuris trichiura 1.05 billion, and Strongyloides stercoralis more than 100 million people worldwide. These data are approximate only; particularly noteworthy in this sense are the data publicly available through the WHO (mortality attributed to soil-borne diseases) and the ECDC, which collects data through The European Surveillance System (TESSy), whereby Member States are obliged to monitor and report rates of infection for infectious diseases. Some data on contamination of soil with developing forms of STHs in different countries, cities, or areas are presented in Table 2. In light of this, and bearing in mind that almost a quarter of the world's human population is affected by helminthic parasites, often causing substantial disease and disability, it is not surprising that helminths, mainly nematode species, are the only group for which there is currently a particular focal point within the WHO for pathogens or parasites transmitted through the soil.

Table 2 Contamination of soil with developing forms of soil-transmitted helminths (STHs) in different countries, cities or areas (frequency, %)

Country | Data published (year) | City/area | Roundworms | Hookworms | Whipworms
Argentina | 2010 | Buenos Aires | 1.7 | 20.5 | 2.6
Australia | 1982 | Brisbane | 1.1 | – | –
Australia | 1984 | Perth | 27.77 | – | –
Bosnia and Herzegovina | 2002 | Herzegovina | 40.00 | 5.33 | 11.33
Bosnia and Herzegovina | 2016 | Canton Sarajevo | 16.66 | 2.0 | 3.0
Brazil | 2011 | Fernandopolis | 79.4 | 6.9 | –
Chile | 2000 | Santiago | 66.7 | – | –
Croatia | 1974 | Zagreb | 27.0 | – | 26.0
Egypt | 1976 | Cairo | 10.0 | – | –
Hungary | 2001 | Eastern and northern areas | 24.3–30.1 | 8.1–13.1 | 20.4–23.3
Ireland | 1991 | Dublin | 27.77 | – | –
Italy | 2002 | Bari | 2.5 | 1.6 | 2.5
Italy | 1974 | Milano | 21.0 | – | –
Italy | 2006 | Naples | 0.7–1.4 | 2.4 | 10.1
Japan | 1993 | Tokushima | 63.3 | – | –
Nigeria | 2008 | Kaduna | – | 9.0 | –
Poland | 2002 | Krakow | 30.0 | – | –
Poland | 2002 | Poznan | 9.96 | – | –
Poland | 2008 | Wroclaw | 3.2 | 4.9 | 4.9
Scotland | 1976 | Edinburgh | 4.0 | – | –
Scotland | 1976 | Glasgow | 6.0 | – | –
Serbia | 2000 | Belgrade | 41.39 | – | –
Spain | 2012 | Madrid | 16.4 | 3.0 | –
Turkey | 2008 | Erzurum | 64.3 | – | –
UK | 1976 | Leeds | 7.0 | – | –
UK | 1973 | London | 24.40 | – | –
UK | 1975 | London | 5.2 | – | –
UK | 1987 | London | 66.0 | – | –
USA | 2005 | Connecticut | 14.4 | – | –
USA | 1975 | Philadelphia | 10.2 | – | –
USA | 1979 | – | 20.57 | – | –
USA | 1979 | – | 10.0–32.0 | – | –
USA | 1984 | – | 6.6–10.0 | – | –

From Omeragic, J., Klaric, D., Smajlovic, A., Crnkic, C., Alagic, D. (2016). Contamination of soil and vegetation with developing forms of parasites in the area of Sarajevo Canton. Veterinaria 65(2), 59–65.

Grave public health implications of STHs led the World Health Organization (WHO) to set the goal for STH control by 2020 of reducing morbidity from these infections in preschool and school-aged children to a level below which it would not be considered a public health problem. The main approach is the conduct of periodic large-scale anthelmintic treatment operations as a cost-effective strategy to reduce the burden of STHs. Possibly the most useful strategy is to focus on school-based control efforts, as heavy infection is most common in school-aged children, and effective treatment of this age group has a disproportionately large effect on transmission.

Parasites of Humans and Their Importance in Contamination of Soil and Plants

Providing information on all transmissible soil-borne diseases of humans and animals is beyond the scope of this article; instead, the most important parasitic ones, especially STHs, as the most prevalent cause of human helminthic infections in general, as well as some diseases of protozoan origin, are considered. Summary information on some of the soil-transmitted parasites of humans, their hosts, ways of transmission, symptoms, diagnostic methods, and treatment is given in Table 3. The important harmful factors in helminth infections include their direct pathogenic effect and the modulatory role of the parasite on the host immune system, altering the response to other antigens or pathogens and potentially causing additional immunopathology. The direct pathogenic effects caused by helminths include protein-energy malnutrition, anemia from chronic blood loss and iron deficiency, malabsorption syndrome, intestinal obstruction, chronic dysentery, rectal prolapse, respiratory complications, and growth retardation. Poor weight gain during geohelminthic infections may be due to adult helminth worms residing in the small

Table 3 Soil-transmitted parasites of humans, their hosts, ways of transmission, symptoms, diagnostic methods and treatment

Taeniidae – Taenia saginata (beef tapeworm), Taenia solium (pork tapeworm). Definitive host: humans. Intermediate hosts: cattle (T. saginata), pigs (T. solium). Disease in humans: taeniasis. Transmission: ingestion of raw or undercooked meat infected with cysticerci (Cysticercus bovis, C. cellulosae). Symptoms: weight loss, diarrhea, abdominal pain, headaches, nausea. Diagnosis: DFS, ELISA, PCR. Treatment: praziquantel, niclosamide, albendazole.

Taeniidae – Taenia solium. Definitive host: humans. Disease in humans: cysticercosis. Transmission: ingestion of contaminated food, water or soil containing T. solium eggs. Symptoms: severe headaches, dizziness, paraplegia, meningitis, dementia, hypertension, lesions in the brain, convulsions, blindness. Diagnosis: ELISA, PCR, MRI or CT brain scans. Treatment: albendazole (with caution, as larval death provokes an inflammatory response that may increase symptoms), anticonvulsant therapy, corticosteroids, neurosurgical intervention, etc.

Taeniidae – Echinococcus granulosus. Definitive host: dogs and other Canidae. Intermediate hosts: humans, sheep, cattle, goats, pigs, etc. Disease in humans: cystic echinococcosis (CE). Transmission: ingestion of eggs by direct contact with animals or via contaminated food, water, soil and plants. Symptoms: dysfunction of the organs in which the cysts developed, discomfort, pain, nausea, vomiting; cyst rupture may cause anaphylactic reactions, even death. Diagnosis: ELISA, PCR, CT or MRI scans. Treatment: surgery, chemotherapy (albendazole), cyst puncture, and PAIR (percutaneous aspiration, injection of chemicals and reaspiration).

Taeniidae – Echinococcus multilocularis. Definitive host: foxes, coyotes, dogs, and cats. Intermediate hosts: humans, rodents and wild canids. Disease in humans: alveolar echinococcosis (AE). Transmission: ingestion of eggs by direct contact with animals or via contaminated food, soil or plants. Symptoms: parasitic tumors in the liver that may spread to other organs, including the lungs and brain, causing discomfort or pain, weight loss, and malaise; the mortality rate is between 50% and 75%. Diagnosis: ELISA, PCR, CT or MRI imaging. Treatment: AE requires chemotherapy with or without surgery; radical surgery in suitable cases and parasiticidal treatment with benzimidazoles (albendazole, mebendazole).

Ascarididae – Ascaris lumbricoides. Definitive host: humans. Disease in humans: ascariasis. Transmission: fecal-oral route, by ingestion of embryonated eggs; eggs from the infected person are excreted with feces and pollute soil and plants, especially if the feces of infected people are used as fertilizer. Symptoms: abdominal discomfort and pain; heavy infections can cause intestinal blockage and impair growth in children; other symptoms, such as cough, are due to migration of the worms through the body. Diagnosis: DFS. Treatment: albendazole, mebendazole, ivermectin.

Toxocaridae – Toxocara canis, Toxocara mystax (T. cati). Definitive host: dogs (T. canis), cats (T. cati). Disease in humans: toxocariasis; visceral and ocular larva migrans (VLM, OLM). Transmission: fecal-oral route, by ingestion of embryonated eggs through direct contact with animals or via contaminated food, water, soil or plants; rarely, people can also become infected by eating undercooked meat containing Toxocara larvae. Symptoms: larvae migrate to various body organs, such as the liver, central nervous system and eyes; symptoms include fever, fatigue, coughing, wheezing, abdominal and muscle pains, etc. (VLM); tissue damage of one or both eyes, vision loss, development of granulomas, eye inflammation or damage to the retina, etc. (OLM). Diagnosis: DFS (animals); ELISA, PCR (humans). Treatment: albendazole, mebendazole (humans); albendazole, fenbendazole, pyrantel pamoate (animals).

Oxyuridae – Enterobius vermicularis. Definitive host: humans. Disease in humans: enterobiasis. Transmission: fecal-oral route, directly via contaminated hands, food or water; eggs can survive in the soil for weeks and months. Symptoms: anal and perineal pruritus and itching, dermatitis, folliculitis, insomnia, restlessness, loss of appetite and weight, irritability, emotional instability, etc. Diagnosis: DFS, transparent adhesive tape method. Treatment: albendazole, mebendazole, pyrantel pamoate.

Strongyloididae – Strongyloides stercoralis. Definitive host: humans, possibly dogs and cats. Disease in humans: strongyloidiasis. Transmission: ingestion of filariform larvae (L3) in contaminated food and soil, or active penetration of larvae through the skin. Symptoms: abdominal disorders, intermittent diarrhea with presence of blood, dermatitis, swelling, itching, mild hemorrhage, cough, ulcers, pneumonia, tissue damage, etc. Diagnosis: DFS, ELISA, PCR. Treatment: ivermectin, albendazole, mebendazole, thiabendazole.

Ancylostomatidae – Ancylostoma duodenale, Necator americanus. Definitive host: humans. Disease in humans: ancylostomiasis. Transmission: ingestion of larvae (L3) via contaminated food, soil or plants, or active penetration of larvae (L3) through the skin. Symptoms: abdominal pain, anorexia, diarrhea, anemia or stagnation in growth. Diagnosis: DFS, ELISA, PCR. Treatment: albendazole, mebendazole, pyrantel pamoate (humans).

Ancylostomatidae – Ancylostoma caninum, Uncinaria stenocephala. Definitive host: dogs and cats. Disease in humans: cutaneous larva migrans (CLM). Transmission: active penetration of larvae (L3) through the skin. Symptoms: larvae damage tissues and organs during migration. Diagnosis: ELISA, PCR. Treatment: febantel, pyrantel, fenbendazole, ivermectin (animals).

Trichuridae – Trichuris trichiura. Definitive host: humans. Disease in humans: trichuriasis. Transmission: ingestion of embryonated eggs with contaminated food; eggs can survive in the soil for weeks and months. Symptoms: people with heavy infections can experience frequent, painful passage of stool containing a mixture of mucus, water, and blood; rectal prolapse can also occur; heavy infection in children can lead to severe anemia, growth retardation, and impaired cognitive development. Diagnosis: DFS, PCR. Treatment: albendazole, mebendazole, ivermectin (humans).

Trichuridae – Trichuris vulpis. Definitive host: dogs. Disease in humans: visceral larva migrans (VLM). Transmission: ingestion of embryonated eggs by direct contact with animals and via contaminated food or water. Symptoms: larvae migrate to various body organs; symptoms include fever, fatigue, coughing, wheezing, abdominal and muscle pains, etc. (VLM). Diagnosis: PCR. Treatment: fenbendazole, febantel, milbemycin, moxidectin and imidacloprid (dogs).

Eimeriidae – Cystoisospora belli. Definitive host: humans. Disease in humans: cystoisosporiasis. Transmission: fecal-oral route; oocysts develop in the external environment, usually in water contaminated with feces, and soil can possibly disperse the oocysts. Symptoms: diarrhea, abdominal pain; fever, headaches and myalgia may also be present. Diagnosis: DFS, acid-fast stains, PCR. Treatment: trimethoprim/sulfamethoxazole (TMP/SMX), pyrimethamine, ciprofloxacin.

Eimeriidae – Cyclospora cayetanensis. Definitive host: humans. Disease in humans: cyclosporiasis. Transmission: fecal-oral route via contaminated food and water; oocysts in the soil can persist over a longer period. Symptoms: diarrhea, abdominal pain, weakness and weight loss. Diagnosis: DFS, acid-fast stains, PCR. Treatment: trimethoprim/sulfamethoxazole (TMP/SMX).

Cryptosporidiidae – Cryptosporidium parvum, C. hominis. Definitive host: humans. Disease in humans: cryptosporidiosis. Transmission: fecal-oral route via contaminated food and water; oocysts can survive in the soil for weeks and months and, with precipitation, can be transferred through soil to water sources. Symptoms: diarrhea, abdominal pain, dehydration, nausea, vomiting, fever, weight loss. Diagnosis: DFS (acid-fast staining, DFA), ELISA, PCR. Treatment: nitazoxanide.

Sarcocystidae – Toxoplasma gondii. Definitive host: cats and other Felidae. Intermediate hosts: humans, other mammals and birds. Disease in humans: toxoplasmosis. Transmission: fecal-oral route via contaminated food and water; oocysts can be detected in soil where infected cats defecate and can survive there for at least 3 months. Symptoms: "flu-like" symptoms, lymphadenopathy, congenital infection, changes in the central nervous system, neurological problems, eye disease, etc. Diagnosis: cats (DFS, IFA, PCR, etc.); humans (IFA, ELISA, PCR, etc.). Treatment: clindamycin, pyrimethamine, sulfadiazine.

Hexamitidae – Giardia duodenalis. Definitive host: humans (A, B genotypes) and other mammals (C, D, E, F, G genotypes). Disease in humans: giardiasis. Transmission: fecal-oral route via contaminated food and water; cysts can be found in the soil but do not bind strongly with it. Symptoms: diarrhea, abdominal pain, nausea, vomiting, dehydration, weight loss, etc. Diagnosis: DFS and blood tests (ELISA, PCR). Treatment: metronidazole, tinidazole, nitazoxanide.

Entamoebidae – Entamoeba histolytica. Definitive host: humans and other primates. Disease in humans: amebiasis. Transmission: fecal-oral route via contaminated food and water; cysts can be found in the soil and survive for several weeks or months on contaminated food and water. Symptoms: diarrhea, dysentery and abscesses in the liver and other organs, dehydration, weight loss, etc. Diagnosis: DFS and blood tests (EIA, IHA, PCR). Treatment: metronidazole, paromomycin, diloxanide.

Balantidiidae – Balantidium coli. Definitive host: pigs, sometimes dogs and humans. Disease in humans: balantidiasis. Transmission: fecal-oral route via contaminated food and water; cysts can survive for several months in the soil. Symptoms: diarrhea, abdominal pain, nausea, vomiting, etc. Diagnosis: DFS. Treatment: tetracycline, metronidazole, iodoquinol, nitazoxanide.

Abbreviations: direct fecal smear (DFS); direct fluorescent antibody (DFA); immunofluorescence (IFA); enzyme-linked immunosorbent assay (ELISA); enzyme immunoassay (EIA); indirect hemagglutination (IHA); polymerase chain reaction (PCR).


intestine. The magnitude of the pathogenic effect of STHs is strongly related to the intensity of infection, with most individuals hosting only a small number of worms and a few people hosting disproportionately large numbers. According to most studies, a few seriously infected individuals are at a considerably higher risk of disease and are also the prime source of environmental contamination, as approximately 70% of the worm population is hosted by 15% of the host population. This dispersion might be due to both exposure and host susceptibility. Clinical complications of chronic infection, especially obstruction of the intestines, can in rare cases result in death. A modulatory effect on the host immune system is usually linked with chronic helminth infections. Such infections can induce T-cell hypo-responsiveness, which may affect immune responses to other pathogens. It is especially important to emphasize that STH infections may increase susceptibility to other important diseases such as tuberculosis, human immunodeficiency virus and malaria, the present-day unholy trinity of the most deadly killers among infectious diseases. However, a few studies have reported interesting correlations between geohelminth infections and allergic or atopic diseases. Reports have shown a suppressive effect of helminths on the outcomes of diseases such as allergies, autoimmunity and inflammatory bowel disease. Therefore, helminths may have a beneficial effect in restricting inflammation. The immune response of the host to helminths has received considerable attention and, although the understanding of individual responses has improved, the protective role of the different effector mechanisms is still less well understood. The relationship between infection, the production of IgE and the manifestations of atopy certainly requires further exploration, as does the balance between immune-mediated resistance to infection and immunopathology. STHs, Ascaridae in particular, elicit powerful IgE and T helper type 2 responses, and how their presence or absence relates to allergic reactions is currently a focus of research, particularly given the ever-higher prevalence of allergies over recent decades.
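The degree of aggregation implied by these figures can be illustrated with a short simulation. The sketch below is not from the source: it draws worm burdens from a negative binomial distribution, a model conventionally used for helminth aggregation, and the mean burden and aggregation parameter k are assumptions chosen only to produce a pattern of the kind described above.

```python
# Illustrative sketch (assumed parameters): aggregated worm burdens among hosts.
import numpy as np

rng = np.random.default_rng(seed=1)
mean_burden = 10.0  # assumed mean number of worms per host
k = 0.35            # assumed aggregation parameter (smaller k = more aggregated)

# NumPy's negative binomial takes (n, p); with n = k and p = k / (k + mean),
# the distribution has the chosen mean and strong overdispersion.
burdens = rng.negative_binomial(n=k, p=k / (k + mean_burden), size=100_000)

# Share of the total worm population carried by the most heavily infected 15% of hosts.
top15 = np.sort(burdens)[::-1][: int(0.15 * burdens.size)]
print(f"Share of worms in the top 15% of hosts: {top15.sum() / burdens.sum():.0%}")
```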

Life Cycle and Infection

Helminths are transmitted to humans in many different ways. Direct transmission is the simplest way: embryonated eggs are passed, hatch and re-infect within 2–3 h through accidental ingestion of infective eggs carried from the anal margin to the mouth; such eggs either do not reach the soil or, if they do, do not require a period of development there. Whipworm (Trichuris trichiura) and pinworm (Enterobius vermicularis) are typical representatives of this kind of transmission. Species of the genus Trichuris commonly cause disease of domestic and wild mammals. Whipworm T. trichiura, which is one of the most prevalent human geohelminths, infects humans causing trichuriasis. T. trichiura occurs worldwide, but is most prevalent in warm and humid tropical regions, and also during the warmer months in temperate climate zones. These environmental conditions, together with the high fertility of the parasite and the resistance of the eggs in the external environment, are the main factors facilitating its distribution. The parasite is often present in areas of the world where hygiene is at a low level. People in these areas have a higher risk of infection if the soil is contaminated with human feces, and if they consume fruits and vegetables that are not properly washed or thermally treated. Transmission is direct from mature eggs to the mouth via fingers contaminated from infected soil. As well as through accidental ingestion of eggs, transmission in some settings can also occur through the practice of geophagia. Eggs that are deposited on the ground embryonate after 10–14 days in the soil, producing the infectious phase of the whipworm life cycle. After ingestion, larvae develop in the small intestine, penetrate the villi and develop for a week until they re-emerge and pass to the caecum and colorectum, where they attach themselves to the mucosa and become adult. In humans, T. trichiura is often found in coinfection with other helminths such as A. lumbricoides. Infections can cause abdominal pain, anorexia, and diarrhea, and can lead to anemia or stagnation in growth. Very severe infections lead to weight loss, anemia and prolapse of the rectum, especially in children. Moreover, even symptomless infections in children may have subtle and insidious effects on nutritional status, and on physical and intellectual growth. Infection with pinworm E. vermicularis, which lives in the rectum and colon of humans, especially the caecum and lower ileum, causes enterobiasis, a relatively benign condition that typically affects children; it is therefore also known as the small children's worm. It causes severe itching in the anal and perianal region, especially in children, thereby promoting the direct fecal-oral route via fingernail contamination and subsequent ingestion of eggs. Eggs of E. vermicularis become infectious within a few hours and can survive 2–3 weeks. The disease can also spread through the soil, particularly via contaminated dust in which embryonated eggs have been detected. Socio-economic standards play an important role in the onset of the disease and subsequent reinfections. Symptoms are caused when gravid females migrate out of the anus onto the perianal skin to deposit eggs; the main symptom is pruritus, which can vary from mild itching to acute pain, occurring mainly at night.
This leads to general symptoms presented as insomnia and restlessness, and a considerable proportion of children show loss of appetite, loss of weight, irritability, emotional instability and enuresis (involuntary urination, especially by children at night). The second way of STH infection involves soil in a more active manner: eggs are passed out in the stool and undergo a period of development in the soil (optimally humid, warm soil) and on plants before being ingested (especially if the feces of infected persons are used as fertilizer). They hatch in the small intestine, releasing larvae which penetrate the mucous membrane and enter the circulation to reach the lungs, pass up the respiratory tract to enter the esophagus, and reach the intestine, where they become adult. Unembryonated eggs can be ingested but are not infectious. Examples of this way of infection include roundworm (Ascaris lumbricoides) and Toxocara spp. People ingest developmental forms of these parasites directly from contaminated hands or by consuming vegetables and fruits that came into contact with infected soil and were not adequately washed, cooked or peeled. A. lumbricoides (the large children's worm) is the most common and largest nematode of humans, sometimes over 30 cm long, with an estimated adult life span of 1–2 years. The extraordinary global prevalence of ascariasis (the disease caused by A. lumbricoides) is mainly attributed to two factors. First, the female adult A. lumbricoides worm has a remarkable ability to produce offspring. It is estimated that


a single worm may release up to 27 million eggs during the course of an infection. Second, A. lumbricoides eggs are quite hardy, with an outer proteinaceous coat and thick egg wall that render them remarkably resistant to environmental extremes for long periods of time. Ascariasis may have a detrimental effect on the host when the worms are abundant, with two major types of clinical sequelae: those due to migrating larvae and those due to adult worms. Infection with A. lumbricoides is usually asymptomatic or produces mild, nonspecific symptoms. However, A. lumbricoides can also cause lack of appetite, nausea, vomiting, asthenia, weight loss, blocking of the digestive tract, abdominal pain, anemia, obstruction or perforation of the intestine and occasionally obstruction of the pancreatic duct or prolapse of the rectum. In some cases, migrating larvae can penetrate the wall of the intestine and enter the bloodstream, through which they can migrate to other parts of the body, most often to the lungs, where, 10–14 days after infection, they can cause eosinophilic pneumonia (Loeffler's pneumonitis). Hypersensitive individuals may develop allergic reactions to migrating larvae such as urticaria and asthma. In addition to the oral route of infection, some species can enter the host through the skin. In this group, eggs are passed in the stools to the soil, where they hatch into larvae, which undergo further development before they are ready to penetrate the skin and reach the circulation and lungs; from there they enter the respiratory tract, move up to the esophagus and reach the small intestine, where they become adult. The hookworms (Ancylostoma duodenale and Necator americanus) and threadworm (Strongyloides stercoralis) belong to this group, but differ in that S. stercoralis larvae are passed in the stool and autoinfection can occur at the anal margin, or independent development may take place in the soil, where the parasite can persist in the absence of any further cycle through humans. This is the reason why S. stercoralis can be classified as both EPOs and STPs, as stated previously. Species of the genus Strongyloides do not exhibit high specificity regarding host selection, and all species of the genus can parasitize several types of hosts. Developmental forms of Strongyloides can survive for a long time in the soil because the life cycle can develop in two ways (homologous and heterologous). In an evolutionary sense, Strongyloides spp. represent a transition from nonparasitic to parasitic nematodes. In the homologous cycle, development takes place in the host, and the forms occurring in the external environment are eggs and larvae (L1–L3). Infection of a host is most commonly caused by oral ingestion of filariform larvae (L3) in contaminated food and water, or by larvae actively penetrating the skin. In the heterologous cycle, under certain conditions, fully mature males and females develop from rhabditiform larvae and live as nonparasitic, free-living forms; these can give rise to parasitic progeny by copulation. Which kind of reproduction occurs depends primarily on the activity of host immune mechanisms, as well as on the conditions of the external environment (pH, moisture, temperature, etc.). S. stercoralis most commonly parasitizes humans, but it can also parasitize dogs. The infection can be transmitted to humans when a person comes into contact with soil containing L3 Strongyloides larvae in the filariform phase.
These larvae enter the host through the skin, after which they reach the digestive tract via the circulation, lymphatic system or lungs. People infected with Strongyloides sometimes have no symptoms, although the disease can cause lethal infections in immunocompromised hosts. In humans, the clinical picture of chronic strongyloidiasis includes asthenia, abdominal tenderness and disorders, intermittent episodes of diarrhea with the presence of blood, respiratory symptoms such as dry cough, and skin irritation as a consequence of parasite migration. There have also been cases of Strongyloides hyperinfection associated with the use of drugs such as corticosteroids, which have an immunosuppressive effect. In these cases, autoinfection can lead to Strongyloides hyperinfection syndrome, typically presenting as intestinal or pulmonary failure, potentially occurring decades after the initial infection. The hookworms (Ancylostoma duodenale and Necator americanus) are estimated to have a geographic distribution second only to that of A. lumbricoides. Humans excrete eggs into the environment with feces and, under favorable conditions, infective filariform larvae (L3) develop within 1–2 weeks and are introduced into the host organism orally or through the skin. Human infection occurs through contact with soil that is contaminated with feces. Larvae can survive in the soil for several weeks and, besides being ingested (fecal-oral route), can penetrate the skin, usually through the feet. The larvae then migrate to the intestine of their host. The estimated life span of hookworms is 5–7 years, and the female worm lays around 10,000–30,000 eggs per day. The illness is characterized by digestive tract disorders (abdominal pain, nausea, vomiting, anorexia), heart, respiratory and kidney problems, and poor concentration. During heavy infections, each individual adult hookworm can cause up to 0.2 mL of blood loss per day, which leads to depletion of host iron and protein reserves, causing iron deficiency anemia and protein malnutrition. Additionally, larvae migrating through the skin can cause dermatitis. In several cases, infection requires an intermediate host. A good example is taeniasis, the infection of humans with the adult tapeworm of Taenia solium or Taenia saginata. Taeniasis occurs after the ingestion of inadequately processed pork or beef contaminated by their larval stages (Cysticercus cellulosae and Cysticercus bovis, respectively). Cattle and pigs, as intermediate hosts, become infected by ingesting vegetation contaminated with eggs or gravid proglottids (body segments containing a complete sexually mature reproductive system). In the animal's intestine, the oncospheres (the embryonic form of the tapeworm) hatch, invade the intestinal wall, and migrate to the striated muscles, where they develop into cysticerci. A cysticercus can survive for several years in the animal. It should be noted that humans can also be infected if they ingest eggs from human excrement, in which case the eggs can be maintained in the soil for a longer period. Cysticercosis can be characterized by serious symptoms due to mass effect and the inflammation caused by degeneration of cysts and antigen release. Depending on the location and number of cysticerci, seizures, increased intracranial pressure, hydrocephalus, impaired mental status, aseptic meningitis and similar conditions can occur in the patient. Cysticerci can also infect the spinal cord, muscles, subcutaneous tissue, and eyes.
Species of the genus Echinococcus cause disease in humans as a result of the ingestion of parasite eggs; carnivores are the definitive hosts, harboring the mature tapeworm in their intestine. They are infected through the consumption of viscera of intermediate hosts that harbor the parasite. Although there are more species, the two most important for public health are Echinococcus granulosus and E. multilocularis. The life cycle of E. granulosus includes domestic


dogs and other Canidae as the final hosts, and a number of herbivorous and omnivorous animals, as well as humans, as intermediate hosts. Humans act as so-called accidental intermediate hosts in the sense that they acquire infection in the same way as other intermediate hosts, but are not involved in transmitting the infection to the definitive host. Human infections, other than through direct contact with animals (usually dogs), occur through the ingestion of eggs that the final hosts have deposited on soil or plants. These eggs can be found in the soil and on plants, and can also contaminate fruits and vegetables. The two forms of greatest medical and public health relevance in humans are cystic echinococcosis, also known as hydatid disease or hydatidosis, caused by infection with E. granulosus, and alveolar echinococcosis, caused by infection with E. multilocularis. Hydatid disease is characterized by the development of one or more hydatid cysts, which are most often found in the liver and lungs, and sometimes in the spleen, kidneys, heart, bones, and CNS. The asymptomatic incubation period of the disease can last many years, until hydatid cysts grow to an extent that triggers clinical signs. Clinical symptoms manifest as dysfunction of the organs in which the cysts have developed. If a cyst ruptures, the sudden release of its contents can cause an allergic reaction that can end in fatal anaphylaxis. Fertile cysts that rupture within the body can lead to dissemination to other parts of the organism and the creation of secondary cysts. Foxes are the most common host of E. multilocularis, and the larvae of this parasite can be found in wild rodents. Similarly, infected dogs and other canids are an important link in occasional human infections. Infection with E. multilocularis, with resulting alveolar echinococcosis, is characterized by an asymptomatic incubation period of 5–15 years and the slow development of a primary tumor-like lesion, usually located in the liver. This form of echinococcosis is a much more dangerous disease in humans due to the formation of alveolar cysts, whose membrane becomes thinner as they grow, with a risk of rupture and metastasis. Larval metastases may spread either to organs adjacent to the liver (for example, the spleen) or to distant locations (such as the lungs or the brain) following dissemination of the parasite via the blood and lymphatic system. Untreated, alveolar echinococcosis is progressive and fatal.

Parasites of Dogs and Their Importance in Contamination of Soil and Plants

In addition to humans, dogs play a crucial role in contaminating the soil with parasites and their developmental forms; together with birds and cats, dogs are the most common animals in urban areas. Of all the animals associated with humans, the dog is the most important source of parasitic diseases. In general, dogs can host about 300 species of parasites, including more than 60 species of trematodes, 22 species of cestodes, 32 species of nematodes and 8 species of acanthocephalans. In the area of the former Soviet Union, 82 types of helminths have been identified in dogs, 32 of which can invade humans and 26 of which can invade domestic and other animals. With their widespread distribution, high potential for transmission and pathogenic action, parasites of dogs can cause considerable damage, negatively affecting the development and health of animals and directly and indirectly affecting humans. Developmental forms of dog nematodes, if introduced into the digestive tract of a human as a nonspecific carrier, or if the larvae penetrate the skin, can, in the stage of the invasive filariform larva, cause pathogenic alterations in specific organs or tissues, although they do not develop into adults. The disease manifests as "visceral, vascular, and ocular larva migrans syndrome." Humans, especially children, can be infected with parasites of dogs not only through contact with animals but also via contaminated soil or plants, by ingesting developmental forms of parasites directly, with food, or with water. Young children are particularly vulnerable because of the habit of soil-eating, as they have a predilection for eating nonfood items such as soil. Children under the age of 18–20 months normally explore and acquaint themselves with the environment by mouthing everything they come across. Eggs and larvae of dog parasites can be found in large concentrations in the soil and on the vegetation of parks, near places where dogs move or stay (see Pictures 1–4). The developmental forms of these parasites can persist in nature for a long period. Clinical signs of disease in humans vary, and may present as an increase in body temperature, cough, anemia, eosinophilia or hepatomegaly. However, some of these symptoms may be absent, or other symptoms may occur. In severe cases, there are abdominal pains, enlargement of the liver, respiratory problems, muscle pain, etc. The disease can result in tissue damage of one or both eyes, causing the development of granulomas, which can result in loss of vision. The most important parasites of dogs that can cause disease in humans are Toxocara canis, Ancylostoma caninum, Uncinaria stenocephala and Trichuris vulpis. Toxocara canis is localized in the small intestine of dogs and wild carnivores (length 10–18 cm). The invasive form of T. canis is the egg in which the L3 larva has formed. There are no intermediate hosts, and development proceeds via hepato-pulmonary-tracheal and somatic migration, depending on the age of the animal and the intensity of infestation; sometimes development occurs through paratenic hosts. Sexually mature parasites develop in 4–5 weeks. A T. canis female can produce up to 200,000 eggs per day, and its larvae are the most common cause of larva migrans. The effect of larva migrans in humans was first recognized in the eosinophilic granuloma of children in 1952.
Before that, both granulomatous and eosinophilic abscesses had been described in the eyes of some patients with suspected retinoblastoma, and histological sections of the granulomas revealed the presence of nematode larvae, later identified as larvae of T. canis. Ancylostoma caninum is localized in the small intestine of dogs and wild canids (length 1–1.8 cm). The development cycle is direct, and infection occurs with the L3 larva, which enters the animal by the oral or percutaneous route, more rarely by the lactogenic or intrauterine route, or through paratenic hosts (hosts not needed for the parasite's development cycle to progress). The prepatent period lasts 15–18 days, particularly in young dogs.


Picture 1 Toxocara canis egg. Developmental parasite forms identified in the soil and vegetation (Olympus digital camera, ×40).

Picture 2 Taeniid eggs. Developmental parasite forms identified in the soil and vegetation (Olympus digital camera, ×40).

Picture 3 Trichuris spp. egg. Developmental parasite forms identified in the soil and vegetation (Olympus digital camera, ×40).

Uncinaria stenocephala is localized in the small intestine of dogs, cats and wild carnivores (length 0.5–1.2 cm). The life cycle is similar to that of A. caninum, except that the main pathway of infection is oral, while fewer larvae reach the host percutaneously; there is no intrauterine or lactogenic infection. Trichuris vulpis is a parasite localized in the cecum and colon of dogs and foxes (length 4.5–7.5 cm). The life cycle of T. vulpis is direct: the animal ingests eggs in which the L1 larva has formed. It has been shown that T. vulpis larvae can enter the epithelium of the mucous membranes of various parts of the digestive tract, but only those that localize in the cecum develop into adults. The prepatent period lasts 11–12 weeks. The pathogenic effect of these parasites in animals is mainly due to mechanical activity, whereby

Picture 4 Nematoda larvae. Developmental parasite forms identified in the soil and vegetation (Olympus digital camera, ×40).

parasite larvae during migration damage tissues and organs, while adults in the small intestine damage the mucous membrane (causing inflammation and bleeding, and disturbing the secretory, resorptive and motor functions of the intestine) and can perforate the intestine. In addition to mechanical damage, these parasites also display important toxic (allergenic) activity: absorbed metabolic products, or dead parasites themselves, can act on the central nervous and hematopoietic systems of the host.

Significance of Protozoa in Contamination of Soil and Plants

In addition to the parasites already described, it is impossible not to mention the role of protozoa, which often parasitize several different types of hosts. Protozoa, single-celled eukaryotic organisms, are found in large numbers in the soil, where they feed on bacteria and fungi and, through the organic matter they process, provide food for other invertebrates. In most soils, protozoan biomass equals or exceeds that of all other soil animal groups taken together, with the exception of earthworms. A general estimate is that about 70% and 15% of the total respiration of soil animals might be attributed to protozoa and nematodes, respectively. Predation by protozoa significantly contributes to the control of bacterial populations in soil, and the degradation of bacteria undoubtedly contributes to the maintenance of soil fertility. Similarly, protozoa play an important part in the cycling of nutrients in aquatic food chains. Some species are pathogenic to humans; the most notorious is Entamoeba histolytica, which is widespread worldwide and is a threat to humans, especially in areas with inadequate hygiene and poor living conditions. Infection with E. histolytica may be asymptomatic or may cause diarrhea and dysentery, with changes in the digestive tract, liver and other organs. Although infection usually occurs by the fecal-oral route, cysts can be found in the soil, where they can survive for several weeks or months, and can thus infect hosts who later consume food or water contaminated by that soil. Balantidium coli is the most common parasite of pigs and sometimes also occurs in dogs. Under favorable conditions of adequate temperature and humidity, B. coli can survive for several months in cyst form in the soil; in humans it can cause diarrhea, fluid loss, nausea, vomiting and headache, although the infection can often be asymptomatic as well. Cyclospora cayetanensis can infect humans, is transmitted by the fecal-oral route and, importantly for its transmission, can be maintained in the soil over a longer period. Infections are followed by diarrhea that can be explosive and accompanied by abdominal cramps, tiredness, weakness and loss of body weight. Species of the genus Cryptosporidium are transmitted by the fecal-oral route in areas of poor hygiene, and the pathogen replicates in the epithelial cells of the small intestine of the vertebrate host. Infectious oocysts reach the lumen and are excreted with the feces. A very small number of oocysts is sufficient to cause disease, which increases the risk of transmission from person to person. Cryptosporidium oocysts are highly resistant to external influences, which helps them disseminate, and they have the potential to be transmitted from animals to humans and vice versa, which significantly increases the number of reservoirs. Developmental forms can survive in the soil for weeks and months, and with precipitation can be transferred to water sources such as rivers and lakes; they can also contaminate fruits and vegetables, which then become sources of infection for humans. The most common cause of the disease is C. parvum, and the symptoms in humans are watery diarrhea, loss of appetite, weight loss, vomiting, abdominal cramps, nausea, weakness, etc. In immunosuppressed individuals, Cryptosporidium spp. can spread beyond the small intestine to other parts of the digestive tract, as well as into the respiratory tract. Isospora belli occurs most often in tropical and subtropical countries.
Oocysts develop in the external environment, usually in feces-contaminated water, while soil may act to disperse the oocysts. There is currently no information on survival in the soil but, as with other protozoa, survival probably depends on soil moisture; the main route of infection is the fecal-oral route. The disease usually presents with profuse diarrhea with an unpleasant odor, stomach cramps, loss of appetite and fever. Headaches and myalgia may also be present. Toxoplasma gondii is an intracellular parasite primarily of cats, while humans, together with other mammals, serve only as intermediate hosts. T. gondii infection is primarily due to the ingestion of inadequately cooked meat containing cysts, or ingestion of oocysts


via food or water contaminated with cat feces. T. gondii can be detected in the soil where infected cats defecate; the oocysts deposited there can survive for at least 3 months, meaning that the soil can remain infectious over a longer period. Symptoms occur mainly in immunocompromised individuals and in newborns, who can be infected with the parasite by their mothers during intrauterine development. Symptoms include lymphadenopathy, changes in the central nervous system, neurological problems and pneumonia. Congenital toxoplasmosis can be a severe and disabling disease causing jaundice, delayed development, visual defects including blindness, and cerebral calcification. Although Giardia duodenalis does not bind strongly to soil, precipitation can transport it over greater distances. G. duodenalis causes disease in animals and humans. Giardiasis presents with a wide range of symptoms, often including diarrhea, stomach cramps, vomiting, fever, exhaustion, loss of appetite, fatigue and weakness.

Final Thoughts

Research on contamination of the soil with parasites and their developmental forms shows that the degree of environmental vulnerability is very high, and that contamination of the soil can present a severe public health problem. Parasites pose a permanent danger to the health of people, especially children. It is extremely important to examine and implement measures to reduce contamination of soil and vegetation, and to coordinate them in line with previous experience and relevant legal regulations, with the contribution of animal owners, veterinarians, doctors, environmental experts and all others involved in these issues. Particular attention should be paid to continuing education at all levels and to maintaining adequate personal hygiene, especially after contact with pets or any outdoor activity, and before consuming food. Likewise, land use and management are very important for reducing contamination. Plowing and cultivating grassland and cultivated land can improve the protection of humans against contamination with parasites and their developmental forms, but it should also be remembered that most of the organisms listed survive longer in wet soil than in dry soil. Climate change may be of great significance for the distribution of these diseases, but its effects on conditions at the local level are difficult to predict. It is important to note that many diseases transmitted through the soil currently have a relatively limited distribution in the tropical and subtropical parts of the world, and it should be kept in mind that climate change could lead to a wider spread of soil-transmitted pathogens.

Further Reading

Brooker, S.J., Bundy, D.A.P., 2014. Soil-transmitted helminths (geohelminths). In: Farrar, J., Hotez, P., Junghanss, T., Kang, G., Lalloo, D.G., White, N.J. (Eds.), Manson's tropical diseases, 23rd edn. Elsevier Saunders, Philadelphia, pp. 766–794.
Holland, C.V., Kennedy, M.W. (Eds.), 2002. World class parasites: Volume 2: The geohelminths: Ascaris, Trichuris and hookworm. Kluwer Academic Publishers, London.
Jacobs, D., Fox, M., Gibbons, L., Hermosilla, C., 2015. Principles of veterinary parasitology. Wiley Blackwell, Chichester, UK.
Jeffery, S., van der Putten, W.H., 2011. Soil borne human diseases. Publications Office of the European Union, Luxembourg, pp. 3–56. https://doi.org/10.2788/36703.
Lewis, J.W., Maizels, R.M. (Eds.), 1993. Toxocara and toxocariasis: Clinical, epidemiological and molecular perspectives. Birbeck & Sons Limited, Birmingham, UK.
Mehlhorn, H., 2016. Animal parasites: Diagnosis, treatment, prevention. Springer International Publishing, Cham, Switzerland.
Nieder, R., Benbi, D.K., Reichl, F.X., 2018. Soil components and human health. Springer, Netherlands, pp. 723–827.
Ojha, S.C., Jaide, C., Jinawath, N., et al., 2014. Geohelminths: Public health significance. Journal of Infection in Developing Countries 8 (1), 5–16.
Omeragic, J., Klaric, D., Smajlovic, A., Crnkic, C., Alagic, D., 2016. Contamination of soil and vegetation with developing forms of parasites in the area of Sarajevo Canton. Veterinaria 65 (2), 59–65.
Pullan, R.L., Brooker, S.J., 2012. The global limits and population at risk of soil-transmitted helminth infections in 2010. Parasites & Vectors 5, 81.
Pullan, R.L., Smith, J.L., Jasrasaria, R., Brooker, S.J., 2014. Global numbers of infection and disease burden of soil transmitted helminth infections in 2010. Parasites & Vectors 7, 37.

Relevant Websites

http://www.cdc.gov/parasites/sth/ – Centers for Disease Control and Prevention (CDC). Parasites – Soil-transmitted Helminths (STHs).
https://www.cdc.gov/parasites/whipworm/biology.html – Centers for Disease Control and Prevention. Parasites – Trichuriasis (also known as whipworm infection). Biology.
https://www.cdc.gov/parasites/hookworm/biology.html – Centers for Disease Control and Prevention. Parasites – Hookworm. Biology.
https://www.cdc.gov/parasites/strongyloides/biology.html – Centers for Disease Control and Prevention. Parasites – Strongyloides. Biology.
http://www.thiswormyworld.org/ – The Global Atlas of Helminth Infection.
http://www.who.int/mediacentre/factsheets/fs366/en/ – World Health Organization (WHO). Soil-transmitted helminth infections.

Cost-Benefit Analysis and Air Quality Related Health Impacts: A European Perspective

Mike Holland, EMRC, Reading, United Kingdom

© 2019 Elsevier B.V. All rights reserved.

Introduction

Recent years have seen greatly increased application of cost-benefit analysis (CBA, also commonly referred to, especially in North America, as benefit-cost analysis, BCA) to inform the development of environmental legislation, starting in Europe in the late 1980s. CBA is concerned with the efficient allocation of scarce resources. CBA methods in Europe and North America, and increasingly elsewhere, gauge efficiency in terms of the preferences of the general public, usually measured as willingness to pay for environmental and health improvement. Older methods for the valuation of health impacts were based on the human capital approach, where health is valued in terms of the contribution of an individual to the economy, essentially in terms of GDP. The human capital approach fails to recognize that GDP is only a partial representation of the economy as a whole, which includes many other aspects, from the utility offered by good health to the appreciation of the natural world. Modern applications of CBA for pollution control bring together several techniques:

• Development of accounting frameworks to identify and describe the full range of options available, and the consequences of those options.
• Cost-effectiveness analysis, which seeks to define the most efficient route to reduce pollution to meet specified targets.
• The impact pathway approach for quantifying the impacts of pollution and their economic value.
• Uncertainty analysis, focused on the question of whether uncertainties, in combination, are likely to change the conclusion of an initial comparison of cost and benefit.

The primary outputs of the CBA are then the net benefit (benefits in excess of costs) and the benefit:cost ratio. The former should be larger than zero, and the latter larger than one, for a measure to be evaluated as efficient. Together, these provide a first indication of how worthwhile it is likely to be to adopt a particular course of action. It is typically necessary, at least in conditions similar to those in most of Europe at the present time, to then consider uncertainties of any kind (statistical uncertainty, methodological sensitivities, various possible biases including omission of some benefits or costs) that may affect the outcome of analysis. These methods have been used many times in the appraisal of environmental policies. The focus here is on their use at a European level, principally for the European Commission and the UN Economic Commission for Europe (UN/ECE) under the Convention on Long-Range Transboundary Air Pollution. This contribution describes the methods used, including key assumptions, strengths and weaknesses. It also considers important examples of the application of CBA both in Europe and more widely, including North America and Asia. Whilst discussion is focused mainly on health impacts, consideration is also given to other effects, on ecosystems, crops, forests and materials used for construction, that are linked to emissions of the same air pollutants.
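As a minimal sketch of this arithmetic (the cost and benefit figures below are invented purely for illustration):

```python
# Headline CBA indicators for a hypothetical control package (illustrative numbers).
annual_cost_meur = 420.0      # assumed annualized cost, million EUR/year
annual_benefit_meur = 1310.0  # assumed monetized benefits, million EUR/year

net_benefit = annual_benefit_meur - annual_cost_meur  # efficient if > 0
bcr = annual_benefit_meur / annual_cost_meur          # efficient if > 1

print(f"Net benefit: {net_benefit:.0f} MEUR/year; benefit:cost ratio: {bcr:.1f}")
```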

Historical Perspectives

Before considering the application of CBA, it is important to recognize that major progress was made in reducing emissions of air pollutants prior to the 1980s; this puts the strengths and limitations of the approach in context. This section focuses explicitly on the situation in London (and the United Kingdom more generally), given its long association with poor air quality and the efforts made over many years to control the problem. Of course, similar issues linked to intensive coal burning also affected other parts of Europe, such as the Ruhr Valley in Germany and the Meuse Valley in Belgium. From the 1970s and 1980s, pollution in the "Black Triangle" covering parts of East Germany, Czechoslovakia (now the Czech Republic) and Poland was also prominent in debate. Meteorological data from London in the late 1800s show that attempts over several hundred years to improve air quality had been ineffective, as pollution still had a major effect on the London environment. There were, for example, zero sunshine hours observed during December 1890, and visibility in Central London was reduced to less than 1 km for several months in 1901–02. In the 1940s, visibility in the city was less than 2 km for more than 75% of the time from November to March. The lack of sunlight led to an extremely high incidence of rickets, caused by vitamin D deficiency, in children, sufficiently so that rickets was sometimes referred to in German as Die englische Krankheit ("the English disease"). The Great London Smog of December 1952 provided a turning point in the history of air pollution. Over 4000 excess deaths occurred in a single week, clearly linked to the very high pollution levels that had built up over the city (Fig. 1). An inquiry into the episode led to the United Kingdom's 1956 Clean Air Act, initiating a number of measures:

• Establishment of smoke control areas to reduce domestic emissions
• Use of cleaner coals with a lower sulfur content, and solid "smokeless" fuel
• Relocation of power stations to rural areas and the use of tall chimney stacks
• Encouragement of the use of cleaner (non-solid) fuels


Fig. 1 Daily air pollution and deaths: mortality, smoke and SO2 levels through the Great London Smog of 1–15 December 1952 (Fuel Research Station, Greenwich). Units: 1 mg/100 m³ = 10 µg/m³.

The United Kingdom's 1968 Clean Air Act extended the legislation to industry more generally, though again with an emphasis on tall chimneys to aid dispersion, rather than on pollution abatement. Sulfur dioxide removal was considered unaffordable, although London's Battersea Power Station had been the first plant in the world to be equipped with flue gas desulphurisation (FGD); however, it did not work well and emitted pollutants directly to the River Thames. In developing this legislation there was very limited economic analysis to support the decision-making process. There was some consideration of inefficiency, with, for example, an estimate that the waste of coal in the form of avoidable smoke in Great Britain was equivalent to the annual output of 10,000 miners, an illustration of the cost savings possible through improved efficiency, and even an estimate of the cost of damage to buildings per tonne of coal burned. However, there was no quantification or valuation of health impacts, though these were recognized qualitatively. Despite the lack of detailed CBA prior to the late 1990s, the legislation caused a major fall in emissions of both particles and sulfur over the following decades (Figs. 2 and 3). The success of the earlier legislation raises some important questions on the role of CBA that are addressed at the end of this contribution.

Methods

Basic Principles for Quantification

Analysis is intended to inform the policy debate, typically for a governmental organization though sometimes also for nongovernmental organizations or affected industry. To inform the policy debate it is necessary to observe the following principles:

• Clear definition of goals: air pollution policy can be designed to:
  - Minimize risks to human health by setting emission controls on industry, vehicles, equipment, etc. to reduce emissions in a way that benefits the whole population, or large parts of it.


Fig. 2 Decline in United Kingdom particle (PM10) emissions since 1970, by fuel (coal; coke and petcoke; gas; oil; peat; waste; wood; other). Units: ktonnes/year.

  - Reduce inequalities by reducing peak exposures through the setting of ambient air quality limit values. These limit values are sometimes wrongly interpreted as representing thresholds for effect. Epidemiological research, including in locations with very low pollutant levels such as rural Canada, has not identified evidence for thresholds at the population level.
  - Minimize non-health impacts, for example, to ecosystems.

Whilst there is overlap between these objectives, it is necessary to be clear about the main objective of the policy being undertaken and the consequences of that objective. A focus on meeting air quality limit values may incur a high cost per unit benefit, relative to measures that reduce exposure of the population more generally. Take the example of exceedance of the annual mean limit value for NO2 close to a road. Only a limited set of control options will be available. These may have a higher cost per unit emission reduction than measures taken on other sources, and only benefit a small number of people. Such measures can appear inefficient.

However, the reverse may be true if the objective of action is defined against equity improvement, ensuring that no part of the population is subject to a level of risk that is considered to be unreasonably higher than elsewhere. It should be added that some local measures will be very cost-effective, especially when co-benefits are accounted for (e.g., reduction of greenhouse gases linked to boiler replacement programmes, and reduced congestion, noise, etc. linked to public transport improvements).

Fig. 3 Decline in United Kingdom sulfur dioxide (SO2) emissions since 1970, by source sector (power stations, other industry, transport, domestic, other). Units: ktonnes/year.

• Also on goal definition, clarity is needed on the ultimate aim of policy development: is it considered that (e.g.) new legislation will move us to the final goal, or to an intermediate position for the short to mid term? A lack of clarity on this issue can lead to the implementation of measures that quickly become outdated.
• Transparency: Methods must be clearly and fully described and referenced. Assumptions need to be clearly defined and justified. A lack of transparency opens the field for competing and similarly opaque analyses that will detract from an informed debate on policy.
• State of the art analysis: The relation of the analysis to state of the art methods should be reported, with possible variation in practice identified and choices justified. To illustrate, European CBAs tend to use a different set of (dose or concentration) response functions for mortality quantification from that adopted by the Global Burden of Disease study run by the World Health Organization and the Institute for Health Metrics and Evaluation. This has been justified by reference to the populations studied in the epidemiological studies on which the selected response functions are based.
• Completeness: Analysis needs to be sufficiently complete to demonstrate the likelihood that benefits will exceed costs, and to give a reasonable indication of the benefit:cost ratio. It is not necessary to quantify or even describe every type of impact that may be linked to a policy: most air quality CBAs will focus on mortality associated with fine particles (PM2.5) or ozone, accepting that there are additional impacts on morbidity (hospital admissions, increased frequency of asthma attacks, etc.). In all cases, consideration should be given to ancillary impacts of measures (additional costs or benefits beyond direct and intended consequences), so that possible co-benefits can be maximized and trade-offs minimized or mitigated altogether.
• Valuation: As noted above, valuation is to be based on public preference for the allocation of resources. It should also be carried out on a marginal basis (i.e., relative to small changes in emission), with policy justified in terms of the CBA only up to the point where the marginal costs and benefits of action are equal.
• Reporting uncertainty: Uncertainty needs to be considered in terms of the robustness of the conclusions reached, and whether there are situations under which the balance of costs and benefits would change to the extent that alternative decisions would be made. This requires an overall synthesis of uncertainty across all elements of the analysis.

Quantification of the Costs of Emission Controls

The costs of emission controls are generally defined by cost-effectiveness analysis (CEA), proceeding through the following stages:

1. Defining emission inventories for each pollutant, identifying the sectors responsible for emissions
2. Identifying measures that will reduce emissions
3. Quantifying the costs of these measures
4. Quantifying the likely impact of measures on emissions (accounting for abatement efficiency and applicability within the targeted sector).

In this bottom-up approach, measures can then be ranked to provide a marginal abatement cost (MAC) curve (e.g., Fig. 4), with measures on the left hand side being most cost-efficient, and those on the right, least cost-efficient (a minimal ranking sketch is given after the list below). Whilst cost curves are an important tool for making rational decisions on pollution controls, they have a number of important limitations:

1. They tend to focus on technical controls, such as equipping vehicles with 3-way catalysts, or adding flue gas desulphurisation to a coal-fired power station.
2. Other types of measure, including fuel switching, modal switching in transport, efficiency improvements and behavioral measures, are often omitted from the cost curves. Some measures in these categories generate cost savings even without taking account of the benefits of action, and hence are a very high priority for implementation.
3. MAC curves are typically produced for individual pollutants (e.g., NOx) or groups of pollutants (e.g., greenhouse gases). They do not integrate other consequences of measures (e.g., co-abatement of GHGs and local/regional air pollutants such as PM and NOx) and hence provide only a partial representation of benefit in terms of emissions avoided.
4. Uncertainties are very rarely reported. There are potential biases present, often in the direction of overestimation of costs and underestimation of abatement. These may arise because the companies for which costs are likely to be highest have the greatest incentive to respond to surveys or to become involved in research, and because widespread application of measures leads to efficiency improvement and lower costs, especially as the number of contractors able to provide equipment and materials increases. The omission of measures, particularly those that lead to cost savings, will also lead to some pessimism in forecasts of potential abatement and associated costs. Assumptions, for instance on the emission target against which the MAC of a given policy is derived, on economic development including the evolution of emissions, and on technological development, also add to the overall uncertainty.
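A minimal sketch of the ranking step behind a MAC curve follows; the measures and figures are hypothetical, and real inventories (e.g., in the GAINS model) contain far more detail:

```python
# Assembling a stylized marginal abatement cost (MAC) curve from a bottom-up
# inventory of measures (all entries are hypothetical).
measures = [
    # (name, abatement potential in kt/year, marginal cost in EUR per tonne abated)
    ("Fuel switching, domestic",    25.0,  -150.0),  # net cost saving
    ("Low-NOx burners, industry",   40.0,   900.0),
    ("Vehicle standard tightening", 30.0,  1500.0),
    ("SCR retrofit, power plants",  60.0,  2400.0),
]

# Rank by marginal cost, cheapest first, as on the left-hand side of a MAC curve.
cumulative_kt = 0.0
for name, potential_kt, cost_per_t in sorted(measures, key=lambda m: m[2]):
    cumulative_kt += potential_kt
    print(f"{name:28s} {cost_per_t:8.0f} EUR/t   cumulative abatement: {cumulative_kt:5.1f} kt/year")
```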

Fig. 4 Example of a marginal abatement cost (MAC) curve, addressing non-agricultural emissions of ammonia in the United Kingdom in 2010.

The existence of these limitations should not be regarded as undermining the concept behind MAC curves: there is no real alternative to ranking measures in the form provided by CEA. Some issues raised here as limitations can be modeled outside of the MAC curve, for example using sensitivity analysis to account for the effects of fuel switching or efficiency improvements, or for the increase in cost-effectiveness possible through learning over time. Wider impacts can be brought into the assessment later in the process, in the quantification of benefits. However, it is important to appreciate that MAC curves do not provide a complete synthesis of information on the cost side of the CBA equation, and to consider later how this affects the balance of costs and benefits.

Quantification of Impacts and Monetary Benefits

Impacts and monetary benefits are quantified using the impact pathway approach developed in the European Commission funded Externalities of Energy (ExternE) Project series of the 1990s and 2000s. The impact pathway follows a logical sequential process quantifying emissions, pollutant dispersion and associated chemical reactions where appropriate, population exposure, impact quantification and finally valuation (Fig. 5).

Fig. 5 Illustration of the impact pathway approach for quantification of the impacts of pollution and benefits of additional abatement measures: demand for polluting activity → pollutant emission → pollutant dispersion and chemistry → exposure of people, ecosystems, etc. → quantification of impacts → valuation of impacts.


An important conclusion early in the ExternE research was that no threshold could be identified for the effects of fine particles on health at the societal level. This meant that analysis should be extended as far as emissions may be expected to travel from the site of release. In the context of the electricity sector this was a very important conclusion, as it meant that whilst impacts could be reduced by locating a power station outside an urban centre and by using tall chimney stacks, impacts were not eliminated. Overall, results demonstrated that the external costs of power generation were of a similar magnitude to the internalized costs for the fossil fuel sector, particularly for coal and oil. Results of analysis demonstrate that by far the most important health impact in the benefits assessment for the major air pollutants (fine particles, ozone, NO2) is the loss of utility associated with mortality. Depending on the approach taken to mortality valuation, this accounts for between 70% and 95% of the total health impact. There is, however, a growing number of impacts being linked to air pollution that are yet to be accounted for in the quantification of morbidity, including impacts on dementia and other effects with a very high social value. Some estimates are available for non-health impacts, to crops, forests, ecosystems, and materials. Within Europe the impacts on ecosystems may be very significant, given widespread exceedance of the "critical load" for nitrogen. The critical load defines a pollutant loading above which ecological change is anticipated. Nitrogen stimulates the growth of grasses and other common plants, at the expense of other species that have evolved to grow in what were previously low-N environments. Results for other pollutants are more varied. Much analysis for the toxic metals has focused on impacts such as IQ loss or mortality through the development of cancer. Recent analysis for lead, mercury and arsenic suggests that a focus on a restricted set of impacts may lead to significant underestimation of the overall benefits of control. Important limitations of benefits assessment in the context of CBA concern the omission of impacts in whole or in part. The methods for quantification and valuation of ecosystem impacts are also limited, with a lack of detail on likely ecological change. This leaves any attempt at valuation open to question, as respondents to a valuation survey are unlikely to have a proper understanding of what they are being asked to value. Adopting the paradigm of individual willingness to pay as an appropriate metric for valuation of ecosystems is also questionable, as it raises questions of responsibility for the protection of valued natural heritage.

Describing Uncertainty

For a full CBA of air pollution policies at a European scale there are many uncertainties present, including:

1. Future projections of energy use, transport fleets, economic development, technical progress, population, etc., leading to emissions
2. The measures contained in the cost curve, the associated estimates of cost and effectiveness, and their applicability to each country
3. Dispersion and chemical conversion of atmospheric pollutants, and the associated exposure of sensitive receptors
4. Quantification of impacts to health, ecosystems, materials, etc., and their valuation

Fortunately, many of these uncertainties face constraints, such that they cannot vary too significantly from current expectation. With most CBAs being carried out over periods of between 5 and 20 years, there is unlikely to be major unanticipated change in emission sources, population, etc. Pollutant dispersion is also sufficiently well characterized, with existing models correlating reasonably well with measurement. Key to the description of uncertainty is the definition of the objective of uncertainty analysis, which is to consider the robustness of conclusions drawn from comparison of quantified costs and benefits: first, whether a net benefit is likely, and then information on the size of the benefit:cost ratio. This is helpful for limiting the scope of the uncertainty assessment, focusing more on the likelihood that specific actions will yield a positive result than on the full breadth of possibilities that are available. Uncertainties can usefully be grouped as follows:

1. Statistical uncertainties, for example concerning health response functions correlating pollutant exposure with mortality and morbidity. These can be collated and modeled using Monte Carlo techniques to define the likely spread of net benefits (a minimal sketch follows this list). Commercially available software enables a large number of endpoints to be modeled simultaneously, accounting for the likelihood of uncertainties canceling each other out to some extent rather than being purely additive.
2. Methodological sensitivities, including model assumptions. An important example here concerns the approach taken to mortality valuation in the benefits assessment, which can be based on loss of life expectancy, through use of the "value of a life year" (VOLY), or on equivalent attributable deaths, through use of the value of a statistical life (VSL). Sensitivities are generally assessed through additional model runs, varying one or more variables within plausible ranges.
3. Unquantified biases, for example through the omission of measures from MAC curves or of types of impact from the benefits assessment. These may be partially modeled using sensitivity analysis. Where this is not possible, a concise description can be provided to illustrate the potential effect of the bias on the conclusions of analysis.

An example of the application of these methods is provided below.
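A minimal sketch of the Monte Carlo treatment described in point 1 above, with assumed (purely illustrative) distributions for costs, the health response and the choice between VOLY- and VSL-based mortality valuation:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000

# Assumed input distributions (illustrative only).
cost = rng.triangular(300.0, 420.0, 700.0, size=n)          # MEUR/year
response = rng.normal(loc=1.0, scale=0.25, size=n).clip(0)  # scaling of the central health impact
benefit_voly, benefit_vsl = 900.0, 2600.0                   # central benefit estimates, MEUR/year
valuation = np.where(rng.random(n) < 0.5, benefit_voly, benefit_vsl)

net_benefit = response * valuation - cost
print(f"P(net benefit > 0) = {(net_benefit > 0).mean():.2f}")
print(f"5th-95th percentile of net benefit: "
      f"{np.percentile(net_benefit, 5):.0f} to {np.percentile(net_benefit, 95):.0f} MEUR/year")
```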


European Examples

This section considers two examples of the use of CBA in Europe, the first dealing with air quality policy through the development of national emission ceilings that limit future total emissions from any country, and the second dealing with the air quality co-benefits of European climate policies.

National Emission Ceilings

The EU's National Emission Ceilings Directive (NECD) sets maximum national emissions of ammonia (NH3), oxides of nitrogen (NOx, covering NO and NO2 combined, but not other oxides such as N2O), fine particles (PM2.5), sulfur dioxide (SO2) and volatile organic compounds (VOCs). These pollutants are controlled together under a common policy framework in recognition of their shared impacts and chemistry (Table 1). Ceilings are assessed by the European Commission and Member States taking account of impacts to human health and ecosystems, the latter via eutrophication, acidification and ozone. Emission ceilings provide a flexible policy instrument that gives added scope for economic efficiency by:

• • •

Recognizing that air pollution impacts are not controllable by action at the national level alone. Allowing countries to account for their own national situation, including future plans, when determining precisely how emission reductions to meet the ceilings should be achieved. Recognizing that problems such as acidification and eutrophication are linked to several pollutants. Policy designed around individual pollutants would not account for the synergies that are present, or differences in the strength of each pollutant relative to impacts.

Analysis to inform the development of the NECD has been led by the International Institute for Applied Systems Analysis (IIASA), initially using the RAINS model and later the GAINS model. Since the late 1990s these have been coupled with the ALPHA-Riskpoll model of EMRC to quantify the benefits of different policy scenarios and provide the comparison of costs and benefits. A number of other models are also linked to GAINS, providing data on developments and emissions from the energy, transport, industry, agriculture and other sectors.

Fig. 6 shows the evolution in the marginal costs and benefits for 2030 from the EU's Clean Air Policy Package of December 2013. It defines the trend in costs and benefits between a current legislation (CLE) scenario (i.e., all agreed legislation as of 2013 is in force) and a scenario defining the "maximum technically feasible reduction" (MTFR), for which all technical measures contained in the GAINS database are employed. "Gap closure" between these scenarios is defined against mortality impacts only, as this was considered the impact that could be quantified with highest confidence. Given that the gap closure is defined against mortality, the marginal benefit curves are flat. Two are represented: one taking a lower estimate, valuing mortality against life years lost using the value of a life year; the other taking a higher estimate, valuing mortality against equivalent attributable deaths using the value of a statistical life. By omitting other types of benefit (reductions in morbidity or in damage to ecosystems, crops, forests and materials), the figure provides a conservative estimate of the optimal zone for setting abatement (where marginal benefits and costs are equal), corresponding to a gap closure between 76% and 92%. There are several notable features of the results shown in this figure:

1. Despite the development of air quality policies over several decades, much of the available emission reduction can be achieved with relatively low-cost measures for which benefits are estimated to be very much greater than costs (up to around 60% gap closure).
2. The large difference between the lower and upper bound estimates of benefits (roughly a factor of 5) corresponds to a rather small interval in terms of gap closure (16%). This is a common conclusion for analysis involving health response functions for which thresholds do not apply: the marginal benefit curve will tend to be flat, whilst for any MAC curve involving more than a very small number of measures there is likely to be a large variation in abatement cost per tonne, leading to rapidly increasing costs at higher levels of abatement.
3. The inclusion of additional benefits (reductions in morbidity or in ecosystem damage, etc.) would naturally increase marginal benefits, pushing the lower end of the optimal zone for gap closure higher, to around 83%. However, the shape of the MAC curve is such that there would be very little change in the maximum justifiable level of control, given the very high cost per unit emission reduction of the measures at the right-hand side of the figure.
4. Uncertainties through the omission of measures (behavioral measures, fuel switching, energy efficiency) would move the MAC curve to the right, increasing the level of gap closure that can be justified through the analysis.
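The geometry behind points 1 to 3 can be illustrated with a short sketch: a steeply rising marginal abatement cost curve is crossed by flat marginal benefit lines, and the optimal zone is bounded by the crossing points. The curve and the two valuation levels below are illustrative placeholders, not the GAINS/ALPHA-Riskpoll results plotted in Fig. 6.

```python
# Minimal sketch of locating the optimal zone in a gap-closure analysis.
# The MAC curve and benefit levels are illustrative, not the Fig. 6 data.
import numpy as np

gap = np.linspace(0, 100, 1001)          # % gap closure between CLE and MTFR
marginal_cost = 0.02 * np.exp(gap / 18)  # steeply rising MAC curve

def max_justified_gap_closure(marginal_benefit):
    """Largest gap closure at which marginal benefit still covers marginal cost."""
    justified = gap[marginal_cost <= marginal_benefit]
    return justified[-1] if justified.size else 0.0

# Flat marginal benefit lines, as when gap closure is defined against
# mortality alone; the high line is ~5x the low one, as in the text.
low = max_justified_gap_closure(1.0)   # e.g., median VOLY valuation
high = max_justified_gap_closure(5.0)  # e.g., medium VSL valuation
print(f"Optimal zone: {low:.0f}%-{high:.0f}% gap closure")
# A factor-of-5 spread in benefits maps to a narrow gap-closure interval
# because the cost curve rises so rapidly at high abatement levels.
```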

Table 1    Impacts of the pollutants covered by the EU's National Emission Ceilings Directive

                              NH3    NOx    PM2.5    SO2    VOCs
Human health: particles        ✓      ✓       ✓       ✓      ✓
Human health: ozone                   ✓                       ✓
Human health: NO2                     ✓
Ecosystems: eutrophication     ✓      ✓
Ecosystems: acidification      ✓      ✓               ✓
Ecosystems: ozone                     ✓                       ✓
Materials damage                      ✓       ✓       ✓      ✓

Fig. 6 Marginal costs and benefits from the review of the EU's Thematic Strategy on Air Pollution of December 2013. Vertical axis: marginal cost and benefit (billion EUR per % gap closure, 0–5.0); horizontal axis: gap closure (% between CLE and MTFR, 0–100). Curves shown: cost; benefit low (mortality benefit low: median VOLY); benefit high (mortality benefit high: medium VSL).

The analysis continued with a more detailed assessment of policy options, focusing on three scenarios for the year 2030: CLE and MTFR (as above), and B7, the Commission's proposal, intended to lead to a 67% gap closure (a little lower than the level indicated in Fig. 6, but based on the results shown there). Health impacts for these scenarios are shown in Table 2, their monetised equivalents in Table 3, and benefit:cost ratios in Table 4. The results highlight the importance of air pollution for public health in Europe, with over 2.5 million life years lost in 2030, 230,000 cases of chronic bronchitis, 180,000 hospital admissions and many millions of days of restricted activity (including work loss days) under the CLE scenario. These impacts translate to an economic cost, accounting for health care costs, lost productivity and lost utility (from loss of life expectancy, pain, loss of opportunity, etc.), of between €205 billion and €734 billion per year for the CLE scenario in 2030. The range reflects alternative positions on mortality valuation.

Table 2    Annual health impacts due to air pollution (thousand cases, days, etc.), 2030, EU28, for scenarios considered in the European Commission's Clean Air Policy package of December 2013

Impact (thousands)                            Pollutant   Units             CLE 2030   B7 2030   MTFR 2030
Acute mortality (all ages)                    O3          Prem. deaths      17         16        14
Respiratory hospital admissions (>64)         O3          Cases             20         19        17
Cardiovascular hospital admissions (>64)      O3          Cases             80         82        73
Minor restricted activity days (all ages)     O3          Days              83,557     78,394    70,211
Chronic mortality (30 year+)ᵃ                 PM          Life years lost   2539       2055      1817
Chronic mortality (30 year+)ᵃ                 PM          Prem. deaths      304        246       218
Infant mortality (0–1 year)                   PM          Prem. deaths      0.4        0.3       0.3
Chronic bronchitis (27 year+)                 PM          Cases             234        190       168
Bronchitis in children aged 6–12              PM          Cases             732        595       526
Respiratory hospital admissions (all ages)    PM          Cases             101        82        72
Cardiac hospital admissions (>18 years)       PM          Cases             77         63        55
Restricted activity days (all ages)           PM          Days              320,526    259,895   229,943
Asthma symptom days (children 5–19 year)      PM          Days              7728       6288      5568
Lost working days (15–64 years)               PM          Days              76,102     61,686    54,586

ᵃNote: For PM effects, premature adult deaths and life years lost are alternative estimates of the same impact and are not additive.
Adapted from Holland, M. (2014). Cost-benefit analysis of final policy scenarios for the EU Clean Air Package (version 2). http://ec.europa.eu/environment/air/pdf/TSAP%20CBA.pdf.

Table 3    Monetised equivalent of annual health impacts due to air pollution, 2030, EU28, €million/year, 2005 prices, for scenarios considered in the European Commission's Clean Air Policy package of December 2013

Damage (€M/year)                                   Pollutant   CLE 2030   B7 2030   MTFR 2030
Acute mortality (all ages), lowᵃ                   O3          1000       930       830
Acute mortality (all ages), highᵃ                  O3          2400       2200      2000
Respiratory hospital admissions (>64)              O3          45         42        38
Cardiovascular hospital admissions (>64)           O3          200        180       160
Minor restricted activity days (MRADs, all ages)   O3          3500       3300      2900
Chronic mortality (all ages), lowᵃ                 PM          150,000    120,000   100,000
Chronic mortality (30 year+), highᵃ                PM          670,000    550,000   480,000
Infant mortality (0–1 year), lowᵃ                  PM          640        520       460
Infant mortality (0–1 year), highᵃ                 PM          1300       1100      940
Chronic bronchitis (27 year+)                      PM          13,000     10,000    9000
Bronchitis in children aged 6–12                   PM          430        350       310
Respiratory hospital admissions (all ages)         PM          220        180       160
Cardiac hospital admissions (>18 years)            PM          170        140       120
Restricted activity days (all ages)                PM          29,000     24,000    21,000
Asthma symptom days (children 5–19 year)           PM          330        260       230
Lost working days (15–64 years)                    PM          9900       8000      7100
Total: Low                                                     200,000    170,000   150,000
Total: High                                                    740,000    600,000   530,000

ᵃNote: Analysis includes assessment of sensitivity to different positions on mortality valuation. Rows showing the same effect/pollutant combination are not additive.
Adapted from Holland, M. (2014). Cost-benefit analysis of final policy scenarios for the EU Clean Air Package (version 2). http://ec.europa.eu/environment/air/pdf/TSAP%20CBA.pdf.

No account is taken in the tables of non-health impacts; these were quantified but made only a small contribution to total damage. Table 4 demonstrates a high benefit:cost ratio for the B7 scenario. However, going further to MTFR indicates a net cost (B:C ratio < 1) for all but one of the sensitivity cases. Interpretation of the results needs to be made with care:

1. Air pollution is recognized as one of several agents negatively affecting the circulatory and respiratory systems, along with lack of exercise, smoking, poor diet, alcohol, etc. These factors act in combination to a greater or lesser degree, complicating the concept of an "air pollution death." The United Kingdom's Committee on the Medical Effects of Air Pollutants (COMEAP) considered in a 2010 report that estimates of deaths were better expressed as "equivalent attributable deaths" than simply "deaths." The concept of equivalent attributable deaths recognizes that few people will die from exposure to air pollution alone. It indicates an expectation that the number of people whose lives are shortened by air pollution to some extent (e.g., by a few months, rather than the average in the order of 10 years that can be calculated from the estimates of deaths and life years lost generated using the response functions) would be considerably higher than the quantified number of deaths, whilst providing a result that can be compared with estimates of deaths from other causes.
2. Linking pollution effects to valuation requires a good understanding of the severity of impacts associated with air pollution. In the case of chronic bronchitis, for example, it seems most likely that new cases attributable to air pollution at levels typical of western Europe would be mild. There is, however, a possibility that some of those with bronchitis for other reasons (e.g., as a consequence of smoking, or of occupational illness) might move from mild to moderate, or moderate to severe, as a consequence of exposure to ambient air pollution. Unfortunately, the epidemiology on which the response functions are based provides no insight on this issue, though there will be a significant difference in cost between disease at different severities.
3. It should not be inferred that the health damage quantified here represents a reduction in gross domestic product (GDP) or some other aggregate measure of the national economy, as this is only partly true (insofar as it relates to productivity and, to some extent, health care). GDP and other such measures are an imperfect representation of the economy as they deal only with marketed goods. The fact that the quantification here goes beyond GDP should also not be considered as heading into a world where costs are not real: people value their health very highly and should not be expected to have their health traded freely by others.

Table 4    Health benefit to abatement cost ratios for the scenarios for 2030 considered in the European Commission's Clean Air Policy package of December 2013

                          CLE–B7   B7–MTFR
Total with median VOLY    12       0.41
Total with mean VSL       42       1.44

Adapted from Holland, M. (2014). Cost-benefit analysis of final policy scenarios for the EU Clean Air Package (version 2). http://ec.europa.eu/environment/air/pdf/TSAP%20CBA.pdf.
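A small sketch may help show how the incremental ratios in Table 4 are constructed: benefits and costs are compared between adjacent scenarios rather than against a zero baseline. The benefit figures below reuse the low-valuation totals from Table 3; the cost figures are back-calculated for illustration so that the ratios reproduce the median-VOLY row of Table 4, and are not the official cost estimates.

```python
# Incremental benefit:cost ratios between adjacent scenarios (cf. Table 4).
# Benefits: remaining damage, Table 3 low totals (billion EUR/year).
# Costs: back-calculated for illustration, not official estimates.
remaining_damage = {"CLE": 200.0, "B7": 170.0, "MTFR": 150.0}
abatement_cost = {"CLE": 0.0, "B7": 2.5, "MTFR": 51.3}

def incremental_bcr(scen_from, scen_to):
    # Benefit of moving between scenarios = damage avoided by the move.
    avoided_damage = remaining_damage[scen_from] - remaining_damage[scen_to]
    extra_cost = abatement_cost[scen_to] - abatement_cost[scen_from]
    return avoided_damage / extra_cost

print(f"CLE->B7:  B:C = {incremental_bcr('CLE', 'B7'):.0f}")    # 12
print(f"B7->MTFR: B:C = {incremental_bcr('B7', 'MTFR'):.2f}")   # 0.41
```

Framing the comparison incrementally is what produces the sharp contrast between the two columns of Table 4: the same package can look strongly justified up to B7 yet marginal beyond it.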


Consideration was given to whether uncertainties in the analysis would undermine the principal conclusion of the CBA: that marginal benefits would exceed marginal costs at the position adopted by the European Commission. Taking account of statistical uncertainties and methodological sensitivities concerning mortality valuation, it was concluded that there was a >90% probability of benefits exceeding the central estimate of costs for the B7 scenario. Bias associated with unquantified elements for benefits (e.g., reduced ecosystem damage) and costs (e.g., omission of cost-effective control measures) strengthened the conclusion that benefits would significantly exceed costs.
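The relationship between deaths and life years discussed in point 1 above can be checked directly against the tables. The minimal sketch below uses the CLE 2030 figures from Tables 2 and 3; the implied unit values are back-calculated for illustration and are not the official VOLY and VSL estimates.

```python
# Worked check against Tables 2 and 3 (CLE scenario, 2030).
life_years_lost = 2_539_000      # Table 2: chronic mortality, life years lost
attributable_deaths = 304_000    # Table 2: alternative metric, not additive

# Average life shortening per attributable death ("in the order of 10 years"):
print(f"{life_years_lost / attributable_deaths:.1f} years")   # ~8.4

# Table 3 values the low mortality path per life year (VOLY) and the high
# path per equivalent attributable death (VSL); implied unit values:
implied_voly = 150_000e6 / life_years_lost       # ~59,000 EUR per life year
implied_vsl = 670_000e6 / attributable_deaths    # ~2.2 million EUR per death
print(f"Implied VOLY: EUR {implied_voly:,.0f}")
print(f"Implied VSL:  EUR {implied_vsl:,.0f}")
```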

Climate Co-Benefits

There are a number of difficulties in quantifying the benefits of climate policies in terms of their primary objective of controlling global warming and limiting its impacts, linked to the global and long-term nature of the damage. Addressing impacts on a time course of 30 or more years leads to significant uncertainty, for example with respect to population size, economic development and the availability of low carbon technologies. Generally speaking, policies that reduce greenhouse gas emissions are also beneficial for air quality, given that the dominant source of both is the use of fossil fuels. In contrast to the long-term build-up of damage linked to climate change, co-benefits via air pollution reduction are both immediate and local to the societies that reduce emissions. However, analysis of the ancillary effects of climate policies also highlights areas where trade-offs arise. Examples include the promotion of wood and other solid biomass as supposedly low carbon fuels (for wood even this is questionable when taking a life cycle perspective), and the use of "diesel farms" designed to provide back-up supply in the United Kingdom and possibly elsewhere to compensate for the intermittency of some renewables such as wind or solar power. Both are significant sources of fine particles.

The complementarity of climate and air quality objectives at a global level has been assessed for the Global Energy and Climate Outlook 2017 (GECO 2017), which investigated three principal scenarios: existing trends (Reference), commitments made to greenhouse gas (GHG) mitigation (INDC) and commitments necessary to ensure that a global temperature increase of more than 2°C is avoided (B2C) (Fig. 7). Analysis at a global level demonstrated that, through an acceleration of decarbonisation trends via the phasing out of coal and reductions in demand for oil and gas, together with increased electrification, it would be possible to stay below an increase of 2°C whilst remaining consistent with economic growth. Co-benefits of improved air quality (principally related to health) would largely offset the costs of climate mitigation (Fig. 8), accepting that this is dependent on the precise approach to mortality valuation, with GECO 2017 using a higher baseline estimate of the VSL than analysis for the European Commission. However, even a more conservative approach would mean that a significant part of the costs of climate mitigation could be offset by air quality benefits.
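The offset argument in Fig. 8 reduces to a simple comparison, sketched below with hypothetical numbers; only the structure, in which the offset share rises with the mortality valuation adopted, reflects the analysis.

```python
# Share of climate mitigation cost offset by air quality co-benefits,
# under two mortality valuations. All figures are hypothetical.
mitigation_cost = 100.0  # billion EUR/year, e.g., B2C relative to Reference
co_benefits = {"median VOLY": 60.0, "higher VSL (GECO 2017)": 110.0}

for valuation, benefit in co_benefits.items():
    share = benefit / mitigation_cost
    print(f"{valuation}: co-benefits cover {share:.0%} of mitigation cost")
```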

The Role of CBA in Policy Development

CBA provides input to the policy-making process to determine whether the benefits of action, expressed using methods designed to reflect public preference for resource allocation for risk reduction, are likely to exceed emission control costs. In recent years this work has focused on effects on health, in Europe as well as other parts of the world. This indicates a change in emphasis, as earlier

Fig. 7 Scenarios considered by GECO 2017. Reference = current trends. INDC = Intended Nationally Determined Contributions to GHG mitigation following the Conference of the Parties (COP21) in Paris in December 2015 under the UN Framework Convention on Climate Change. B2C = policies in place to meet the target of not exceeding a global temperature increase of 2°C. From Kitous, A., Keramidas, K., Vandyck, T. et al., (2017). Global energy and climate outlook 2017: How climate policies improve air quality. Sevilla: JRC Science for Policy Report. https://ec.europa.eu/jrc/en/geco.


Fig. 8 Costs and air quality benefits of climate mitigation policies. From Kitous, A., Keramidas, K., Vandyck, T. et al., (2017). Global energy and climate outlook 2017: How climate policies improve air quality. Sevilla: JRC Science for Policy Report. https://ec.europa.eu/jrc/en/geco.

European work on the threat of transboundary air pollution was mainly concerned with ecological damage through acid rain and eutrophication. Given the continuing threat of eutrophication especially across Europe, it is appropriate that checks are in place to ensure that cost-effective options for meeting ecological as well as health goals are adopted.

Legislation introduced prior to the use of detailed CBA methods achieved very significant improvements in air quality, raising the question of whether CBA has a useful role in the current climate. However, the early legislation was not perfect, and in some ways generated additional problems, for example through the dilute-and-disperse policies that avoided very high localized concentrations of pollution around sources but made transboundary pollution worse. The methods developed for CBA have highlighted that the associated health impacts are serious and that options are available to resolve them. This view has been repeatedly challenged: there has been opposition to the improvement of air quality ever since it started to be addressed at a European level. This is despite the fact that even the latest analysis identifies options that generate benefit:cost ratios substantially in excess of 1. The availability of detailed appraisal of policy proposals has facilitated the development and refinement of legislation.

CBA methods have found increasing application in the climate debate in relation to quantification of the co-benefits of climate policies. It cannot be stressed too strongly that analysis that omits consideration of co-benefits and possible trade-offs exposes policy making to a serious risk of generating unwelcome surprises. This is demonstrated, for example, by the widespread adoption of diesel-powered buses in the latter half of the 20th century, and by ongoing interest in the use of solid biomass for power generation. Fortunately, awareness of these problems is growing, and should enable CBA to provide increasingly solid information on the desirability of policy options in the coming years.

See also: Air Pollution Episodes; Air Quality Legislation; Assessing Indoor Air Quality; Assessment of Human Exposure to Air Pollution; Decision Making Under Uncertainty: Trade-Offs Between Environmental Health and Other Risks; Economic Analysis of Health Impacts in Developing Countries; Effects of Outdoor Air Pollution on Human Health; Estimating Environmental Health Costs: General Introduction to Valuation of Human Health Risks; Optimal Pollution: The Welfare Economic Approach to Correct Related Market Failures; Power Generation and Human Health; Social Cost-Benefit Analysis of Air Pollution Control Measures at Industrial Point Emission Sources: Methodological Overview and Guidance for the Assessment of Health-Related Damage Costs.

Further Reading

Amann, M., Holland, M., Maas, R., Saveyn, B., Vandyck, T., 2017. Costs, benefits and economic impacts of the EU clean air strategy and their implications on innovation and competitiveness. International Institute for Applied Systems Analysis, Laxenburg, Austria.
Atkinson, G., Mourato, S., Groom, B., Braathen, N.-A., 2018. Cost-benefit analysis and the environment: Further developments and policy use. Organization for Economic Cooperation and Development, Paris.
Bachmann, T.M., 2015. Assessing air pollutant-induced, health-related external costs in the context of nonmarginal system changes: A review. Environmental Science & Technology 49 (16), 9503–9517.
COMEAP, 2010. The mortality effects of long-term exposure to particulate air pollution in the United Kingdom. A report by the Committee on the Medical Effects of Air Pollutants. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/304641/COMEAP_mortality_effects_of_long_term_exposure.pdf.
Holgate, S. (Chair of Working Group), 2016. Every breath we take: The lifelong impact of air pollution. Royal College of Physicians and Royal College of Paediatrics and Child Health, London. https://www.rcplondon.ac.uk/projects/outputs/every-breath-we-take-lifelong-impact-air-pollution.
Holland, M., 2014. Cost-benefit analysis of final policy scenarios for the EU Clean Air Package. Version 2. http://ec.europa.eu/environment/air/pdf/TSAP%20CBA.pdf.
Kitous, A., Keramidas, K., Vandyck, T., et al., 2017. Global energy and climate outlook 2017: How climate policies improve air quality. JRC Science for Policy Report, Sevilla. https://ec.europa.eu/jrc/en/geco.
Nedellec, V., Rabl, A., 2016. Costs of health damage from atmospheric emissions of toxic metals: Part 1: Methods and results. Risk Analysis 36, 2081–2095.
Nedellec, V., Rabl, A., 2016. Costs of health damage from atmospheric emissions of toxic metals: Part 2: Analysis for mercury and lead. Risk Analysis 36, 2096–2104.
OECD, 2016. The economic consequences of outdoor air pollution. Organization for Economic Cooperation and Development, Paris. http://www.oecd.org/environment/indicatorsmodelling-outlooks/the-economic-consequences-of-outdoor-air-pollution-9789264257474-en.htm.
OECD, 2013. Mortality risk valuation in environment, health and transport policies. Organization for Economic Cooperation and Development, Paris.
Pearce, D., Atkinson, G., Mourato, S., 2006. Cost-benefit analysis and the environment: Recent developments. Organization for Economic Cooperation and Development, Paris.


Rabl, A., Spadaro, J., Holland, M., 2014. How much is clean air worth? Cambridge University Press, Cambridge.
Reis, S., Grennfelt, P., Klimont, Z., Amann, M., ApSimon, H., Hettelingh, J.-P., Holland, M., LeGall, A.-C., Maas, R., Posch, M., Spranger, T., Sutton, M.A., Williams, M., 2012. From acid rain to climate change. Science 338, 1153–1154.
Smith, A.C., Holland, M., Korkeala, O., et al., 2015. Health and environmental co-benefits and conflicts of actions to meet UK carbon targets. Climate Policy 16, 253–283.
World Health Organization, 2016. Health risks of air pollution in Europe: HRAPIE project. Recommendations for concentration–response functions for cost–benefit analysis of particulate matter, ozone and nitrogen dioxide. WHO Regional Office for Europe, Bonn. http://www.euro.who.int/__data/assets/pdf_file/0006/238956/Health_risks_air_pollution_HRAPIE_project.pdf?ua=1.

Relevant Websites

http://www.externe.info/externe_d7/ – ExternE – Externalities of Energy.
https://ec.europa.eu/jrc/en/geco – Global Energy and Climate Outlook (GECO 2017).
http://www.iiasa.ac.at/web/home/research/researchPrograms/air/Program-Overview.en.html – IIASA, GAINS model.
https://www.gov.uk/government/groups/committee-on-the-medical-effects-of-air-pollutants-comeap – UK Committee on the Medical Effects of Air Pollutants (COMEAP).
https://www.epa.gov/clean-air-act-overview/benefits-and-costs-clean-air-act – United States Environmental Protection Agency, Benefits and Costs of the Clean Air Act.
http://www.who.int/airpollution/en/ – World Health Organization.

Critical Windows of Children's Development and Susceptibility to Environmental Toxins☆,☆☆
CA Robledo, University of Texas Rio Grande Valley, Harlingen, TX, United States
P Mendola, Eunice Kennedy Shriver National Institute of Child Health and Human Development, Rockville, MD, United States
SG Selevan, Consultant, Silver Spring, MD, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
DDE Dichlorodiphenyldichloroethylene
DDT Dichlorodiphenyltrichloroethane
DES Diethylstilbestrol
IUGR Intrauterine growth restriction
LBW Low birth weight
PAH Polycyclic aromatic hydrocarbon
PBB Polybrominated biphenyl
PCB Polychlorinated biphenyl
SGA Small for gestational age
SIDS Sudden infant death syndrome
TSH Thyroid-stimulating hormone
VLBW Very low birth weight

Background

Today, the major diseases affecting children in developed nations are chronic conditions of multifactorial origin such as asthma, birth defects, developmental disorders (e.g., attention deficit hyperactivity disorder and autism) and cancer. Approximately 10%–20% of childhood illnesses are attributed to genetic factors, but most causes of these diseases remain unknown. The focus of children's environmental health research is to determine to what extent the occurrence and accrual of exposures to environmental chemicals may contribute to particular outcomes of interest. A fundamental principle of children's environmental health is that there are critical windows, or time periods of susceptibility, during lifestages when exposure to environmental chemicals can result in adverse health outcomes. Epidemiological research that aims to identify risk factors for diseases often defines critical windows of susceptibility rather broadly, such as early pregnancy or lactation. Given what is known about human development, critical windows can be defined for time periods of rapid development where environmental insults can disrupt organogenesis or the maturation of physical structures and functional systems.

Defining critical windows of susceptibility for child development is challenging. Human functional systems can be quite complex, and studies that utilize animal models to characterize the timing of human development and organization may not be applicable to humans if they do not completely mimic the processes under study. Effects on child development depend not only on the timing of exposure but also on its dose and duration. It is thought that adverse effects on child development are more likely to occur at higher levels of environmental chemical exposure; however, it has recently been shown that environmental chemicals known as endocrine disruptors can disrupt hormonal processes even at low doses. Effects may not be clinically evident immediately following exposure and may only become apparent with the onset of symptoms after a period of subclinical disease. Lastly, it is important to consider how extrinsic factors such as negative life events, poor nutritional status, culture and/or socioeconomic status modify the impact environmental insults have on child development.

☆ Change History: April 2019. P. Mendola updated the Abstract and Table 1 using information from an EPA publication and a previous WHO reference, added a section on "Overview of Human Development," updated Figure 1 to a color version, adapted the passage on Exposure Timing to Exposure Science, updated all passages on associated outcomes under the Critical Windows of Susceptibility and Associated Outcomes section, added Figures 3–7, and updated references in Further Reading and Relevant Websites.
☆☆ The findings and conclusions in this article are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention. This is an update of C.A. Kimmel, P. Mendola, S.G. Selevan, G.L. Kimmel, Critical Windows of Children's Development and Susceptibility to Environmental Toxins, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 834–843.



Lifestages

Lifestages are periods of life with distinct anatomical, physiological, and behavioral or functional characteristics that contribute to potential differences in vulnerability to environmental exposures. Child development can be thought of as a sequence of lifestages that begin before conception and continue through adulthood. Preconception measures generally focus on parental environmental exposures that act as germ cell toxicants. These exposures can compromise the quality of gametes available for reproduction. For women, this key period actually begins in utero: by the sixth month of fetal development, a woman's lifetime supply of eggs is complete. The prenatal lifestage consists of the embryonic and fetal stages and extends from conception to birth. Infancy begins at birth and continues through the first year of life. Childhood includes all the lifestages from infancy through adolescence, which can encompass ages 12 to adulthood (commonly defined as 21 years of age), when the individual reaches reproductive capacity.

Environmental chemical exposures can result in adverse outcomes in the lifestage in which they occur, or the effects may not be apparent until later in life. A comprehensive approach to assessing the health risks to children from environmental exposure would consider both early-life exposures and those that occur across the lifespan. This approach would allow the study of adverse outcomes that may take time to manifest. Lifestages can be further subdivided into critical windows of susceptibility based on what is known about the development of different organ systems, anatomy, physiology and behavior. Specific age groups proposed to assess the toxic effects of environmental chemicals through the lifespan are shown in Table 1.

Brief Overview of Human Development

Throughout development, the maturity of biological systems and the ability of the individual to respond to environmental chemical exposures vary, as do the types of environmental exposures that may be encountered. Preconception exposures can occur to either parent and can directly affect the gametes (sperm and ova), which form the conceptus. The prenatal lifestage involves the development of the embryo and fetus from a mass of undifferentiated cells to an individual possessing the primary attributes necessary for life independent of the maternal environment (Fig. 1). After fertilization, the resulting zygote undergoes division, and the resulting blastocyst must successfully implant into the uterine lining for further development to occur. Congenital malformations are unlikely to result from exposure of embryos to teratogens during the first 2 weeks of development; however, death of the embryo and spontaneous abortion can occur. The prenatal lifestage can be further subdivided into the embryonic period and the fetal period.

Table 1    Lifestages and age groups that can be used to define a child's susceptibility to environmental exposures

Lifestage       Developmental stage/event   Time period
Preconception   Prefertilization            Reproductive age
Prenatal                                    Conception to birth
                                            Conception to implantation
                                            Implantation to 8 weeks of pregnancy
                                            8 weeks of pregnancy to birth
                                            29 weeks of pregnancy to 7 days after birth

Blood lead levels above 80 µg dL⁻¹ usually result in permanent health damage. Of particular concern is the association of lead with developmental and reproductive toxicity. Reports documented by ATSDR and HSDB indicate that lead exposure in male lead workers at blood levels of 40–50 µg dL⁻¹ severely depressed sperm counts. Similar exposure to lead during pregnancy produced adverse fetal effects, including increased preterm delivery, low birth weight, and impaired mental development; these findings were noted at blood lead levels of 10–15 µg dL⁻¹. Animal studies have shown similar adverse effects. In a paper by Thoreux-Manley, it was shown that administration of lead acetate at 8 mg kg⁻¹ day⁻¹ for 5 weeks produced adverse effects on Leydig cell function in rats through direct impairment of steroidogenesis. In other reports by Mello, when pregnant rats were exposed to lead in the form of lead acetate in drinking water at 1.0 mM during the pre- and postnatal periods, alterations in development were identified in the offspring, affecting specific motor activity skills. Similarly, a study looking at the effects of lead exposure on the development of the reproductive system in Sprague–Dawley pups, in which dams received lead in drinking water at concentrations of 0.05%, 0.15%, and 0.45% (w/v) from gestation day 5 to

Fig. 3 Potential lead exposure in children from ingestion of contaminated paint chips (http://www.health.state.tn.us/images/lead.jpg).


Fig. 4 Composition of a typical lead acid battery: positive and negative terminals, vent caps, electrolyte solution (dilute sulfuric acid), cell connectors, protective casing, positive electrode (lead dioxide), cell divider, and negative electrode (lead) (http://www.alternative-energy-news.info/images/technical/lead-acid-battery.jpg).

weaning indicated that the reproductive axis is particularly sensitive to lead, as a result of delayed sexual maturation through suppression of sex steroid biosynthesis, as described by Ronis. A series of experiments by the same group examined the effects of lead administered to rats in utero and pre- and postpubertally via drinking water at a concentration of 0.6% (w/v). The results indicated few effects in adult animals; most of the toxicity was observed in animals receiving lead in utero and prepubertally. In male animals, secondary sex organ weights were decreased before puberty, in addition to suppression of serum testosterone in animals exposed in utero. Female animals exposed prepubertally showed delayed vaginal opening and disruption of estrus cycling, in addition to significant suppression of circulating estradiol following in utero exposure. Effects of lead on luteinizing hormone (LH) at the level of the hypothalamic–pituitary axis indicated a direct effect on gonadal steroid biosynthesis. The effects of lead therefore suggest perturbations in sexual maturation and growth, possibly due to alterations in the hypothalamic–pituitary–gonadal axis.

Mercury

Mercury is a heavy metal that can be found naturally in the environment, but more commonly its presence is the result of human introduction through chemical by-products, such as spillage into water systems, volatilization from combustion, or the mercuric biocides commonly used in agriculture for their effective control of microorganisms. One of the more common forms of mercury is its methylated form, methylmercury, which can be found in fish and shellfish. According to the World Health Organization, the earliest effects of methylmercury in humans occur at blood concentrations between 200 and 500 ng mL⁻¹.

The two most widely known epidemics of methylmercury poisoning occurred in Minamata Bay and Niigata, Japan, from 1953 to the early 1960s. These episodes were caused by the industrial release of methyl and other mercuric compounds into neighboring waters, followed by accumulation of the mercury in edible fish. The median level of total mercury in fish was estimated at between 10 and 11 mg kg⁻¹ fresh weight. By 1974, a total of 1200 cases of methylmercury poisoning had been identified, of which 55 proved fatal. The highest concentrations of mercury were found in the blood and hair. Affected people developed a neurological syndrome, sometimes called Minamata disease (see Fig. 5), consisting of numbness of the limbs, muscle weakness, damage to sight and hearing, and, in severe cases, insanity and coma followed by death.

Methylmercury poisoning has a pronounced toxic effect on the developing fetus; the fetal brain appears to be the most sensitive organ. Mercury can occur in many forms and has been associated with spontaneous abortions and menstrual disorders. Marsh evaluated mothers and their infants exposed during pregnancy to methylmercury used as a wheat fungicide. Peak maternal hair levels showed a pattern related to the frequency of maternal symptoms and neurological effects in infants exposed in utero. Severe neurological deficits were observed in children with maternal hair levels of mercury between 165 and 320 ppm, whereas mothers with peak hair levels of less than 68 ppm showed minimal symptoms. A greater fetal risk appeared to be related to exposure during the second trimester.

Methylmercury has been shown to be a potent developmental toxicant. Chang describes how this chemical can cross the placental barrier and accumulate in the conceptus. Accordingly, human neonates born to mothers exposed to methylmercury through


Fig. 5 This photo, by William Eugene Smith, shows an outwardly healthy mother bathing her 16-year-old daughter, poisoned in utero and physically disabled since birth due to environmental industrial mercury poisoning in the Minamata Bay tragedy (www.hamline.edu/personal/amurphy01/es110/eswebsite/ProjectsSpring03/ebarker/Minamata%20Web%20Page.htm).

consumption of contaminated fish or grain have been found to have higher red blood cell levels of mercury than those born to unexposed mothers. In addition, developmental toxicity was seen in the absence of maternal effects.

The effects of mercuric chloride on the reproductive performance of mice were evaluated by Khan. Male and female mice were exposed to 0.00, 0.25, 0.50, and 1.00 mg kg⁻¹ day⁻¹ of inorganic mercury. Fertility and survival indices in these animals were significantly reduced; however, there were no effects on litter size, nor any evidence of mercury-induced toxicity in the clinical pathology parameters or histopathology evaluations. The results suggested that oral exposure to 0.25–1.00 mg kg⁻¹ day⁻¹ of inorganic mercury produces adverse effects on the reproductive performance of mice in the absence of overt mercury toxicity.

A report by Lee and Dixon compared the reproductive effects of organic methylmercury hydroxide and inorganic mercuric chloride in male mice. These compounds were administered as a single intraperitoneal injection at a dose level of 1 mg kg⁻¹, or spermatogenic cells were exposed in vitro to mercury concentrations ranging from 10⁻³ to 10⁻⁸ M. The in vitro experiments indicated effects on spermatogonia and spermatids. In vivo, administration of methylmercury and of inorganic mercury both significantly affected the spermatogenic cells. The fertility profiles from serial mating studies indicated that the primary effect of methylmercury was on spermatogonial cells, premeiotic spermatocytes, and early elongated spermatids, with no apparent effect on spermatozoa in the testis. Inorganic mercury also affected spermatogonial and premeiotic cells, but the effect was smaller than that seen with methylmercury. These adverse effects on the sperm cells were found to be reversible. The authors concluded that organic and inorganic mercury both adversely affected early sperm production, with the greater effect shown by methylmercury. This is important to consider, as methylmercury is the form most commonly found in fish ingested by humans and therefore has important health consequences for them.

Conclusion

There is sufficient evidence in the literature to support the disruptive effects of the three chemicals, TCDD, lead, and mercury, and to identify them as potential agents targeting developmental and reproductive function in animals and humans. Key factors in studying these compounds include identification of the specific routes of exposure, such as inhalation of aerial contamination, dermal deposition, or ingestion from contaminated sources. A thorough understanding of the complicated interactions of these chemicals with the environment and with biological systems is instrumental in identifying their risk to humans. The ultimate goal is to find more efficient ways first to identify, and then to minimize or even eliminate, exposure to these agents in daily life.

See also: Developmental Immunotoxicants; Monetary Valuation of Trace Pollutants Emitted Into Air by Industrial Facilities; Neurodevelopmental Toxicants; Organophosphate Insecticides: Neurodevelopmental Effects.


Further Reading

Agency for Toxic Substances and Disease Registry (ATSDR), 1997. Toxicological profile for lead (update). Public Health Service, US Department of Health and Human Services, Atlanta, GA. Draft for public comment.
Ahrenhoerster, L.S., Leuthner, T.C., Tate, E.R., Lakatos, P.A., Laiosa, M.D., 2015. Developmental exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin attenuates later-life Notch 1-mediated T cell development and leukemogenesis. Toxicology and Applied Pharmacology 283 (2), 99–108.
Bjerke, D.L., Sommer, R.J., Moore, R.W., Peterson, R.E., 1994. Effects of in utero and lactational 2,3,7,8-tetrachlorodibenzo-p-dioxin exposure on responsiveness of the male rat reproductive system to testosterone stimulation in adulthood. Toxicology and Applied Pharmacology 127 (2), 250–257.
Bryant, P.L., Schmid, J.E., Fenton, S.E., Buckalew, A.R., Abbott, B.D., 2001. Teratogenicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in mice lacking the expression of EGF and/or TGF-alpha. Toxicological Sciences 62 (1), 103–114.
Buser, M.C., Abadin, H.G., Irwin, J.L., Pohl, H.R., 2018. Windows of sensitivity to toxic chemicals in the development of reproductive effects: An analysis of ATSDR's toxicological profile database. International Journal of Environmental Health Research 28 (5), 553–578.
Chahoud, I., Hartmann, J., Rune, G.M., Neubert, D., 1992. Reproductive toxicity and toxicokinetics of 2,3,7,8-tetrachlorodibenzo-p-dioxin. 3. Effects of single doses on the testis of male rats. Archives of Toxicology 66 (8), 567–572.
Chang, L.W., 1996. Toxicology of Metals. Lewis Publishers, Boca Raton, FL, p. 1054.
Cheung, M.O., Gilbert, E.F., Peterson, R.E., 1981. Cardiovascular teratogenicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin in the chick embryo. Toxicology and Applied Pharmacology 61 (2), 197–204.
Couture, L.A., Abbott, B.D., Birnbaum, L.S., 1990. A critical review of the developmental toxicity and teratogenicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin: Recent advances toward understanding the mechanism. Teratology 42 (6), 619–627 (review).
Faqi, A.S., Dalsenter, P.R., Merker, H.J., Chahoud, I., 1998. Reproductive toxicity and tissue concentrations of low doses of 2,3,7,8-tetrachlorodibenzo-p-dioxin in male offspring rats exposed throughout pregnancy and lactation. Toxicology and Applied Pharmacology 150 (2), 383–392.
Gray Jr., L.E., Ostby, J.S., 1995. In utero 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) alters reproductive morphology and function in female rat offspring. Toxicology and Applied Pharmacology 133 (2), 285–294.
Gray Jr., L.E., Kelce, W.R., Monosson, E., Ostby, J.S., Birnbaum, L.S., 1995. Exposure to TCDD during development permanently alters reproductive function in male Long Evans rats and hamsters: Reduced ejaculated and epididymal sperm numbers and sex accessory gland weights in offspring with normal androgenic status. Toxicology and Applied Pharmacology 131 (1), 108–118.
Hassoun, E., d'Argy, R., Dencker, L., 1984. Teratogenicity of 2,3,7,8-tetrachlorodibenzofuran in BXD recombinant inbred strains. Toxicology Letters 23, 37–42.
Hazardous Substances Data Bank (HSDB), 1993. US Department of Health and Human Services, National Toxicology Information Program, National Library of Medicine, Bethesda, MD (online database).
Hurst, C.H., DeVito, M.J., Birnbaum, L.S., 2000. Tissue disposition of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in maternal and developing Long Evans rats following subchronic exposure. Toxicological Sciences 57 (2), 275–283.
Huuskonen, H., Unkila, M., Pohjanvirta, R., Tuomisto, J., 1994. Developmental toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in the most TCDD-resistant and -susceptible rat strains. Toxicology and Applied Pharmacology 124 (2), 174–180.
Khan, A.T., Atkinson, A., Graham, T.C., Thompson, S.J., Ali, S., Shireen, K.F., 2004. Effects of inorganic mercury on reproductive performance of mice. Food and Chemical Toxicology 42 (4), 571–577.
Lee, I.P., Dixon, R.L., 1975. Effects of mercury on spermatogenesis studied by velocity sedimentation cell separation and serial mating. The Journal of Pharmacology and Experimental Therapeutics 194 (1), 171–181.
Li, X., Johnson, D.C., Rozman, K.K., 1995. Effects of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on estrous cyclicity and ovulation in female Sprague–Dawley rats. Toxicology Letters 78 (3), 219–222.
Mably, T.A., Bjerke, D.L., Moore, R.W., Gendron-Fitzpatrick, A., Peterson, R.E., 1992. In utero and lactational exposure of male rats to 2,3,7,8-tetrachlorodibenzo-p-dioxin. 3. Effects on spermatogenesis and reproductive capability. Toxicology and Applied Pharmacology 114 (1), 118–126.
Marsh, D.O., Myers, G.J., Clarkson, T.W., et al., 1981. Dose–response relationship for human fetal exposure to methylmercury. Clinical Toxicology 18, 1311–1318.
Mello, C.F., Kraemer, C.K., Filippin, A., et al., 1998. Effect of lead acetate on neurobehavioral development of rats. Brazilian Journal of Medical and Biological Research 31 (7), 943–950.
Mimura, J., Yamashita, K., Nakamura, K., et al., 1997. Loss of teratogenic response to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in mice lacking the Ah (dioxin) receptor. Genes to Cells 2 (10), 645–654.
Mocarelli, P., Gerthoux, P.M., Ferrari, E., et al., 2000. Paternal concentrations of dioxin and sex ratio of offspring. Lancet 355 (9218), 1838–1839.
Murray, F.J., Smith, F.A., Nitschke, K.D., Humiston, C.G., Kociba, R.J., Schwetz, B.A., 1979. Three-generation reproduction study of rats given 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in the diet. Toxicology and Applied Pharmacology 50 (2), 241–252.
Ohsako, S., Miyabara, Y., Sakaue, M., et al., 2002. Developmental stage-specific effects of perinatal 2,3,7,8-tetrachlorodibenzo-p-dioxin exposure on reproductive organs of male rat offspring. Toxicological Sciences 66 (2), 283–292.
Peters, J.M., Narotsky, M.G., Elizondo, G., Fernandez-Salguero, P.M., Gonzalez, F.J., Abbott, B.D., 1999. Amelioration of TCDD-induced teratogenesis in aryl hydrocarbon receptor (AhR)-null mice. Toxicological Sciences 47 (1), 86–92.
Poland, A., Glover, E., 1980. 2,3,7,8-Tetrachlorodibenzo-p-dioxin: Segregation of toxicity with the Ah locus. Molecular Pharmacology 17, 86–94.
Ronis, M.J., Badger, T.M., Shema, S.J., Robertson, P.K., Shaik, F., 1995. Reproductive toxicity and growth effects in rats exposed to lead at different periods during development. Toxicology and Applied Pharmacology 136 (2), 361–371.
Ronis, M.J., Gandi, J., Badger, T., 1998. Endocrine mechanisms underlying reproductive toxicity in the developing rat chronically exposed to dietary lead. Journal of Toxicology and Environmental Health, Part A 54 (2), 77–99.
Sergeyev, O., Burns, J.S., Williams, P.L., Korrick, S.A., Lee, M.M., Revich, B., Hauser, R., 2017. The association of peripubertal serum concentrations of organochlorine chemicals and blood lead with growth and pubertal development in a longitudinal cohort of boys: A review of published results from the Russian Children's Study. Reviews on Environmental Health 32 (1–2), 83–92.
Theobald, H.M., Peterson, R.E., 1997. In utero and lactational exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin: Effects on development of the male and female reproductive system of the mouse. Toxicology and Applied Pharmacology 145 (1), 124–135.
Thoreux-Manley, A., Le Goascogne, C., Segretain, D., Jegou, B., Pinon-Lataillade, G., 1995. Lead affects steroidogenesis in rat Leydig cells in vivo and in vitro. Toxicology 103 (1), 53–62.
Weber, H., Harris, M.W., Haseman, J.K., Birnbaum, L.S., 1985. Teratogenic potency of TCDD, TCDF and TCDD–TCDF combinations in C57BL/6N mice. Toxicology Letters 26 (2–3), 159–167.
Wolf, C.J., Ostby, J.S., Gray Jr., L.E., 1999. Gestational exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) severely alters reproductive function of female hamster offspring. Toxicological Sciences 51 (2), 259–264.
World Health Organization (WHO), 1976. Environmental health criteria 1: Mercury. World Health Organization, Geneva, Switzerland.

Developmental Immunotoxicants☆
MI Luster, National Institute for Occupational Safety and Health, Morgantown, WV, United States
RR Dietert, Cornell University, Ithaca, NY, United States
DR Germolec, National Institute of Environmental Health Sciences/National Toxicology Program, Research Triangle Park, NC, United States
RW Luebke, US Environmental Protection Agency, National Health and Environmental Effects Research Laboratory, Research Triangle Park, NC, United States
SL Makris, US Environmental Protection Agency, National Center for Environmental Assessment, Washington, DC, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
BALT Bronchial–alveolar lymphoid tissue
CD Cluster of differentiation
CDC Centers for Disease Control and Prevention
CT Computerized tomography
DES Diethylstilbestrol
DIT Developmental immunotoxicology
ETS Environmental tobacco smoke
FQPA Food Quality Protection Act
HIV Human immunodeficiency virus
IEL Intraepithelial lymphocyte
Ig Immunoglobulin
MHC Major histocompatibility complex
PCB Polychlorinated biphenyl
PM Particulate matter
PMN Polymorphonuclear cell
ROFA Residual oil fly ash
RSV Respiratory syncytial virus
TCR T cell receptor
TREC T cell receptor excision circle
Tregs Regulatory T cells

Immune System Development

Development and maturation of the immune system begin early in gestation and are essentially complete at puberty in humans and laboratory animal models. Key events include the appearance of hematopoietic stem cells in the yolk sac, followed by production of immune system cells in the fetal liver. Immune system histogenesis and organogenesis are marked by the sequential appearance of lymphoid cells in distinct organs, including the bone marrow, thymus, spleen, lymph nodes, and mucosal-associated lymphoid tissues, and in the peripheral circulation. Although antibody synthesis is detectable before birth, affinity and specificity toward antigens are low and do not reach adult levels until after birth. Cells of the innate immune system, including granulocytes and macrophages, are also present before birth, but are not functionally equivalent to adult cells. The majority of immune system development in humans takes place in the prenatal period, during the first two trimesters, whereas in rodents these events take place during the second and third trimesters and extend into the postnatal period. These

☆ Change History: September 2018. The section editor Orish Ebere Orisakwe made changes to the references. This report has been reviewed and approved for publication by the Environmental Protection Agency's Office of Research and Development and the National Institute for Occupational Safety and Health. Approval does not signify that the contents necessarily reflect the views and policies of the Agency, nor does the mention of trade names or commercial products constitute endorsement or recommendation for use. This is an update of M.I. Luster, R.R. Dietert, D.R. Germolec, R.W. Luebke, S.L. Makris, Developmental Immunotoxicants, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 44–50.


Table 1    Comparison of immune system developmental landmarks between humans and mice

Event                                                  Approximate % of term (human)   Approximate % of term (mouse)
Appearance of T cells in fetal liver                   15–20                           67
Organogenesis of thymus begins                         15                              52
Lymph nodes evident                                    20–30                           50
Spleen develops                                        25–35                           62
B cell lymphopoiesis begins in bone marrow             30                              81
B lymphocytes detectable in blood                      30                              62
CD4+ and CD8+ T cells detectable in spleen             35                              91
Thymus development completed                           37–40                           62
Bone marrow becomes the major site of hematopoiesis    55                              83
T cell receptor expression in periphery                58                              Early postnatal development

temporal differences must be taken into account when designing and interpreting data from studies in animal models that may target only a portion of gestation or early prenatal life. Comparisons of immune system developmental landmarks are presented in Table 1.

Neonates are more susceptible to infections that require production of antibodies and complement for resistance because immune function is not mature at birth. Low rates of antibody synthesis, combined with low expression levels of innate immune function, lead to inefficient bacterial killing and subsequent development or worsening of infection. Bacteria commonly associated with neonatal sepsis and infections are initially controlled by polymorphonuclear leukocytes (PMNs; cells of the innate immune system that are the first to arrive at sites of infection or tissue damage). This initial innate response is critical to recovery because bacteria replicate so rapidly (some as often as once every 20 min) that failure to control the early phase of bacterial growth can result in overwhelming infection before the adaptive response can adequately participate. Bacteria that are engulfed by PMNs are destroyed by various lytic enzymes contained in cytoplasmic granules. However, newborn PMNs contain approximately half the levels of these enzymes found in adult cells. This functional impairment is compounded by a relatively low rate of PMN production by the neonatal bone marrow; thus, the supply of PMNs can be exhausted during infection.

Recovery from infection depends on coating of bacteria by antibody to facilitate phagocytosis. At birth, neonates have approximately 90% of adult IgG levels, which are maternally derived, as IgG is actively and passively transported across the placenta. This form of passive protection is present against those organisms to which the mother has adequate antibody titers, but wanes as maternal antibody is catabolized, such that infants at 1–3 months of age have only 30% of total adult immunoglobulin (Ig) levels. Antibody synthesis increases with age. Total IgM, IgG, and IgA levels are approximately 30%, 37%, and 11%, respectively, of adult levels at 1–3 months of age, and 60%, 80%, and 75% of adult levels in 12–16-year-olds. IgM and IgG levels approximately half those of healthy adults are present in 7–12-month-old infants, but IgA levels do not reach 50% of adult levels until children are 3–5 years old.

Although neonates have a higher percentage of total lymphocytes in the circulation than adults, the majority (~90%) of thymus-derived lymphocytes are immature, compared with approximately 50% in adults. Immature cells are incapable of making cytokines or generating memory cells. Furthermore, the balance between Th1 and Th2 cytokine production differs between neonates and adults. The relative abundance of Th1 and Th2 cytokines determines whether the cellular (Th1) or humoral (Th2) arm of adaptive immunity predominates in response to foreign antigens. At birth, cytokine production is skewed in favor of Th2 responses, and this response phenotype persists in children up to 12 years of age, decreasing the efficiency of host-protective responses, particularly to intracellular bacteria. Similar age-related defects in immune function may also be a predisposing factor in repeated middle ear infections in young children, as 5%–10% of children experience four or more such infections within the first year of life.

Critical Windows of Immune System Vulnerability Having Systemic Impact

Six critical windows of immune vulnerability have been described in which exposure to specific xenobiotics in rodent models can produce systemic immune effects. Among the earliest of these processes is the emergence of myelomonocytic cells and the seeding of tissues and organs; in humans, this occurs between 4 and 6 weeks of gestation. Resident macrophage function can affect various tissues, ranging from the liver (Kupffer cells) to the reproductive organs (e.g., testicular macrophages). Second, seeding of the thymus with prothymocytes occurs primarily between 8 and 12 weeks of gestation. This is followed by an additional developmental window (12–26 weeks), in which thymocyte populations are expanded through positive selection and autoreactive T cell clones are deleted via negative selection. This maturational process, including T cell education, is critical for effective postnatal T-dependent immune responses while minimizing the risk of autoimmune disease. The production, seeding to the periphery, and activation of T regulatory cells (Tregs) as well as Th17 cells are important in later life for regulating autoimmune reactions and certain local inflammatory reactions. In humans, this critical window falls between 12 weeks of gestation and parturition, with the possibility of additional early-life vulnerability until approximately 2 years of age. The balance and effective function of these cells influence later-life host defense as well as the risk of autoimmunity.

A more recently identified critical developmental window involves the alteration and maturation of macrophage populations in response to a series of surfactant proteins known as collectins. This maturation is an important factor in both parturition and postnatal innate immunity. Although surfactant proteins were initially recognized as important for lung function, the response of macrophages to these proteins extends well beyond the lung, producing systemic outcomes. This maturational process impacts neonatal innate immunity and also has a regional impact, since it is critical for function within the bronchial–alveolar lymphoid tissue (BALT). The primary macrophage–surfactant critical window occurs between 16 and 38 weeks of gestation, with additional postnatal maturation for 6–9 months following birth.

Finally, the maturation of dendritic cells to promote Th1 responses, along with concomitant shifts among Th cell populations, forms a critical immune developmental window that impacts the balance of childhood-acquired immune responses. A successful pregnancy requires the maintenance of a semiallogeneic fetus. To facilitate this, a Th2-skewed environment is promoted during gestation, thereby ensuring that the risk of a Th1-driven immune response that could jeopardize the pregnancy is minimized. However, this Th cell balance is normally corrected at birth to provide the child with a complete spectrum of host defense and to reduce the risk of Th2-promoted diseases. A shift in dendritic cell maturation is essential for balanced immune responses in the child, promoting both increased antiviral responses and a reduced risk of asthma and allergic diseases. In humans, this immune vulnerability window begins just before parturition and extends until approximately 2 years of age.

Critical Windows Having Local–Regional Impact

Several immune maturational processes are important to regional and tissue-specific function in the child. Impaired immune maturation in early life can impact the reproductive, endocrine, hepatic, cardiovascular, gastrointestinal, respiratory, and neurological systems. However, a handful of specific prenatal immune maturation events affecting specific tissues warrant particular attention relative to childhood health risks. For example, intraepithelial lymphocytes (IELs) represent a specialized population of particular importance to the gastrointestinal tract; IEL function can influence food tolerance as well as food allergies. In humans, the seeding, maturation, and expansion of IELs occur between 14 weeks of gestation and 1 year after birth. Similarly, resident myelomonocytic cells in the brain play critical developmental roles and are a potential target through which xenobiotics may induce neurological impairment. Developmental windows of immune vulnerability affecting this tissue include the first appearance of microglial cells (5.5–7 weeks of gestation), the seeding and expanded distribution of microglial cells in the cerebral wall (12–24 weeks of gestation), and the appearance and expansion of astrocyte populations (16 weeks of gestation to 2 years after birth).

Neonates and Asthma

The prevalence of asthma in Western society has increased significantly over the past 30 years. The Centers for Disease Control and Prevention (CDC) estimated that approximately 16 million people, or 7.5% of the US population, had asthma in 2004. Although the number of people affected and the related morbidity and costs are disproportionately higher in children, accurate incidence data for childhood asthma are lacking; this is despite the fact that 40% of all children show a sustained wheezing illness in the first year of life. Although the reasons for this rapid increase are unclear, epidemiological and animal studies have suggested that a number of environmental factors may contribute.

Among the environmental factors postulated to aggravate childhood asthmatic responses, environmental tobacco smoke (ETS) is one of the most clearly established. Meta-analyses on parental smoking and asthma prevalence in children have shown a dose-dependent increase in asthma rates, with the strongest effect in younger children. In addition, maternal smoking during pregnancy (i.e., in utero exposure) has been associated with increased asthma risk in offspring and persistent deficits in lung function. Studies showing associations between certain air pollutants and asthma in children (such as ozone, nitrogen dioxide (NO2), sulfur dioxide, and particulate matter (PM), including those in diesel exhaust) are less definitive. Many of these materials produce oxidative stress in the upper and lower respiratory tract, which can lead to airway inflammation and airway hyperreactivity and, thus, possibly exacerbate allergic asthma. To date, epidemiological studies dealing with children have employed relatively small populations, but large-scale studies, such as the National Children's Study, which is designed to track disease in a large population from before birth and relate it to environmental exposure, should provide better information.

Indoor environments are also associated with childhood asthma, although this association is complicated by the fact that children now spend more time indoors and indoor environments have been made more airtight to improve energy efficiency. Various allergens, such as house dust mite, mold, and rodent and pet allergens, are sometimes present indoors at high concentrations, and several birth cohort studies have shown a relationship between exposure and incident asthma. To date, prevention trials in which the suspect household allergens were reduced have shown limited success in reducing overall incidence.

In children, exposure to some biologics seems to be an important risk factor for the development of asthma. A number of studies have suggested that an increase in lower respiratory viral infections early in life (e.g., from day care facilities), particularly with respiratory syncytial virus (RSV), is a risk factor for the subsequent development of asthma. Similarly, evidence suggests that the lack of prenatal or early postnatal exposure to endotoxin is also a risk factor for the later occurrence of asthma. Evidence to support this observation originates primarily from several cross-sectional studies, which have shown a lower risk of allergic sensitization in children who grew up on farms or were exposed to pets early in life. These observations have led to the "hygiene hypothesis," which states that exposure to microbial products early in life is important for normal immune development (i.e., Th1:Th2 cell balance) and the development of tolerance.

Although beyond the scope of this review, factors other than environmental pollutants, such as obesity, reduced exercise, maternal stress, and changes in diet, may also contribute to the rise in childhood asthma. A similar pattern of lung development occurs in most mammalian species, although the timing and onset of each stage differ significantly based on the actual length of the gestational period and the relative degree of lung maturation at birth. Appreciating this fact, several investigations have been conducted using pregnant or weanling mice with exposure paradigms that simulate in utero or neonatal exposure in humans. Consistent with the human data, these studies in mice suggested that in utero exposure to pollutants, including ETS, diesel exhaust particles, or residual oil fly ash (ROFA), may act as an adjuvant, increasing the likelihood that asthma is induced later in life.

Agents That Suppress the Developing Immune System

A number of experimental studies have described associations between chemical exposure, altered immune system end points, and the frequency of infections following exposure to environmental chemicals and therapeutics. In many instances, such as for persistent organochlorine compounds, xenoestrogens, and some therapeutics, there is supporting evidence from both epidemiologic studies conducted in children following early-life exposures and laboratory animal studies. In laboratory rodents, the chemical agents that have been reported to modulate the developing immune system and lead to altered function later in life are highly diverse. They include, among others, halogenated aromatic hydrocarbons (i.e., dioxins and polychlorinated biphenyls (PCBs)), organotins, organochlorines, diethylstilbestrol (DES), polycyclic aromatic hydrocarbons (i.e., benzo[a]pyrene and dibenz[a,h]anthracene), pesticides (i.e., chlordane), heavy metals (i.e., lead and methyl mercury), therapeutic agents (i.e., cyclosporin A and prednisone), and drugs of abuse (i.e., nicotine, cocaine, and ethanol).

One of the best-studied associations in pediatric populations is that between PCB exposure and infections in children from populations in China, Japan, the Netherlands, and Canada. Studies of accidentally exposed populations in Japan (referred to as Yusho disease) and in China (Yu-Cheng disease) suggested an association of PCBs, their thermal breakdown products (quaterphenyls), and polychlorinated dibenzofurans with immune abnormalities and increased infections. Children born to PCB-exposed mothers in the Yu-Cheng study group had a higher frequency of respiratory infections and otitis media compared to matched, unexposed controls. In the Yusho study population, recurrent respiratory infections and elevated blood levels of pentachlorophenol (PCP) were negatively associated with lymphocyte counts, CD4:CD8 ratios, and absolute counts of a number of leukocyte subpopulations. Varying degrees of association between PCBs and an increased frequency of otitis media in children have also been described in pediatric populations in North America. A positive association was found between body burdens of PCBs and DDE (the primary metabolite of DDT), or between PCBs and hexachlorobenzene, and otitis media in Michigan; and in studies from Arctic Quebec, Canada, the relative risk of recurrent episodes of otitis media was higher in breast-fed infants with high levels of organochlorine exposure. In Dutch preschool children, PCB levels in breast milk (nonortho and coplanar PCBs) were also associated with increased recurrent otitis media and other symptoms of respiratory infection. In these children, the body burden of PCBs at 42 months of age was associated with a higher prevalence of recurrent otitis media and chicken pox. The social or economic impact, if any, of background levels of these chemicals on disease burden has not been determined.

Significant effects on the developing immune system have also been described in the offspring of women who took various therapeutic drugs during pregnancy. For example, bone marrow function, thymus size, and serum Ig levels have been reported to be suppressed for up to 1 year in infants born to women given azathioprine during gestation.
The use of cyclosporin A during pregnancy has been associated with delayed development or maturation of T and B cells, decreased expression of major histocompatibility complex (MHC) antigens, and decreased Ig levels. Isolated case reports have linked in utero exposure to immunosuppressive therapeutics with autoimmune disease, although no substantial epidemiology studies have been conducted. In contrast, both male and female children of DES-exposed mothers have been reported to show an increased incidence of autoimmune diseases and asthma. It has been suggested that the significantly shorter half-life of immunosuppressive therapeutics explains why these compounds are not reported to increase the incidence of infections in children exposed during gestation. In rodents, exposure to DES, cyclosporin A, or other immunosuppressive drugs during gestation produces effects that persist into the equivalent of young adulthood and, depending on the exposure window, may last into adulthood.

There is also evidence that lifestyle factors, such as recreational drug use, alcohol abuse, nutritional status, and smoking, can affect immune system development. In instances such as tobacco smoke, the effects seem to directly target immune cells and tissues. However, with gestational exposure to recreational drugs, such as ethanol and opiates, which predominantly affect cell-mediated immunity, the immunologic defects may result from indirect mechanisms, such as interactions between the developing central nervous system and circulating glucocorticoid levels or altered sympathetic innervation of immune tissues. Altered immune function has been reported in human and experimental animal neonates whose mothers smoked cigarettes. It has been hypothesized that the increased risk of cancer in children of women who smoked while pregnant may be due to the immunosuppressive properties of cigarette smoke.

A Testing Framework for Developmental Immunotoxicants in Experimental Animals

Historically, a great deal of concern has been expressed regarding the need for identification and characterization of the developmental immunotoxic potential of environmental contaminants, including pesticides, industrial chemicals, and pollutants.

Table 2 Selected historical landmarks in the development of a framework for DIT assessment of environmental toxicants

Year | Event | Impact/conclusions
1993 | NRC publication: "Pesticides in the Diet of Infants and Children" | Recommended testing; acknowledged age-related susceptibility and recommended DIT assessment
1996 | EPA legislation: Food Quality Protection Act; Safe Drinking Water Act Amendment | Required characterization of susceptibility and assessment of risk for infants and children
1997 | Executive Order 13045 | Required federal agencies to address risks to children
1999 | EPA Workshop on Critical Windows of Exposure | Related timing of developmental exposure to differential effects on immune system function
2001 | ILSI/HESI DIT workshop | Proposed approaches to DIT testing; identified issues for further resolution
2001 | NIEHS/NIOSH DIT workshop | Defined appropriate experimental design for DIT testing, including limitations and data gaps
2003 | ILSI/HESI DIT workshop | Proposed framework for DIT testing

It is generally acknowledged that the National Research Council publication, "Pesticides in the Diet of Infants and Children," was an early landmark event in raising regulatory attention to the issue of developmental immunotoxicity (DIT) testing. DIT is defined as the effects on the immune system resulting from pre- or postnatal exposure to physical factors (e.g., ionizing and ultraviolet radiation), chemicals (including drugs), biological materials, medical devices, and, in certain instances, physiological factors, collectively referred to as agents. It encompasses studies of various immune pathologies associated with pre- or postnatal exposure of humans and wildlife species, including allergic hypersensitivity, immune dysregulation (suppression or enhancement), autoimmunity, and chronic inflammation. As summarized in Table 2, significant environmental legislation soon followed, which specifically required the assessment of risk for susceptible life stages, focusing in particular on infants and children.

To address the best approaches and methods for the assessment of DIT, a number of collaborative scientific workshops were conducted. These addressed a framework for DIT testing, developed scientific consensus on general approaches and specific issues, characterized normal immune system development in humans and test animal models, and identified critical windows of developmental exposure for the perturbation of immune system structure or function. Although there is currently no formally published DIT testing guideline per se for the screening of environmental toxicants, a study protocol for the assessment of immunosuppression following developmental exposures has been discussed and supported by many scientists working in the area. The need for tests to assess allergic potential has also been strongly supported by scientists, but no consensus exists that validated or even practical test models are available. The DIT study protocol for immunosuppression can be conducted independently or, to reduce the use of test animals and refine the testing paradigm, incorporated into another protocol, for example, a reproduction study. The latter approach is often encouraged by regulatory agencies for environmental chemical assessments.

In a typical DIT screening study, maternal rats would be administered test agents from at least the time of implantation (approximately gestation day 6) to parturition and into the lactation period. It is critical that the offspring be continuously treated during all sensitive phases (windows) of immune system development. Therefore, agent administration would continue in the offspring, either via maternal milk (as confirmed by pharmacokinetic data) or directly to the offspring by the most appropriate method, through the time of weaning (i.e., postnatal day 21) and until approximately postnatal day 42. By that age, the offspring would be sexually mature young adults with a fully functional immune system. Recommendations for validated end points that might typically be assessed in a DIT study are summarized in Table 3. Based on the evidence that the developing immune system is more sensitive to toxic insult than the mature immune system, the use of a developmental screening paradigm to characterize potential immunosuppression may be preferable to the use of a protocol that includes only adult animals.
Nevertheless, it is also recognized that this study design neither evaluates other perturbations of immune function that are important to human health risk assessment (e.g., asthma, autoimmunity, and hypersensitivity), nor does it assess latent responses to developmental insult.
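To make the continuous-coverage requirement of this design concrete, here is a minimal sketch of the exposure phases described above, expressed as a simple schedule. The rat gestation length of roughly 21 days and the route annotations are illustrative assumptions, not prescriptions from a published guideline.

```python
# Approximate phases of a DIT screening study in rats, as described above.
# GD = gestation day, PND = postnatal day; route entries are illustrative.
dosing_phases = [
    ("gestation", "GD 6", "parturition (~GD 21, assumed)", "maternal dosing"),
    ("lactation", "birth", "weaning (PND 21)", "via milk (confirmed by PK data) or direct"),
    ("juvenile",  "PND 21", "~PND 42", "direct dosing of offspring"),
]

for phase, start, end, route in dosing_phases:
    print(f"{phase:>9}: {start:>6} -> {end:<30} [{route}]")
```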

Table 3 Recommended DIT study end points

General observations | Immune system assessments
Body weight | Complete total and differential blood cell count
Survival | Organ weights (thymus, spleen, and lymph nodes)
Clinical observations | Primary antibody response to a T-dependent antigen
Macroscopic pathology | Functional test of Th1 immunity (e.g., cytotoxic T lymphocyte or delayed hypersensitivity response)

Tests Available to Identify Immunosuppression in Human Neonates

A myriad of immune tests exist to identify children and infants with pronounced immunosuppression, such as that associated with human immunodeficiency virus (HIV) infection or primary immunodeficiency diseases. Whether these tests are sensitive enough to accurately identify immunotoxicity of environmental agents in children or infants in epidemiological studies is less clear, as the effects most likely to occur would be more subtle in nature.

One immune test that should possess sufficient sensitivity, and that can be included in either prospective or retrospective epidemiological studies, is the measurement of the immune response to routine childhood vaccinations, such as the diphtheria-tetanus-pertussis (DTP) vaccine. The response is determined either by measurement of serum antibody titers (humoral immunity) or by lymphocyte proliferation tests (cell-mediated immunity) to specific vaccine epitopes several weeks following primary immunization. For children, such a study needs to be conducted under well-defined conditions, since the quality of the vaccine response is a function of age and of the time at which samples are collected after vaccination. Vaccine response measurements have been used successfully in immunotoxicology studies, including those in pediatric populations.

The thymus, a common target for many immunotoxic chemicals in experimental studies, is the primary site of T lymphopoiesis during fetal and early postnatal life and continues to contribute to T cell diversity in adults. In children, thymic output has been indirectly quantified by measurement of phenotypically naive T cells in the circulation or by chest computerized tomography (CT) measurement of thymic volume, both of which have limitations. The lack of quantitative biomarkers to identify human recent thymic emigrants (new T cells, or RTEs) has limited accurate characterization and quantification of thymic output. T cell receptor excision circles (TRECs), small DNA fragments (episomes) generated during V(D)J gene recombination (the process responsible for the diversity of the T cell antigen receptor (TCR) repertoire), have been used to study age-related changes in the frequency of recent thymic emigrants in children with acquired immunodeficiency syndrome (AIDS) or primary thymic hypoplasia, such as DiGeorge syndrome, although their clinical usefulness remains somewhat controversial.

Other tests that children and infants commonly undergo to identify defects in the immune system include quantification of serum Ig (i.e., IgG, IgM, and IgA) levels, immunophenotyping, and, more recently, cytokine profiling. Serum Ig concentrations may be useful but are generally considered fairly insensitive for the purposes of immunotoxicity studies. Immunophenotyping, which involves the enumeration of cell-surface markers (clusters of differentiation, CD) on lymphoid and myeloid cells, is performed by flow cytometry. It has provided considerable information on the ontogeny and activation state of the human immune system and assists in the clinical diagnosis of immunologic and hematopoietic disorders in children, although its sensitivity for immunotoxicity studies may also be limited. The capacity to produce specific cytokines, while requiring further development, may have prognostic value in the future for predicting immune disease susceptibility in children and associations with environmental exposures, particularly with regard to Th1/Th2 cytokine profiles and allergic disease.
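As an illustration of how vaccine-response data of the kind described above might be summarized, the sketch below computes a geometric mean titer and the fraction of children with at least a fourfold titer rise. The titers are fabricated, and the fourfold seroconversion criterion is a common convention assumed here rather than a threshold taken from the text.

```python
import math

# Hypothetical pre- and post-immunization antibody titers for five children.
pre_titers  = [10, 20, 10, 40, 20]
post_titers = [160, 80, 20, 320, 160]

def geometric_mean(values):
    """Titers are roughly log-normally distributed, so the geometric mean is used."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

gmt_pre = geometric_mean(pre_titers)
gmt_post = geometric_mean(post_titers)

# A >=4-fold titer rise is a commonly assumed seroconversion criterion.
seroconverted = sum(post / pre >= 4 for pre, post in zip(pre_titers, post_titers))

print(f"GMT pre = {gmt_pre:.1f}, GMT post = {gmt_post:.1f}")
print(f"Seroconversion (>=4-fold rise): {seroconverted}/{len(pre_titers)} children")
```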

Risk Assessment Issues

In conducting risk assessments for environmental toxicants, developmental end points, either structural or functional, can be considered in hazard characterization and dose–response analysis and then integrated with exposure information in risk characterization. By definition, this includes developmental immunotoxicity end points. Developmental toxicity data can be used in setting reference doses (RfDs) for oral exposures or reference concentrations (RfCs) for inhalation exposures in human health risk assessment for environmental toxicants. Reference values are derived from study no-observed-adverse-effect levels (NOAELs) or benchmark dose lower confidence limits (BMDLs) that define a point of departure for health effects that are not assumed to have a linear low-dose–response relationship (i.e., most noncancer health effects and carcinogens that act via indirect mechanisms).

In calculating the reference values, uncertainty factors are applied as deemed appropriate. These include factors to address animal-to-human extrapolation (which may be further divided into toxicokinetic and toxicodynamic components), human variability, the lack of a NOAEL, the use of a subchronic study to set a chronic reference value, and a database factor to account for missing data that are considered essential in characterizing risk. The application of an additional 10-fold safety factor is required for pesticides, but this may be revised (reduced, removed, or sometimes even increased) on the basis of the quality and extent of toxicity and exposure data relevant to children's health risk assessment.

Thus, DIT data, either from animal studies or from epidemiological assessment in children, can affect the risk calculations in two ways. First, end points and doses from a DIT study could be used as the critical effect in calculating reference values. Alternatively, for a chemical with identified immunotoxic potential, the presence or absence of an adequate assessment of developmental immunotoxic hazard and dose–response may affect the determination of the uncertainty factors used in reference value calculations. For example, a database uncertainty factor might be applied (or a Food Quality Protection Act (FQPA) factor retained) to address the lack of a developmental immunotoxicity study in rodents or the lack of sufficient characterization of a response in children; careful consideration of the overall toxicology database is critical in determining the need for and the magnitude of such an uncertainty factor.
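The reference-value arithmetic described above reduces to dividing a point of departure by the product of the applicable uncertainty factors. The following sketch works through a hypothetical example; the BMDL and the particular factors chosen are invented for illustration and are not values from the text.

```python
# Hypothetical reference dose (RfD) calculation: RfD = POD / (product of UFs).
bmdl_mg_kg_day = 5.0  # invented point of departure from a hypothetical DIT study

uncertainty_factors = {
    "interspecies (animal-to-human)": 10,   # toxicokinetic x toxicodynamic
    "intraspecies (human variability)": 10,
    "database (e.g., missing DIT study)": 3,  # could be 1 if the database is complete
}

composite_uf = 1
for uf in uncertainty_factors.values():
    composite_uf *= uf

rfd = bmdl_mg_kg_day / composite_uf
print(f"Composite UF = {composite_uf}")  # -> 300
print(f"RfD = {rfd:.4f} mg/kg-day")      # -> 0.0167 mg/kg-day
```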

See also: Developmental and Reproductive Toxicity of TCDD, Lead and Mercury; Neurodevelopmental Toxicants; Organophosphate Insecticides: Neurodevelopmental Effects.

Further Reading

Alvarado-Cruz, I., Alegría-Torres, J.A., Montes-Castro, N., Jiménez-Garza, O., Quintanilla-Vega, B., 2018. Environmental epigenetic changes, as risk factors for the development of diseases in children: A systematic review. Annals of Global Health 84 (2), 212–224.
Burns-Naas, L.A., Hastings, K.L., Ladics, G.S., Makris, S.L., Parker, G.A., Holsapple, M.P., 2008. What's so special about the developing immune system? International Journal of Toxicology 27, 223–254.
Das, B.B., 2015. A neonate with acute heart failure: Chromosomally integrated human herpesvirus 6-associated dilated cardiomyopathy. The Journal of Pediatrics 167 (1), 188–192.
Dietert, R.R., Dietert, J.M., 2007. Early-life immune insult and developmental immunotoxicity (DIT)-associated diseases: Potential of herbal- and fungal-derived medicinals. Current Medicinal Chemistry 14, 1075–1085.
Dietert, R.R., Piepenbrink, M.S., 2006. Perinatal immunotoxicity: Why adult exposure assessment fails to predict risk. Environmental Health Perspectives 114, 477–483.
Duramad, P., Tager, I.B., Holland, N.T., 2007. Cytokines and other immunological biomarkers in children's environmental health studies. Toxicology Letters 172, 48–59.
Holladay, S.D. (Ed.), 2005. Developmental immunotoxicology. CRC Press LLC, New York.
Holladay, S.D., Smialowicz, R.J., 2000. Development of the murine and human immune system: Differential effects of immunotoxicants depend on time of exposure. Environmental Health Perspectives 108 (Supplement 3), 463–473.
Holsapple, M.P., West, L.J., Landreth, K.S., 2003. Species comparison of anatomical and functional immune system development. Birth Defects Research Part B: Developmental and Reproductive Toxicology 68, 321–334.
Holsapple, M.P., Burns-Naas, L.A., Hastings, K.L., et al., 2005. A proposed testing framework for developmental immunotoxicology (DIT). Toxicological Sciences 83 (1), 18–24.
House, R., Luebke, R., Kimber, I. (Eds.), 2007. Immunotoxicology and immunopharmacology, 3rd edn. Target organ toxicology series. Raven Press, New York.
Huang, S.K., Zhang, Q., Qiu, Z., Chung, K.F., 2015. Mechanistic impact of outdoor air pollution on asthma and allergic diseases. Journal of Thoracic Disease 7 (1), 23.
Jung, J.Y., Gleave Parson, M., Kraft, J.D., Lyda, L., Kobe, B., Davis, C., Robinson, J., Peña, M.M., Robinson, C.M., 2016. Elevated interleukin-27 levels in human neonatal macrophages regulate indoleamine dioxygenase in a STAT-1 and STAT-3-dependent manner. Immunology 149 (1), 35–47.
Kimmel, C.A., King, M.D., Makris, S.L., 2005. Risk assessment perspectives for developmental immunotoxicity. In: Holladay, S.D. (Ed.), Developmental immunotoxicology. CRC Press, Washington, DC.
Luster, M.I., Johnson, V.J., Yucesoy, B., Simeonova, P.P., 2005. Biomarkers to assess potential developmental immunotoxicity in children. Toxicology and Applied Pharmacology 206, 229–236.
MacGillivray, D.M., Kollmann, T.R., 2014. The role of environmental factors in modulating immune responses in early life. Frontiers in Immunology 5, 434.
Pinkerton, K.E., Joad, J.P., 2000. The mammalian respiratory system and critical windows of exposure for children's health. Environmental Health Perspectives 108 (Supplement 3), 457–462.
Selgrade, M.K., Lemanske, R.F., Gilmour, M.I., et al., 2006. Induction of asthma and the environment: What we know and need to know. Environmental Health Perspectives 114, 615–619.

Developmental Programming and the Epigenome
JM Gohlke, National Institute of Environmental Health Sciences, Research Triangle Park, NC, United States
© 2019 Elsevier B.V. All rights reserved.

Introduction

Mounting evidence suggests that events early in life play a major role in determining adult phenotype. In the early 1970s, exposure to certain environmental contaminants (dioxin and radiation) and pharmaceuticals (diethylstilbestrol (DES)) during pregnancy was shown to cause lifelong health problems in the offspring. In the late 1980s, David J.P. Barker demonstrated an inverse relationship between birth weight and mortality from cardiovascular disease. Underlying this research is the concept termed developmental programming, which refers to permanent changes in physiology brought on by conditions experienced during critical periods of development. Importantly, this field of research emphasizes that traditional genetic mechanisms focusing on changes in DNA sequence cannot explain important aspects of long-term phenotypic variation.

The developmental programming hypothesis proposes that a developing system may be particularly receptive to environmental cues during critical periods, thereby incorporating information gained from the environment that will ultimately affect the developmental trajectory, and hence the long-term physiology, of the organism. In comparison to traditional theories of adaptation via selection on DNA sequence over generations, a period of increased sensitivity to environmental cues during the development of an individual organism is thought to be evolutionarily advantageous, as it offers a shorter-term adaptation strategy. Adaptation in this context occurs primarily via the environment as it is experienced by the mother, although paternal transmission of environmental information has recently been noted as well. Although the term "fetal programming" was originally used to describe this hypothesis, it was soon recognized that a critical period of sensitivity for a given system may occur during the embryonic, fetal, infant, childhood, or pubertal stage; hence, "developmental programming" is the more inclusive term for all potentially sensitive stages of development. Finally, it is thought that this adaptation strategy could become maladaptive, potentially leading to disease, if a mismatch occurs between the environment experienced during development and that experienced during adult life.

Developmental programming is thought to be especially important for those systems that have evolved to sense environmental changes, such as the nervous system, the immune system, and the endocrine system (Fig. 1). These systems have evolved in higher organisms to effectively communicate environmental information to other organ systems as well as to integrate short- and long-term responses to environmental cues. Developmental programming of these systems therefore allows the organism to respond to changing environmental conditions and allocate limited resources accordingly. The endocrine system has been a central focus of developmental programming research. It is composed of numerous small organs and tissues that produce, store, and secrete extracellular signaling molecules known as hormones, which are transported in the blood to target tissues and organs. The endocrine system is instrumental in regulating metabolism, growth, development, and puberty, and is also an important mediator of mood and behavior. The endocrine system is closely associated with the nervous system, and specific neurotransmitters, including dopamine, serotonin, epinephrine, and norepinephrine, regulate hormone secretion.
Hormones can also act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. In fact, the autoimmune disease lupus erythematosus strikes women preferentially (10:1). By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone, and vitamin D. Conversely, some hormones are regulated by the immune system, notably thyroid hormone activity. The extensive crosstalk between the endocrine, nervous, and immune systems suggests these systems integrate signals in response to environmental stimuli. In addition, the tight relationship between these three systems suggests perturbed development in one may have profound effects on the other two systems as well.

At present, mounting evidence suggests obesity, hypertension, coronary artery disease, type II diabetes, osteoporosis, reproductive disorders, allergy, autoimmune disease, and certain types of cancers have strong developmental origins. In addition, emerging evidence suggests several neuropsychiatric disorders may also have developmental origins. Therefore, identifying the environmental factors that contribute to developmental programming is a critical area of interest for environmental health scientists. In the following text, examples of developmental programming are explored with the emphasis on the relationship between the environment experienced during development and disease risk later in life.

Change History: October 2018. Jerome Nriagu has updated the text throughout the article. This is an update of J.M. Gohlke, Developmental Programming and the Epigenome, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 51–59.


Fig. 1 Integration of the endocrine, nervous, and immune system functions. The endocrine, nervous, and immune systems sense environmental stimuli and communicate to other organ systems over long distances using specialized molecules (hormones, neurotransmitters, and cytokines). Molecules produced by each system can bind to receptors in the other two systems, allowing integration of responses to environmental stimuli. Therefore, modulation of development in one of these systems will most likely lead to changes in the other two systems.

Developmental Programming and Disease

Malnutrition/Overnutrition

An intuitive outcome of undernourishment during pregnancy is reduced growth of the fetus and lowered birth weight. However, research shows that undernutrition not only has the short-term consequence of reduced growth, but also affects the developmental trajectory of the metabolic system in several ways, with both short- and long-term consequences. This has become the classic example supporting the developmental programming hypothesis, showing that malnutrition during fetal development leads to aberrant glucose regulation in adulthood. In an analysis of several large cohorts of individuals born shortly after famines, such as the Dutch Hunger Winter during World War II, it was found that poor nutrition in utero led not only to growth restriction of the fetus but also to poor glucose tolerance and increased risk of type II diabetes and cardiovascular disease during adulthood. In fact, the worst glucose tolerance was seen in those individuals who were obese as adults, supporting the hypothesis that a mismatch between fetal and adult environments leads to disease. These effects have been found to cross generations, as grandsons of Swedish boys who were exposed to famine in the 19th century have an increased incidence of obesity, diabetes, and cardiovascular disease. Furthermore, studies of monozygotic twins showed that in pairs discordant for diabetes, the diabetic twin had a significantly lower birth weight than the nondiabetic twin. More recent research has linked increased risk of depression and osteoporosis with low birth weight as well. Interestingly, high birth weight is associated with an increased risk of developing breast cancer, leukemia, and testicular cancer.

Research has shown that greater postnatal weight gain and other lifestyle factors, such as smoking, alcohol intake, and inactivity, are independently associated with the risk of metabolic syndrome phenotypes later in life. Metabolic syndrome disease risk is therefore greatest in those born small who subsequently become large, suggesting that a mismatch between early and later life may increase the risk of disease. According to the most recent estimates by the World Health Organization (WHO), the prevalence of obesity, defined as >25% body fat in men or >30% body fat in women, has risen dramatically in the past two to three decades in developed and developing countries; however, the global incidence of low birth weight (<2500 g at birth) has been stable during the same period. Obesity is a complex disorder whose primary drivers are thought to be dietary choices, sedentary lifestyles, and genetic predisposition. In humans, separating the components of metabolic syndrome phenotypes that are determined by an early developmental programming mechanism from those determined by adult lifestyle choices, or by the interaction of these two components, has been quite difficult to date. It should also be noted that birth weight, although easily obtainable, is a crude measure of a wide array of factors contributing to the intrauterine environment. It is therefore important to note that animal model research, in which genetics, postnatal diet, and environment are controlled, supports the findings suggesting that prenatal malnutrition increases the risk of metabolic syndrome phenotypes later in life.

Stress

Stressful or hostile conditions experienced during development may increase the risk of developing neuropsychiatric and immune disorders later in life. Although the evidence in humans is currently equivocal, mounting research in animal models corroborates this hypothesis. This type of developmental programming is initiated by activation of the maternal hypothalamic–pituitary–adrenal (HPA) axis. Specifically, perception of a threat by the maternal hypothalamus initiates release of adrenocorticotropic hormone (ACTH) by the pituitary gland, which in turn stimulates release of maternal stress hormones, namely glucocorticoids and catecholamines (epinephrine and norepinephrine), from the adrenals. Increased levels of stress hormones result in acute enhancement of heart rate and metabolic processes, such as glycogenolysis, gluconeogenesis, lipolysis, and proteolysis, thereby mobilizing resources to react to the stressor. This is commonly referred to as the "fight or flight" response. However, glucocorticoids also cross the placenta, allowing maternal stress to be transmitted to the fetus during fetal development. A developmental programming hypothesis therefore proposes that a stressful maternal environment produces excessive amounts of glucocorticoids, which signal the developing fetus to change developmental processes on the basis of this information.

In humans, maternal psychosocial stress has been shown to increase the risk of preterm birth and low birth weight. Extreme activation of the HPA axis results in uterine relaxation, thus leading to miscarriage. In addition, self-reported maternal nervousness is correlated with higher levels of IgE and proinflammatory cytokines in cord blood and an increased risk of preeclampsia and premature labor. It has been suggested that children born to mothers who reported depression or anxiety during pregnancy are also at an increased risk for the development of allergy later in life. Mood disorders, sleep disturbances, and increased cortisol levels are also more common in children whose mothers reported anxiety or depression during pregnancy. For example, altered cortisol levels and temperament were seen in the offspring of women diagnosed with posttraumatic stress disorder during pregnancy following the World Trade Center attacks. Increased HPA axis activation is associated with higher blood pressure, insulin resistance, glucose intolerance, and hyperlipidemia, all of which are early markers for diabetes and cardiovascular disease. However, it is difficult to separate the genetic, prenatal stress, and postnatal stress contributions to these phenotypes.

Animal models therefore offer an important research tool for studying the potential long-term effects of stress during critical developmental periods. Models of developmental stress have been defined in several species, including monkeys, pigs, sheep, and rats, using physical restraint, exposure to potential predators, unpredictable and loud noise, or bright lighting. In addition, injection of corticotropin-releasing hormone (CRH), lipopolysaccharide (LPS), or dexamethasone elevates glucocorticoid levels in a similar manner and causes phenotypes that mimic those seen in physical or psychosocial stress models. In these models, the long-term consequences of maternal stress to the fetus include increased hyperactivity and inattention and less exploratory behavior. Morphological correlates include increased neuronal numbers in limbic structures such as the amygdala, an area known to play a key role in the control of anxiety and fear, and decreased glucocorticoid receptor (GR) levels in the hippocampus, an area important for the negative feedback response to HPA activation.

In addition to behavioral disorders, modification of immune responses is evident after exposure to stress during gestation. It is generally known that acute stress during adulthood enhances, whereas chronic stress suppresses, immune function. In fact, glucocorticoids are known to influence the thymic output of CD4+ and CD8+ cells, and the thymus is also innervated by noradrenergic fibers, making it responsive to HPA axis activation via hormonal and neurotransmitter communication. In general, most animal models of prenatal stress show short-term immunosuppressive consequences in offspring, such as reduced proliferation of lymphocytes in response to mitogens, reduced secretion of the proinflammatory cytokines TNF-α and IL-6, and reduced leukocyte numbers, particularly CD4+ T-helper cells. As these animals age, however, a proinflammatory condition is present by the time they reach full maturity. Specifically, prenatal stress increased the percentages of CD8+ and NK cells and increased the secretion of interferon gamma (IFN-γ) after challenge in adults, and this response increased with increasing age. Current research therefore suggests that stress during gestation may produce a near-term immunosuppressive response but a long-term proinflammatory/autoreactive condition, consistent with increased incidences of autoimmune disease and allergy in adults.

Sex Hormone Exposure

Exposure during development to compounds that mimic sex hormones has been shown to alter reproductive physiology and function not only in those directly exposed, but in their offspring as well. The best understood example to date is the role of developmental exposure to estrogenic compounds in the consequent increased risk of reproductive disorders and cancer across generations. From 1941 to 1971, DES, a potent estrogenic compound, was used to prevent miscarriage and other complications of pregnancy. After decades of use, it was shown that daughters of mothers exposed to DES during pregnancy had a heightened risk of clear cell carcinoma of the vagina and cervix. Since this initial finding, follow-up of the children of the mothers originally exposed to DES has led to the discovery of numerous transgenerational effects. Women exposed to DES in utero, known as DES daughters, are at increased risk for reproductive tract structural differences, pregnancy complications, infertility, and autoimmune disorders. Men exposed to DES in utero, known as DES sons, are at increased risk for noncancerous epididymal cysts and autoimmune disorders, and show an increased prevalence of hypospadias and retained testes. Interestingly, pregnant women prescribed DES were found to have only a modestly increased risk of breast cancer, whereas their daughters were found to have a significantly heightened risk of developing breast cancer (2.5-fold increase). Effects seen in animal model research recapitulate those seen in humans. In addition, animal model research has demonstrated increased risks of reproductive system abnormalities and cancers in the F2 generation, that is, the offspring of females who were exposed in utero.

These findings suggest that alteration of reproductive function via exposure to sex hormone mimics can be transmitted across generations. Such transgenerational effects are now an area of intense research. For example, vinclozolin, a fungicide commonly used in vineyards, is an antiandrogenic compound and causes increased risk of male infertility, breast cancer, kidney disease, prostate disease, and immune abnormalities across four generations after exposure in rats. Mechanistically, these effects can be explained by epigenetic alterations that are maintained in germ cells. This hypothesis is explored further in the following sections.

Emerging evidence suggests that developmental exposure to sex hormones may also permanently alter energy metabolism, appetite, and fat deposition. It is well known that a common side effect of estrogen- and progestin-containing birth control pills is weight gain, yet such exposure occurs mainly in adult populations and its effect is thought to be transient. Using animal models, Newbold et al. have shown that prenatal exposure to DES, or to environmentally ubiquitous estrogen mimics such as genistein (a soy-derived phytoestrogen) and bisphenol A (a compound that leaches into food from hard plastics and the inner lining of aluminum cans), leads to obesity in adult animals. In addition, before the development of obesity, these animals had aberrant serum profiles, including elevated leptin, adiponectin, IL-6, and triglycerides, suggesting that adipogenesis is dysregulated before the onset of obesity.

The epidemiological and animal model research reviewed earlier suggests that environmental conditions during development influence disease risk later in life. This phenomenon seems to be controlled primarily by tight interactions among the endocrine, nervous, and immune systems, such that perturbation of one system leads to increased risk of disease in all three. Increased disease susceptibility that crosses three generations suggests there is a mechanism by which at least some environmental conditions are recorded in germ cells. Research in the field of epigenetics has offered a mechanism by which this phenomenon can be explained.

The Epigenome: A Mechanistic Basis for Developmental Programming

Genetics, or changes in DNA sequence such as mutations, deletions, gene fusions, tandem duplications, or gene amplifications, has long been known to modulate susceptibility to disease across generations. In contrast, the field of epigenetics, or changes in gene expression that occur without a change in DNA sequence, is relatively new, particularly as it relates to research in environmental health. The developmental biologist C.H. Waddington coined the term epigenetics, strictly meaning outside or above conventional genetics, when he discovered in the 1940s that temperature during development permanently altered wing-vein patterns in Drosophila. Epigenetic changes can be mitotically and sometimes meiotically heritable, as in the case of imprinting. Alterations to the epigenome therefore offer a compelling mechanistic explanation for developmental programming and the resultant transgenerational effects reviewed earlier.

Epigenetic changes in gene expression can arise from changes in the folding of DNA to form chromatin, changes in the architecture of that chromatin within the nucleus, or changes in transcript stability, all of which are controlled by histone modifications, DNA methylation, and noncoding RNAs (microRNAs or small interfering RNAs). The epigenome thus refers to the global pattern of DNA methylation, histone modifications, and noncoding RNA expression that distinguishes spatial (e.g., cell type) and temporal (e.g., developmental or aged) gene expression within an organism that carries the same DNA sequence in each cell. X-inactivation is a clear example, occurring in all females, by which gene expression from one X chromosome is silenced to match the level of gene expression from the single X chromosome present in males. This inactivation process involves both DNA methylation and histone modifications. The following text provides a brief description of histone modifications and DNA methylation, the most studied forms of epigenetic regulation, along with known examples of their effects on human disease. In addition, inherited epigenetic patterns via imprinting and several disorders associated with imprinted genes are discussed.

Histone Modifications, DNA Methylation, and Human Disease

Histones are globular proteins around which DNA is wound to form chromatin. Histone tails protrude from the central globular unit and are posttranslationally modified (e.g., by acetylation, methylation, phosphorylation, and ubiquitination). Specific combinations of these modifications produce a code that is recognized by proteins that regulate chromatin structure, gene expression, and DNA repair. For example, histone acetylation is associated with active expression of surrounding genes, whereas histone deacetylation is associated with gene silencing. DNA methylation involves the methylation of cytosine at the carbon-5 position in CpG dinucleotides via DNA methyltransferases. Methylation recruits other chromatin-binding proteins to remodel chromatin and is associated with the silencing of genes, possibly by blocking the binding of transcription factors in the promoter region that are necessary for the initiation of transcription. In fact, several studies have shown that disrupting methylation, pharmacologically or genetically by inactivating the methyltransferase enzymes, results in reactivation of gene expression at silent loci.

Epigenetic dysregulation is a central mechanism in cancer progression. Aberrant DNA methylation is thought to contribute to cellular transformation in numerous ways. For example, DNA hypomethylation is believed to initiate chromosome instability and activate oncogenes. In contrast, DNA hypermethylation silences tumor suppressor genes and has been found in the promoter regions of numerous genes associated with carcinogenesis. Genome-wide patterns of aberrant histone modifications are also characteristic of many types of cancer. For example, specific patterns of acetylation and methylation in histone H4 are present in lymphomas,
colorectal adenocarcinomas, and squamous carcinomas. In fact, the epigenome is a popular target in cancer pharmacology research, as exemplified by numerous clinical trials of DNA methyltransferase or histone deacetylase inhibitors.

It has been shown that epigenetics also plays a critical role in sickle cell disease, previously thought of as a simple, monogenic Mendelian disorder. In all persons affected by sickle cell disease, a mutation in the beta globin gene is transmitted through the germ line. However, within this population, different individuals show different susceptibilities depending on the amount of fetal hemoglobin that is produced, such that increased production during development reduces symptoms later in life. The methylation status of the fetal hemoglobin gene determines its expression. Normally, it is progressively methylated such that no expression is evident in adults. Drugs that reduce methylation, such as methyltransferase inhibitors, increase fetal hemoglobin expression in sickle cell patients and reduce the development and severity of sickle cell disease. However, fetal hemoglobin induction is effective only if it is done at a very young age, suggesting a critical window in which developmental programming of the epigenome occurs.
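To make the CpG-methylation mechanics described above concrete, the following toy sketch locates CpG dinucleotides in a short promoter sequence and summarizes hypothetical per-site methylation calls; both the sequence and the calls are fabricated for illustration.

```python
# Find CpG dinucleotides in a (fabricated) promoter sequence and summarize
# hypothetical 5-methylcytosine calls at each site.
promoter = "ATCGCGTACGGATCGTTACGCGAT"
cpg_sites = [i for i in range(len(promoter) - 1) if promoter[i:i + 2] == "CG"]

# Invented per-site methylation calls (True = methylated cytosine).
calls = dict(zip(cpg_sites, [True, True, False, True, True, True]))

fraction = sum(calls.values()) / len(calls)
print(f"{len(cpg_sites)} CpG sites at positions {cpg_sites}; {fraction:.0%} methylated")
# Dense promoter methylation of this kind is the pattern associated with
# transcriptional silencing, e.g., of tumor suppressor genes.
```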

Establishing the Epigenome

As described earlier, epigenetic patterns are clearly heritable in somatic tissues. However, most epigenetic changes are not heritable across generations, as a genome-wide removal of epigenetic tags occurs soon after fertilization. Thus, the genome of the early embryo (before implantation) is hypomethylated, and its cells exhibit pluripotency, meaning they are able to develop into all cell types found in the adult body. Beginning at implantation and continuing through the fetal period, lineage-specific reestablishment of DNA methylation occurs, restricting the gene expression of differentiating tissues. Once these patterns are established, they are maintained throughout life; therefore, the period in which DNA methylation patterns are established may serve as a critical period in which environmental stimuli can influence developmental programming and hence disease risk later in life.

A special subset of epigenetic patterns, for those genes that are said to be imprinted, is heritable across generations, escaping the genome-wide removal process. Imprinted genes have been well characterized in a number of species, including humans. The ultimate activity of imprinted genes in all tissues is therefore parent-dependent, with the allele from one parent being expressed and the allele from the other parent being silenced. Approximately 1% of human genes have been identified as imprinted. The most widely accepted explanation for the occurrence of genomic imprinting is the "parental conflict hypothesis." This hypothesis suggests that imprinting arose because mothers and fathers have competing interests in the development of their offspring. The father's interest lies in the growth of his offspring, at the expense of the mother. The mother's interest is to conserve resources for her own survival while providing sufficient nourishment not only to the current offspring but also to future offspring. Therefore, paternally expressed genes tend to be growth promoting, whereas maternally expressed genes tend to be growth limiting. In line with this hypothesis, a number of imprinted genes have been shown to have important roles in adiposity and glucose-regulated metabolism, and some, such as insulin-like growth factor 2 (IGF2), are associated with human metabolic disorders and cancers.

Angelman syndrome and Prader–Willi syndrome are the most well-known human disorders caused by dysregulated imprinting. Both of these syndromes are associated with the same genomic region, namely chromosome 15q, and the particular syndrome that develops depends on whether the mutations in this region are inherited from the child's mother or father. Beckwith–Wiedemann syndrome is also caused by disrupted genomic imprinting, often involving abnormalities in maternal genomic imprinting of a region on chromosome 11. Interestingly, children conceived through assisted reproductive technologies (ART) are at a much greater risk of developing Beckwith–Wiedemann syndrome (RR = 9.0), most likely due to the loss of transfer of maternal methylation patterns under in vitro culture conditions. Children conceived via ART are also at increased risk for both low birth weight and fetal overgrowth, again suggesting that loss of imprinting is at play. The clinical manifestations of these genomic imprinting syndromes are characterized by endocrine abnormalities (hypogonadism and obesity) and neurological and behavioral impairments (mental retardation, hypotonia, and ataxia).
There is also some evidence that autoimmune disease may have developmental origins. Systemic lupus erythematosus (SLE) is characterized by the production of autoantibodies to nuclear antigens, with T cells driving the autoantibody response. In the T cells of SLE patients, methyltransferase levels are reduced and the genome is globally hypomethylated. Furthermore, when normal mouse T cells are treated with a DNA methyltransferase inhibitor, they become autoreactive and induce a lupus-like disease when injected into mice.

Finally, loss of imprinting of the IGF2 gene has been implicated in various childhood and adult cancers, including Wilms' tumor and colorectal and gastrointestinal cancers. Normally, only the paternally inherited allele of this gene is expressed. Through examination of peripheral blood samples, it has been shown that loss of imprinting of this gene (meaning both the paternal and maternal copies are expressed) occurs at a higher rate in men with a family history of colorectal cancer (fivefold increase) and at a much higher rate in those diagnosed with colorectal cancer (20-fold increase). These results suggest there may be promise in developing epigenetic biomarkers for predicting the risk of a particular disease later in life. However, determining the cause of the loss of imprinting remains an open question. One especially active area of research is the search for environmental modulators of the developing epigenome, which may cause permanent alterations in gene expression and lead to disease later in life.

Environmental Modulators of the Epigenome

Consistent with the developmental programming hypothesis, mounting evidence suggests the environment plays a critical role in establishing the epigenome of an individual. Most research to date has focused on environmental influences affecting the supply of dietary methyl donors and the activity of DNA methyltransferases, inducing either hypermethylation or hypomethylation. Alternatively, the epigenome could be influenced by environmental factors that alter the expression of transcription factors necessary to drive critical cell fate processes that regulate DNA methylation. In the following text, research is described delineating how these general mechanisms may help to explain the patterns of developmental programming of adult diseases in three areas of environmental research: dietary factors, the social environment, and exposure to environmental contaminants.

Dietary Influences

Methionine, choline, folic acid, and vitamin B12 in our diets serve as the sole source of methyl groups for DNA methyltransferases, the enzymes responsible for establishing DNA methylation patterns. Specifically, these methyl donors are used to generate S-adenosyl methionine (SAM), which provides the one-carbon substrate for DNA methyltransferases. Homocysteine, the byproduct, can then be remethylated to produce SAM once more. Indeed, studies of individuals with compromised folate pathways have found that genomic DNA methylation correlates positively with folate status and negatively with homocysteine status. In mice, folic acid deficiency leads to genome-wide hypomethylation.

Dietary influences on the development of the epigenome were first discovered in studies of mice carrying a metastable epiallele called viable yellow agouti (Avy). In metastable epialleles, DNA methylation is established probabilistically, resulting in gradations of phenotype across individuals with the same genotype. This contrasts with most genomic regions, which undergo developmentally programmed establishment of the epigenotype and therefore show little interindividual variation. Because the agouti gene controls hair follicle color, a range of coat colors from normal brown to yellow is typically seen in Avy mice. Interestingly, the yellow coat-colored mice are obese and at heightened risk for a host of diseases, including diabetes and cancer. Consistent with this, the Agouti protein was found to bind the melanocortin 4 receptor in the hypothalamus; hypomethylation of this epiallele therefore leads not only to yellow coat color but also to endocrine disruption. Supplementation of the mother's diet during pregnancy with a single-carbon donor such as folic acid increases the proportion of offspring with methylation of a transposable element found near the Agouti gene, resulting in a higher proportion of mice with the normal brown coat color instead of the yellow coat color. These dietary effects on the distribution of coat color are inherited through the F2 generation as well. A recent study showed that supplementation of the maternal diet with genistein, a soy-derived estrogenic compound, produced Avy hypermethylation and the consequent phenotypes in much the same way as supplementation with a methyl donor did. This intriguing result provides a mechanistic link between sex hormones, epigenetic regulation during development, and consequent effects on the risk of several endocrine-related disorders later in life. Another metastable epiallele, axin fused (AxinFu), has recently been identified in mice and can also be modulated by variation in methyl-donor diets. It is unknown, however, whether a similar mechanism involving metastable epialleles exists in humans.

Several human conditions have been linked to dietary methyl donors. It is well established that supplementation with folic acid during pregnancy decreases the risk of neural tube defects. Intriguingly, valproic acid, a drug used to treat seizure disorders but also a cause of neural tube defects, has recently been shown to act as a histone deacetylase inhibitor. Cancer, cardiovascular disease, schizophrenia, and autism are complex diseases that have been linked to aberrant tissue and blood levels of folate, homocysteine, and SAM.
Hypermethylation of RELN, a gene important in cell-to-cell communication in the brain, is thought to cause the reduced Reelin levels observed in bipolar, schizophrenic, and autistic patients. As noted above, however, metastable epialleles and their function in humans have yet to be identified. Other dietary factors may also play a role in establishing the epigenome. For example, in utero exposure to a high-fat diet results in hypomethylation of the estrogen receptor (ER) gene, leading to overexpression of ER in rat mammary glands, which is associated with a higher incidence of tumorigenesis. In addition, maternal diets supplemented with genistein induce aberrant hypermethylation of a set of genes in the rat prostate and are associated with prostate growth and tumorigenesis.
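The probabilistic establishment of methylation at a metastable epiallele, and the way a methyl-donor diet shifts the resulting phenotype distribution, can be made concrete with a toy simulation. The sketch below is purely illustrative: the methylation probabilities, litter sizes, and shift magnitude are hypothetical stand-ins, not measured values from the Avy literature.

```python
import random

def simulate_avy_litter(n_pups, p_methylated):
    """Simulate coat-color phenotypes for Avy mice.

    Each pup's transposable element near Agouti becomes methylated with
    probability p_methylated, established stochastically early in
    development. Methylated -> normal brown coat; unmethylated ->
    ectopic Agouti expression -> yellow, obesity-prone phenotype.
    """
    phenotypes = []
    for _ in range(n_pups):
        if random.random() < p_methylated:
            phenotypes.append("brown")
        else:
            phenotypes.append("yellow (obesity-prone)")
    return phenotypes

random.seed(1)
# Hypothetical probabilities: supplementation raises the chance that
# the epiallele is silenced by methylation.
control = simulate_avy_litter(1000, p_methylated=0.4)
supplemented = simulate_avy_litter(1000, p_methylated=0.6)

for label, litter in [("control diet", control), ("methyl-donor diet", supplemented)]:
    brown = sum(p == "brown" for p in litter)
    print(f"{label}: {brown / len(litter):.0%} brown coats")
```

Running the sketch shows the brown-coated fraction rising under supplementation, mirroring the reported shift in the coat-color distribution without implying any particular quantitative effect size.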

Social Environment

Behavioral inputs modulate epigenetic patterns as well. As detailed earlier, the early social environment, particularly exposure to stressful situations, has been shown to influence disease later in life. Recent studies in rats have demonstrated that the transgenerational acquisition of nurturing behaviors, including pup licking and grooming and arched-back nursing, is not germline inherited but is passed to the offspring directly from the mother during the first week of life. In addition, adult offspring of dams that exhibited greater nurturing behaviors show less fearful behavior and less hypothalamic–pituitary–adrenal (HPA) axis activation in response to stress. The mechanism of this maternal programming effect has been shown to involve DNA methylation and histone modifications of a promoter region of the glucocorticoid receptor (GR) gene. Notably, this epigenetic programming can be reversed by infusion of a histone deacetylase inhibitor.

Environmental Contaminants

Several environmental contaminants, particularly endocrine disruptors, have been implicated in the developmental programming of adult diseases, although only a few have been tested mechanistically for epigenetic effects. Heavy metals such as cadmium and arsenic have been shown to inhibit DNA methyltransferase activity and reduce SAM in rodent models. Prenatal exposure to bisphenol A leads to hypomethylation in the viable yellow agouti (Avy) mouse model, an effect reversed by dietary supplementation with methyl donors. Furthermore, bisphenol A alters the prostate epigenome, specifically producing hypomethylation of genes responsible for prostate cell growth, which leads to an increased incidence of tumors in the prostates of these mice. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) has been shown to induce histone modifications in an in vitro culture system of human mammary epithelial cells. Future research will clarify the role of epigenetic regulation in the mechanisms of chronic toxicity of these environmental contaminants. It is important to note that current research does not rule out the possibility that an alternative primary target of these contaminants is the cause of the altered epigenome. In either case, whether the epigenetic effects are primary or secondary, they provide a mechanistic explanation that can account for the well-known long-term health impacts of these compounds.

Epigenomic Tools for Environmental Health Scientists

As reviewed earlier, there are accumulating data linking environmental modulation of the epigenome to human disease (summarized in Table 1). Animal model data support this hypothesis, showing that subtle environmental influences during development cause persistent changes in epigenetic gene regulation, altering disease risk later in life and in subsequent generations, long after exposure has occurred. It is therefore of critical public health importance to determine the potential for epigenetic dysregulation after exposure to ubiquitous environmental contaminants. Particular attention should be paid to compounds already known to affect the endocrine, nervous, or immune systems.

The tissue and age specificity of epigenetic gene regulation makes it particularly hard to study compared with traditional genetics, because an individual has only one genotype but multiple epigenotypes. Although DNA samples are readily obtained by noninvasive techniques, the epigenotype of endocrine, immune, or nervous system tissue is difficult to obtain from human subjects. Interspecies variability in epigenetic regulation will also be an important area of future research. In particular, elucidating the role of metastable epialleles in human epigenetic variation should help establish the usefulness of particular animal models for epigenetic research. If current animal model research is predictive of humans, metastable epialleles may be useful as biomarkers of the methylating conditions experienced during the critical developmental periods in which the epigenome is established.

Several tools are now available to streamline environmental health research on the epigenome, including large-scale chromatin immunoprecipitation arrays (ChIP-on-chip) and bisulfite sequencing. Microarray technology that allows measurement of variability across the epigenome is currently available; epigenome association studies are therefore now performed by correlating epigenomic variability with variability in a particular phenotype. In much the same way that gene expression microarrays have been used to develop diagnostic and prognostic biomarkers for cancer, epigenome arrays are currently being evaluated to determine their predictive power. It is compelling to think that these techniques may also be useful in the future as biomarkers of exposure to environmental mediators of the epigenome. Following the Human Genome Project's success, the Human Epigenome Project aims to "analyze DNA methylation in the regulatory regions of all known genes in most major cell types and their diseased variants." When completed, the Human Epigenome Project will provide an important foundation on which future environmental health research in the field of epigenetics can be based.
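As a minimal sketch of the epigenome association logic just described, the following Python fragment correlates per-CpG methylation levels with a continuous phenotype across subjects and applies a Bonferroni correction for the number of sites tested. The data are randomly generated stand-ins for array output, and the example illustrates only the statistical idea; real studies must additionally adjust for covariates, batch effects, and cell-type composition.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_subjects, n_sites = 200, 5000
# Simulated methylation "beta values" (fraction methylated, 0-1) per CpG site.
beta = rng.uniform(0, 1, size=(n_subjects, n_sites))
phenotype = rng.normal(size=n_subjects)
# Plant one truly associated site so the scan has something to find.
beta[:, 42] = np.clip(0.5 + 0.1 * phenotype + rng.normal(0, 0.05, n_subjects), 0, 1)

alpha = 0.05 / n_sites  # Bonferroni threshold for the number of sites tested
hits = []
for site in range(n_sites):
    r, p = pearsonr(beta[:, site], phenotype)
    if p < alpha:
        hits.append((site, r, p))

for site, r, p in hits:
    print(f"CpG site {site}: r = {r:.2f}, p = {p:.1e}")
```

The planted site is recovered with a strong correlation, while the multiple-testing threshold keeps the thousands of null sites from producing spurious "hits."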

Table 1  Categories of evidence linking diseases to developmental programming, epigenetic regulation of target genes, and environmental modulators of the epigenome

                                    Developmental programming
Disease categories                  Human evidence   Animal evidence   Epigenetic regulation implicated
Endocrine
  Metabolic syndrome(a)             Yes              Yes               GR, IGF2
  Reproductive disorders/cancers    Yes              Yes               IGF2, ER
Neurological
  Anxiety/depression                Yes              Yes               RELN, GR
  Schizophrenia/bipolar disorder    Yes              –                 RELN, GR
Immune
  Asthma/atopic disease             Yes              Yes               GR
  Autoimmune disorders              –                Yes               –

Environmental modulators of the epigenome (all disease categories): methyl donors (folate, vitamin B12), sex hormone mimics (DES, bisphenol A, genistein, vinclozolin), stress, glucocorticoids, heavy metals, TCDD.

IGF2, insulin-like growth factor 2; ER, estrogen receptor; GR, glucocorticoid receptor; RELN, Reelin.
(a) Includes obesity, type II diabetes, and cardiovascular disease.


See also: Epigenetic Changes Induced by Environment and Diet in Cancer; Epigenetic Effects of Nanoparticles; Epigenetics of Environmental Exposures; Genetics is Involved in Everything, but not Everything is Genetic; Genome Effects and Mutational Risk of Radiation.

Further Reading

Bansal, A., Simmons, R.A., 2018. Epigenetics and developmental origins of diabetes: Correlation or causation? American Journal of Physiology. Endocrinology and Metabolism 315 (1), E15–E28.
Bern, H.A., 1992. The fragile fetus. In: Colborn, T., Clement, C. (Eds.), Chemically induced alterations in sexual and functional development: The wildlife/human connection. Princeton Scientific Publishing, Princeton, NJ, pp. 9–16.
Block, T., El-Osta, A., 2017. Epigenetic programming, early life nutrition and the risk of metabolic disease. Atherosclerosis 266, 31–40.
Bock, C., Lengauer, T., 2008. Computational epigenetics. Bioinformatics 24, 1–10.
Dolinoy, D.C., Jirtle, R.L., 2008. Environmental epigenomics in human health and disease. Environmental and Molecular Mutagenesis 49 (1), 4–8.
Dolinoy, D.C., Huang, D., Jirtle, R.L., 2007. Maternal nutrient supplementation counteracts bisphenol A-induced DNA hypomethylation in early development. Proceedings of the National Academy of Sciences of the United States of America 104 (32), 13056–13061.
Drake, A.J., Tang, J.I., Nyirenda, M.J., 2007. Mechanisms underlying the role of glucocorticoids in the early life programming of adult disease. Clinical Science 113 (5–6), 219–232.
Fernandez-Gonzalez, R., Ramirez, M.A., Bilbao, A., Rodriguez De Fonseca, F., Gutierrez-Adan, A., 2007. Suboptimal in vitro culture conditions: An epigenetic origin of long-term health effects. Molecular Reproduction and Development 74 (9), 1149–1156.
Grun, F., Blumberg, B., 2007. Perturbed nuclear receptor signaling by environmental obesogens as emerging factors in the obesity crisis. Reviews in Endocrine & Metabolic Disorders 8 (2), 161–171.
Hatchwell, E., Greally, J.M., 2007. The potential role of epigenomic dysregulation in complex human disease. Trends in Genetics 23 (11), 588–595.
Ho, S.M., Tang, W.Y., 2007. Techniques used in studies of epigenome dysregulation due to aberrant DNA methylation: An emphasis on fetal-based adult diseases. Reproductive Toxicology 23 (3), 267–282.
Janesick, A.S., Blumberg, B., 2016. Obesogens: An emerging threat to public health. American Journal of Obstetrics and Gynecology 214 (5), 559–565.
Jirtle, R.L., Skinner, M.K., 2007. Environmental epigenomics and disease susceptibility. Nature Reviews Genetics 8 (4), 253–262.
Joss-Moore, L.A., Lane, R.H., Albertine, K.H., 2015. Epigenetic contributions to the developmental origins of adult lung disease. Biochemistry and Cell Biology 93 (2), 119–127.
Lee, H.S., 2015. Impact of maternal diet on the epigenome during in utero life and the developmental programming of diseases in childhood and adulthood. Nutrients 7 (11), 9492–9507.
Newbold, R.R., Hanson, R.B., Jefferson, W.N., Bullock, B.C., Haseman, J., McLachlan, J.A., 1998. Increased tumors but uncompromised fertility in the female descendants of mice exposed to diethylstilbestrol. Carcinogenesis 19 (9), 1655–1663.
Newbold, R.R., Padilla-Banks, E., Snyder, R.J., Phillips, T.M., Jefferson, W.N., 2007. Developmental exposure to endocrine disruptors and the obesity epidemic. Reproductive Toxicology 23 (3), 290–296.
Ozanne, S.E., Constancia, M., 2007. Mechanisms of disease: The developmental origins of disease and the role of the epigenotype. Nature Clinical Practice Endocrinology & Metabolism 3 (7), 539–546.
Saffery, R., Novakovic, B., 2014. Epigenetics as the mediator of fetal programming of adult onset disease: What is the evidence? Acta Obstetricia et Gynecologica Scandinavica 93 (11), 1090–1098.
Simeoni, U., Armengaud, J.B., Siddeek, B., Tolsa, J.F., 2018. Perinatal origins of adult disease. Neonatology 113 (4), 393–399.
Tang, W.Y., Ho, S.M., 2007. Epigenetic reprogramming and imprinting in origins of disease. Reviews in Endocrine & Metabolic Disorders 8 (2), 173–182.
Tang, W.W., Kobayashi, T., Irie, N., Dietmann, S., Surani, M.A., 2016. Specification and epigenetic programming of the human germ line. Nature Reviews. Genetics 17 (10), 585–600.
Treviño, L.S., Wang, Q., Walker, C.L., 2015. Phosphorylation of epigenetic "readers, writers and erasers": Implications for developmental reprogramming and the epigenetic basis for health and disease. Progress in Biophysics and Molecular Biology 118 (1–2), 8–13.
Viltart, O., Vanbesien-Mailliot, C.C.A., 2007. Impact of prenatal stress on neuroendocrine programming. The Scientific World Journal 7, 1493–1537.
van Vliet, J., Oates, N.A., Whitelaw, E., 2007. Epigenetic mechanisms in the context of complex diseases. Cellular and Molecular Life Sciences 64 (12), 1531–1538.
Waterland, R.A., Michels, K.B., 2007. Epigenetic epidemiology of the developmental origins hypothesis. Annual Review of Nutrition 27, 363–388.
Zannas, A.S., Chrousos, G.P., 2017. Epigenetic programming by stress and glucocorticoids along the human lifespan. Molecular Psychiatry 22 (5), 640–646.

Relevant Websites

www.epigenome.org – Human Epigenome Project.
www.pptox.dk – Prenatal Programming and Toxicity.
http://www.who.int/en/ – World Health Organisation.

Diabetes Mellitus in Albania: A Two Fold Increase in the Last Decade*

F Toti and F Agaçi, University Hospital Center "Mother Teresa", Tirana, Albania
G Bejtja, Ministry of Health, Tirana, Albania
A Golay, University Hospital of Geneva, Geneva, Switzerland
© 2019 Elsevier B.V. All rights reserved.
Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11697-5

Abbreviations
BMI     Body mass index
DES     Dietary energy supplies
ESRD    End-stage renal disease
GDP     Gross domestic product
IDF     International Diabetes Federation
IGT     Impaired glucose tolerance
INSTAT  Institute of Statistics
UN      United Nations
WHO     World Health Organization

Diabetes Burden

Type 2 diabetes, by far the most common form of diabetes, is increasing at an alarming rate all over the world. In the year 2000 there were 150 million individuals with diabetes worldwide, and this number is expected to double in the next 25 years. Type 2 diabetes will certainly be one of the major diseases of the 21st century and should be recognized as a priority. In the past decades, a rapid increase in the number of children and adolescents with type 2 diabetes has been witnessed, which in some countries equals or exceeds the number of children with type 1 diabetes.

Medicine today faces a great challenge in caring for persons diagnosed with diabetes. The financial burden of diabetes is mostly due to its micro- and macrovascular complications. Age-adjusted mortality in persons with diabetes is two- to fourfold higher than in nondiabetics, diabetes is the leading cause of end-stage renal disease (ESRD), and lower-limb amputations are 10-fold more frequent in persons with diabetes than in the general population. It is therefore very important to understand the role played by different risk factors for diabetes in order to prevent or slow the global epidemic of this disease. Developing countries, including Albania, are facing the first effects of this epidemic increase in diabetes prevalence and its related complications.

It is now well established that the development of type 2 diabetes results from the interaction between the genetic makeup of individuals and their environment. Although the genes have not changed in such a short period, the same cannot be said for the environment. In developed countries, factors such as reduced physical activity and increased alimentary intake have already created escalating coepidemics of obesity and type 2 diabetes, termed "diabesity," and the same pattern is now appearing in developing countries. Based on a better understanding of the pathophysiology of glucose intolerance, clinical trials on the prevention of diabetes have been performed. Recent trials have demonstrated that in subjects at high risk for type 2 diabetes, lifestyle modifications including diet and exercise can reduce the incidence of the disease: a moderate increase in physical activity, accompanied by a modest (5%–7%) reduction in body weight, reduced the conversion rate from impaired glucose tolerance (IGT) to type 2 diabetes by 58%. The results of these long-term prospective studies emphasize the importance of identifying subjects at high risk for type 2 diabetes in order to offer them an intervention program that will prevent or halt their progression to overt diabetes.

Epidemiological Data About Diabetes in Albania

Albania is a country with an area of 28,000 km² and approximately 3.2 million inhabitants. Currently 55% of the population lives in rural areas, but this is changing rapidly, with more people moving every day to urban and suburban areas. Up to the 1980s the prevalence of unknown diabetes in the Albanian adult population was extremely low compared with that reported in other European populations.

*Change History: September 2018. The section editor Orish Ebere Orisakwe updated the reference. This is an update of F. Toti, F. Agaci, G. Bejtja, A. Golay, Diabetes Mellitus in Albania: A Two Fold Increase in the Last Decade, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 60–69.


Fig. 1 Newly diagnosed cases of type 1 and type 2 diabetes in Albania for the period 1985–2005. Printed with permission of the Diabetes National Registry (Preliminary data) 2006.

In a survey conducted in Tirana, the capital of Albania, during 1976–80 among 162,706 adults living in the capital, the overall diabetes prevalence was approximately 1%. During 2001–06 two large screening studies were conducted in Tirana to determine the prevalence of unknown diabetes; the prevalence was 2.9% in the first study and 4.07% in the second. In another study, of a rural population in the southern region of Gjirokastra, the prevalence of unknown diabetes was 1.26%. Data from these studies are almost identical to those from other countries of the Balkan region. Based on the preliminary data of the National Diabetes Registry (2006), the prevalence of diagnosed type 2 diabetes in the adult population was 1.33%. Type 2 diabetes is the most common form, representing approximately 91% of diabetes cases in Albania.

The data from the Tirana surveys can serve as an indicator of the real prevalence of type 2 diabetes, whose diagnosis rate is currently extremely low. If the survey figures can be extrapolated to the whole country, an alarming rise in new cases of diabetes is to be expected in the near future, reaching an estimated prevalence of 5%, almost the same as in neighboring countries such as Greece (approximately 6%) or Italy and Turkey (7%).

Fig. 1 shows the newly diagnosed cases of diabetes for the period 1985–2005 and summarizes the constant rise of type 2 diabetes over the past 15 years. This rise cannot be explained solely by the application of the new diagnostic criteria for type 2 diabetes introduced in 1997 or by better screening. All developing countries are facing the same epidemiological increase in type 2 diabetes prevalence, and Albania is no exception; Fig. 1 shows that in Albania, too, diabetes is increasing rapidly.

In the following tables and figures, different factors that may have influenced the rise of diabetes during the transition period are analyzed. The factors included in this review are as follows:
1. The aging of the Albanian population
2. Urbanization
3. Lifestyle changes, including:
   a. The modification of alimentary habits
   b. The continuous decrease in physical activity
   c. Sedentary life behaviors
4. The increase in obesity and central adiposity.

The Aging of the Albanian Population

Increased Life Expectancy

The number of individuals aged 60 years or older is escalating rapidly worldwide. In 1999, this age group represented almost 10% of the world's population; by the year 2050, this proportion will increase to 20%. Furthermore, the population 80 years or older is projected to more than triple over the same period. By 2020, three-quarters of all deaths in developing countries will be aging-related, the largest number caused by noncommunicable diseases such as diseases of the circulatory system, cancers, and diabetes.

It is well known that diabetes, and especially type 2 diabetes, predominates in the elderly population, and different epidemiological studies have demonstrated that diabetes prevalence increases with age. In the United States, the prevalence of diabetes is 2.5% in the age group 20–39 years and 10% in the age group 40–59 years, rising to 20.9% in those 60 years and older. One of the major factors behind the high diabetes prevalence in elderly people is obesity, as part of the insulin resistance syndrome. Different studies have demonstrated that elderly persons are more likely to be overweight or obese owing to structural body changes and decreases in basal metabolic rate, muscle mass, and especially physical activity.

Table 1 shows some epidemiological data from the Tirana Diabetes Registry, part of the National Diabetes Registry. Tirana is the most populous district of the country, officially with more than 600,000 inhabitants. The large demographic movements of the past 15 years have contributed to the rapid growth of the city's population, with people coming from all over the country. This change in population has made Tirana an original and credible "sample" of the whole Albanian population for different epidemiological surveys.

Table 1  Type 2 diabetes prevalence according to age group and compared to the general population in Tirana city

Age group (years)   Frequency   (%)      General population   (%)     Diabetes prevalence (%)
15–44               318         4.88     236,814              45.5    0.13
45–54               1230        18.89    59,490               11.4    2.1
55–64               2180        33.47    43,448               8.4     5.2
≥65                 2771        42.55    41,349               7.9     6.7
Total               6513        100.00   381,101              73.32   1.72

Diabetes National Registry (preliminary data), 2006.
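Each prevalence figure in Table 1 is simply the registered case count divided by the general population of the same age group. A short Python sketch, using the table's own numbers, makes the calculation explicit; small discrepancies from the printed values presumably reflect rounding in the source.

```python
# Age-group prevalence = registered cases / general population * 100.
# Counts as printed in Table 1 (Tirana Diabetes Registry, preliminary data 2006).
registry = {
    "15-44": (318, 236_814),
    "45-54": (1_230, 59_490),
    "55-64": (2_180, 43_448),
    ">=65": (2_771, 41_349),
}

for group, (cases, population) in registry.items():
    print(f"{group}: {100 * cases / population:.2f}%")

total_cases = sum(cases for cases, _ in registry.values())
total_population = sum(pop for _, pop in registry.values())
print(f"All ages combined: {100 * total_cases / total_population:.2f}%")
```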

Fig. 2 Diabetes cases according to age group in the Albanian population (2006); case counts rise steeply with age and peak in the groups aged over 60 years. Printed with permission of the Diabetes National Registry (Preliminary data) 2006.

The table shows that diabetes prevalence increases from 0.13% in the age group 15–44 years to 6.7% in the age group 65 years and older. Fig. 2 presents data from the National Diabetes Registry (2006) on the number of persons with diabetes by age group. These preliminary data are drawn from the 12 largest districts of Albania, home to approximately two-thirds of the Albanian population and more than 20,000 persons with diabetes, and demonstrate that the age group 60 years and older predominates in the total diabetic population.

People today tend to live longer; life expectancy in some Western European countries is approaching or surpassing 80 years. The same tendency is found in the Albanian population, whose life expectancy has risen from 38 years in the 1940s to 74 years at present, with a projection of 77 years for the year 2020. Table 2 shows the increase in life expectancy at birth for the period 1930–2020. Life expectancy at birth is rising progressively, approaching the level of some Western countries; this creates the risk of an aging population, a situation already present in various Western European countries.

Why Is the Albanian Population Aging?

Albania is still a country with a young population: in 2006 the average age was 31.7 years. One would nevertheless expect the average age of the population to increase in the future.

Table 2  Life expectancy at birth in the Albanian population, 1930–2020

Years        Male   Female   Total
1930–38      37.2   38.8     38.0
1950–51      52.6   54.4     53.5
1960–61      63.7   66.0     64.9
1979–80      67.0   72.3     69.5
1989–90      69.3   75.4     72.2
1990–95      68.5   74.3     71.4
1995–2000    71.7   76.4     74.0
2001–05      72.1   78.6     75.3
2006–10(a)   72.9   79.2     76.1
2011–20(a)   74.0   80.1     77.1

(a) Projection from Albania, 2001–21. Printed with permission of INSTAT 2005.

Several factors underlie this trend:

• The increase in life expectancy, already mentioned. The crude death rate in Albania has remained roughly constant for the past 15 years, ranging from 5.1 to 6.0 per 1000 persons, while total deaths decreased by 8.1% between 1993 and 2001. In this way, more people are aging and living longer every year.
• Albania is a country with a high level of emigration, mostly for economic reasons. For the period 1990–2005, it is estimated that more than 600,000 Albanians emigrated abroad, mostly to Greece, Italy, and Germany. This part of the population was an active labor force, the majority of them males aged 20–40 years. A large part of the young population and labor force thus lives abroad, changing the natural structure of the population.
• Another major factor in the aging of the Albanian population is the decrease in the birth rate. In the 1960s the mean fertility rate in Albania was 6.8 births per woman; this high rate compensated for the then-high child mortality of nearly 40 per 1000 births. By 2001 child mortality had decreased to 14–17 per 1000 births, but the fertility rate had fallen to 2.3 births per woman, and it is projected to decrease further, to 1.4–1.8, by 2020. This can be explained mainly by economic difficulties, the high level of emigration among the population of active and fertile age, the wider use of birth control methods, and the increasing age of marriage for women. The decrease in fertility is occurring in both urban and rural areas, although fertility remains higher in the latter.

In 2001, the age group 0–14 years constituted 30.2% of the total Albanian population. According to population projections for 2020, this age group will decrease to 18.7%–21.9%, whereas the age group 65 years and older will almost double, from 7.2% to 11.2%–12.7%. This aging of the population will bring a further increase in diabetes prevalence, with a commensurate increase in the socioeconomic costs of the disease.

Urbanization

Until recently, the principal link between urbanization and health was air pollution, but this is changing as spreading obesity eclipses air pollution as a health problem. Different studies have demonstrated that urbanization plays an important and independent role in the increase in diabetes prevalence, strongly related to obesity. In rural areas diabetes prevalence is very low, probably owing to greater physical activity and a healthier lifestyle and diet. The overall diabetes prevalence in Albania was 1.33% according to the preliminary data of the National Diabetes Registry: 1.5% in urban areas and only 0.37% in rural areas. The urban/rural ratio of diabetes prevalence in Albania is similar to that in other countries, such as Argentina (6.7% urban vs. 2.6% rural), the United States (8% vs. 5%), and Egypt (10% vs. 4%), although the prevalence of diagnosed diabetes in Albania remains very low. According to the International Diabetes Federation (IDF) website, in 2003 the number of persons with diabetes living in urban areas was 78 million, against only 44 million in rural areas; by 2025, these numbers are expected to climb to 182 million and 61 million, respectively.

The wave of migration to urban areas cannot be blamed as the only factor behind this great difference in diabetes prevalence. Currently, more than 50% of the world's population lives in rural areas, although the figure is changing rapidly: according to a United Nations (UN) report, it is anticipated that by 2025 almost two-thirds of the world's population will live in cities. Numerous studies in developing countries have demonstrated that people moving from rural to urban areas have an excessive energy intake, in the form of sugar, refined grains, and fat. This dietary profile, referred to as a "Western diet," has been reported to be associated with obesity, diabetes, and heart disease. People moving to cities tend to find work primarily as day laborers or factory workers, leaving behind continuous physical labor for sedentary, sporadic work. This decrease in total energy expenditure, if not accompanied by a reduction in energy intake, may result in weight gain and potential obesity. Moreover, urban residents obtain a much higher proportion of energy from fats and sweeteners than do rural residents, even in the poorest areas of very low-income countries. New technologies in transportation, work, and leisure, along with these dietary changes, have increased obesity levels in the urban areas of developing countries.

The structure of the Albanian population is undergoing a great and rapid change due to migration to urban areas. Fig. 3 shows the constant increase in the urban population during the past 50 years.

Fig. 3 Structural changes in the Albanian urban and rural population for the period 1950–2025 (%). Printed with permission of INSTAT 2001 and 2007.


The current trend is for the urban population to grow every year; in the population projection for 2021 by the Institute of Statistics (INSTAT), it is foreseen that the ratio will be inverted, with 55% of the population urban and only 45% rural. Urbanization is associated with several dietary and behavioral risk factors for obesity, a rise in the unemployment rate, and higher stress levels in everyday life, creating fertile ground for an increase in obesity and diabetes prevalence.

Lifestyle Changes

Modification of Alimentary Habits

In today's society, people are increasingly exposed to unhealthy diets, and different studies have demonstrated the risk of obesity, cardiovascular disease, and diabetes in this "Coca-Cola and fast-food society." In the United States, over the past two decades, individual caloric intake has risen from 1876 to 2043 kcal per day, approximately 10% more for men and 7% more for women. Theoretically, consuming an extra 100 kcal a day for a year can lead to a gain of 4.5 kg. Unfortunately, diets in developing countries, especially in urban areas, are moving in the same direction.

Until the 1990s Albania was a country with a healthy Mediterranean diet, rich in vegetables, fruits, bread, and olive oil, with little meat and few milk products. This diet, together with a high degree of physical activity, paid off in the long term, making Albania a paradox of the "modern lifestyle," with a low prevalence of diabetes and cardiovascular disease and a relatively high life expectancy at birth. The economic growth following the political changes of 1990 brought a dramatic shift in dietary habits. Table 3 compares the consumption of different foods in 1990 with other Western European countries and summarizes the differences in the consumption of some of the most important dietary components, especially meat, milk, fruits, and vegetables. During the past decade these differences have narrowed, as all food groups have come to be consumed in greater quantities.

Table 4 reflects the changes in the supply of major food groups for the period 1965–2002. The per capita supply in Albania has exhibited two patterns. Before the 1990s, it consisted mainly of cereals (predominantly wheat and maize), accounting for approximately 70% of total daily intake; this was due to the predominance of traditional agricultural production, stimulated by communist policies aimed at staple-food sufficiency. After the liberalization of the economy in the early 1990s, an increase in the daily supply of many food groups (fruits and vegetables, meat, milk, and eggs) was observed. The supply is still characterized by a high share of cereals, but this share has fallen to less than 60% of total dietary energy supplies (DES), making room for an increase in other food groups, particularly dairy products, meat, and fruits. This increase suggests that dietary diversity is improving for a major part of the population. The supply of sweeteners and vegetable oils increased considerably from 1965–67 to 2000–02, and this increase could be a determinant of the nutrition transition emerging in Albania; different studies have underlined the role of increased consumption of sweeteners and animal fats in the global epidemic of obesity. Sweetener consumption in the Albanian population increased 1.5-fold between 1986 and 2002, whereas milk and egg consumption increased 2.5-fold over the same period. The only positive change in the new dietary habits of the Albanian population is the increase in fruit and vegetable consumption. The overall increase in total energy intake, combined with decreasing physical activity, is the most important factor behind the growing number of overweight and obese persons worldwide, especially in developing countries, including Albania.
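The 4.5 kg figure cited above follows from simple energy-balance arithmetic. Taking the commonly used approximation that a kilogram of adipose tissue stores roughly 7700 kcal (an assumption, and one that ignores metabolic adaptation to weight gain):

$$100~\frac{\text{kcal}}{\text{day}} \times 365~\text{days} = 36{,}500~\text{kcal}, \qquad \frac{36{,}500~\text{kcal}}{7700~\text{kcal/kg}} \approx 4.7~\text{kg},$$

close to the rounded figure quoted in the text.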
In the following paragraphs, several factors that have influenced the new trends in Albanian dietary habits and caloric intake are analyzed.

Continuous growth of agricultural production

The Albanian resident population has changed little during the past two decades, increasing from 3.1 million in 1991 to only 3.2 million in 2005, owing to the high rate of emigration abroad. Albania is one of the few economies in transition to have experienced positive agricultural growth throughout the reform process, whereas most Central and Eastern European countries saw their agricultural sectors contract for several years. After suffering a shock during 1991–92, Albanian agriculture began to improve after 1993, mainly as a result of the privatization of agricultural land, and from the very beginning of the transition period annual agricultural growth ranged between 3% and 7%. Exports of Albanian food products remained almost constant throughout this period, so the growth of agricultural production has gone essentially to the internal market, once again confirming the changes in the alimentary habits of the Albanian population.

Table 3  Food consumption in Albania compared to other countries in 1990 (kg per inhabitant per year)

Products                    Albania   Italy   Greece   Bulgaria
Cereals (excluding beer)    228       163     142      204
Fish                        3         18      18       9
Meat                        13        84      78       79
Milk                        123       278     224      205
Eggs                        6         12      11       14
Vegetables                  60        162     201      113
Fruits                      21        132     186      110
Sweeteners                  18        27      28       35

Ministry of Agriculture and Food, 1993; Printed with permission of INSTAT 1994.

Table 4  Trends in per capita supply of major food groups (in g per day) in Albania for the period 1965–2002

Major food groups            1965–67   1972–74   1979–81   1986–88   1993–95   2000–02
Cereals (excluding beer)     529       579       611       601       550       457
Starchy roots                38        45        50        39        55        87
Sweeteners                   38        44        48        50        80        70
Pulses, nuts, and oil crops  26        22        19        21        21        38
Fruits and vegetables        291       324       339       290       461       679
Vegetable oils               9         14        24        21        22        23
Animal fats                  13        9         9         12        10        7
Meat and offal               48        45        50        47        79        103
Fish and seafood             6         10        9         9         5         11
Milk and eggs                273       310       373       363       708       818
Other                        20        24        24        30        52        60

FAOSTAT, 2005.


Dietary shift during the transition period

According to the Food and Nutrition Division of the FAO, the average energy requirement of the Albanian population in 2001 was 2275 kcal per capita per day, whereas the DES was 2880 kcal per capita per day. Over the past 40 years the per capita DES has exhibited a steady increase, rising from 2261 kcal per day in 1965–67 to 2861 kcal per day in 2000–02, and remaining stable for the past decade. Another important change over this time frame is the share of lipids in total DES: in 1965–67 lipids accounted for 18% of DES, whereas in 2001 they accounted for 26%. The protein share has remained at the same level throughout (12% in 1965 and 14% in 2001), but its origin has changed: in 1965 proteins were mostly of vegetable origin, whereas in 2001 they were mainly of animal origin. Other studies conducted in developing countries have demonstrated the same pattern.

Increasing incomes

Income is another important element in the nutrition transition because it measures control over the flow of goods and services; in other words, income allows one to purchase goods or services that affect diet, activity, and nutritional status. The available literature appears to show that total caloric intake is increasing worldwide among all race, age, gender, and socioeconomic groups, with these calories increasingly coming from energy-dense, nutrient-poor snacks consumed throughout the day. Different studies have demonstrated that increasing income is strongly associated with changes in the proportion of dietary energy derived from various sources. It is clear that food cost plays a significant role in determining eating patterns and health behaviors, and individual food choice is also affected by pricing: both adults and adolescents indicate price as one of the most influential factors in determining food choice, second only to taste.

Gross domestic product (GDP) is a simple measure of changes in socioeconomic status at the national scale. GDP per capita in Albania has grown continuously since 1996, rising from 921 USD to 1128 USD in 2000 and 1740 USD in 2006. Although Albania still ranks among the poorest European countries, the continuous increase in incomes is evident; the "remittance economy" has also contributed to this growth, accounting for 10%–13% of nominal national GDP. The continuous increase in GDP and average real wages, together with market liberalization, has enabled the majority of the population to consume a wider variety of foods in greater quantities, dramatically changing dietary habits and, above all, increasing total caloric intake.
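The pace of this income growth can be checked with a one-line compound-growth calculation, using the GDP per capita figures quoted above:

```python
# Compound annual growth rate (CAGR) of Albanian GDP per capita, 1996-2006,
# computed from the figures cited in the text.
gdp_1996, gdp_2006, years = 921, 1740, 10
cagr = (gdp_2006 / gdp_1996) ** (1 / years) - 1
print(f"Average annual GDP per capita growth, 1996-2006: {cagr:.1%}")  # ~6.6%
```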

Continuous Decrease in Physical Activity

It is widely believed that reduced physical activity and increasing sedentary behavior, such as watching TV, are major risk factors for obesity and the metabolic syndrome. In most developed countries, increases in obesity prevalence have occurred in parallel with declines in physical activity. In 2005, according to a US survey, 57% of American adults exercised only occasionally or not at all, a figure corresponding closely to the share of the adult population that is overweight. Spontaneous physical activity (fidgeting), an often overlooked component of energy expenditure, can account for the expenditure of 100–800 kcal per day. During the past decade, epidemiological studies such as the US Diabetes Prevention Program and the Finnish Diabetes Prevention Study have demonstrated the positive role of physical activity in improving insulin sensitivity and glucose tolerance and thus reducing the incidence of type 2 diabetes. The prevalence of type 2 diabetes is lower in physically active persons independently of age group, family history, or body mass index (BMI). Several factors can be mentioned as contributors to decreased physical activity; the most relevant for the Albanian population are analyzed in the following paragraphs.


Unemployment rate

The workplace is one of the main settings where energy is expended; physical activity, the one controllable component of total energy expenditure, is thought to account for 15%–30% of daily energy expenditure. A person experiencing a change in labor practices may therefore see a decline of as much as 1000 kcal per day, equivalent to a 50% reduction in physical activity. For this reason, the role of unemployment in the decrease in physical activity is analyzed here.

Until 1990, levels of physical activity in Albania were high, as agriculture was poorly mechanized and highly labor intensive, and in urban areas the industrial technology was outdated and intensive manual labor was needed. Unemployment was a rare phenomenon in Albania before the 1990s, but the transition to the new political system was accompanied by a total collapse of the economy, the consequences of which the state neither anticipated nor was prepared for. The period 1992–94 was the worst in terms of unemployment, with figures oscillating between 22% and 26%. Since 2000 the unemployment rate has declined continuously, owing to dynamic growth in the private sector. However, it is difficult to make a full assessment of the Albanian labor market using only the official data of the past 10 years, because a large share of the population remains in rural areas (45%–50%). Albania remains a country with a high level of unemployment, which creates the conditions for a sedentary life. Moreover, the poorest people are at greater risk of either undernourishment or obesity, because fatty food is cheaper than healthy food; this further increases their risk of overweight or obesity and, in turn, the future prevalence of type 2 diabetes.

Vehicle ownership

The thermodynamic principles of energy balance require recognizing the influence of both dietary change and reduced physical activity. The collapse in the demand for physical exertion came with cheaper cars for personal transport and multiple mechanical and electrical aids that removed physical demands at home and at work. The majority of developing countries have now seen marked increases in the use of cars and other motorized transport, at the expense of more active commuting by walking or cycling. According to past studies, the risk of obesity increases by 6% for every hour spent commuting by car each day. Automobiles have eliminated daily walking and cycling, and elevators and escalators have replaced stairs; all this mechanization has decreased the physical activity component of energy expenditure, creating the conditions for a further increase in BMI and a greater risk of obesity.

Until the 1990s private vehicle ownership in Albania was totally absent: the few vehicles in the country were reserved for members of the government, and public transport was neither well organized nor frequent. The situation has changed dramatically since 1991, when Albania opened up rapidly to Western influences, facilitating the purchase of cars and their use in everyday life (Fig. 4). Fig. 4 summarizes one of the major changes in the Albanian lifestyle. Vehicle ownership figures are still far below those of developed countries such as the United States, where the number of vehicles per 1000 inhabitants exceeded 750 in 1991, or Italy, with 538 vehicles per 1000 inhabitants in the same period, but the number is growing rapidly. The graph does not include the increase in motorcycle ownership, which officially exceeded 12,000 in 2006, fivefold the number in 1998. A similar increase is seen for agricultural machinery, which grew by 20% between 2000 and 2005. The level of physical activity is decreasing in parallel with the rise in vehicle ownership.

Sedentary Life and Leisure at Home

Related to the effect of modernization and globalization on market production is a similar shift in the allocation of time and physical effort in home and leisure activities. Thanks to home electrification, food-preparation technologies are now present in every Albanian house, together with various housecleaning and washing machines, easing housework but further decreasing energy expenditure. The time saved is spent not on physical activity but on sedentary pursuits such as watching television, playing computer games, or reading. According to a 2001 survey of 1120 adults living in Tirana city, approximately half (43.4% of men and 51.9% of women) participated only in sedentary activities such as reading or watching TV.

Fig. 4 Number of cars per 1000 inhabitants in Albania for the period 1991–2006. Printed with permission of INSTAT 2007.


The rapid increase in television ownership, such that virtually every Albanian family now has at least one TV, is a simple illustration of the trends described above for developing countries. The situation is even more alarming among children, who have replaced the active games of the past with sedentary activities such as viewing television or playing computer games. Albania still lacks data on the time children spend in such sedentary activities, but the general observation of physicians is that the country is facing an increase in overweight and obesity in children and adolescents; this warrants further study.

The Increase in Obesity and Central Adiposity

The prevalence of obesity is increasing worldwide, including in some developing countries with previously low prevalence, such as China and India. Overweight and obesity have been associated with elevated blood pressure, cholesterol, triglycerides, and insulin resistance, and the relationship between obesity and type 2 diabetes is well established: BMI is directly and continuously related to the risk of type 2 diabetes. Confined to older adults for most of the 20th century, type 2 diabetes now affects obese children even before puberty. Worldwide, approximately 85% of persons with diabetes have type 2, and 90% of them are obese or overweight. In the United States, 35% of adults and 14% of children are obese, whereas 61% of all adults are overweight; the corresponding overweight figure is 51% of adults in the United Kingdom and approximately 50% in Germany.

As the economies of developing countries continue to improve, the risk of becoming obese or overweight increases across all socioeconomic classes as a result of improved access to food, decreased physical activity, and the consumption of "Western diets." These factors create an environment that may predispose people to become overweight or obese. In Chile, obesity prevalence increased from 14% to 23% in women and from 6% to 16% in men between 1988 and 1997; in Brazil, the proportion of obese adults increased from 5.7% in 1974 to 9.6% in 1989; and the same situation is present in other countries. Several studies conducted in Europe, the United States, Australia, and Canada show a socioeconomic gradient in diet, whereby persons in higher socioeconomic groups tend to have healthier alimentary habits and to exercise more. The opposite is observed in developing countries, where low-income groups tend to use inexpensive vegetable oils and trans fats and to exercise less.

Fig. 5 shows the results of three studies that evaluated the prevalence of obesity and central obesity in Albanian adults living in Tirana during 2001–06. The first study, conducted in 2001 with 1120 adults living in the urban area of Tirana, found that the nutrition transition had begun to show its first effects, confirmed by an overall obesity prevalence of 29.5% among adults 25 years or older, more frequent in women than in men. In 2003 and 2007, two surveys were conducted by the Service of Endocrinology. The first, covering more than 5500 healthy adults living in Tirana district (urban and suburban areas), found an overall obesity prevalence of 32.4%, again more pronounced in women, with central obesity present in 50.1% of men and 79.9% of women. In the second survey, in 2007, the overall prevalence of obesity among nearly 6500 adults with type 2 diabetes living in Tirana district (urban and rural areas) was 32.7%.

The fact that the prevalence of obesity in Tirana, the capital of Albania, is considerably higher than in Mediterranean countries such as Italy or Spain is a cause for great concern: in 2003, according to the WHO database, only 9% of Italian adults and 13.3% of Spanish adults were obese. The obesity prevalence in the healthy population is almost identical to that among adults with type 2 diabetes, indicating a high risk of metabolic syndrome in the Albanian adult population.
The lack of previous data on body weight patterns in Albania precludes the identification of time trends in obesity prevalence, but there are reasons to believe that overweight and obesity increased substantially among Albanian adults during the transition period. The high prevalence of overweight and obesity, especially among middle-aged persons, and the strong association between obesity and central obesity observed in these studies suggest that Albania could soon face, or is already facing, major increases in morbidity and mortality from diabetes, cardiovascular disease, and other chronic diseases.

Fig. 5 Prevalence of obesity and central obesity (men, women, and overall) in surveys realized in Tirana city, 2001–06 (2001, L. Shapo et al.; 2003, A. Ylli et al.; 2007, F. Toti et al.).
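For reference, the anthropometric definitions underlying surveys such as these can be sketched in a few lines of Python. The BMI cutoffs below are the standard WHO categories; the waist-circumference thresholds are the widely used NCEP ATP III cutoffs for central adiposity, given here as an illustrative assumption, since the exact criteria used by the Albanian surveys are not stated in this article.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    """Standard WHO BMI categories."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

def central_adiposity(waist_cm: float, sex: str) -> bool:
    """Central adiposity by waist circumference, using the NCEP ATP III
    cutoffs (>102 cm for men, >88 cm for women) as an illustrative choice."""
    return waist_cm > (102 if sex == "male" else 88)

# Worked example with hypothetical measurements.
subject = {"weight_kg": 92, "height_m": 1.70, "waist_cm": 104, "sex": "male"}
b = bmi(subject["weight_kg"], subject["height_m"])
print(f"BMI = {b:.1f} ({bmi_category(b)})")  # BMI = 31.8 (obese)
print("central adiposity:", central_adiposity(subject["waist_cm"], subject["sex"]))
```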


Conclusions

The need for studies on the increasing prevalence of obesity and type 2 diabetes in developing countries is greater now than ever before. A cascade of diseases follows from poor dietary habits and obesity, including major killers such as heart disease, diabetes, and cancer. These chronic diseases were once of concern mainly in the developed world, but they are now a major health concern in developing countries.

Globalization changes many features of modern life, including diets; fully 11% of global trade is in food. Diet, however, is only part of the problem. Physical activity levels have declined greatly in the past few decades, as economic modernization has systematically eliminated exercise from everyday life. Our ancestors probably obtained most of their physical activity through work, household chores, and transportation, all of which have been greatly reduced by automation and computers at work, labor-saving devices at home, and motorized transport. The stage is thus set for chronic diseases linked to abundant energy availability and reduced physical activity, associated with development, improved socioeconomic status, and urbanization.

The available data show that Albania is facing an alarming increase in obesity and type 2 diabetes, which may lead to dramatic increases in morbidity and mortality from noncommunicable diseases if nothing is done to reduce current prevalence rates. Modifiable lifestyle behaviors such as reduced physical activity and increased dietary intake are already present in Albania, and the first signs of a further rise in type 2 diabetes prevalence are now being seen. According to the Albanian National Diabetes Registry, there are about 40,000 persons with diagnosed type 2 diabetes and more than 60,000 persons with unknown diabetes; by 2025, according to the IDF World Diabetes Atlas, the number of persons with diagnosed diabetes is projected to climb to 90,000.

The benefits of behavioral interventions in reducing the rates of cardiovascular disease and diabetes in populations have been well proven in countries such as Finland, the United States, Japan, and Singapore. Health-care professionals should encourage all behavior modifications that lead to a healthy lifestyle, but individual actions alone are not sufficient to halt the epidemic. In Albania, significant efforts must be made to establish the strategies most likely to be effective in reducing obesity and promoting physical activity, thus helping to prevent the major burden of diabetes, obesity, and cardiovascular diseases.

Acknowledgment

The authors thank the Novo Nordisk Office in Tirana for supporting different diabetes projects in Albania, such as the National Diabetes Registry and screening programs. The authors also thank INSTAT for providing figures on demographic and socioeconomic changes, and the Ministry of Health for providing figures on health and nutrition changes over the past 15 years.

See also: Pesticide Exposure and Diabetes.

Further Reading
Brownson, R.C., Boehmer, T.K., Luke, D.A., 2005. Declining rates of physical activity in the United States: What are the contributors? Annual Review of Public Health 26, 421–443.
Burazeri, G., Achterberg, P., 2015. Health status in the transitional countries of South Eastern Europe. South Eastern European Journal of Public Health 3 (1), 1–5.
Frank, L.D., Andresen, M.A., Schmid, T.L., 2004. Obesity relationships with community design, physical activity, and time spent in cars. American Journal of Preventive Medicine 27, 87–96.
Gallus, S., Lugo, A., Murisic, B., Bosetti, C., Boffetta, P., La Vecchia, C., 2015. Overweight and obesity in 16 European countries. European Journal of Nutrition 54 (5), 679–689.
Gjonça, A., Bobak, M., 1997. Albanian paradox, another example of protective effect of Mediterranean lifestyle? Lancet 350, 1815–1817.
Gordon-Larsen, P., Adair, L., Popkin, B.M., 2002. US adolescent physical activity and inactivity patterns are associated with overweight: The National Longitudinal Study of Adolescent Health. Obesity Research 10, 141–149.
King, H., Aubert, R.E., Herman, W.H., 1998. Global burden of diabetes, 1995–2025: Prevalence, numerical estimates, and projections. Diabetes Care 21, 1414–1431.
Kraja, F., Kraja, B., Mone, I., Harizi, I., Babameto, A., Burazeri, G., 2016. Self-reported prevalence and risk factors of non-communicable diseases in the Albanian adult population. Medical Archives 70 (3), 208.
Mantzoros, C.S., 2006. Obesity and diabetes. Humana Press, Totowa, NJ, pp. 15–36, 99–117, 277–290.
Monteiro, C.A., Moura, E.C., Conde, W.L., et al., 2004. Socioeconomic status and obesity in adult populations of developing countries: A review. Bulletin of the World Health Organization 82, 940–946.
Pomerleau, J., McKee, M., Lobstein, T., Knai, C., 2003. The burden of disease attributable to nutrition in Europe. Public Health Nutrition 6, 453–461.
Popkin, B.M., 2001. The nutrition transition and obesity in the developing world. Journal of Nutrition 131, 871S–873S.
Salmon, J., Owen, N., Crawford, D., et al., 2003. Physical activity and sedentary behavior: A population-based study of barriers, enjoyment, and preference. Health Psychology 22, 178–188.
Shapo, L., Coker, R., McKee, M., et al., 2002. Tracking diabetes in Albania: A natural experiment on the impact of modernization on health. Diabetic Medicine 19, 87–88.
Shapo, L., Pomerleau, J., McKee, M., et al., 2003. Body weight patterns in a country in transition: A population-based survey in Tirana City, Albania. Public Health Nutrition 6, 471–477.
Shapo, L., Pomerleau, J., McKee, M., 2004. Physical inactivity in a country in transition: A population-based survey in Tirana city, Albania. Scandinavian Journal of Public Health 32, 60–67.
Tapia Granados, J.A., 1998. Reducing automobile traffic: An urgent policy for health promotion. Pan American Journal of Public Health 3 (4), 227–241.
Toti, F., Bejtja, G., Hoti, K., Shota, E., Agaçi, F., 2007. Poor control and management of cardiovascular risk factors among Albanian diabetic adult patients. Primary Care Diabetes 1, 81–86.
Tuomilehto, J., Lindstrom, J., Eriksson, J.G., et al., 2001. Finnish Diabetes Prevention Study Group: Prevention of type 2 diabetes mellitus by changes in lifestyle among subjects with impaired glucose tolerance. New England Journal of Medicine 344, 1343–1350.
Ylli, A., Toti, F., Bejtja, G., et al., 2006. Low prevalence of diabetes and bad metabolic control in Albanian population. First step in implementing national register of diabetes in Albania. Diabetic Medicine 23 (Suppl. 4), 169.
Zeqollari, A., Spahiu, K., Vyshka, G., Çakërri, L., 2014. Lipid profile in diabetes mellitus type 2 patients in Albania and the correlation with BMI, hypertension, and hepatosteatosis. Journal Family Medicine Community Health 1 (4), 1018.

Relevant Websites
FAO, n.d. www.fao.org/ag/agn/nutrition/alb_en.stm (FAO).
INSTAT, n.d. www.instat.gov.al (Albanian Institute of Statistics).
INSTAT, n.d. www.instat.gov.al/repoba/english/Researches/anglisht/projections/projection25fevrier05.pdf.
INSTAT, n.d. www.instat.gov.al/repoba/english/default_english.htm.
EATLAS, n.d. www.eatlas.idf.org (International Diabetes Federation).

Diazinon – An Insecticide
Consolato M Sergi, University of Alberta, Edmonton, AB, Canada
© 2019 Elsevier B.V. All rights reserved.

Chemistry, Production, Use, and Exposure
Diazinon (International Union of Pure and Applied Chemistry (IUPAC) name: O,O-diethyl O-[4-methyl-6-(propan-2-yl)pyrimidin-2-yl] phosphorothioate; INN: dimpylate), abbreviated DZN, is a colorless to dark brown liquid (Fig. 1). DZN is a thiophosphoric acid ester, which was industrialized in 1952 by Ciba-Geigy, a Swiss chemical company. Originally, DZN was produced from a β-isobutyrylaminocrotonic acid amine, which is cyclized with NaOR; in this notation, R is either an aliphatic chain of 1–8 carbons or hydrogen. DZN was formed in a mixture of water and an alcohol of 1–8 carbon atoms at a temperature above 90°C. The potassium salt then reacts with diethylthiophosphoryl chloride under heating for several hours. At the end of the reaction, the potassium chloride is washed out with water and the solvent is removed, leaving DZN as the final product. Ciba-Geigy later became Novartis and then Syngenta, which was formed in 2000 by the merger of Novartis Agribusiness and Zeneca Agrochemicals. DZN is a nonsystemic organophosphate insecticide. In the past, DZN was used to control cockroaches, silverfish, ants, and fleas in residential buildings. DZN became available in 1955, but only in the 1970s and early 1980s was it used massively for general-purpose gardening and indoor pest control. It was developed as an alternative to dichlorodiphenyltrichloroethane, commonly known as DDT, and was heavily used in both cities and the countryside. In 2004, the residential use of DZN was banned in the United States; its use is now permitted only for agricultural applications and, specifically, for cattle ear tags intended to hold chemicals that control insects (vide infra). Exposure can occur by inhalation, ingestion, and dermal contact.

Toxicity
DZN acts as an inhibitor of acetylcholinesterase (AChE), the enzyme that breaks down the neurotransmitter acetylcholine (ACh) into choline and an acetate group. Suppression of AChE activity provokes an abnormal build-up of ACh in the synaptic cleft. Following entrance into the body, DZN is oxidatively metabolized to diazoxon, an organophosphate compound with much greater toxic activity than DZN itself. There are two reactions associated with the activation of DZN: one located in the liver microsomal enzyme system requiring O2 and NADPH, and the other in the microsomal enzyme system via oxidation. Subsequently, DZN is processed further by very effective hydrolases. Insects and mammals degrade DZN at different rates; insects degrade it more slowly, and the lack of the hydrolysis step in insects seems to explain the rapid accumulation of DZN and its lethality in insects. After hydrolysis or oxidation, DZN is degraded further. DZN is highly toxic to vertebrates, and symptoms of acute intoxication include colic, diarrhea and/or vomiting, vertigo, headache, miosis, bradycardia, a rapid drop in blood pressure, convulsions, and apnea that may evolve to death. The oral LD50 for DZN is 214 mg/kg in humans, 66 mg/kg in rats, and 17 mg/kg in mice. Generally, treatment differs depending on the exposure and the route of administration of the toxin; critical biomarkers (e.g., urine, blood) as well as heart rate are monitored while the patient is detoxified using assisted breathing, intravenous (IV) fluids, skin/eye washing, and administration of the antidotes atropine and oxime. DZN toxicity may persist for weeks or months; DZN is fat soluble and may be stored in fatty tissues. Exposure to some organophosphate pesticides can result in long-term neurological disturbances, including organophosphate-induced delayed neuropathy with weakness or paralysis. Paresthesia of the extremities, a burning or prickling sensation that can occur in any part of the body but is most often experienced in the hands, arms, legs, or feet, has been recorded, although it is a rare event following DZN exposure. Individuals can also present with acute pancreatitis in emergency departments and surgical wards (Camann et al., 2013).
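The LD50 values above are expressed per kilogram of body weight; a minimal sketch (illustrative only, not a clinical tool) of how they translate into absolute doses:

```python
# Oral LD50 values for DZN as quoted in the text (mg per kg of body weight).
LD50_ORAL_MG_PER_KG = {"human": 214.0, "rat": 66.0, "mouse": 17.0}

def median_lethal_dose_mg(species: str, body_weight_kg: float) -> float:
    """Absolute dose (mg) corresponding to the oral LD50 at a given body weight."""
    return LD50_ORAL_MG_PER_KG[species] * body_weight_kg

# For a 70 kg adult, the quoted human oral LD50 corresponds to roughly 15 g.
print(f"{median_lethal_dose_mg('human', 70.0) / 1000:.1f} g")  # 15.0 g
```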

Cancer in Experimental Animals
DZN was tested for carcinogenicity in one feeding study in male and female mice and one feeding study in male and female rats. DZN induced a significant increase in the rate of hepatocellular carcinoma in male mice that could not be attributed to DZN, because it was observed in low-dose males only, at a rate not much higher than historical levels for a tumor site that is highly variable in this strain of mice.

Fig. 1 Structural formula of diazinon.


There were no significant results in high-dose males or in female mice at any dose. DZN induced a substantial increase in the incidence of blood cancer (leukemia) or lymphoma in male rats that likewise could not be attributed to DZN, because it was observed in low-dose males only, at a rate not much higher than the upper range of historical rates for a tumor site that is highly variable in this strain of rats. Lymphoma is a type of hematological malignancy of the lymphatic system. The two main types are Hodgkin lymphoma and non-Hodgkin lymphoma (NHL). Typically, Hodgkin lymphoma has light-microscopic features that distinguish it from other diseases classified as lymphoma, including the presence of Reed-Sternberg cells. There were no significant findings in high-dose males or in female rats at any dose. The IARC Working Group and Evaluation Committee concluded that there is inadequate evidence for the carcinogenicity of DZN in experimental animals, because positive findings occurred only in male animals receiving a low dose of the compound in each study. Recently, it has been suggested that the cumulative effects of individual chemicals acting on different pathways, and the diversity of related systems (organs, tissues, and cells), could reasonably collude to produce carcinogenic synergies (Goodson 3rd et al., 2015).

Cancer in Humans
Three cohort studies reported relative risk estimates for the link between DZN exposure and cancer outcomes: the Florida Pest Control Worker investigation (Pesatori et al., 1994), the United Farm Workers of America cohort analysis (Mills and Yang, 2005; Mills et al., 2005; Mills and Kwong, 2001), and the Agricultural Health Study (AHS) (Alavanja et al., 2004, 2014). The studies were conducted among farm workers (United Farm Workers of America) and professional pesticide users (Florida Pest Control Worker Study; AHS) and their wives (AHS) in the United States. These studies showed a positive association for NHL, with indications of exposure–response trends. Convincing data have been reported by two large multicenter case-control investigations of occupational exposures (Waddell et al., 2001; McDuffie et al., 2001). The specific histologic types of NHL have been published from the AHS (Waddell et al., 2001). These positive relations persisted even after adjustment for other pesticides, but no overall increased statistical risk of NHL could be established (Alavanja et al., 2014). In particular, Alavanja et al.'s update of the AHS included analyses of 54,306 male pesticide applicators, among whom there were 523 cases of NHL classified into subtypes using the Surveillance Epidemiology and End Results (SEER) systematization: 148 small B-cell lymphocytic lymphomas (SLL)/chronic B-cell lymphocytic lymphomas (CLL)/mantle cell lymphomas (MCL), 117 diffuse large B-cell lymphomas (DLBCL), 67 NHLs of follicular type, 53 other B-cell NHLs, 97 plasmacytomas/multiple myelomas, 19 T-cell NHLs, and 22 NHLs of undefined cell histologic type. An exposure–response association was not observed for DLBCL. Polytomous logit models showed some heterogeneity across subtypes for DZN, although this did not reach statistical significance. The only consistent pattern was for follicular lymphoma, the single NHL subtype showing an exposure–response relation with DZN (P = 0.04), although the trend was not statistically significant. There is some support for an increased risk of leukemia in the AHS, reinforced by an increase in risk with cumulative DZN exposure after adjustment for other pesticides. In successive updates of the AHS, there is a consistently increased risk of lung carcinoma with an exposure–response link that was not explained by confounding factors, other pesticides, smoking, or other lung cancer risk factors (Jones et al., 2015). Nevertheless, these data were not reproduced in other populations or groups of workers. Finally, some intriguing support for human relevance was provided by a positive study of volunteers exposed to a DZN formulation (Hatjian et al., 2000). Sister chromatid exchange (SCE) is a marker of chromosome damage; SCEs were elevated in peripheral blood lymphocytes after exposure compared with unexposed volunteers. In vitro studies showed increased SCE and decreased replicative indices, suggesting toxic and genotoxic effects of DZN (Hatjian et al., 2000). In 2014, Schinasi and Leon conducted a systematic review and meta-analysis of NHL and occupational exposure to agricultural pesticides, including DZN (Schinasi and Leon, 2014). In this meta-analysis, three studies dealing with DZN were examined (Mills et al., 2005; Waddell et al., 2001; McDuffie et al., 2001); they yielded a clear-cut meta-risk ratio of 1.6 (95% CI, 1.2–2.2).
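A pooled estimate of this kind is typically formed by inverse-variance weighting on the log scale. A minimal sketch, with hypothetical study-level risk ratios chosen purely for illustration (not the actual Schinasi and Leon inputs or procedure):

```python
import math

def log_se_from_ci(lower: float, upper: float) -> float:
    """Recover the log-scale standard error from a reported 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

def pooled_risk_ratio(ratios, ses):
    """Fixed-effect (inverse-variance) pooling of risk ratios on the log scale."""
    weights = [1.0 / se ** 2 for se in ses]
    log_rr = sum(w * math.log(r) for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    lo, hi = math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se)
    return math.exp(log_rr), (lo, hi)

# Hypothetical study-level risk ratios with 95% CIs, for illustration only.
studies = [(1.9, (1.1, 3.3)), (1.4, (0.9, 2.2)), (1.7, (1.0, 2.9))]
rr, (lo, hi) = pooled_risk_ratio([r for r, _ in studies],
                                 [log_se_from_ci(*ci) for _, ci in studies])
print(f"Pooled RR = {rr:.1f} (95% CI, {lo:.1f}-{hi:.1f})")  # ~1.6 (1.2-2.2)
```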
Overall, there is inadequate evidence of carcinogenicity of DZN in humans.

Mechanisms of Carcinogenicity
Orally administered DZN is absorbed in humans, dogs, and rodents. Data from human volunteers exposed to DZN indicate that cutaneous absorption is slower than oral intake, but still present. The metabolism of DZN involves cytochrome P450 (CYP450), paraoxonase 1 (PON1), and carboxylesterases, and is similar across species, including humans. The rapid metabolism of DZN includes the formation of diazoxon or 2-isopropyl-4-methyl-6-hydroxypyrimidine (IMPY) by numerous cytochrome P450s. PON1 can metabolize diazoxon to IMPY and diethyl phosphate (DEP). In humans and experimental animals, DZN is excreted as DEP, IMPY, or other metabolites. Bioassay studies showed either DNA damage (oxidative DNA damage, DNA strand breaks) or chromosomal damage (micronuclei) (Guha et al., 2016; Zhang et al., 2012). In vitro studies with human cell lines also showed DNA damage (DNA strand breaks) or chromosomal damage (micronuclei, SCE). DZN induced oxidative stress in human and mammalian cells in vitro and in a variety of tissues in numerous rodent studies in vivo; pretreatment with various antioxidants mitigated these effects. DZN induces oxidative stress through alteration of antioxidant enzyme activity, increased lipid peroxidation, and depletion of glutathione. In human cell lines, DZN decreased the induction of regulators of the immunologic system, while pathological effects consisted of suppression of the humoral immune response.


Cellular functional responses have been observed in rodents in vivo. Specific effects involved 7 of the 10 "key characteristics" of carcinogens, based on data from high-throughput screening in vitro (Guyton et al., 2018; Chiu et al., 2018). Overall, the mechanistic data provide robust support for the carcinogenicity findings on DZN. The mechanistic effects include strong evidence for both oxidative stress and genotoxicity, and there is solid evidence that these effects can operate in humans.

Safety and Conclusions
DZN is an organophosphate pesticide whose use is restricted in many countries worldwide. Safety is a major concern regarding this pesticide because of its toxicity. There is strong mechanistic evidence for carcinogenicity that has not been matched by studies in humans and experimental animals: in both humans and bioassays, only limited evidence for carcinogenicity was gathered by the IARC Evaluation Committee (IARC, 2017). The Group 2A classification of DZN by IARC (probably carcinogenic to humans) thus relies on inadequate evidence of carcinogenicity in both humans and experimental animals, but strong mechanistic evidence. There is strong evidence that exposure to DZN is genotoxic; this fundamental finding arises from studies in experimental animals in vivo and from studies using animal cell lines. Convincing in vitro studies in human cell lines prove that DZN causes chromosomal damage, demonstrating that this mechanism can be operative in humans. Moreover, DZN can induce oxidative stress, with data arising from experimental animals in vivo and from studies in cell lines (human and animal) in vitro. There are seven or more different brands of ear tags on the market in Canada and the USA. The ear tags can be subdivided into organophosphates and pyrethroids; one product (Eliminator®) is a combination of organophosphates and synthetic pyrethroids. The ear tags are characterized by slow release, so the chemical compound will eventually coat the halter and much of the horse through grooming. Chemical-resistant gloves are recommended for protection when applying and handling the tags, since the ear tags can pose a chemical-exposure hazard for human handlers. The insecticide cattle ear tag with the commercial name Patriot™ is the only insecticide ear tag on the market that contains 40% diazinon. This ear tag is effective against pyrethroid-resistant horn flies for up to five months and thus needs to be replaced regularly. It also helps in the control of face flies, stable flies, house flies, and lice.

References
Alavanja, M.C., Dosemeci, M., Samanic, C., Lubin, J., Lynch, C.F., Knott, C., et al., 2004. Pesticides and lung cancer risk in the agricultural health study cohort. American Journal of Epidemiology 160, 876–885.
Alavanja, M.C., Hofmann, J.N., Lynch, C.F., Hines, C.J., Barry, K.H., Barker, J., et al., 2014. Non-Hodgkin lymphoma risk and insecticide, fungicide and fumigant use in the agricultural health study. PLoS One 9, e109332.
Camann, D.E., Schultz, S.T., Yau, A.Y., Heilbrun, L.P., Zuniga, M.M., Palmer, R.F., et al., 2013. Acetaminophen, pesticide, and diethylhexyl phthalate metabolites, anandamide, and fatty acids in deciduous molars: Potential biomarkers of perinatal exposure. Journal of Exposure Science & Environmental Epidemiology 23, 190–196.
Chiu, W.A., Guyton, K.Z., Martin, M.T., Reif, D.M., Rusyn, I., 2018. Use of high-throughput in vitro toxicity screening data in cancer hazard evaluations by IARC Monograph Working Groups. ALTEX 35, 51–64.
Goodson 3rd, W.H., Lowe, L., Carpenter, D.O., Gilbertson, M., Manaf Ali, A., Lopez de Cerain Salsamendi, A., et al., 2015. Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: The challenge ahead. Carcinogenesis 36 (Suppl 1), S254–S296.
Guha, N., Guyton, K.Z., Loomis, D., Barupal, D.K., 2016. Prioritizing chemicals for risk assessment using chemoinformatics: Examples from the IARC Monographs on Pesticides. Environmental Health Perspectives 124, 1823–1829.
Guyton, K.Z., Rusyn, I., Chiu, W.A., Corpet, D.E., van den Berg, M., Ross, M.K., et al., 2018. Application of the key characteristics of carcinogens in cancer hazard identification. Carcinogenesis 39, 614–622.
Hatjian, B.A., Mutch, E., Williams, F.M., Blain, P.G., Edwards, J.W., 2000. Cytogenetic response without changes in peripheral cholinesterase enzymes following exposure to a sheep dip containing diazinon in vivo and in vitro. Mutation Research 472, 85–92.
IARC, 2017. Diazinon. Some organophosphate insecticides and herbicides. In: IARC monographs on the evaluation of the carcinogenic risk of chemicals to humans. WHO Press, Lyon, France, pp. 1–97.
Jones, R.R., Barone-Adesi, F., Koutros, S., Lerro, C.C., Blair, A., Lubin, J., et al., 2015. Incidence of solid tumours among pesticide applicators exposed to the organophosphate insecticide diazinon in the agricultural health study: An updated analysis. Occupational and Environmental Medicine 72, 496–503.
McDuffie, H.H., Pahwa, P., McLaughlin, J.R., Spinelli, J.J., Fincham, S., Dosman, J.A., et al., 2001. Non-Hodgkin's lymphoma and specific pesticide exposures in men: Cross-Canada study of pesticides and health. Cancer Epidemiology, Biomarkers & Prevention 10, 1155–1163.
Mills, P.K., Kwong, S., 2001. Cancer incidence in the United Farmworkers of America (UFW), 1987–1997. American Journal of Industrial Medicine 40, 596–603.
Mills, P.K., Yang, R., 2005. Breast cancer risk in Hispanic agricultural workers in California. International Journal of Occupational and Environmental Health 11, 123–131.
Mills, P.K., Yang, R., Riordan, D., 2005. Lymphohematopoietic cancers in the United Farm Workers of America (UFW), 1988–2001. Cancer Causes & Control 16, 823–830.
Pesatori, A.C., Sontag, J.M., Lubin, J.H., Consonni, D., Blair, A., 1994. Cohort mortality and nested case-control study of lung cancer among structural pest control workers in Florida (United States). Cancer Causes & Control 5, 310–318.
Schinasi, L., Leon, M.E., 2014. Non-Hodgkin lymphoma and occupational exposure to agricultural pesticide chemical groups and active ingredients: A systematic review and meta-analysis. International Journal of Environmental Research and Public Health 11, 4449–4527.
Waddell, B.L., Zahm, S.H., Baris, D., Weisenburger, D.D., Holmes, F., Burmeister, L.F., et al., 2001. Agricultural use of organophosphate pesticides and the risk of non-Hodgkin's lymphoma among male farmers (United States). Cancer Causes & Control 12, 509–517.
Zhang, X., Wallace, A.D., Du, P., Kibbe, W.A., Jafari, N., Xie, H., et al., 2012. DNA methylation alterations in response to pesticide exposure in vitro. Environmental and Molecular Mutagenesis 53, 542–549.

Dichloromethane – A Paint Stripper and Plastic Welding Adhesive
Consolato M Sergi, University of Alberta, Edmonton, AB, Canada
© 2019 Elsevier B.V. All rights reserved.

Chemistry, Production, Use, and Exposure
Methylene dichloride (DCM, or dichloromethane) is an organic compound with geminal composition, i.e., its two chlorine substituents are attached to the same carbon atom of the molecule. DCM has the molecular formula CH2Cl2, and its structural formula is depicted in Fig. 1. DCM is a colorless, volatile liquid with a moderately sweet aroma, and is extensively used as a solvent. DCM is polar; it is not miscible with water but mixes with many organic solvents. In 1839, the French chemist Henri Victor Regnault (1810–78) was the first to prepare DCM, isolating it from a mixture of chloromethane (CH3Cl) and chlorine (Cl2) that had been exposed to sunlight. DCM is produced by treating either methane (CH4) or chloromethane (CH3Cl) with chlorine gas (Cl2) at 400–500°C (CH4 + Cl2 → CH3Cl + HCl and CH3Cl + Cl2 → CH2Cl2 + HCl) (Wikipedia, 2019). In 1993, it was estimated that 400,000 tons of DCM were produced in the United States, Japan, and Europe. Two further chlorination reactions, CH2Cl2 + Cl2 → CHCl3 + HCl and CHCl3 + Cl2 → CCl4 + HCl, give rise to chloroform and carbon tetrachloride (CCl4) in the mixture; these compounds are separated by distillation. DCM can dissolve a wide range of compounds, making it a valuable solvent for many chemical procedures. DCM is used as a paint stripper and a degreaser. In the food industry, DCM has been used to decaffeinate caffeine-rich beverages (coffee and tea). Moreover, it is used to prepare extracts of hops and other flavorings. In beer crafting, DCM is particularly well known because hops are the flowers of the hop plant Humulus lupulus, a species of flowering plant in the hemp family (Cannabaceae) present in Europe, western Asia, and North America. Hops are primarily used as a bittering, flavoring, and stability agent in beer, but they simultaneously impart floral, fruity, or citrusy flavors and aroma. The volatility of DCM makes it useful as an aerosol spray propellant and a blowing agent for polyurethane foams. Its low boiling point allows the compound to serve as the working fluid in heat engines that extract mechanical energy from small temperature differences. Chemical welding is another application, both for professionals and for model-building hobbyists. Also, DCM is used in civil engineering for material testing. Pollution of the environment by DCM is not negligible: although there are natural sources of this compound, including macroalgae, wetlands, and volcanoes, most DCM in the environment derives from industrial emissions. A phase-out regulation for industry should be promoted more effectively. DCM exposure can occur by either inhalation or dermal contact.
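The chlorination equations above fix the chlorine demand by simple stoichiometry. A minimal sketch (textbook stoichiometry under an assumption of ideal selectivity to DCM, not plant data):

```python
# Overall reaction to DCM: CH4 + 2 Cl2 -> CH2Cl2 + 2 HCl.
M_CH2CL2 = 84.93  # g/mol, dichloromethane
M_CL2 = 70.91     # g/mol, chlorine

def chlorine_demand_tonnes(dcm_tonnes: float) -> float:
    """Tonnes of Cl2 consumed to make a given tonnage of DCM (ideal selectivity)."""
    mol_dcm = dcm_tonnes * 1e6 / M_CH2CL2  # tonnes -> grams -> moles
    return 2 * mol_dcm * M_CL2 / 1e6       # 2 mol Cl2 per mol DCM, back to tonnes

# The 400,000 t production figure quoted for 1993 would imply roughly:
print(f"{chlorine_demand_tonnes(400_000):,.0f} t of Cl2")  # ~668,000 t
```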

Toxicity
Symptoms of acute overexposure to DCM via inhalation include difficulty concentrating, nausea, and headaches, associated with dizziness, weakness, numbness, and irritative changes of the upper respiratory tract and eyes. More severe sequelae include suffocation, loss of consciousness, coma, and death. The metabolism of DCM is highly dangerous for the body because it gives rise to carbon monoxide (CO), potentially leading to CO poisoning and death; this aspect needs to be kept in mind by forensic pathologists. Individuals who survive exposure may also develop neurologic disorders, such as neuropathy of the optic nerve, cardiac arrhythmias, and hepatitis. Prolonged cutaneous contact can produce skin irritation and chemical burns. Several tragedies, near-misses, and accidents have occurred during the last few decades. In 1965, four night-shift workers were found unconscious after exposure to DCM, of whom three survived with acute bronchitis, irritation of the upper respiratory tract, and conjunctivitis (Browning, 1965). Browning also reports several deaths associated with DCM used as an anesthetic (Browning, 1965). A 37-year-old chemist with long-term inhalation exposure to DCM and iodomethane experienced headaches, dizziness, and fatigue for about 5 years. After an acute exposure, the man developed ataxia, increasing inhibition, and a critical mental state with delirium. Magnetic resonance imaging (MRI) of the brain showed a T2-hyperintense lesion in the corpus callosum, highly suggestive of myelinolysis; this finding should be of interest to clinical and research neurologists and neuropathologists working on multiple sclerosis. Sixteen days later, a subsequent MRI showed complete recovery with no lesion (Ehler et al., 2011). In 2001, a 22-year-old male was found dead without a gas mask at his workplace in a car lacquering company. He had been using a solvent containing DCM, and pathology revealed obvious petechial bleeding of the lungs, characteristic of asphyxia, with microthrombosis of the pulmonary arteries (Fechner et al., 2001). Occasionally, ingestion occurs, as in the case of a 23-year-old man with an altered mental status after attempting suicide by ingesting chloroform and DCM. Abnormal liver enzymes were noted on post-ingestion day 2, and jaundice occurred on post-ingestion day 3. Computed tomographic (CT) scanning showed severe steatosis (fatty change, i.e., an abnormal retention of lipids) of the hepatocytes, the major liver cellular units.

Fig. 1 Structural formula of dichloromethane.


After the introduction of supportive care, the young man recovered from his hepatic dysfunction and was discharged without complications (Kim, 2008). In 1984, twenty liters of DCM spilled accidentally in a laboratory. Unfortunately, two workers cleaned it up by hand using floor cloths and inhaled DCM vapors for 20 min. Both workers suffered a headache accompanied by nausea, drowsiness, dizziness, fatigue, and oral dryness. All these symptoms except the headache disappeared after a few hours, and no long-term effects were noted (Bernardini et al., 1984). DCM also caused the death of a bathtub refinisher in 2010 (bathtub refinishing or re-enameling is the process of restoring the worn or damaged surface of a bathtub to an almost-new condition). The worker used a DCM-based paint-stripping product marketed for use in aircraft maintenance. Two earlier, similar deaths were also identified in Michigan, United States. Also, the U.S. Occupational Safety and Health Administration (OSHA) identified ten more fatalities among bathtub refinishers associated with DCM stripping agents in nine US states during 2000–11. In all cases, protective equipment against DCM vapor was either used inadequately or not used at all. Another case of DCM ingestion occurred a few years ago, when a 51-year-old man who had ingested DCM presented at the emergency department with the classical clinical features. This individual developed an acute abdomen that required repeated laparotomy, and physicians tested the effect of an ethanol infusion on carboxyhemoglobin concentrations as a new treatment modality (Vetro et al., 2012). An accidental death after occupational exposure to an atmosphere containing DCM was also reported: a man died while observing an industrial machine filled with DCM vapor (Takeshita et al., 2000). OSHA and the National Institute for Occupational Safety and Health (NIOSH) warned, in 2013, that at least 14 refinishers had died since 2000 from DCM exposure. These workers had been working alone, in poorly ventilated bathrooms, with inadequate or no protection and no actual training about the hazards of this chemical compound. The continued occurrence of fatalities in the United States and other countries demands an urgent re-evaluation of existing regulatory strategies in matters of safety (Macisaac et al., 2013). In experimental animals, DCM is fetotoxic at doses that are maternally toxic, but no teratogenic effects appear to occur, although DCM crosses the placenta.

Cancer in Experimental Animals
DCM has been evaluated for carcinogenicity in experimental animals (IARC, 1979, 1986, 1999, 2017) and has mostly been linked to cancer of the liver, lungs, breast, salivary glands, and soft tissue. In 2014, the IARC evaluation committee targeted DCM (Benbrahim-Tallaa et al., 2014). The committee found six carcinogenicity studies on DCM in mice: two oral administration studies (one drinking-water investigation and one gavage investigation, each in male and female mice), three inhalation investigations (two in male and female mice, one in female mice only), and one intraperitoneal injection study in male mice. DCM induced hepatocellular adenomas and carcinomas in three investigations in male mice (two via inhalation and one via drinking water) and in three inhalation studies in female mice. DCM also induced pulmonary adenomas and/or carcinomas in three inhalation studies in male mice and three inhalation studies in female mice. In one study, DCM caused hemangiomas and/or hemangiosarcomas in the liver of male and female mice. In rats, the IARC committee found seven carcinogenicity studies on DCM: two oral administration investigations (one drinking-water application in males and females and one gavage application in males and females) and five inhalation investigations (four in males and females, one in pregnant females and their offspring of both sexes). It was apparent that DCM induced mammary gland adenomas and/or fibroadenomas in female rats (four inhalation studies) and male rats (two inhalation studies), subcutaneous fibromas and/or sarcomas in male rats (two inhalation studies), and salivary gland sarcomas in male rats (one inhalation study). It was also clear that DCM may have caused a minimal increase in hepatocellular adenomas and carcinomas (combined tumors) in female rats in one drinking-water study. Syrian hamsters were also studied: in one inhalation investigation there was an increase in the incidence of lymphosarcoma, which is, incidentally, the third most common cancer diagnosed in dogs. Lymphosarcoma is a highly malignant cancer of lymphocytes and lymphoid tissues, which include the lymph nodes, spleen, liver, gastrointestinal tract, and bone marrow. The IARC evaluation committee indicated that there is sufficient evidence for the carcinogenicity of DCM in experimental animals. Sufficient evidence was seen in the liver and lung of male and female mice and in the dermis of male rats. There is limited evidence of carcinogenicity in the mammary gland of male and female rats and in the salivary gland of male rats.

Cancer in Humans
The risk of cancer from DCM was assessed by gathering information on human exposure in cohort studies of occupational exposure among workers manufacturing cellulose triacetate fibers and films, a cohort study of aircraft workers exposed to multiple solvents including DCM, and case-control studies of several different cancers and occupational exposure to solvents. Moreover, several studies have explored the occurrence of liver cancer among workers in the printing industry in Japan; these individuals were exposed to DCM, 1,2-dichloropropane, and other solvents. It appears that only the cohort studies of cellulose triacetate facilities provide quantitative measures of exposure to DCM, and these studies covered a relatively small number of exposed workers. The case-control studies typically assessed exposure to multiple solvents, including DCM, in a semiquantitative or qualitative manner, using expert judgment, job-exposure matrices, or occupational titles. Cancer of the biliary tract carries one of the worst prognoses among liver cancers and shows a rapidly evolving etiologic spectrum (Al-Bahrani et al., 2013).


In a Japanese report on cancer of the biliary tract among printing workers, identified histologically as cholangiocellular carcinoma by histopathologists, most of the individuals had been exposed to DCM (Sobue et al., 2015; Tomimaru et al., 2015; Yamada et al., 2014; Kumagai, 2014; Kumagai et al., 2013). All except one of these were also exposed to 1,2-dichloropropane. In Japan, the high risk of biliary cancer in one cohort study of workers without exposure to other likely risk factors is consistent with a causal association, but the number of exposed individuals was considered small. Three case-control studies and two cohort studies evaluated DCM as a carcinogenic hazard, and all except one showed an increased incidence of non-Hodgkin lymphoma (NHL). The difficulty with these studies is that most subjects were exposed to several chemical compounds (solvents), some of which have previously been associated with NHL, and the risk estimates were based on quite small numbers. Nevertheless, a positive association of DCM with NHL was consistent among studies using different designs and in several countries. Several studies assessed other cancer sites, but in agreement with the IARC evaluation committee, these data should be considered inadequate under the specific IARC criteria. The IARC committee concluded that there is limited evidence in humans for the carcinogenicity of this chemical compound (IARC, 2017). Positive associations have been seen between exposure to DCM and cancer of the biliary tract and NHL.

Mechanisms of Carcinogenicity
DCM absorption occurs following oral (rarely), inhalation, or dermal exposure, owing to its volatile, lipophilic quality, and DCM is distributed systemically. There are two critical pathways for the metabolism of DCM in humans and experimental animals. The first pathway is CYP2E1-mediated and generates CO and CO2 as stable end products. In this pathway, one of the intermediates, formyl chloride (CHClO or H-CO-Cl), is reactive with nucleophiles, chemical species that donate an electron pair to an electrophile to form a chemical bond; examples of nucleophiles are anions such as Cl− or compounds with a lone electron pair such as NH3 (ammonia). The second pathway is glutathione conjugation, catalyzed primarily by glutathione S-transferase theta-1 (GSTT1). This metabolic pathway forms reactive metabolites, including formaldehyde or methanal (CH2O or H-CHO) and S-chloromethyl glutathione (C11H18ClN3O6S). Glutathione (GSH) is a tripeptide with a gamma-peptide linkage between the amine group of the cysteine residue and the carboxyl group of the glutamate side chain; a standard peptide linkage attaches the carboxyl group of cysteine to a glycine. GSH is a paramount antioxidant in plants, animals, fungi, and some bacteria and Archaea. In the body as well as in nature, glutathione is a potent molecule capable of preventing cell damage caused by reactive oxygen species (ROS) such as free radicals, peroxides, and lipid peroxides, and by heavy metals. CYP2E1-mediated metabolism is predominant at lower concentrations, while glutathione S-transferase (GST)-mediated metabolism prevails at higher levels. Oxidative and GST-mediated metabolism of DCM are qualitatively similar between rodents and humans; however, quantitative differences occur across species, tissues, and cell types, and among individuals of the same ethnic group. In human cells, DCM induces micronuclei and sister-chromatid exchange; DNA-protein cross-links and DNA damage are not observed. In bioassays, DCM induces genotoxicity via the GST pathway, and in humans the overall genotoxicity of DCM is strongly associated with GST-mediated metabolism. In both humans and experimental animals, hepatic, renal, splenic, reproductive, and developmental toxicity, in addition to neurological toxicity, have been reported. Single nucleotide polymorphisms (SNPs) are genetic variations in a single nucleotide occurring at a precise position in the genome of humans, plants, or animals, where each variant is present to some considerable degree within a population (e.g., more than 1%) (Corfield et al., 2010; Meyer et al., 2003). SNPs may fall within coding sequences of genes, non-coding regions of genes, or intergenic regions. Since the genetic code is degenerate, SNPs within a coding sequence do not inevitably change the amino acid sequence of the protein that is produced. SNPs in the coding region are thus of two types, synonymous and nonsynonymous: synonymous SNPs do not affect the sequence of the encoded protein, while nonsynonymous SNPs change the amino acid sequence. The nonsynonymous SNPs are in turn of two types, missense and nonsense. SNPs outside protein-coding regions may still affect gene splicing, transcription factor binding, messenger RNA degradation, or the sequence of noncoding RNA.
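The synonymous/nonsynonymous distinction can be made concrete with a toy excerpt of the genetic code. A minimal sketch (illustrative codons only, not tied to any DCM-related gene):

```python
# Tiny excerpt of the standard genetic code, enough for the two examples below.
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",  # synonymous pair (both encode glutamate)
    "GAC": "Asp",                # aspartate
}

def classify_coding_snp(ref_codon: str, alt_codon: str) -> str:
    """Classify a coding SNP as synonymous or nonsynonymous."""
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    return "synonymous" if ref_aa == alt_aa else f"nonsynonymous ({ref_aa}->{alt_aa})"

print(classify_coding_snp("GAA", "GAG"))  # synonymous
print(classify_coding_snp("GAG", "GAC"))  # nonsynonymous (Glu->Asp)
```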
No studies of DCM in humans have explored whether polymorphisms of GSTT1 are associated with neoplasia. However, one study has reported an association between a CYP2E1 SNP and NHL in DCM-exposed individuals. The IARC Working Group concluded that the mechanistic evidence for DCM carcinogenesis is strong (IARC, 2017).

Safety and Conclusions
Tragedies involving acute exposure have occurred in the past and will happen again if personal protection is not put in place. Because of its high volatility, DCM poses an acute inhalation hazard, and it can also be absorbed through the skin. As noted above, the workers killed by DCM-based strippers had typically been working alone, in poorly ventilated bathrooms, with inadequate or no protection and no actual training about the hazards of this chemical compound, and the continued occurrence of such fatalities demands an urgent re-evaluation of existing regulatory strategies (Macisaac et al., 2013). Thus, both professional material-testing personnel and model-building hobbyists should be aware of this potentially hazardous chemical compound with probable carcinogenic potential. In several countries, products containing DCM must carry labels warning of its health risks, and this has notably reduced the risk of accidental exposure. OSHA has since issued a DCM standard, while the European Parliament voted, in 2009, to ban the use of DCM in paint strippers for several professional and consumer uses.


In Europe, the Scientific Committee on Occupational Exposure Limit Values (SCOEL) endorses for DCM an occupational exposure limit of 100 ppm; for short-term exposure (15 min) the threshold is set at 200 ppm. DCM is not classified as an ozone (O3)-depleting chemical by the Montreal Protocol, and consequently the U.S. Clean Air Act does not regulate DCM as an O3 depleter. According to the US EPA, the atmospheric lifetime of DCM is very short, so the compound decomposes before reaching the O3 layer. Nevertheless, it is an environmentally unfriendly compound. From 1998 through 2016, O3 concentrations measured at the mid-latitudes from the ground up through the stratosphere declined by 2.2 Dobson units (DU), the units used to measure O3 layer thickness. For clarity, one DU is defined as 0.01 mm of O3 at standard temperature and pressure, and O3 layer thickness is expressed in these units in honor of G.M.B. Dobson, one of the first scientists to investigate the stratosphere and atmospheric O3. In conclusion, DCM is a serious hazard that, because it is colorless, may become one of the most hazardous chemicals for both professionals and model-building hobbyists. The continued occurrence of fatalities and severe injuries due to DCM-containing solutions in the United States and other countries demands an urgent reevaluation of existing regulatory strategies. IARC, in its monograph on this chemical compound, advises that DCM is probably carcinogenic to humans, placing the compound in Group 2A. The overall Group 2A evaluation by IARC is based on sufficient evidence in experimental animals and limited evidence in humans. The Group 2A evaluation was also supported by the substantial evidence that DCM metabolism via GSTT1 leads to the formation of reactive metabolites. Since GSTT1 activity is strongly associated with genotoxicity both in vitro and in vivo, and GSTT1-mediated metabolism of DCM occurs in humans, the data are highly supportive of the probable carcinogenicity of DCM in humans.
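The SCOEL limits and the Dobson unit definition above both translate into simple unit conversions. A minimal sketch, assuming the conventional molar volume of ~24.45 L/mol at 25°C and 1 atm for the ppm conversion:

```python
M_DCM = 84.93  # g/mol, molar mass of dichloromethane

def ppm_to_mg_per_m3(ppm: float, molar_mass: float, molar_volume_l: float = 24.45) -> float:
    """Convert a gas-phase limit in ppm (v/v) to mg/m3 at 25°C and 1 atm."""
    return ppm * molar_mass / molar_volume_l

print(f"{ppm_to_mg_per_m3(100, M_DCM):.0f} mg/m3")  # 8-h limit of 100 ppm -> ~347 mg/m3
print(f"{ppm_to_mg_per_m3(200, M_DCM):.0f} mg/m3")  # 15-min limit of 200 ppm -> ~695 mg/m3

# Dobson units: 1 DU = 0.01 mm of pure O3 at STP, so the quoted 2.2 DU decline
# corresponds to an equivalent-layer thinning of 2.2 * 0.01 = 0.022 mm.
print(f"{2.2 * 0.01:.3f} mm")
```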

References
Al-Bahrani, R., Abuetabh, Y., Zeitouni, N., Sergi, C., 2013. Cholangiocarcinoma: Risk factors, environmental influences and oncogenesis. Annals of Clinical and Laboratory Science 43, 195–210.
Benbrahim-Tallaa, L., Lauby-Secretan, B., Loomis, D., Guyton, K.Z., Grosse, Y., El Ghissassi, F., et al., 2014. Carcinogenicity of perfluorooctanoic acid, tetrafluoroethylene, dichloromethane, 1,2-dichloropropane, and 1,3-propane sultone. The Lancet Oncology 15, 924–925.
Bernardini, P., et al., 1984. An episode of acute poisoning with methylene chloride and experimental evaluation of the exposure. La Medicina del Lavoro 75, 133–138.
Browning, E., 1965. Toxicity and metabolism of industrial solvents. American Elsevier, New York.
Corfield, A., Meyer, P., Kassam, S., Mikuz, G., Sergi, C., 2010. SNPs: At the origins of the databases of an innovative biotechnology tool. Frontiers in Bioscience (Scholar Edition) 2, 1–4.
Ehler, E., Latta, J., Eichlerova, A., Mrklovsky, M., Urban, P., 2011. Exposure to iodomethane and dichloromethane associated with a confusional state. Neurotoxicology 32, 307–311.
Fechner, G., Ortmann, C., Du Chesne, A., Kohler, H., 2001. Fatal intoxication due to excessive dichloromethane inhalation. Forensic Science International 122, 69–72.
IARC, 1979. Dichloromethane. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans 20, 449–465.
IARC, 1986. Dichloromethane. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans 41, 43–85.
IARC, 1999. Dichloromethane. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans 71 (Pt 1), 251–315.
IARC, 2017. Dichloromethane. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans 110, 177–255.
Kim, H., 2008. A case of acute toxic hepatitis after suicidal chloroform and dichloromethane ingestion. The American Journal of Emergency Medicine 26, 1073.e3-6.
Kumagai, S., 2014. Two offset printing workers with cholangiocarcinoma. Journal of Occupational Health 56, 164–168.
Kumagai, S., Kurumatani, N., Arimoto, A., Ichihara, G., 2013. Cholangiocarcinoma among offset colour proof-printing workers exposed to 1,2-dichloropropane and/or dichloromethane. Occupational and Environmental Medicine 70, 508–510.
Macisaac, J., Harrison, R., Krishnaswami, J., McNary, J., Suchard, J., Boysen-Osborn, M., et al., 2013. Fatalities due to dichloromethane in paint strippers: A continuing problem. American Journal of Industrial Medicine 56, 907–910.
Meyer, P., Sergi, C., Garbe, C., 2003. Polymorphisms of the BRAF gene predispose males to malignant melanoma. Journal of Carcinogenesis 2, 7.
Sobue, T., Utada, M., Makiuchi, T., Ohno, Y., Uehara, S., Hayashi, T., et al., 2015. Risk of bile duct cancer among printing workers exposed to 1,2-dichloropropane and/or dichloromethane. Journal of Occupational Health 57, 230–236.
Takeshita, H., Mogi, K., Yasuda, T., Mori, S., Nakashima, Y., Nakajima, T., et al., 2000. Postmortem absorption of dichloromethane: A case study and animal experiments. International Journal of Legal Medicine 114, 96–100.
Tomimaru, Y., Kobayashi, S., Wada, H., Hama, N., Kawamoto, K., Eguchi, H., et al., 2015. Intrahepatic cholangiocarcinoma in a worker at an offset color proof-printing company: An autopsy case report. Hepatology Research 45, 488–493.
Vetro, J., Koutsogiannis, Z., Jones, D.A., Canestra, J., 2012. A case of methylene chloride poisoning due to ingestion of home-distilled alcohol and potential new treatment with ethanol infusion. Critical Care and Resuscitation 14, 60–63.
Wikipedia, 2019. Dichloromethane. https://en.wikipedia.org/wiki/Dichloromethane (accessed March 5, 2019).
Yamada, K., Kumagai, S., Nagoya, T., Endo, G., 2014. Chemical exposure levels in printing workers with cholangiocarcinoma. Journal of Occupational Health 56, 332–338.

1,2-Dichloropropane – A Paint Stripper and Dry-Cleaning Component
Consolato M Sergi, University of Alberta, Edmonton, AB, Canada
© 2019 Elsevier B.V. All rights reserved.

Chemistry, Production, Use, and Exposure
1,2-Dichloropropane (1,2-DCP, C3H6Cl2) is an organic compound classified as a chlorinated hydrocarbon or chlorocarbon, i.e., an organic compound containing at least one covalently bonded chlorine atom (Cl), which influences the chemical behavior of the molecule. 1,2-DCP is a colorless, flammable liquid with a sweet, chloroform-like odor. This chlorocarbon is obtained as a byproduct of the manufacture of epichlorohydrin, an organochlorine compound and epoxide that is used to produce epoxy resins and acts as a pesticide; remarkably, epichlorohydrin is also used on many tea bag papers to keep the bags from breaking. With a density of 1.16 g/cm3, 1,2-DCP has a boiling point of 96°C (NTP, 1986) (Fig. 1). 1,2-DCP is used mainly as an intermediate in the making of other organic chemicals, including propylene, carbon tetrachloride, and tetrachloroethylene, and in paint stripping, and it was used as an ink-removal agent in the printing industry (Benbrahim-Tallaa et al., 2014). 1,2-DCP is also used as an insecticide fumigant on soil and grain and can control peach tree borers. It is further used as an intermediate in the manufacture of perchloroethylene and other chlorinated products, and as a solvent. Today, almost all the 1,2-DCP available worldwide is used as a chemical intermediate to make perchloroethylene, also called tetrachloroethylene (Cl2C=CCl2), and several other related chlorinated chemicals. Tetrachloroethylene is used in the dry-cleaning industry and in degreasing metals. 1,2-DCP is relatively resistant to hydrolysis, is poorly adsorbed onto soil, and can migrate into groundwater (WHO, 2003). Exposure can occur by inhalation, ingestion, and dermal contact.

Toxicity
There is evidence of testicular changes (degeneration) and an increased number of abnormal spermatogonia in the epididymis, as observed in a 13-week oral gavage bioassay in rats (Bruckner et al., 1989). Neonatal body weight was decreased, and neonatal mortality was increased. The authors suggested that the increased hepatocellular granularity should be considered an adaptive change associated with the metabolism of 1,2-DCP. Teratogenicity studies are available in rats and rabbits. Toxic effects were seen at the highest dose level, with decreases in growth, food consumption, and muscular tone. In rat fetuses, the rate of delayed ossification of the skull was increased at 125 mg/kg of body weight per day. Rabbits received doses by gavage on days 7 through 19 of gestation; effects similar to those in the rat investigation were found, with maternal toxicity, including anorexia and anemia, at 150 mg/kg of body weight per day and an increased rate of delayed ossification of the bones of the skull in fetuses at 150 mg/kg of body weight per day. Although maternal toxicity was obvious in these studies, no suggestion of teratogenicity emerged at any dose level in rat or rabbit fetuses. Regarding mutagenicity, 1,2-DCP tested positive in Salmonella typhimurium (strains TA100 and TA1535), indicating that 1,2-DCP can provoke base-pair substitutions. 1,2-DCP was mutagenic in mouse lymphoma cells in the thymidine kinase test and in an in vitro test using Aspergillus nidulans. Other in vitro investigations of sister chromatid exchanges in Chinese hamster ovary cells and V79 cells were positive, both with and without metabolic activation. Moreover, a test for chromosomal aberrations was positive in Chinese hamster ovary cells. Negative results were obtained using Drosophila melanogaster and Sprague-Dawley rats.

Cancer in Experimental Animals
There are two chronic bioassay studies on 1,2-DCP in mice, one using gavage and the other an inhalation route of administration. 1,2-DCP induced adenomas and/or carcinomas of the liver with hepatocellular phenotype in one oral administration study in mice (both sexes). Moreover, this compound induced bronchioloalveolar adenomas and/or carcinomas in one inhalation study in mice (both sexes), and splenic hemangiosarcoma in male mice. 1,2-DCP may have induced histiocytic sarcomas and Harderian gland adenomas in one inhalation study in male mice. The Harderian gland is an organ found within the orbit of the eye.

Fig. 1 Structural formula of 1,2-dichloropropane.


It occurs in tetrapods (reptiles, amphibians, birds, and mammals) that possess a nictitating membrane, a translucent membrane forming an inner eyelid in some mammals, reptiles, and birds that protects the eye from dust and dryness. The organ can be compound tubular or compound tubulo-alveolar, and the fluid it secretes varies between different groups of animals. There is one carcinogenicity study on 1,2-DCP in rats of both sexes by gavage. 1,2-DCP induced mammary gland adenocarcinomas in female rats, papillomas of the nasal cavity in rats of both sexes, and olfactory neuroblastoma (esthesioneuroblastoma) in the nasal cavity of male rats. The esthesioneuroblastoma is a neuroepithelial-cell-derived neoplasm of the olfactory mucosa composed of homogeneous small round cells harboring neurosecretory granules. The IARC concluded that there is sufficient evidence for the carcinogenicity of 1,2-DCP in the liver and lung of male and female mice, and limited evidence of carcinogenicity in the spleen (hemangiosarcoma) of male mice. In rats, there is limited evidence of carcinogenicity in the mammary gland of female rats and the nasal cavity of male and female rats (Benbrahim-Tallaa et al., 2014; IARC, 1986, 1999).

Cancer in Humans
1,2-DCP was classified as carcinogenic to humans (Group 1), based on sufficient evidence in humans that exposure to 1,2-DCP causes carcinoma of the biliary tract (cholangiocellular carcinoma). The most compelling evidence relies on the rate of this tumor in a printing plant in Osaka, Japan; additional plants also showed similarly increased rates of this type of cancer. The degree of variable exposure was a challenge for the 2014 IARC group (Benbrahim-Tallaa et al., 2014): although workers were exposed to more than 20 different compounds, exposure to 1,2-DCP was observed in all except one of the 24 patients with cholangiocellular carcinoma. Cholangiocellular carcinoma is one of the most malignant epithelial tumors of the liver after hepatocellular carcinoma. Its rate seems to be increasing worldwide, and its risk factors are varied and differ globally. Although diagnostic and therapeutic healthcare has advanced in several regions, tackling this tumor remains a challenge. The increasing rate of cholangiocellular carcinoma is likely due to a differential intensification of some factors in some geographical areas, and environment-linked risk factors may play a critical role in its carcinogenesis (Al-Bahrani et al., 2013).

Mechanisms of Carcinogenesis
Recent work has shown that 1,2-DCP can react spontaneously with GSH under physiological pH conditions. 1,2-DCP also interacts with glutathione S-transferase (GST) theta 1 (GSTT1). The protein encoded by the GSTT1 gene is a crucial member of a superfamily of proteins that catalyze the conjugation of GSH (the reduced form of glutathione) to a variety of electrophilic and hydrophobic chemical compounds. Human GSTs are divided into five main classes: alpha, mu, pi, theta, and zeta. The experiments identified an effect different from that of dichloromethane (DCM): although 1,2-DCP and DCM are both dihaloalkanes, the underlying molecular basis for carcinogenesis may differ between these two compounds (Toyoda et al., 2017).

Safety and Conclusions
1,2-DCP is a potent chlorocarbon with clear evidence of carcinogenicity, and all safety measures, including fume hoods and eye and face protection, among others, need to be put in place. The 2014 IARC working group considered the rarity of cholangiocellular carcinoma with its very high relative risk, the young age of the affected patients, the absence of non-occupational risk factors, and the intensity of the exposure in a closed environment with poor ventilation. The group concluded that the excessive rate of cholangiocellular carcinoma was unlikely to be the result of chance, bias, or nonoccupational confounding factors. This conclusion is corroborated by carcinogenicity studies in animals, with malignant lung and hepatocellular tumors observed in exposed mice. The exclusive role of this compound could not be determined because other agents, mainly DCM, were also present in the human exposures. Increased surveillance in educating, caring for, and teaching about 1,2-DCP may be crucial in decreasing the vulnerability of exposed individuals and in reducing morbidity and mortality and their associated costs.

References
Al-Bahrani, R., Abuetabh, Y., Zeitouni, N., Sergi, C., 2013. Cholangiocarcinoma: Risk factors, environmental influences and oncogenesis. Annals of Clinical and Laboratory Science 43, 195–210.
Benbrahim-Tallaa, L., Lauby-Secretan, B., Loomis, D., Guyton, K.Z., Grosse, Y., El Ghissassi, F., et al., 2014. Carcinogenicity of perfluorooctanoic acid, tetrafluoroethylene, dichloromethane, 1,2-dichloropropane, and 1,3-propane sultone. The Lancet Oncology 15, 924–925.
Bruckner, J.V., MacKenzie, W.F., Ramanathan, R., Muralidhara, S., Kim, H.J., Dallas, C.E., 1989. Oral toxicity of 1,2-dichloropropane: Acute, short-term, and long-term studies in rats. Fundamental and Applied Toxicology 12, 713–730.
IARC, 1986. 1,2-Dichloropropane. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans 41, 131–147.
IARC, 1999. 1,2-Dichloropropane. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans 71 (Pt 3), 1393–1400.
NTP, 1986. NTP toxicology and carcinogenesis studies of 1,2-dichloropropane (propylene dichloride) (CAS No. 78-87-5) in F344/N rats and B6C3F1 mice (gavage studies). National Toxicology Program Technical Report Series 263, 1–182.
Toyoda, Y., Takada, T., Suzuki, H., 2017. Spontaneous production of glutathione-conjugated forms of 1,2-dichloropropane: Comparative study on metabolic activation processes of dihaloalkanes associated with occupational cholangiocarcinoma. Oxidative Medicine and Cellular Longevity 2017, 9736836.
WHO, 2003. 1,2-Dichloropropane (1,2-DCP) in drinking-water. In: Background document for preparation of WHO guidelines for drinking-water quality. World Health Organization, Geneva.

Diet as a Healthy and Cost-Effective Instrument in Environmental Protection
Henrik Saxe, University of Copenhagen, Copenhagen, Denmark
© 2019 Elsevier B.V. All rights reserved.

Glossary
CFC-11 Trichlorofluoromethane, also called freon-11 or R-11, a chlorofluorocarbon.
LCA Life cycle assessment: the investigation and valuation of the environmental impacts of a given product or service caused or necessitated by its existence.
Trichlorofluoromethane See CFC-11.

Abbreviations
CO2 Carbon dioxide
NO3 Nitrate
PDF Potentially disappeared fraction (in Table 2 only)
PE Person equivalents
SO2 Sulfur dioxide
WHO World Health Organization

Introduction

In principle, humans eat and drink for their physical maintenance, though there is a multitude of other, highly individual motives, such as taste, habits, social and ethnic norms or practices, ethical and religious codes, health, and cost. When people can afford it, they indulge in refined foods, sweets, and fats, which, combined with too little exercise and too much alcohol and tobacco, increase lifestyle diseases: obesity, hypertension, cardiovascular diseases, diabetes, and certain types of cancer, arthritis, and respiratory problems. Globally, 1.9 billion people are overweight, of whom 650 million are obese, and the World Health Organization (WHO) expects the overweight figure to increase to 2.3 billion. Hospital expenses are already skyrocketing.

At the same time, chronic hunger is presently a reality for 805 million people, or 11.3% of the global population, and apart from China, hunger is a looming problem for more and more people in the developing countries. Considering the growing world population, global food production has to double by 2050 to feed everyone. But as fossil fuel prices, climate change, and regional unrest limit the availability and increase the price of global food resources, more people will starve, sparking further food-related unrest, riots, migration, conflict, and suffering. All these elements are parts of interrelated, self-perpetuating negative developmental spirals affecting everyone and everything. So one needs to pay attention and act promptly.

This article deals with the complex interactions between diet, environmental degradation, health, economy, energy, ethics, and global starvation. It focuses on how the choice of diet affects the environment through altered land use, agricultural production, transport, food processing, storage, preparation, cooking, spillage, and waste, using life cycle assessment (LCA) of foods and beverages. The conclusion is that what people choose to eat and drink affects our common environment far more than previously acknowledged, and both the environment and what one eats and drinks are strong determinants of one's health. The analyzed data apply to Denmark; though this country is thought to be representative of most industrialized countries, other nations may have different attitudes, behavior, and policy options.

Obviously, one should not eat or drink less for the sake of protecting the environment, but there is a choice between foods and beverages that have more or less of an impact on the environment. Some have more impact because they are transported over long distances, some are more resource demanding because they originate from a higher level in the food chain (e.g., fish and meat), some are grown with more pesticides, some take more energy to grow, and some are highly processed. Similarly, one can choose between diets that are better or worse for one's health and between diets of similar nutritional value that are more or less expensive. In every case, whether it concerns the environment, health, or cost, personal priorities are involved. An increasing number of people in industrialized countries would benefit from eating less to prevent lifestyle diseases associated with overweight and obesity.


Change History: April 2016. Henrik Saxe updated this chapter. Many minor corrections were made in the text; no figures or tables were updated. This is an update of H. Saxe, Diet as a Healthy and Cost-Effective Instrument in Environmental Protection, in Encyclopedia of Environmental Health, edited by J.O. Nriagu, Elsevier, 2011, pages 70–82.



Personal priorities, however, are governed by free choice balanced against pressure from society and producers in the form of laws and regulations, labeling of foods and beverages, campaigns, education, availability, advertisements, and various tax instruments. The public debate has focused far less on the environmental effects of what people eat and drink than on food prices and health effects; perhaps there has been too little knowledge and awareness of the environmental implications of individual and collective food choices. At the same time, there has been much focus on the environmental effects of other consumer choices, for example, transportation and domestic savings on electricity, heating, and water. This article therefore also deals with how the choice of diet affects the environment, the socio-economics of a recommended diet change, whom to hold accountable, options for regulation, ethical aspects, and the severe local, regional, and global consequences.

How Does Our Choice of Diet Affect the Environment?

Cause and Effect

Most modern production of food and beverages relies on fossil fuels: for farm machines to plow, sow, and harvest; for heating of stables, greenhouses, cowsheds, and pigsties; for cooling of storage facilities; and for processing and transport, all resulting in emission of carbon dioxide. Furthermore, significant amounts of CO2 are released when natural areas are converted to agricultural land, and agricultural areas absorb less CO2 than natural ecosystems. Rice paddies, livestock, and garbage dumps all emit methane. Soil bacteria emit nitrous oxide, particularly on fertilized land, and the production of fertilizer is itself an important source of CO2 emission. Carbon dioxide, methane, and nitrous oxide are greenhouse gases that affect the global climate, that is, temperature, precipitation, drought, and extreme weather conditions.

Industrial gases commonly known as Freon and Halon degrade the stratospheric ozone layer, causing increased penetration of harmful solar UV radiation, promoting skin cancer, damaging plants, and reducing oceanic plankton. Meanwhile, at the surface, ozone photochemically produced from traffic pollutants reacting with solar UV light is toxic to humans (human toxicity) and to flora and fauna (ecotoxicity), and breaks down materials. The most harmful refrigerant gases are presently being phased out.

The combustion of fossil fuels pollutes the environment with sulfur and nitrogen, causing acid- and nutrient-rich precipitation. Acidification caused by acid rain destabilizes natural ecosystems and also promotes the decomposition of metals and building materials. Surplus nutrients from air pollution and agricultural fertilizers can lead to eutrophication of aquatic ecosystems. One result of eutrophication is increased growth of aquatic plants, algae, and photosynthetic bacteria, which die and decay, in turn causing a lack of oxygen, reduced water quality, and effects on fish and other animal populations. On land, the nutrient input from air pollution changes the composition of flora and fauna.

Heavy metals and organic chemicals released by traffic, industry, and agriculture, for example, fertilizers and pesticides, are toxic to flora and fauna and eventually to humans, who are also exposed to chemicals added directly to foods and beverages, for example, colorants, preservatives, and carriers. Finally, agriculture and livestock husbandry occupy considerable land, hampering natural biodiversity. Cattle are globally responsible for large emissions of methane, a climate gas that recent studies have found to have 32 times the global warming effect of carbon dioxide (Etminan et al., 2016).
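To make the role of these gases concrete, aggregation into a single greenhouse indicator (the "gram CO2 equivalents" of Table 1) multiplies each gas by its global-warming potential and sums. A minimal sketch, using the methane factor of 32 cited above; the nitrous oxide factor is a commonly used literature value added here as an assumption, not a figure from this article:

```python
# Convert emissions of individual greenhouse gases into kg CO2-equivalents.
# The methane GWP (32) is the value cited in this article (Etminan et al., 2016);
# the N2O value (265) is a commonly used IPCC AR5 figure, added as an assumption.
GWP = {"CO2": 1.0, "CH4": 32.0, "N2O": 265.0}

def co2_equivalents(emissions_kg: dict) -> float:
    """Sum emissions (kg of each gas) weighted by global-warming potential."""
    return sum(GWP[gas] * kg for gas, kg in emissions_kg.items())

# Example: 1 kg of methane counts as much as 32 kg of CO2.
print(co2_equivalents({"CO2": 10.0, "CH4": 1.0}))  # -> 42.0
```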

Comparing Different Categories of Private Consumption

The environmental impact caused by private consumption may be defined as the impact from the combined activities and resources associated with transport (private cars, public transport, etc.), the home (heating, electricity, water, and maintenance), cleaning, hygiene, health, leisure activities, clothing, food, and beverages. The indicators of the environmental impact of private consumption and activities may be limited to those that are simple to obtain, such as energy use. But for a more adequate and transparent representation, as many nonoverlapping indicators as possible should be included to cover a broad range of environmental responses (Table 1). The eight indicators in Table 1 cover the most important aspects of the environment, though problems such as odor, noise, and congestion are not included. For each indicator, as many links in the life cycle chain as possible should be included when performing LCA of the overall environmental impact, that is, from "cradle-to-grave."

Table 1  Environmental indicators used to assess the impact of private consumption and activities in this study

Environmental indicator                  Units
Greenhouse effect                        Gram CO2 equivalents
Ozone layer degradation                  Gram CFC-11 equivalents
Acidification                            Gram SO2 equivalents
Eutrophication                           Gram NO3 equivalents
Photochemical ozone                      Gram ethane equivalents
Ecotoxicity                              Person equivalents, PE
Human toxicity                           Person equivalents, PE
Occupied area (biodiversity pressure)    Potential percentage of lost species, m2 year-1


The environmental impact of different activities can be compared only when using similar methods and indicators. But since the indicators in Table 1 are based on different units, their values cannot be pooled into a single measure of overall environmental impact. Therefore, if the overall environmental impact of two types of consumption or activities, for example, transport and food, is to be compared, one can be proven more environmentally harmful than the other only if all of its environmental indicators show it to be more harmful. If even one indicator shows the opposite, the comparison is ambiguous: one cannot compare apples and oranges, at least not without a common denominator, for example, through monetization.
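The comparison rule just described, that one consumption category can be called more harmful only if every indicator agrees, is a Pareto-dominance test. A minimal sketch, with hypothetical indicator values chosen for illustration (the function name and numbers are not from the source):

```python
# Pareto-dominance test over environmental indicators: category A is only
# unambiguously worse than B if A has higher impact on every indicator.
def dominates(a: dict, b: dict) -> bool:
    """True if 'a' has impact >= 'b' on all shared indicators and > on at least one."""
    keys = a.keys() & b.keys()
    return all(a[k] >= b[k] for k in keys) and any(a[k] > b[k] for k in keys)

# Hypothetical values (Table 1 lists units only, not figures):
food = {"greenhouse": 5.0, "eutrophication": 400, "ecotoxicity": 0.3}
transport = {"greenhouse": 7.0, "eutrophication": 50, "ecotoxicity": 0.02}

print(dominates(food, transport))  # False: food is better on 'greenhouse'
print(dominates(transport, food))  # False: mixed results, no clear ranking
```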

Eating and Drinking Constitute a Major Proportion of Private Consumption

Calculations of the environmental impact of food and beverages have typically applied one or a few indicators, such as energy consumption, possibly supplemented with water use and area use. A Danish study found that when the environmental impact of private consumption is based strictly on energy use, eating and drinking is the most important activity; it affects the environment more than the combined impact of gasoline used for private cars and the electricity, heat, and water consumed in private homes. However, if the environmental impact is based on waste disposal, leisure activities are the most important. Whatever the delimitations, methods, and applied indicators, there is general agreement in the literature that consumption of foods and beverages contributes significantly (20%–70%) to the overall environmental impact of private consumption (Fig. 1). Private consumption in turn contributes significantly to the overall environmental impact of human activities. As an example from Denmark using the eight environmental indicators from Table 1, Fig. 1 shows that food and beverages are responsible for 13%–59% of the total environmental impact caused by private consumption, the percentage depending on the type of impact.

Fig. 1 Consumption of food and beverages is responsible for a significant fraction of the environmental impact caused by private consumption. The fraction varies with the specific environmental indicator. The data are for traded goods in Denmark (Statistics Denmark).


The three-dimensional figure shows the environmental impact of private consumption caused by all types of activities and consumption, specified by eight major types of environmental impact. The red line traces the environmental impact of foods and beverages and shows the consumption of food and beverages to be of great importance to the environment. Remarkably, the consumption of foods and beverages causes 14 times more ecotoxic effects, 11 times more land use, and 4 times more eutrophication than the combined impact of private cars and the domestic consumption of heating, water, and electricity. For ozone layer degradation and acidification, the two sets of consumption have equal impact, whereas cars and domestic consumption of heat, water, and electricity have twice the impact on the greenhouse effect and human toxicity of food and beverage consumption, and four times the effect measured by toxic, photochemically produced ozone.

Could Choice of Diet Be an Effective Instrument in Environmental Protection?

The question is whether the documented high environmental impact of eating and drinking relative to other private consumption and activities is of any consequence, since everyone has to eat and drink and no one wants to starve to protect the environment. Yes, it could indeed make a difference, provided that choosing an alternative diet could benefit the environment as much as other relevant savings, for example, saving realistic amounts of gasoline when driving or realistic amounts of electricity and heat consumed at home. Under these circumstances, choosing an alternative diet would be considered an effective instrument in environmental protection. But the alternative diet would have to be equivalent to current dietary practice in terms of nutritional value, taste, price, etc. for people to be willing to reconsider their usual choices when buying or ordering food and beverages. Choosing an alternative diet is therefore a more complex and far-reaching decision for most people than turning down the room temperature, turning off electrical appliances not in use, or driving in a more energy-conscious fashion. So even if the choice of diet could help protect the environment, there remains the question of how realistic this strategy for environmental protection could become. This question will be dealt with toward the end of this article, after first determining whether the choice of diet can indeed, technically speaking, be an effective instrument in environmental protection.

Analyses for Denmark: A Typical Industrialized Country

In this analysis of diets, an appropriate baseline diet would reasonably be what one eats today. But since diets vary with nationality, one has to select a representative country. The average Danish diet was considered representative of industrialized countries, and good data were available. The analysis relied on a database developed for the Danish Environmental Protection Agency based on LCA of several hundred food and beverage items consumed in Denmark, along with data for other types of private consumption and activities. The database applied all of the environmental indicators in Table 1, but for technical reasons they were calculated only from cradle-to-gate, that is, from soil to wholesaler. Compared with cradle-to-grave calculations, the cradle-to-gate data are believed to underestimate acidification by approximately 25%, ozone layer degradation by 33%, and the greenhouse effect, photochemical ozone, and human toxicity by 50%, whereas eutrophication, ecotoxicity, and occupied area are similar.

Food and beverages consumed in Denmark come from all over the world. The environmental effects of those produced in faraway countries are typically much larger than those of locally produced foods. Surprisingly, the reason is not so much the environmental effects caused by long-distance transport (which is included in the analyses) but rather the high efficiency of Danish agriculture, which is among the highest in the world.

To give an overview of the environmental impacts of food and beverages, all items are grouped in Table 2 into fewer categories. The list is ordered from highest to lowest environmental impact, as measured by a simple majority of the environmental indicators. There is a surprisingly high correlation between what are commonly known as healthy foods, for example, bread, fruits, and vegetables, and low environmental impacts. Conversely, environmentally stressful items like meat, sweets, wine, alcohol, and coffee are known to be bad for human health when consumed in excessive amounts.
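The stated cradle-to-gate underestimations can be used as rough correction factors when an approximate cradle-to-grave figure is wanted. A hedged sketch; reading "underestimates by 25%" as the cradle-to-gate value lying 25% below the cradle-to-grave value is an interpretive assumption, since the article does not spell out the convention:

```python
# Rough scaling of cradle-to-gate LCA results toward cradle-to-grave, using
# the underestimation fractions stated in this article. The convention
# gate = grave * (1 - fraction) is an assumption for illustration.
UNDERESTIMATION = {
    "acidification": 0.25,
    "ozone_layer_degradation": 0.33,
    "greenhouse_effect": 0.50,
    "photochemical_ozone": 0.50,
    "human_toxicity": 0.50,
    "eutrophication": 0.0,   # reported as similar gate vs. grave
    "ecotoxicity": 0.0,
    "occupied_area": 0.0,
}

def cradle_to_grave(indicator: str, gate_value: float) -> float:
    """Scale a cradle-to-gate value up to an approximate cradle-to-grave value."""
    return gate_value / (1.0 - UNDERESTIMATION[indicator])

print(cradle_to_grave("acidification", 75.0))  # -> 100.0
```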

Health Is a Complicated Issue

What is healthy, and what is not, is a complicated issue. What one can tolerate or benefit from depends on one's general condition, activity, genetic constitution, age, sex, etc., and what is conventionally defined as healthy or unhealthy is frequently challenged by new assertions, though not always based on reliable scientific evidence. Dark chocolate high in cocoa contains epicatechin, a plant flavonoid that keeps cholesterol from gathering in blood vessels, reduces the risk of blood clots, and slows down the immune response that leads to clogged arteries; similarly, red wine contains flavonoids, and even coffee has gained a reputation of offering a variety of health benefits against diseases such as cancer and diabetes. However, medical science emphasizes that the likes of dark chocolate, red wine, and coffee do not deserve a place in the league of traditional wholesome foods like vegetables, fruits, and whole-grain products. Even though the average person in the United States has reduced his or her fat intake by 30%, an increasing proportion, presently amounting to a third of the population, suffers from obesity, a lifestyle disease negatively associated with health.

Table 2  The environmental impact of 1 kg of the major categories of foods and beverages, each based on the most relevant items, expressed by the eight indicators of environmental impact from Table 1

Food category (per kg)          Greenhouse effect (kg CO2-eq)  Ozone layer degradation (mg CFC-11-eq)  Acidification (g SO2-eq)  Eutrophication (g NO3-eq)  Photochemical O3 (g ethane-eq)  Ecotoxicity (10^3 PE)  Human toxicity (10^3 PE)  Occupied area (PDF m2 year-1)
Beef                            8.9     3.9    116    1000    7.9    0.58    0.15    74
Sweets                          6.1     6.4    33     171     9.3    0.19    0.15    16
Other meats                     5.8     2.8    63     447     5.8    0.31    0.10    34
Fish                            5.5     2.8    50     318     5.0    0.26    0.11    23
Pork                            5.2     2.6    66     483     4.8    0.35    0.09    34
Wine/alcohol                    4.5     3.6    20     136     6.7    0.13    0.14    13
Coffee/tea/cocoa                10.0    4.0    25     155     8.5    0.20    0.22    7.6
Pasta                           2.3     2.3    12     103     3.3    0.10    0.05    10
Eggs                            2.0     0.9    19     91      1.4    0.10    0.03    10
Fruit & vegetables (-potatoes)  2.5     1.9    8      47      2.2    0.08    0.06    3.1
Rice                            5.0     2.0    10     63      4.0    0.07    0.04    6.0
Oils & fats                     1.5     2.5    12     64      2.2    0.07    0.04    7.0
Bakery products                 2.1     2.1    11     61      2.8    0.07    0.06    6.2
Cheese                          0.65    2.2    5      18      1.2    0.02    0.04    1.4
Soft drinks                     0.14    1.1    7      35      1.8    0.04    0.04    3.5
Other grains                    0.11    1.0    6      40      1.5    0.05    0.02    4.6
Potatoes                        0.10    0.7    6      40      1.3    0.06    0.02    3.3
Beer                            0.09    0.7    5      25      1.3    0.03    0.03    2.8
Butter                          0.06    0.4    4      24      0.6    0.03    0.01    1.8

The red fields signify the most environmentally harmful foods and beverages for a given environmental indicator, yellow the medium, and green the least harmful. Food categories with the highest number of red fields are at the top of the list, suspected to be the overall most environmentally harmful foods and beverages; these are also recognized as the least healthy.
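The ordering rule behind Table 2 (a simple majority of the environmental indicators) can be mimicked computationally. The sketch below simplifies the red/yellow/green tercile coloring to a count of indicators on which a category has the single highest impact, using three rows of Table 2 as data; the simplified scoring is for illustration and is not the original procedure:

```python
# Rank food categories by how many indicators place them worst, loosely
# mirroring the ordering rule used for Table 2. Values are the beef, pork,
# and potatoes rows of Table 2 (eight indicators each).
TABLE2 = {
    "beef":     [8.9, 3.9, 116, 1000, 7.9, 0.58, 0.15, 74],
    "pork":     [5.2, 2.6,  66,  483, 4.8, 0.35, 0.09, 34],
    "potatoes": [0.10, 0.7,  6,   40, 1.3, 0.06, 0.02, 3.3],
}

def worst_count(name: str) -> int:
    """Number of indicators on which 'name' has the highest impact of all rows."""
    return sum(
        1 for i in range(8)
        if TABLE2[name][i] == max(row[i] for row in TABLE2.values())
    )

ranking = sorted(TABLE2, key=worst_count, reverse=True)
print(ranking)  # -> ['beef', 'pork', 'potatoes']; beef is worst on all 8
```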

Approximately 150 years ago, William Banting suggested a low-carbohydrate diet for losing weight. The low-carb idea is kept alive today by heavily marketed diets, high in protein and low in carbohydrate, for losing weight. Atkins' "New Diet Revolution" sold more than 40 million copies, with Agatston's South Beach diet as a popular rival. Atkins recommends fatty meat, cheese, cream, butter, mayonnaise, salads, nuts, and oils, but little or no bread, rice, pasta, or potatoes. The South Beach diet is similar but recommends less fat.


As proteins make you feel full, high-protein diets automatically make you eat less. Concomitantly, the high-fat, low-carbohydrate diet takes the edge off the appetite because of a condition known as ketosis, in which part of the fat is incompletely metabolized in the absence of carbohydrates and transformed to ketones. The state of ketosis is common in people who are fasting. Recent studies indicate that a low-carbohydrate, high-protein diet makes you lose weight in the short run, but there is no conclusive evidence for long-lasting effects, where elevated risks of heart attacks and diabetes enter into the equation. Furthermore, 70% of people on the Atkins diet suffered constipation caused by a lack of dietary fiber from whole-meal products, fruits, and vegetables, and consuming too little fiber may promote cancer of the bowel.

Contrary to the low-carb idea, a Danish study involving meal testing found that potatoes were among the best foods for making you feel full, and in Southeast Asia, where consumption of rice, another carbohydrate-rich food, is the highest in the world, rates of overweight and obesity are the lowest. At the end of the day, only a minority succeeds in holding on to a restrictive diet for life, and the weight lost by dieting is regained when normal eating resumes. The bottom line is that weight loss is caused only by calorie restriction.

Conventional healthy diets, with no particular focus on losing weight, are low in meat, rich in vitamins and fibers, and contain unsaturated rather than saturated fats; that is, they contain more fruits and vegetables, and the protein and energy intake is moderate but entirely sufficient. For those who insist on an Atkins-type diet to lose weight, it is possible to go on a high-protein, high-fat, low-carbohydrate diet and leave out the meat. As indicated by Table 2, this type of Atkins diet would be better for the environment.

Alternative Diets for Environmental Protection

The most obvious alternative diet for environmental protection is thus a healthy diet as presently defined by the National Board of Health in Denmark. The advantage of choosing this type of healthy diet as an alternative to the average diet is that, if it is found to be an effective instrument in environmental protection, it is a win-win situation: good for the environment and good for health. Given the popularity of organic foods, it should be mentioned that these have not, unlike healthy foods, been shown to support a sustainable environment. Although the production of organic foods supports animal welfare and requires restricted use of artificial fertilizers and pesticides, and therefore protects the environment in a limited scope, healthy foods contain fewer items from high in the food chain and thus protect a much broader range of the environment, and to a much larger extent (healthy foods are mostly those in the lower two-thirds of Table 2).

It is intuitively easy to understand that producing meat is typically more harmful to the environment than producing vegetables, since livestock in intensive production eat many times their weight in barley, wheat, rape, soy, and sunflower seeds before being slaughtered and served as human food. These types of animal fodder, or what could alternatively be grown in the same fields, could be directly consumed by humans. Even with increased productivity, there will always be a loss associated with raising, feeding, caring for, transporting, and slaughtering domestic animals. An additional problem with the present increased consumption of beef is that it typically comes from South America, where it causes clearing of tropical forests with vast implications for global biodiversity and climate change. Some habitats, however, can only be grazed and are destroyed if plowed for crops. Livestock grazing on such areas can thus be agreeable to the environment and improve the global production of food, wool, and leather, but frequent overgrazing causes soil erosion and desertification. The last section of this article elaborates on the global aspects of animal husbandry.

Compared with the Danish official healthy diet, a vegetarian or partly vegetarian diet, which may also be healthy, can be even more environmentally friendly. Some vegetarian items, however, can be as environmentally harmful as meat. These include highly refined foods, vegetables grown in heated greenhouses during cold winters, and fruits and vegetables imported from the other side of the world; they could also be products demanding a high use of pesticides. It may be argued that more fruits, vegetables, and dairy products would have to be imported during the wintertime to support the healthy and vegetarian diets throughout this season, and that there would be a demand for more extensive use of heated greenhouses, increasing the environmental impact of these diets. But like people, farm animals also eat both stored and imported fodder during the winter. Meat and dairy products are mostly produced locally, and many locally produced fruits and vegetables store well throughout the winter with little use of energy, for example, potatoes, carrots, cabbage, and apples. Fresh fruits and vegetables are imported during the winter rather than produced locally in hothouses. But of course, all products should be priced according to the cost of production and transport (which they are), as well as their impact on health and the environment.
The items in Table 2 are ranked based on traded weight, not the weight of the prepared food. Alternatively, ranking could be based on energy content, nutritional value, protein content, and more. Foods with a high ratio of dry weight to fresh weight, such as rice, end up relatively low on this list, apparently being environmentally friendly, even though they have been transported over long distances. This illustrates how difficult it is to make a reasonable listing. But a simple listing, like Table 2, is important for being able to prioritize, regulate, and tax. The data in Table 2 thus inspired the investigation of two alternative diets for protecting the environment: a healthy diet, as defined by the authorities (the National Food Institute in Denmark), and, based on this, a more radical choice constructed by the author, an ovo-lacto-vegetarian diet designed by substituting meat with cheese and fish with eggs, kilogram by kilogram. The healthy diet is essentially identical to the well-known food pyramid.


Environmental Effects of Choosing Alternative Diets

Human beings must eat to sustain their bodies. So to define the three diets in a comparable manner, they were composed to be equal in energy content (Table 3). The healthy and vegetarian diets weigh 12% less than the average diet. The meat content is approximately 20% lower, the fish content double, and the content of dairy products 25% higher in the healthy diet than in the average Danish diet. The vegetarian diet contains neither meat nor fish, but 50% more dairy products than the average Danish diet. The protein content is similar and nutritionally sufficient in all diets, but the alternative diets are healthier and better for the environment.

Alternative Diets Can Indeed Be Effective in Environmental Protection

Fig. 2 shows the calculated environmental impact from cradle-to-gate of the average Danish diet, the officially recommended healthy diet, and the vegetarian diet, measured by the eight specific environmental indicators. Like Fig. 1, this is a three-dimensional representation showing the environmental impact of private consumption, in this case three different diets, specified by eight major types of environmental impact. Even though the indicators were not monetized in cost-benefit analyses, and the indicators cannot otherwise be sensibly added, the result is unambiguous, because all eight environmental impact indicators are improved simultaneously by both alternative diets, and mostly by the vegetarian diet. Both alternative diets are significantly better for the environment than the average Danish diet.

Fig. 3 shows the environmental protection obtained by combined savings of 10% on gasoline, heat, and electricity, and the environmental protection obtained by choosing the recommended healthy diet or a vegetarian diet instead of the average Danish diet. The scale for comparison is the percentage of the total impact of private consumption and activities. The three often-quoted means of environmental protection by families and individuals, savings on gasoline, heating, and electricity, were estimated by Danish authorities and companies. Most Europeans drive small cars, have insulated their homes sufficiently, and have replaced incandescent bulbs with energy-saving light bulbs. But further savings are possible. The Danish Road Safety and Transport Agency estimated that, on average, 10% of gasoline may realistically be saved by observing 10 simple rules of driving, which entails neither driving slower nor arriving later. DONG Energy estimated that 10% of domestic heating may be saved by lowering the temperature in rooms not in use. Energy Randers estimated that 10% of domestic electricity can be saved by not leaving electrical appliances in standby mode and by turning off the light in unoccupied rooms. With some effort, it is thus realistic for most Europeans to save 10% on gasoline, heating, and electricity. These savings are recognized as typical savings available to the individual and are often encouraged by legislators.

The healthy diet proved to be more efficient than the combined savings on driving, heating, and electricity on six of the eight environmental indicators. The vegetarian diet proved to be even better, superior on seven of the eight indicators. By thus comparing the effectiveness of choosing alternative diets with other means of environmental protection available to the individual and the family, Fig. 3 confirms that alternative diets can indeed, technically speaking, be effective in environmental protection. This point is amplified in Fig. 4, which for each environmental indicator shows just how much better choosing an alternative diet is compared with the conventional strategies for environmental protection at realistic levels of savings. For the most affected environmental indicator, ecotoxicity, the most effective diet is 246 times more efficient in environmental protection than the combined 10% savings on gasoline, heating, and electricity. Even the reduction in greenhouse gases is 40% greater with a vegetarian diet than with 10% combined savings on gasoline, heating, and electricity.
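The "how many times better" factors of Fig. 4 are simple ratios of impact reductions. A sketch with hypothetical reduction values chosen to reproduce the ecotoxicity factor of 246 quoted above; the inputs are illustrative, not read off the figure:

```python
# "How many times better" one measure is than another, as in Fig. 4: the
# impact reduction of a diet shift divided by the reduction achieved by the
# combined 10% savings on gasoline, heating, and electricity.
def times_better(diet_reduction: float, savings_reduction: float) -> float:
    return diet_reduction / savings_reduction

# E.g., if a vegetarian diet cuts ecotoxicity by 24.6 units of total private
# consumption impact and the 10% savings cut it by 0.1 unit, the ratio is
# 246, matching the factor quoted in the text.
print(round(times_better(24.6, 0.1)))  # -> 246
```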
The healthy diet reduces the greenhouse effect with only half the efficiency of the combined gasoline, heating, and electricity savings. And only for photochemical ozone are both diets not the better solution, though both deliver approximately half the protection measured by this indicator compared with the combined savings on gasoline, heating, and electricity. If the calculations had included cradle-to-grave effects rather than only cradle-to-gate effects, the healthy diet would equal or surpass the combined 10% savings on driving, heating, and electricity for all environmental indicators, and the vegetarian diet would be recognized as the superior tool of environmental protection measured by all indicators. It is concluded that environmental saving by choosing a better diet is an overlooked and very effective tool in environmental protection.

Table 3  Energy content (MJ) and annually purchased amounts (kg) per person for the three diets, and their absolute and relative contents of meat, fish, and dairy products

Type of diet               Total meal     Meat content   Fish content   Dairy content
Danish average diet (MJ)   4122 (100%)    –              –              –
Danish average diet (kg)   1290 (100%)    75 (6%)        11 (1%)        169 (13%)
Healthy diet (MJ)          4122 (100%)    –              –              –
Healthy diet (kg)          1129 (100%)    62 (5%)        21 (2%)        176 (16%)
Vegetarian diet (MJ)       4122 (100%)    0 (0%)         0 (0%)         –
Vegetarian diet (kg)       1129 (100%)    0 (0%)         0 (0%)         259 (23%)

The energy content of the vegetarian diet is an estimate.


Fig. 2 The environmental impact of three diets as a percentage of the total impact of private consumption measured by eight specific environmental indicators. The fraction varies with the specific environmental indicator.

The New Nordic Diet

The New Nordic Diet (NND) was designed by leading Nordic chefs to be a healthy, palatable, and environmentally friendly diet of Nordic origin, in accordance with the Nordic dietary guidelines. That the NND is of Nordic origin means that it includes commodities in season, inspired by the Nordic diet of the olden days: it has a higher content than the Average Danish Diet (ADD) of locally grown vegetables, including legumes, roots, fish, whole-grain products, nuts, and fruit and berries in season, and it contains 35% less meat than the ADD. Relative to the distribution of meat types in the ADD, the NND includes only 30% as much beef and veal, 36% as much pork, and 73% as much chicken, but 680% as much grass-fed lamb and 820% as much venison; the energy and protein contents of the NND and the ADD are comparable (Saxe, 2014).

The three features by which this diet shift affects the environment, composition, transport (import), and type of production (organic/conventional), were separately investigated using consequential life cycle assessment. When both diet composition and transport were taken into account, the NND reduced the environmental impact relative to the ADD as measured by all 16 impact categories. All impacts were monetized. The socioeconomic savings related to this diet shift were 32% of the overall environmental cost of the ADD. In monetary terms, these savings equal the monetized environmental impact of driving an average European car 10,000 km per year.

When the actual 8% content of organic produce in the ADD and the 84% content of organic produce in the investigated recipe-based NND were also taken into account, the NND reduced the environmental impact relative to the ADD by only 10 of the 16 impact categories, whereas 6 were increased. The socioeconomic savings related to the diet shift were then lowered to €42 per person per year, or 5% of the overall environmental cost of the ADD. It was concluded that reducing the content of meat and excluding most long-distance imports were of substantial environmental and socioeconomic advantage to the NND when compared with the ADD, whereas including high amounts of organic produce was a disadvantage.
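The monetization step that makes the 16 impact categories addable works by pricing each category and summing. A minimal sketch with hypothetical unit prices and impact totals (the real study's prices and category list are not reproduced here):

```python
# Monetizing LCA impact categories into a single socioeconomic cost, as done
# for the NND vs. ADD comparison: multiply each category's impact by a unit
# price and sum. All prices and impact totals below are hypothetical.
PRICES_EUR = {"greenhouse_kgCO2eq": 0.05, "eutrophication_gNO3eq": 0.001}

def environmental_cost(impacts: dict) -> float:
    """Total monetized environmental cost (EUR) of a yearly diet."""
    return sum(PRICES_EUR[cat] * amount for cat, amount in impacts.items())

add = {"greenhouse_kgCO2eq": 2000.0, "eutrophication_gNO3eq": 300000.0}
nnd = {"greenhouse_kgCO2eq": 1400.0, "eutrophication_gNO3eq": 180000.0}
saving = environmental_cost(add) - environmental_cost(nnd)
print(round(saving, 2))  # EUR saved per person per year by the diet shift
```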


Fig. 3 Environmental protection by saving 10% of both gasoline (private car) and heating and electricity (private home) compared to environmental protection by two selected diets (data for Denmark). The environmental protection varies with the specific environmental indicator.

Improved Economy, Everyone Wins

In another study on the New Nordic Diet, the surcharge to consumers of the ADD-to-NND diet shift was €216 per capita per year (Saxe and Jensen, 2014). In monetary terms, the savings related to the reduced environmental impact of the diet shift in this study were €151 per capita per year; thus 70% of the increased consumer cost of the ADD-to-NND diet shift was offset by the socioeconomic advantage associated with the reduced environmental impact of the NND. A further 130% of the increased consumer cost of the ADD-to-NND diet shift was offset by the health benefits of the NND (Jensen et al., 2015). The overall result is that for every euro the consumer spends on the ADD-to-NND diet shift, two euros come back to society. A true win-win-win choice: the NND is better for your health, better for the environment, and better for the economy.
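The "two euros back for every euro spent" claim follows arithmetically from the figures just quoted. A back-of-envelope check; the only assumption is reading the 130% health benefit as 130% of the €216 surcharge, as the text indicates:

```python
# Check of the "two back for every one spent" claim, using the figures
# quoted in this section (Saxe and Jensen, 2014; Jensen et al., 2015).
surcharge = 216.0                 # EUR/capita/year, extra consumer cost of NND
env_saving = 151.0                # EUR/capita/year, monetized environmental gain
health_saving = 1.30 * surcharge  # health benefits stated as 130% of the surcharge

return_ratio = (env_saving + health_saving) / surcharge
print(round(env_saving / surcharge, 2))  # -> 0.7, the 70% offset in the text
print(round(return_ratio, 1))            # -> 2.0, two euros back per euro spent
```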

Responsibility and Global Consequences

Diet Overview

In the United States, an estimated 3.3% of the population (more than 8 million people) are full-time vegetarians, whereas in Denmark the fraction is 4%, and in the United Kingdom it is estimated at 2%–12%. Among the hundreds of millions of Buddhists, Hindus, and Sikhs, there are many vegetarians and many who limit their consumption of meat. But overall, the majority eats meat, and consumption is increasing both globally and per capita, as developing countries become richer and mimic the lifestyle of the industrialized countries. More than 60% of meat is produced in developing nations, where people eat 32 kg of meat per year; in the industrialized countries, people eat 85 kg of meat per year. Interestingly, people with higher education eat less meat, and women eat less meat than men; in fact, educated men eat like women. Moreover, women in higher positions eat more meat, perhaps to fit into a male-dominated labor market, or perhaps because they consume fewer carbohydrates.


Fig. 4 The figure shows how many times better (or worse) the choice of diets may be compared to conventional everyday strategies for protecting the environment (sensible driving and saving on domestic heating and electricity). The environmental protection varies with the environmental indicator.

Among young people in Northern Europe, it is becoming a popular trend to become vegan, excluding not only meat and fish but also eggs, cheese, and dairy products. Inspiring the majority to eat less food from the top and more from the bottom of the food chain would have considerable, positive environmental consequences to the benefit of all, and fewer people would, in principle, have to starve; similarly, by eating less from the top and more from the bottom of the food pyramid, fewer people would suffer from lifestyle diseases. But how does one realistically encourage the individual to choose a better diet to protect both the environment and health? What instruments are available? And who is responsible? The last question will be dealt with first.


Who Is Responsible for Our Choice of Diet?

A study showed that 55% of Danes believe that what they eat and drink is their own responsibility and that one should not blame society; however, 34% believe that people have different resources and circumstances and that it is wrong to leave all responsibility to the individual. When asked directly, a vast majority (72%) of the freedom-loving Danes support stricter rules to assist in improving individual and public health.

A dying species, the "socially conscious consumer," will by definition assume responsibility and choose the "best" diet and a "sensible" car (or public transportation or a bicycle), save on energy and water, and probably never use "unnecessary" chemicals. But the majority will not act for "the common good" without external motivation; they will be free riders. Therefore, from a utilitarian point of view, politicians are responsible for "the common good," for a cleaner environment, and for reducing overall health expenses. However, the politicians are up against the producers' and suppliers' appetite for profit, their aggressive advertisements, and their great ability to make their products widely available; examples are Coca-Cola, Carlsberg, and Danish Bacon. You can buy all sorts of sweets at your local gas station, but how much fresh fruit? What is placed next to the cash register in your local supermarket? Your kids want it! Therefore, producers and suppliers carry a responsibility. Even scientists in the food sector and the press carry their separate responsibilities.

There are many mechanisms and many actors, but at the end of the day, politicians influence most of them and are therefore concluded to be mainly responsible for what people eat and drink. However, all of the stakeholders should take a hard look at the arguments and findings in recent reports from the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD). There are several instruments available to politicians to influence people's choice of diet, for example, taxes and subsidies, laws and regulations, required labeling, campaigns, and education, and there is an ongoing discussion of which instrument or set of instruments may work best.

Differential Taxation May Be the Most Realistic Way Forward

The environmental effects caused more by some foods than by others are ignored when it comes to applying tax instruments; similarly ignored are obesity, hypertension, cardiovascular disease, diabetes, and certain types of cancer, arthritis, and respiratory problems, which are likewise caused more by some foods than by others. What one eats and drinks is, and should to a wide extent be, one's personal choice, just like the choice of which car to own, if one can afford one. However, in both cases the choice has societal consequences. Energy-demanding and polluting cars are taxed because of this, whereas energy-demanding and polluting foods and beverages are not. These are double standards. Externalities associated with driving are internalized through registration taxes, green taxes, gasoline taxes, etc. Both health and environmental externalities could similarly be included in the price paid for food.

Economic theory suggests that environmental taxes should be implemented as close to the source of pollution as possible. In other words, items such as energy, fertilizers, pesticides, and preservatives used in agriculture, transport, refinement, storage, and preservation of food and beverages ought to be the primary tax objects for regulating the environmental harm caused by food and beverages. But in reality, governments subsidize farmers and agribusinesses to supplement their income, manage the supply of agricultural commodities, and influence the cost and supply of certain commodities. Examples of such commodities include wheat, grain used as fodder (e.g., maize, sorghum, barley, and oats), cotton, milk, rice, peanuts, sugar, tobacco, and oilseeds (e.g., soybeans).

If subsidies to farmers must be maintained, one suggestion for countries like Denmark would be to regulate the sales tax (value-added tax) on food and beverages so that healthy and environmentally sound foods and beverages are encouraged, whereas unhealthy and environmentally unsound ones are discouraged. This would indirectly be in line with the "polluter pays" principle and analogous to how politicians have opted to tax cars, energy for heating, and electricity to "inspire" people to help protect the environment.

Danish politicians are reluctant to introduce a differential tax on food and beverages based on differential effects on health and the environment; they find it too disputatious and complicated, and resort to rhetoric like "there are too many products" and "who knows for sure which items are unhealthy and environmentally harmful?" Introducing a new tax, even if it involves lowering other taxes, is never a popular task for politicians who want to be re-elected. But as already emphasized, the public demands it, at least in Denmark. For simplicity, data like those in Table 2 could act as a guideline, as they group many items into a few categories and combine interests in health with considerations for the environment. Increased sales tax on meat, fish, sweets, wine, alcohol, coffee, and tea, and decreased or no sales tax on fruits and vegetables, would benefit both public health and the environment, and could be made tax-neutral. Although national politicians hesitate, the WHO recommends that its member states implement such taxes "just" to promote health, and it is now known that the environment would simultaneously be improved. Danish studies have shown that cheaper prices on fruits and vegetables increase the sale of these items.
The poor, including many families with small children, would benefit the most, which is an advantage in terms of future generations and could make differential taxation an ideal instrument for turning the boat around. Older and richer people will be harder to convince, and reaching them will probably take other instruments.
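A tax-neutral differential VAT of the kind suggested above can be sketched as a small revenue-balancing calculation. All sales volumes and the raised rates below are hypothetical; only the 25% base rate reflects the actual flat Danish VAT:

```python
# Sketch of a revenue-neutral ("tax-neutral") differential VAT shift: raise
# the rate on environmentally harmful categories and solve for the reduced
# rate on fruit and vegetables that keeps total revenue unchanged.
sales = {"meat": 100.0, "sweets": 40.0, "fruit_veg": 80.0}  # pre-tax, hypothetical
base_rate = 0.25                     # current flat Danish VAT
raised = {"meat": 0.35, "sweets": 0.35}  # hypothetical raised rates

current_revenue = base_rate * sum(sales.values())
revenue_from_raised = sum(raised[k] * sales[k] for k in raised)
fruit_veg_rate = (current_revenue - revenue_from_raised) / sales["fruit_veg"]
print(round(fruit_veg_rate, 3))  # -> 0.075, the reduced rate on fruit & vegetables
```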


No Guilty Conscience

If health and environmental externalities were factored into the price of foods and beverages, no one would have to feel guilty when choosing to eat and drink in a manner that is harmful to health and the environment: society would already have been compensated for the harm done. No guilty conscience, except perhaps toward your loved ones, if they lose you prematurely to a lifestyle disease.

Required Labeling, Campaigns, and Education

As alternatives to differential taxes and other fiscal instruments, politicians may support required labeling or campaigns, appealing to one's personal responsibility to "do right." However, though most people are aware of the contents of what they eat and drink through required labeling, they pay little attention, and campaigns are not always cost-effective. Exceptions may be the American five-a-day and the Danish six-a-day campaigns, which succeeded in making people eat more fruits and vegetables. A study showed that the "good life" is more important to 48% of Danes than a healthy lifestyle. But most have regrets when they become ill. So to "live strong and die young" may not be such a good idea, and a good life is not just a long life, but a life free of disease. Educating people, particularly the young, will be essential in turning the boat around, and although everyone must maintain their free choice, choosing should be assisted by the politicians. Whatever the preferred instruments, politicians should act now, and other players must do their part; to be most effective, a combination of strategies will be necessary. The increasing consumption of unhealthy and environmentally harmful foods and beverages has severe, though remarkably neglected, consequences on a personal, local, regional, and global scale.

The Global Consequences

Total meat consumption has increased fivefold in the past half century, putting extreme pressure on Earth's limited resources, including water, land, food, and fuel. The growing demand for meat has become an important driving force behind virtually every major category of environmental damage now threatening a peaceful future. With the projected rise in global population, this problem is expected to grow.

In the United States, agriculture occupies 50% of the land area; in Europe, 84% is occupied by agriculture, of which 48% is arable land used for cultivating permanent crops and 36% is pastures and mixed farmland. The 7 billion livestock animals in the United States consume 90% of the soy, 80% of the corn, and 70% of the grain grown in the country. If all the grain currently fed to livestock in the United States were consumed directly by people, approximately 800 million could be fed, equaling the current number of starving people worldwide. On average, animal protein production in the United States requires 28 kcal of fossil energy for every kilocalorie of protein produced for human consumption. It costs 39 times more energy to produce 1 calorie of beef than 1 calorie of soybean, and they have comparable protein value. Grain production, on average, requires only 3.3 kcal of fossil fuel for every kilocalorie of protein produced.

US agriculture accounts for 80% of all the freshwater annually consumed in the country. Livestock directly use only 1.3% of that water, but when the water required for forage and grain production is included, livestock's water usage rises dramatically. Every kilogram of beef produced requires 15,000 L of water, while pork costs 6000 L. Some 1600 L of water go into producing 1 kg of bread. Potatoes are even less "thirsty," at 300 L per kg. Water will become scarcer in many regions of the United States and Europe under the projected climate change.

Today half the Earth's land mass is grazed by livestock, and 33%–50% of the world's terrestrial surface area is affected by soil degradation and eventual desertification. Overgrazing, a direct effect of animal husbandry, caused 35% of global soil degradation; another 30% was caused by deforestation and 27% by agricultural activities. Deforestation is expected to rise with population pressure. Much of this deforestation and agricultural activity is indirectly associated with meat production and could thus also be classified as an effect of animal husbandry. Soil erosion has been reported to cost US$44 billion annually in the United States alone. In Central America 40%, and in Brazil 70%, of all rainforests have been cleared or burned down in the past 40 years, mostly for cattle pasture, though hydroelectric projects, mining, narcotics cultivation, and subsistence agriculture have also played a role. In the process, natural ecosystems, where a variety of plant and animal species thrive, are destroyed and typically replaced with monoculture grass. In Africa, imported cattle have never adapted to the local environment, leading to disease and overuse of resources, although the native species remain healthy.

The CO2 emission from the fossil fuel used to produce the beef eaten annually by the average US family equals the CO2 emission of the average American car operated for 6 months. Overall, producing beef for the whole United States annually emits approximately 144 million tons of CO2, or 3% of the annual CO2 emission of the United States.
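The energy and water figures above lend themselves to quick worked comparisons. A small sketch using only the numbers quoted in this section:

```python
# Worked comparison of the energy and water figures quoted in this section.
fossil_kcal_per_protein_kcal = {"animal_protein_us_avg": 28.0, "grain": 3.3}
water_l_per_kg = {"beef": 15000, "pork": 6000, "bread": 1600, "potatoes": 300}

# Energy: animal protein costs ~8.5x more fossil energy per kcal than grain.
print(round(fossil_kcal_per_protein_kcal["animal_protein_us_avg"]
            / fossil_kcal_per_protein_kcal["grain"], 1))  # -> 8.5

# Water: a kilogram of beef embodies as much water as 50 kg of potatoes.
print(water_l_per_kg["beef"] // water_l_per_kg["potatoes"])  # -> 50
```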
Furthermore, the world's cattle annually emit methane equal to the warming potential of 7.1 Gt of CO2, or 14.5% of the world's total emission of greenhouse gases. And there is a new contender for food and fodder: a growing need for energy supplied as biofuels. The agricultural area needed to fill a single tank of an ethanol-fueled sport utility vehicle (SUV) could alternatively feed a person for a year, and if people want global stability, they need food for all in all regions.


Biofuels may simply not be considered ethical by most drivers; electric cars seem a better choice, and biofuels may be the wrong track toward energy for future transport, unless people simultaneously and significantly reduce the content of animal products in their diets.

Conclusion

Most of the serious consequences mentioned in this article can be reversed if people move toward healthier and less environmentally harmful diets, and if this is done now. Although it takes time to turn the boat around, and politicians may seem reluctant to do their job even when encouraged by a majority of citizens, the take-home lesson of this article is crystal clear. There does not have to be a shortage of food, or perhaps even of energy, were it not for costly, unhealthy, energy-demanding, and environmentally harmful diets. If only politicians would act in a timely fashion with relevant taxation and other relevant instruments, persuading more people to move toward better, alternative diets, people could all join the long-departed Louis Armstrong in singing "What a Wonderful World."

Acknowledgments

The author is indebted to Mindful Food Solutions, to the Department of Food Science at Copenhagen University, and to the Nordea Foundation.

See also: Essential Nature of Water for Health: Water as Part of the Dietary Intake for Nutrients and the Role of Water in Hygiene; Food Safety and Risk Analysis.

References

Etminan, M., et al., 2016. Radiative forcing of carbon dioxide, methane, and nitrous oxide: A significant revision of the methane radiative forcing. Geophysical Research Letters 43 (24), 12614–12623. https://doi.org/10.1002/2016GL071930.
Jensen, J.D., et al., 2015. Cost-effectiveness of a new Nordic diet as a strategy for health promotion. International Journal of Environmental Research and Public Health 12, 7370–7391. https://doi.org/10.3390/ijerph120707370.
Saxe, H., 2014. The new Nordic diet is an effective tool in environmental protection: It reduces the associated socioeconomic cost of diets. American Journal of Clinical Nutrition 99, 1117–1125.
Saxe, H., Jensen, J.D., 2014. Does the environmental gain of switching to the healthy new Nordic diet outweigh the increased consumer cost? Journal of Food Science and Engineering 4, 291–300. https://doi.org/10.17265/2159-5828/2014.06.004.

Further Reading

Carlsson-Kanyama, A., Pipping Ekström, M., Shanahan, H., 2002. Food and life cycle energy inputs: Consequences of diet and ways to increase efficiency. Ecological Economics 44, 293–307.
Eftec, 2005. The economic, social and ecological value of ecosystem services. In: Final Report for the Department for Environment, Food and Rural Affairs. DEFRA, UK. http://www.jncc.gov.uk/pdf/BRAS_SE_Newcomeetal-TheEconomic,SocialandEcologicalValueofEcosystemServices(EftecReport).pdf (Accessed May 2017).
European Environmental Agency, 2005a. Environment and health. EEA Report No 10. European Environmental Agency, Denmark. http://www.pedz.uni-mannheim.de/daten/edzbn/eua/06/EEA_rep_10_2005.pdf (Accessed May 2017).
European Environmental Agency, 2005b. Household consumption and the environment. European Environmental Agency, Denmark. EEA_report_11_2005.pdf (Accessed May 2017).
Food and Agriculture Organization of the United Nations, 2005. The state of food insecurity in the world: Monitoring progress towards the World Food Summit and Millennium Development Goals. FAO, Rome, Italy. http://www.fao.org/docrep/007/y5650e/y5650e00.htm (Accessed March 2017).
Gerbens-Leenes, P.W., Moll, H.C., Schoot Uiterkamp, A.J.M., 2003. Design and development of a measuring method for environmental sustainability in food production systems. Ecological Economics 46, 231–248.
Huppes, G., de Koning, A., Suh, S., et al., 2006. Environmental impacts of consumption in the European Union: High-resolution input–output tables with detailed environmental extensions. Journal of Industrial Ecology 10 (3), 129–146.
Korthals, M., 2001. Taking consumers seriously: Two concepts of consumer sovereignty. Journal of Agricultural and Environmental Ethics 14, 201–214.
Kramer, K.J., Moll, H.C., Nonhebel, S., Wilting, H.C., 1999. Greenhouse gas emissions related to Dutch food consumption. Energy Policy 27, 203–216.
Lang, T., Rayner, G., 2002. Why health is the key to the future of food and farming. A Joint Report Submitted to the Policy Commission on the Future of Farming and Food. https://www.iatp.org/documents/why-health-is-the-key-to-the-future-of-food-and-farming (Accessed May 2017).
Michaelis, M., Lorek, S., 2004. Consumption and the environment in Europe: Trends and futures. Environmental project no. 904. The Danish Environmental Protection Agency, Ministry of the Environment, Denmark.
Naturvårdsverket, 1997. Att äta för en bättre miljö: Slutrapport från systemstudie Livsmedel (in Swedish). Naturvårdsverket Förlag, Stockholm, Sweden.
Nordic Council of Ministers, 2004. Nordic nutrition recommendations, 4th edn. NORD 2004:13.
Olesen, J.E., Bindi, M., 2002. Consequences of climate change for European agricultural productivity, land use and policy. European Journal of Agronomy 16, 239–262.
Pimentel, D., Pimentel, M., 2003. Sustainability of meat-based and plant-based diets and the environment. American Journal of Clinical Nutrition 78 (Suppl. 3), 660S–663S.

Diet as a Healthy and Cost-Effective Instrument in Environmental Protection

107

Robertson, G.P., Paul, E.A., Harwood, R.R., 2000. Greenhouse gases in intensive agriculture: Contributions of individual gases to the radiative forcing of the atmosphere. Science 289, 1922–1925. Saxe, H., Busk, B.J., 2006. Fødevarers miljøeffekterddet politiske ansvar og det personlige valg (in Danish). The Environmental Assessment Institute, Copenhagen, Denmark. Serpa, B.S., Christensen, A.L., Hansen, U.M., Skjoldborg, E.H., 2008. Fremtidens forebyggelsedifølge danskerne. Mandagmorgen and TrygFonden, Denmark. Weidema, B.P., Nielsen, A.M., Christiansen, K., et al., 2005. Prioritization within the integrated product policy. The Danish Environmental Agency, Copenhagen, Denmark. http:// www2.mst.dk/Udgiv/publications/2005/87-7614-517-4/pdf/87-7614-518-2.pdf. (Accessed May 2017). WHO/IARC, 2003. IARC handbooks of cancer prevention. In: Fruit and Vegetables, vol. 8. IARC Press, Lyon, France. World Health Organization, 2004. Food and health in Europe, a new basis for action. European Series, No. 96. WHO Regional Publications. http://www.euro.who.int/document/ E82161.pdf. (Accessed May 2017).

Relevant Websites http://www.agassessment.org/index.cfm?Page¼IAASTD%20Reports&ItemID¼2713dIAASTD, International Assessment of Agricultural Knowledge, Science and Technology for Development (accessed May 2017). http://www.oecd.org/trade/agricultural-trade/40715381.pdfdOECD-FAO, Agricultural Outlook 2008–2017 (accessed May 2017). http://www.worldwatch.orgdThe Worldwatch Institute, State of the World. Innovations for a Sustainable Economy, Chapter 5 (accessed May 2017).

Diethylstilbestrol (DES) Exposure in Mothers and Offspring Consolato M Sergi, University of Alberta, Edmonton, AB, Canada © 2019 Elsevier B.V. All rights reserved.

Chemistry Diethylstilbestrol (DES) is a synthetic nonsteroidal estrogen that is now classified among the endocrine disruptors. DES belongs to the stilbestrol group of compounds and is chemically described as a nonsteroidal, open-ring analog of estradiol, a steroidal estrogen. DES is derived from anethole, which occurs naturally in the environment as an estrogenic constituent of anise and fennel. The production of DES starts with the demethylation of anethole to form anol, which spontaneously dimerizes into dianol and hexestrol; DES results from a structural modification of hexestrol. Physically, DES is an odorless, white crystalline powder at room temperature; it is insoluble in water but soluble in alcohol, ether, chloroform, fatty and vegetable oils, dilute hydroxides, acetone, dioxane, ethyl acetate, and methanol. The cis-isomer of DES is not stable and tends to convert to the trans-isomer, which is stable in the environment. Physiologically, DES reaches a peak concentration within 20–40 min of oral administration, has a half-life of 3–6 h, and is excreted primarily in urine.
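Since only a half-life range is quoted here, the implication of that range can be illustrated with simple first-order elimination. The following sketch is illustrative only; the 3–6 h half-lives above are its only inputs, and it does not reproduce any specific pharmacokinetic study.

```python
import math

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """First-order elimination: fraction of the initial dose remaining after t hours."""
    k = math.log(2) / half_life_hours  # elimination rate constant (h^-1)
    return math.exp(-k * t_hours)

# Using the 3-6 h half-life range quoted above:
for t_half in (3.0, 6.0):
    print(f"t1/2 = {t_half} h: {fraction_remaining(24, t_half):.1%} remains after 24 h")
```

With these half-lives, well under 10% of a dose remains after a day, consistent with the rapid urinary excretion described above.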

Indications The similarity of DES to natural estrogen broadened its initial clinical use as an antimiscarriage agent. Other than being prescribed during the first trimester of pregnancy to prevent miscarriage, DES was also used to prevent premature labor at the end of pregnancy and to control gynecological bleeding. Moreover, DES inhibits lactation and postpartum engorgement, and it has been used as hormone-replacement therapy and as a postcoital contraceptive. A few more gynecologic conditions have been treated with DES, including menopausal and postovariectomy symptoms, atrophic vaginitis, vulvar dystrophy, and female hypogonadism. Occasionally, DES has been used to treat advanced prostate cancer, because it starves the tumor cells by reducing the production of testosterone, and advanced postmenopausal breast cancer. DES administration took place mainly in the United States, Europe, and Australia, and the actual number of women exposed to DES worldwide is still unknown; attempts to assess this number suggest that up to 10 million individuals were exposed to DES between the 1940s and 1970s. Currently, the use of DES is forbidden in many countries, and it is not commercially available for clinical use. In the early 1970s, the use of DES to stimulate the fattening of beef cattle and chickens was also banned in many countries, following concern over trace amounts of the xenoestrogen in meat consumed by humans.

Carcinogenesis The oxidative metabolism of DES can occur in fetal mouse tissues after in utero exposure. There is substantial evidence that DES binds covalently to DNA in the fetal target tissue, that is, the uterus. This covalent binding causes oxidative damage to DNA and lipids, which induces permanent changes in the mitosis of the target cells. Moreover, there is clear evidence that DES alters the expression of specific enzymes involved in the metabolism of this compound in the rat. It has been ascertained that aneuploidy of both animal and human cells arises from interference of DES with the microtubular system, which entails oxidative metabolic activation. A number of chromosomal aberrations, for example, breaks, have been associated with DES; according to several authorities and references, these chromosomal aberrations are probably the major mechanism of DES-induced carcinogenicity. DES can transform human breast cell lines and even immortalize primary embryo cells in vitro. Cell proliferation is increased in both animal and human cervical and uterine cell lines as a consequence of their interaction with DES. In addition, newborns exposed to DES show persistent changes in gene expression and abnormal DNA methylation patterns in target tissues.


In the epithelial components of both the mammary gland and the prostatic gland of exposed mice, hormone responsiveness is permanently altered. This change in the endocrine response is accompanied by inflammatory and dysplastic lesions observed microscopically. It is now known that estrogen receptor alpha (ER-α) is at least in part responsible for some of the effects described above, including the mitogenic effects and the changes in gene expression. The immune system of both animals and humans is also altered, as there is evidence of modulatory effects of perinatal exposure. At least two categories of mechanisms seem to be involved in inducing cancer in both animals and humans: estrogen receptor-mediated effects and genotoxicity are the main factors in the carcinogenesis of DES, while it seems plausible that other mechanisms, still under intense investigation, are also involved. The changes occurring in the early developmental stages of the female and male genital tracts exposed to DES may result in epigenetic events that shape the spectrum of the oncologic and nononcologic pathology of both sexes.

Cancer in Experimental Animals DES causes tumors in different animals at several estrogen-sensitive tissue sites. In female mice, its oral administration induces tumors of the female genital tract, including the ovary, endometrium, and cervix. In transgenic rasH2 male mice (carrying three copies of the human c-Ha-ras oncogene with its endogenous promoter and enhancer in tandem) and in DNA repair-deficient Xpa−/−/p53+/− knockout mice (xeroderma pigmentosum group A), DES induces osteosarcoma and testicular tumors of the interstitial cells (Leydig cell tumors), respectively. In female Wistar rats, the subcutaneous implantation of DES causes tumors of the mammary gland. In perinatal-exposure experiments in animals, DES causes malignant lymphomas, sarcomas of the uterus, adenocarcinomas, and tumors of the pituitary gland, vagina, and ovary in female mice. Some of these neoplasms (uterine adenocarcinoma and tumors of the mammary gland and vagina) have also been observed in the female rat. In hamsters, perinatal DES exposure induces tumors of the kidney.

Cancer in Humans There are several target organs of DES, and a few primary cancers mainly affect women. Consistent with its origin as a xenoestrogen, DES activates ER-α with a receptor affinity similar to that of naturally occurring estradiol. DES is associated with an increased risk of oncologic events, including clear cell adenocarcinoma of the vagina and cervix and carcinoma of the mammary gland. Since 1971, the prescription of DES has been discontinued, but the adverse events discovered in women exposed to DES, and the reproductive pathologies encountered by their offspring and subsequent generations, persist. It is now more than 40 years since DES prescription for other uses was banned, but its legacy continues to affect not only DES-exposed mothers and children but also their offspring. Consequently, DES has been labeled a "biological time bomb." Public communication, including social media, may be needed to properly sensitize the general population to the DES legacy of the decades ahead.

Offspring and Future Implications of the DES Legacy DES daughters are the female offspring born to pregnant women who were prescribed DES. Exposure to this compound during sensitive stages of early life led to a variety of permanent adverse health outcomes in significant fractions of the transplacentally exposed populations. The most extensive effect of in utero DES exposure on DES daughters concerns reproductive abnormalities, particularly vaginal adenosis, a metaplastic cervical or endometrial epithelium within the postnatal vaginal wall that is considered to represent islets of persistent Müllerian (paramesonephric) epithelium in postembryonic life. Vaginal adenosis affects 34%–91% of DES daughters and is a preneoplastic condition. In 1970, clear cell adenocarcinoma of the vagina was reported in seven DES daughters. Uterine malformations have been described, including a hypoplastic cavity, constriction bands, T-shaped uterus, "irregular borders," and uterine fibroids/leiomyomata. Additional reproductive tract abnormalities may include the absence of a corpus luteum, induction of polycystic ovary, and vaginal cornification. DES daughters have an increased risk of several pregnancy-related conditions, including infertility, spontaneous abortion (<14 weeks of gestation), loss of pregnancy in the second trimester, ectopic pregnancy, preeclampsia, stillbirth (>27 weeks), preterm delivery (<37 weeks), and neonatal death within the first month of life. DES daughters have a 40-fold risk of vaginal or cervical clear cell adenocarcinoma, with an estimated incidence of 0.2%. Their earlier menopause may reflect a smaller follicle pool, more rapid follicle depletion, or changes in hormone synthesis and metabolism. The incidence of breast cancer is about twofold higher in DES daughters older than 40 years, and it has been argued that DES daughters should be offered more intensive mammographic screening than nonexposed women. DES sons are the male offspring born to pregnant women who were prescribed DES. It is still unclear whether the sons of DES-exposed mothers experience reproductive abnormalities similar to those of DES daughters. In a randomized trial involving 308 DES-exposed and 307 DES-unexposed sons of women at the University of Chicago, a higher prevalence of epididymal cysts and hypoplastic testes was reported in exposed men compared with unexposed men. In males, there is an increased frequency of cryptorchidism, epididymal cysts, and testicular inflammation/infection. Cross-generational responses to DES exposure are predicted to be possible through epigenetic changes in the DNA. Studies of the cohort of grandchildren whose mothers were exposed to DES prenatally (i.e., grandchildren with no direct DES exposure) are limited, but it appears that the DES third generation also has an increased risk for cancer.


An increase in congenital disabilities has also been observed in the sons and daughters of the third generation. In DES grandsons, the incidence of hypospadias is 20 times higher than in unexposed grandsons. DES exposure might have induced fundamental epigenetic changes in primordial germ cells, and if the additive effect of other endocrine disruptors, such as bisphenol A (BPA), is taken into consideration, this interaction may be crucial in triggering carcinogenic events. Another aspect to consider is that the offspring of DES-exposed mothers may have an increased risk of anxiety as well as depression; mental illness remains a largely unexplored field in DES-exposed mothers and their families.

Conclusion If DES had undergone appropriate preclinical testing, it would never have been approved under current regulations. The DES tragedy illustrates the consequences of failing to conduct rigorous clinical trials, with adequate recruitment of cohorts, diligent recording, and careful interpretation: before massive production, very few toxicological studies were performed on this drug. In conclusion, there is an urgent need to identify women within the DES inheritance cycle, which could help prevent cancer in the DES-exposed third and fourth generations. The ages at which, and the modalities by which, DES alone or in combination with other endocrine disruptors may affect health in the offspring and subsequent generations are still largely unknown. The introduction of electronic medical records and digital pathology may allow more efficient monitoring of these generations in the future. Increased surveillance, through educating, caring for, and referring the DES offspring and following generations, may be vital in decreasing morbidity and mortality and their associated costs.

Further Reading
Al Jishi, T., Sergi, C., 2017. Current perspective of diethylstilbestrol (DES) exposure in mothers and offspring. Reproductive Toxicology 71, 71–77.
Gibson, D.A., Saunders, P.T., 2014. Endocrine disruption of oestrogen action and female reproductive tract cancers. Endocrine-Related Cancer 21 (2), T13–T31.
Gill, W.B., Schumacher, G.F., Bibbo, M., Straus 2nd, F.H., Schoenberg, H.W., 1979. Association of diethylstilbestrol exposure in utero with cryptorchidism, testicular hypoplasia and semen abnormalities. The Journal of Urology 122 (1), 36–39.
Goldberg, J.M., Falcone, T., 1999. Effect of diethylstilbestrol on reproductive function. Fertility and Sterility 72 (1), 1–7.
Harris, R.M., Waring, R.H., 2012. Diethylstilboestrol: A long-term legacy. Maturitas 72 (2), 108–112.
Hatch, E.E., Troisi, R., Wise, L.A., Hyer, M., Palmer, J.R., Titus-Ernstoff, L., Strohsnitter, W., Kaufman, R., Adam, E., Noller, K.L., Herbst, A.L., Robboy, S., Hartge, P., Hoover, R.N., 2006. Age at natural menopause in women exposed to diethylstilbestrol in utero. American Journal of Epidemiology 164 (7), 682–688.
Hoover, R.N., Hyer, M., Pfeiffer, R.M., Adam, E., Bond, B., Cheville, A.L., Colton, T., Hartge, P., Hatch, E.E., Herbst, A.L., Karlan, B.Y., Kaufman, R., Noller, K.L., Palmer, J.R., Robboy, S.J., Saal, R.C., Strohsnitter, W., Titus-Ernstoff, L., Troisi, R., 2011. Adverse health outcomes in women exposed in utero to diethylstilbestrol. The New England Journal of Medicine 365 (14), 1304–1314.
IARC, 1979. Diethylstilboestrol and diethylstilboestrol dipropionate. WHO/IARC, Geneva.
IARC, 2012. Pharmaceuticals: Diethylstilbestrol. A review of human carcinogens. WHO/IARC, Geneva.
Laronda, M.M., Unno, K., Butler, L.M., Kurita, T., 2012. The development of cervical and vaginal adenosis as a result of diethylstilbestrol exposure in utero. Differentiation 84 (3), 252–260.
McLachlan, J.A., 2006. Commentary: Prenatal exposure to diethylstilbestrol (DES): A continuing story. International Journal of Epidemiology 35 (4), 868–870.
Newbold, R.R., Padilla-Banks, E., Jefferson, W.N., 2006. Adverse effects of the model environmental estrogen diethylstilbestrol are transmitted to subsequent generations. Endocrinology 147 (6 Suppl), S11–S17.
Senekjian, E.K., Potkul, R.K., Frey, K., Herbst, A.L., 1988. Infertility among daughters either exposed or not exposed to diethylstilbestrol. American Journal of Obstetrics and Gynecology 158 (3 Pt 1), 493–498.
Sergi, C.M., 2019. Digital pathology: The time is now to bridge the gap between medicine and technological singularity [Online First]. IntechOpen. https://doi.org/10.5772/intechopen.84329. Available from: https://www.intechopen.com/online-first/digital-pathology-the-time-is-now-to-bridge-the-gap-between-medicine-and-technological-singularity.
Titus-Ernstoff, L., Troisi, R., Hatch, E.E., Hyer, M., Wise, L.A., Palmer, J.R., Kaufman, R., Adam, E., Noller, K., Herbst, A.L., Strohsnitter, W., Cole, B.F., Hartge, P., Hoover, R.N., 2008. Offspring of women exposed in utero to diethylstilbestrol (DES): A preliminary report of benign and malignant pathology in the third generation. Epidemiology 19 (2), 251–257.
Veurink, M., Koster, M., Berg, L.T., 2005. The history of DES, lessons to be learned. Pharmacy World & Science 27 (3), 139–143.

Diffusive Gradients in Thin-Films (DGT): An Effective and Simple Tool for Assessing Contaminant Bioavailability in Waters, Soils and Sediments Dong-Xing Guan, Institute of Surface-Earth System Science, Tianjin University, Tianjin, China © 2019 Elsevier B.V. All rights reserved.

Abbreviations
BLM Biotic ligand model
DBL Diffusive boundary layer
DGT Diffusive gradients in thin-films
DIFS DGT-induced fluxes in soils and sediments
EC50 50% effect concentrations
FIAM Free ion activity model
ICP-MS Inductively coupled plasma mass spectrometry
LA Laser ablation
PO Planar optode
POCIS Polar organic chemical integrative sampler
ROL Radial oxygen loss
SPR-IDA Suspended particulate reagent-iminodiacetate
SWI Sediment-water interface

Introduction Various anthropogenic activities, including mining, smelting, and agricultural practices, as well as natural processes such as volcanic eruptions, have led to many chemicals entering the Earth's surface system. Generally, when the concentration of a chemical in environmental media rises above a threshold value, it causes negative effects on ecosystems and humans, and the chemical is termed a contaminant. For some chemicals, such as inorganic As-containing compounds, the threshold values may be very low or close to zero. Many contaminants are persistent, co-exist in the environment, and bioaccumulate through the food chain. The occurrence of various contaminants, including toxic metals (e.g., Cd, Pb, Ni, As, Hg) and organic compounds (e.g., antibiotics, pesticides, perfluoroalkyl substances, organophosphorus flame retardants, and endocrine disrupting chemicals), in terrestrial and aquatic environments affects not only ecosystem service functions but, eventually, human health as well. Nonpoint pollution from intensive agricultural practices has caused many nutrients (N and P compounds) to enter water bodies, causing eutrophication. It is widely accepted that the bioavailable concentration of a contaminant, rather than its total concentration, controls the risk in environmental media and the uptake by biota. Therefore, to study the behavior, risk, and fate of contaminants in the environment, special attention needs to be paid to their bioavailability. From a chemical perspective, bioavailability refers to the fraction of freely available (not sorbed or sequestered) contaminant that is mobile and thus most likely to lead to biota exposure. According to Fig. 1, the processes that determine exposure to contamination include release of a solid-bound contaminant (A) and its subsequent transport (B), transport of bound contaminants (C), uptake across a physiological membrane (D), and incorporation into a living system (E). Processes A–D are defined as bioavailability processes; process E is excluded because soil and sediment no longer play a role in the biological response. Note that A, B, and C can occur internal to an organism, such as in the lumen of the gut. Typically, some steps will be the most restrictive, and these are considered to control bioavailability.

Fig. 1 Bioavailability processes in soil or sediment. Adapted from Ehlers LJ and Luthy RG (2003). Peer reviewed: Contaminant bioavailability in soil and sediment. Environmental Science and Technology 37(15), 295A–302A, with permission from American Chemical Society.


Many methods have been developed to evaluate the bioavailability of contaminants in the environment, including, but not exclusively, in vivo bioassays, in vitro assays using simulated human digestive fluids or cells, spectromicroscopies (e.g., infrared spectroscopy and synchrotron-based X-ray absorption spectroscopy), equilibrium-based chemical extractions (single, mixed, or sequential), and kinetic-based extractions (e.g., diffusive gradients in thin-films, DGT, and the polar organic chemical integrative sampler, POCIS). From the outset, DGT was developed as a kinetic passive sampler for quantifying trace metals in waters. Because of its inherent merits (e.g., simplicity and ease of use) and the firm theoretical developments of the last 20 years, DGT has much wider uses than many other passive samplers, beyond simply measuring concentrations of contaminants in environmental media. In addition, DGT can be used (i) as a chemical speciation tool, (ii) to image concentrations in the porewaters of sediments and soils in two dimensions at very high spatial resolution, and (iii) to predict bioavailability and toxicity. This article first introduces the theory of DGT, followed by the in situ measurement and mapping of contaminants in the environment using DGT. It then discusses DGT in relation to models of contaminant uptake by organisms, and examines DGT as a predictor of the bioavailability of contaminants to terrestrial plants, aquatic plants, and other organisms. It concludes with an outlook on the use of DGT as a versatile tool to predict contaminant bioavailability.

DGT Theory The DGT technique was invented by William Davison and Hao Zhang at Lancaster University, United Kingdom, in 1993. A typical DGT device consists of a binding gel layer, an ion-permeable diffusive gel layer, a filter membrane, and a plastic base and cap (Fig. 2C). Following Fick's first law of diffusion, the solute continuously diffuses across the well-defined diffusion layer (diffusive gel + filter membrane), and a pseudo-steady-state, linear concentration gradient is formed (Fig. 2A). The solute is then immediately captured by the functionalized binding gel, making the concentration at the interface of the diffusive gel and the binding gel effectively zero. According to Fick's first law, the flux (F) of a solute diffusing through the diffusion layer is given in Eq. (1).

F = D (∂C/∂x) = D (C − C0)/Δg    (1)

Fig. 2 Schematic diagram of a DGT assembly deployed in water (A) or soil/sediment (B), structural diagram of a piston-type (C) or flat-type (D) DGT sampler, picture of commercial piston-type (E) or flat-type (F) DGT product, and photograph showing the deployment of DGT in freshwater (G), soil (H), or sediment core (I).


where D is the temperature-dependent diffusion coefficient of the solute in the diffusive layer, ∂C/∂x is the concentration gradient, C is the concentration in the solution, C0 is the concentration at the interface of the binding gel and the diffusive gel, and Δg is the thickness of the diffusion layer. Before the binding gel becomes saturated with the sorbed solute, C0 equals zero. Therefore, Eq. (1) can be simplified to Eq. (2).

F = DC/Δg    (2)

Meanwhile, by definition, F can be expressed as Eq. (3).

F = M/(At)    (3)

where M is the mass of the solute diffused through a given area (A) at a given time (t). By combining Eqs. (2) and (3), Eq. (4) is obtained.

CDGT = C = MΔg/(DAt)    (4)

To obtain the total mass (M) of solute accumulated in the binding gel, a commonly adopted practice is to elute the gel with a known volume (Ve) of an appropriate eluent (e.g., dilute HNO3 or NaOH solution) and use Eq. (5) for the calculation.

M = Ce(Vgel + Ve)/fe    (5)

where Ce is the concentration of the solute in the eluent, fe is the elution efficiency of the solute from the binding gel, and Vgel is the volume of the binding gel.

When the device is deployed in a wetted soil (soil slurry) or a water sediment, the continual removal of the target solute from the soil solution or sediment porewater to the binding gel sink induces a concentration gradient within the DGT diffusion layer and the immediately adjacent soil or sediment (Fig. 2B). The gradient through the DGT diffusion layer is determined by its thickness, Δg, and the interfacial concentration of labile solute, Ci, at the DGT-soil/sediment interface. The time-integrated interfacial concentration is also known as CDGT. The ratio (R, Eq. 6) of CDGT to the measured solution concentration, Cd, indicates the extent of the depletion of solution concentrations at the DGT-soil/sediment interface.

R = CDGT/Cd    (6)

The DIFS software (DGT-induced fluxes in soils and sediments) can also be used to model R values. Comparison of the measured and modeled R values allows the estimation of parameters such as the labile distribution coefficient of the solute between the solid phase and the soil solution (Kdl, Eq. 7) and the soil response time to depletion (Tc, Eq. 8), which is the time needed to bring the interfacial concentration of the solute, Ci, from 0% to 63% of its pseudo-steady-state value.

Kdl = Cls/Cd = k1/(Pc k−1)    (7)

Tc = 1/(k1 + k−1) = 1/[k−1(1 + Kdl Pc)] ≈ 1/(k−1 Kdl Pc)    (8)

Here Cls is the concentration of labile solute in the solid phase, Pc is the particle concentration, and k1 and k−1 are the first-order sorption and desorption rate constants, respectively. With increasing deployment time, the solute progressively accumulates in the DGT device, causing Ci to decline. The possible response of the soil or sediment to the perturbation exerted by DGT can then be considered. There are two extreme cases of solute supply to DGT in sediments and soils: the fully sustained case and the diffusion-only case. In the fully sustained case, supply from the solid and solution phases is continuous, at a rate nearly equal to the flux into the DGT device, and R is then close to 1. In the diffusion-only case, there is virtually no release of solute from the solid phase, so that diffusion is the only supply; R is then at its minimum possible value, termed Rdiff (R = Rdiff). The intermediate scenario between these two extremes is the partially sustained case. Therefore, DGT is best regarded as a robust tool for conducting in situ perturbation experiments by introducing a localized sink, rather than as just a simple device for measuring concentrations in the bulk solution of the porewater. To account for both the solution concentration and the adsorbed concentration that can potentially be mobilized by depleting the solute in the solution phase, the concept of an effective concentration, CE, is introduced. Conceptually, CE is a hypothetical concentration representing the concentration of solute in the soil/sediment porewater if supply depended only on diffusion, without resupply of solute from the solid phase. CE is calculated from CDGT using the Rdiff ratio (Eq. 9).

CE = CDGT/Rdiff    (9)
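For orientation, Eqs. (4)–(6) and (9) can be chained into a single calculation. The sketch below is a minimal illustration; all deployment values (eluate concentration, gel and eluent volumes, device geometry, Cd, Rdiff) are assumed for the example rather than taken from any particular study.

```python
def dgt_concentration(Ce, V_gel, V_e, fe, D, A, t, dg):
    """Compute the DGT-measured concentration from eluate data (Eqs. 4 and 5).

    Ce: solute concentration in the eluent (ug/mL); V_gel, V_e: binding-gel and
    eluent volumes (mL); fe: elution efficiency (-); D: diffusion coefficient in
    the diffusive layer (cm^2/s); A: exposure window area (cm^2); t: deployment
    time (s); dg: diffusion layer thickness (cm).
    """
    M = Ce * (V_gel + V_e) / fe   # accumulated mass (Eq. 5), ug
    C_dgt = M * dg / (D * A * t)  # time-averaged concentration (Eq. 4), ug/cm^3
    return M, C_dgt

# Illustrative 24-h deployment with a typical device geometry
M, C_dgt = dgt_concentration(Ce=0.05, V_gel=0.15, V_e=1.0, fe=0.8,
                             D=6.0e-6, A=3.14, t=24 * 3600, dg=0.094)
Cd = 8.0e-3            # independently measured porewater concentration (ug/cm^3)
R = C_dgt / Cd         # depletion ratio (Eq. 6)
R_diff = 0.1           # modeled diffusion-only ratio, e.g. from DIFS
CE = C_dgt / R_diff    # effective concentration (Eq. 9)
print(f"M = {M:.3g} ug, C_DGT = {C_dgt:.3g} ug/cm^3, R = {R:.2f}, CE = {CE:.3g} ug/cm^3")
```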

The DIFS software, which is available online, can be used to simulate values of Rdiff, Kdl, and Tc. The user interface of the 2D DIFS software (version 1.2.3-3, 2005, Lancaster, UK) is shown in Fig. 3. Besides the DGT measurements, the additional measurable parameters required for DIFS fitting are the soil porosity (φs), the diffusive gel porosity (φd, typically 0.95) and thickness (Δg), the effective diffusion coefficients in the soil (Ds) and in the diffusive gel (Dd), and the deployment time (T). To simulate "diffusion only" conditions (Rdiff), the system response time, Tc, is set to 1 × 10¹⁰ s and Kdl is set to 0 cm³ g⁻¹.

Fig. 3 User interface of the software 2D DIFS.

In Situ Measurement and Mapping of Contaminants in Environments Using DGT To build a novel DGT sampler for a specific analyte or group of analytes, a binding phase with selectivity (strong binding) for the analyte should be carefully chosen, and the sampler's performance characteristics should be systematically evaluated in the laboratory. Generally, to acquire the parameters needed to calculate the DGT-measured concentration using the "DGT equation" (Eq. 4), and to explicitly define the sampler's application scope, the diffusion coefficient (D) of the analyte in the diffusive gel, the elution efficiency (fe) from the binding gel, the capacity of the sampler, and the possible effects of environmental conditions (e.g., pH, ionic strength, and coexisting ions) on the DGT measurement should first be investigated. The second step is the validation of the DGT sampler in field trials, for example in fresh water or wastewater. The general features of DGT for water quality monitoring are that it accumulates target analytes quantitatively in situ, uses an equation rather than a calibration to obtain the concentration, is easy to use in the laboratory or to deploy in situ, is robust with respect to sample storage and preservation compared with active sampling (also known as spot sampling), provides a time-weighted mean concentration, and avoids matrix problems. The fairly long diffusive path, approaching 1 mm in a standard device, ensures that the DGT measurement is almost insensitive to the rate of water flow above a threshold value of about 2 cm s⁻¹. Different DGT samplers with distinct binding phases in the binding gel have been developed, and many of them have been proven to provide reliable determinations independent of pH from 4 to 10 and of ionic strength from 1 to 500 mmol L⁻¹, meaning that DGT can effectively measure the target analytes in acidic to alkaline fresh waters and even in marine and estuarine systems. To date, DGT has been established to measure more than 50 elements in the periodic table (Fig. 4). Besides metals, inorganic nutrients, and radionuclides, DGT has in recent years been further extended to measure element isotopic signatures, nanoparticles, and organic compounds (Table 1).

The DGT technique was initially developed to measure metal concentrations in waters and was soon extended to the measurement of metal fluxes in sediments and soils. It is well accepted that DGT provides a direct measure of analytes that are both mobile and labile. The term mobile refers to the fact that the species must be capable of diffusing at a reasonable rate through the diffusion layer. The term labile denotes species that can interconvert, within the timescale of their diffusional transport, to a form that can bind. With the advancement of the theory, more sophisticated interpretation in terms of the distribution of species in solution and the rates at which they interconvert has become possible. In water analysis, DGT has been widely used to sample the labile species. To interpret the measurements, the DGT results are usually compared with the dissolved concentrations obtained by filtration (typically using 0.45 μm filters) of traditional grab samples, with the equilibrium distribution of species calculated using speciation modeling software (e.g., WHAM, Visual MINTEQ), and with other techniques measuring lability, such as competitive ligand exchange with adsorptive cathodic stripping voltammetry or solid phase extraction procedures. By doing so, metal fractionation processes, for example into labile and inert species, inorganically and organically complexed species, and dissolved and colloidal species, are delicately explored, adding to the existing dataset.
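The flow-rate insensitivity noted above can be made quantitative: an external diffusive boundary layer (DBL) of thickness δ simply adds to the diffusion layer, so calculating the concentration with Δg alone biases the result low by δ/(Δg + δ). A minimal sketch, with the DBL thicknesses assumed for illustration:

```python
def relative_bias(dg_mm: float, dbl_mm: float) -> float:
    """Fractional underestimate of concentration if the DBL is ignored.

    The accumulated mass scales with 1/(dg + dbl); computing C_DGT with dg
    alone therefore biases the result low by dbl / (dg + dbl).
    """
    return dbl_mm / (dg_mm + dbl_mm)

for dbl in (0.1, 0.3, 1.0):  # assumed DBL thicknesses, from fast flow to near-stagnant
    print(f"DBL = {dbl} mm -> bias = {relative_bias(0.94, dbl):.0%}")
```

With the standard ~0.94 mm diffusion layer, a thin DBL under adequate flow contributes only about a 10% error, which is why the measurement is nearly flow-independent above the threshold velocity.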

Fig. 4 A summary of elements in the periodic table that are effectively measured using DGT.

DGT is also known as a useful tool for the selective determination of a specific species. A common practice is to incorporate a high-selectivity binding agent into the binding gel so that DGT preferentially samples only one species, excluding the others. For example, a DGT device with Whatman DE81 as the binding agent ("DE81 DGT") was developed to measure UO2(CO3)2²⁻, whereas classic Chelex DGT was adopted to sample the fraction of U species dissociated from uranyl carbonates during their diffusion through the diffusion layer. DGT with 3-mercaptopropyl-functionalized silica gel has been used to selectively capture methylmercury, As(III), and Sb(III). N-Methyl-D-glucamine functional resin was incorporated into the DGT binding phase for the selective measurement of Cr(VI), with negligible accumulation of Cr(III). A different approach for measuring two metal species in a single DGT device has also been proposed, wherein the same binding gel of precipitated zirconia (ZrO2 formed through in situ precipitation in the gel, rather than by incorporating ZrO2 powder into the gel) retains both Cr(III) and Cr(VI). The separation of the species is performed by eluting the gel with NaOH solution, which exclusively elutes all the Cr(VI), and then recovering the remaining mass of Cr, namely Cr(III), by acid digestion. This precipitated-zirconia DGT can also sample both phosphate (P+V) and phosphite (P+III); these two species are separated by analyzing the same eluate with different methods, that is, capillary-column-configured dual ion chromatography for P+III quantification and molybdenum blue colorimetry with a spectrophotometer for P+V determination. Another approach to the speciation analysis of inorganic As species is to use different properties of the diffusive gel instead of the binding gel: while the diffusion coefficients of As(III) and As(V) in the conventional polyacrylamide diffusive gel are similar, use of the negatively charged Nafion® membrane achieves a significant increase in selectivity for As(III) species. Approaches using DGT with varying thicknesses of the diffusive gel provide in situ kinetic information on metal complexes and are worth further exploration.

Accurate isotope ratio measurements by inductively coupled plasma mass spectrometry (ICP-MS) typically require laborious preconcentration and matrix separation steps before analysis. DGT can preconcentrate and transfer analytes from real environmental samples with complex matrices into the simple binding gel or its eluate. Recent studies have shown that DGT has potential for studying the isotopic composition of dissolved or labile species of analytes. Use of Chelex as the binding gel introduced no fractionation of Pb isotopes by diffusion through the diffusion layer, within the reported precision of multicollector ICP-MS measurements, provided quantitative elution was obtained. For Zn isotopes, fractionation did occur; however, this bias can easily be corrected using a simple relation independent of the exposure time and the thickness of the diffusion layer. Similar to Pb isotopes, no isotopic fractionation caused by diffusion or by elution of sulfate from an Amberlite IRA-400 binding gel was observed below a gel disc loading of approximately 79 mg S (as SO4²⁻). Significantly and systematically lower ³⁴S/³²S isotope ratios of DGT-measured SO4²⁻ compared with those of water-extractable sulfate in soils indicate mineralization of organic S during DGT application. The DGT technique, in combination with laser ablation (LA) multicollector ICP-MS, has also been used to study two-dimensional S isotope variations of dissolved porewater sulfide in freshwater and marine sediments.

A method has been proposed to discriminate nanoparticles from other species using DGT. A 1000 molecular-weight-cutoff dialysis membrane (thickness ≈ 0.05 mm) was inserted between the diffusive gel and the filter membrane to prevent the diffusion of ZnO nanoparticles. The masses of Zn accumulated in DGT devices without and with the dialysis membrane reflect the concentrations of ZnO nanoparticles plus Zn²⁺ and of Zn²⁺ only, respectively; the discrepancy between these two measurements provides an estimate of the nanoparticle concentration. A similar methodology has been used to discriminate Ag nanoparticles from dissolved species. Even though some progress has been made, key knowledge gaps concerning nanoparticle measurement by DGT still exist.

In the last several years, DGT samplers based on XAD resin, activated charcoal, or hydrophilic-lipophilic-balance powder as the binding agent have been extended to measure trace organics in waters. To date, DGT methods have been established for quantifying antibiotics, bisphenols, polar organic contaminants, household and personal care products (including preservatives, antioxidants, and disinfectants), illicit drugs, anionic pesticides, endocrine disrupting compounds, perfluoroalkyl substances, and organophosphorus flame retardants.
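The two-deployment arithmetic described for ZnO reduces to a simple difference; in the sketch below, the concentrations are hypothetical values chosen purely for illustration:

```python
# Paired deployments: a standard device (accumulates nanoparticulate + dissolved Zn)
# and one fitted with a 1000 MWCO dialysis membrane (dissolved Zn2+ only).
C_total = 12.4      # C_DGT without the dialysis membrane (ug/L), hypothetical
C_dissolved = 7.9   # C_DGT with the dialysis membrane (ug/L), hypothetical

C_nano = C_total - C_dissolved  # estimate attributed to ZnO nanoparticles
print(f"Nanoparticulate Zn ~ {C_nano:.1f} ug/L "
      f"({C_nano / C_total:.0%} of the DGT-labile pool)")
```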

Table 1 Analytes measured by DGT and the corresponding binding agent in the binding gel (columns: Category; Analytes; Binding agent; Year)

Metals and metalloids
Analytes: Zn(II); Cu(II), Ni(II), Fe(II), Mn(II), Ni(II); As(III); Co(II), Al(III), Ba(II); Pb(II); Ca(II); Fe(III); Ca(II), Mg(II); Cr(III); Ga(III); As(III+V); Mo(VI); Ag(I); Se(VI); V; V(V), As(V), Se(VI), Mo(VI), Sb(V), W(VI); Se(IV); As(III); Au(III); Ba(II); Al(III); As(V); Cr(VI); Sb(III)
Binding agents: Chelex 100; Chelex 100; SPR-IDA; Chelex 100; Chelex 100; Chelex 100; Chelex 100; Chelex 100; Chelex 100; Chelex 100; Ferrihydrite; Chelex-ferrihydrite; Chelex 100; Ferrihydrite; Ferrihydrite; Ferrihydrite; Metsorb; 3-Mercaptopropyl-functionalized silica; Activated carbon; P81 membrane; Metsorb; Amberlite IRA 910; N-Methyl-D-glucamine resin; 3-Mercaptopropyl-functionalized silica
Years: 1994; 1995; 1997; 1999; 1999; 2000; 2001; 2002; 2002; 2003; 2003; 2005; 2008; 2008; 2008; 2010; 2010; 2011; 2012; 2012; 2012; 2014; 2015; 2016

Nutrients
Analytes: Phosphate; K(I); NH4+-N; NO3−-N; NH4+-N; F(−I); NO3−-N; Phosphite
Binding agents: Ferrihydrite; Amberlite IRP-69 (+ ferrihydrite); Zeolite; Purolite A520E; Microlite PrCH; Fe-Al-Ce oxides; SIR-100-HP; Precipitated zirconia
Years: 1998; 2012; 2015; 2016; 2016; 2016; 2017; 2018

Radionuclides
Analytes: Cs, Sr; Tc; U; Ra; Pu
Binding agents: AG50W-X8; TEVA resin; Whatman® DE81; MnO2; Chelex 100
Years: 1998; 2005; 2006; 2009; 2014

Isotopes
Analytes: 143Nd/144Nd; 66Zn/64Zn; 34S/32S (S²⁻); 66Zn/64Zn, 206Pb/204Pb; 34S/32S (SO4²⁻)
Binding agents: Chelex 100; Chelex 100; AgI; Chelex 100; Amberlite IRA-400
Years: 2005; 2005; 2012; 2015; 2016

Nanoparticles
Analytes: ZnO nanoparticles; Ag nanoparticles
Binding agents: Chelex 100; Chelex 100
Years: 2013; 2015

Organics
Analytes: Methylmercury; Antibiotics; Monomethylarsonate, dimethylarsinate; 4-Chlorophenol; Bisphenols; Glyphosate, aminomethylphosphonic acid; Low-molecular-weight organic phosphorus; Pharmaceuticals, pesticides; Illicit drugs, estrogens; Perfluoroalkyl substances; Organophosphorus flame retardants
Binding agents: 3-Mercaptopropyl-functionalized silica; XAD18; Ferrihydrite; Molecularly imprinted polymer; Activated charcoal; Metsorb; Ferrihydrite; Hydrophilic-lipophilic-balanced; XAD18; XAD18; Hydrophilic-lipophilic-balanced
Years: 2006; 2012; 2012; 2014; 2015; 2015; 2015; 2016; 2017; 2018; 2018

Others
Analytes: S(−II); Rare earth elements; SO4²⁻
Binding agents: AgI; Chelex 100; Amberlite IRA-400
Years: 1999; 2003; 2016


Under typical water flow conditions (above about 2 cm s⁻¹), the calculation of the DGT-measured concentration relies mainly on the sampling time and on the temperature-dependent diffusion coefficient, D, which can easily be corrected, especially using accessible temperature data-loggers. Compared with POCIS-based passive samplers, DGT measurements are generally less affected by environmental hydrodynamic conditions (i.e., water flow), owing to the much thicker diffusive gel layer compared with the DBL (diffusive boundary layer); however, the sampling rates for organic contaminants (e.g., perfluorooctanoic acid and perfluorooctane sulfonate) by commercial DGT devices were generally one order of magnitude lower than those obtained using POCIS-based samplers. This lower rate means that DGT needs a longer deployment time to achieve the same detection limit. There are methods to shorten the deployment time, such as reducing the thickness of the diffusive gel or adopting purpose-built DGT samplers with a larger exposure window, but caution is needed when the water flow is slow, because of the increased DBL. Overall, these pioneering studies shed light on the use of DGT as a fully quantitative passive sampling technique for monitoring polar organics in aquatic systems.

Contaminants usually have a very heterogeneous distribution in environments such as soils and sediments, creating dispersed but highly concentrated contaminant hotspots. These zones, together with the sediment-water interface (SWI) and the plant rhizosphere, are characterized by intense chemical activity and localized features that provide information about contaminant mobilization mechanisms. To resolve such fine-scale chemical processes, lateral and horizontal measurement scales should not exceed 1 mm. The DGT technique provides the opportunity to quantify contaminant transfer across interfaces or hotspots at sub-millimeter resolution if the functionalized particles that bind the target analyte are small (no more than about 10 μm) and are homogeneously distributed in the binding gel. Choosing ultrafine materials (such as suspended particulate reagent-iminodiacetate, SPR-IDA, bead size about 0.2 μm) as the binding agent, or in situ precipitation of ferrihydrite, zirconia, or AgI within a precast hydrogel (precipitated or high-resolution gels), has been validated as a reliable way to meet these requirements. LA ICP-MS remains one of only a few techniques with sufficiently high sensitivity to image the multi-element distribution patterns captured by DGT. DGT based on SPR-IDA, precipitated ferrihydrite or zirconia, and AgI gels, coupled with LA ICP-MS analysis, has been validated for the concentration/flux measurement of cationic metals, oxyanions, and sulfide, respectively, in soils and sediments and at their interfaces with waters and biota (e.g., plant roots) at submillimeter spatial resolution.

Planar optode (PO) is another promising chemical imaging technique for real-time measurement of changes in analyte concentrations. Based on photoluminescence, PO is a fluorescent assay that uses reversible fluorophores immobilized in a thin layer of an analyte-permeable matrix. Using PO, dynamic changes of solutes (e.g., O2, pH, pCO2) across the SWI and in the vicinity of roots during plant growth can be monitored in real time and in situ. The application of PO and DGT in parallel, although still quite limited (Table 2), has significantly improved our understanding of element behavior at micro-interfaces, including the SWI and the rhizosphere. In 2017, a DGT-PO hybrid sensor combining an optical pH or O2 sensor with DGT was developed to create noninvasive chemical images of pH or dissolved oxygen concentrations and of labile P or metal fluxes near the roots of plants (Salix smithiana and Vallisneria spiralis) and across the SWI at sub-millimeter spatial resolution.
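The temperature correction of D mentioned above is commonly handled by scaling with the Stokes-Einstein proportionality D ∝ T/η(T) together with an empirical viscosity model for water; the sketch below assumes this approach (the article does not prescribe a specific formula, so treat it as one reasonable option):

```python
import math

def water_viscosity_mPa_s(T_C: float) -> float:
    """Water viscosity from a Vogel-type empirical fit (mPa s)."""
    return 0.02939 * math.exp(507.88 / (T_C + 273.15 - 149.3))

def correct_diffusion_coefficient(D_ref: float, T_ref_C: float, T_C: float) -> float:
    """Scale D to the deployment temperature using D ~ T / viscosity."""
    T_ref_K, T_K = T_ref_C + 273.15, T_C + 273.15
    return D_ref * (T_K / T_ref_K) * (water_viscosity_mPa_s(T_ref_C) / water_viscosity_mPa_s(T_C))

# Example: a coefficient tabulated at 25 C, deployment logged at 15 C
D15 = correct_diffusion_coefficient(D_ref=6.0e-6, T_ref_C=25.0, T_C=15.0)
print(f"D(15 C) ~ {D15:.2e} cm^2/s")  # roughly 25% below the 25 C value
```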

DGT and Models of Contaminant Uptake by Organisms Contaminant or element uptake by organisms is governed both by biological processes and by the supply from the bulk medium (hydroponic solutions, natural or waste waters, soils, and sediments). It is well recognized that only a proportion of the total contaminant concentration is available to organisms, and this proportion is referred to as the bioavailability. Measuring and predicting bioavailability helps to answer whether a specific contaminant concentration is at a toxic level or not, that is, whether we need to mitigate the contamination.

Table 2 Application of combined DGT and planar optode (PO) techniques in environmental sciences

DGT binding agent | DGT-measured elements | PO-measured parameters | Medium/interface type | Year
SPR-IDA | Fe, Ni, Cu, and Pb | O2 | Sediment | 2012
SPR-IDA | Pb, Fe, As, Co, Mn, Zn, and Ni | pH, O2 | Rhizosphere of rice (RIL 46) | 2014
SPR-IDA | Mn, Zn, and Cd | O2 | Rhizosphere of willow (Salix smithiana) | 2015
Zr(OH)2-SPR-IDA | P, Mn, Ni, Cu, Zn, Cd, and Pb | O2 | Soil | 2016
SPR-IDA | Mn, Fe, Co, Ni, S²⁻, Cu, Zn and Pb | O2 | SWI | 2017
Precipitated zirconia, AgI or Zr(OH)2-SPR-IDA | P, S²⁻, Fe, and Ca | pH, O2 | Rhizosphere of seagrass (Cymodocea serrulata) | 2017
Zr(OH)2-SPR-IDA | P, Al, Ca, Fe, Mg, and Mn | pH | Rhizosphere of wheat (Triticum aestivum L. cv. Carazinho), buckwheat (Fagopyrum esculentum Moench) and lupine (Lupinus albus L.) | 2018
AgI | S²⁻ | O2 | Rhizosphere of seagrasses (Halophila ovalis and Zostera muelleri) | 2019

Diffusive Gradients in Thin-Films (DGT)

contamination. For instance, a sustained decrease in bioavailable fractions of contaminants may be sufficient to optimize agricultural yields and quality, instead of lowering total concentrations down to threshold values. The biouptake processes of contaminants are determined dominantly by supply controlled or biota limiting uptake mechanisms. The former mechanism is also known as diffusion-limited mechanism, which describes the diffusive transport to a high demand sink. The latter mechanism is also called internalization-limited mechanism, which depicts the uptake determined by interactions with the biotic surface. Different uptake mechanisms may exist for organisms under a specific circumstance, and therefore different amounts of labile fractions of a certain contaminant may be uptaken. Contaminants in solution of a given environmental medium are present as free ions (or technically as aquo-complexes) and complexes with inorganic or organic ligands, or associated with colloids. Free ion is usually considered as the species that is uptaken by biota and DGT directly. All fractions are not constant over time. They transform one another when chemical equilibrium state is perturbed. The “equilibrium models” of solute uptake by organisms would be firstly introduced, following was an elaborate comparison of contaminants accumulated by DGT and organisms. If the rate of uptake of solute (or free ion) is very slow compared to the rate of supply from solution by diffusion, there will be negligible perturbation of the distribution of species in the adjacent solution and the concentration of solute at the organism’s membrane surface will not be lowered. In this case it is biological uptake that is rate limiting. For this situation, which is often referred to as the free ion activity model (FIAM), the amount taken up by organism is proportional to the free ion activity in the media. The biotic ligand model (BLM) is an extension of the FIAM but further considering the effect of competing ions in the uptake process. In BLM, both free ion interaction with binding sites at the surface of organism and the competition from other ions (e.g., Ca2 þ and Mg2 þ) for these sites are considered. For instance, at the same free ion activity, the binding of a solute to the uptake sites in hard water will be less than that in soft water if Ca2 þ and Mg2 þ compete with the metal ion for binding at the so-called biotic ligand. Both BLM and FIAM apply when the supply flux of free ion is greater than the biouptake flux, which means that solution equilibria are negligibly perturbed. These two models have been successfully used to predict uptake of metals by many aquatic organisms. If the uptake of free ion by an organism is fast compared to the rate of supply by diffusion, the uptake will be diffusion limited. The free ion concentration at the surface decreased to zero, which induces a resupply from the complexes in environmental medium such as bulk solution and solid phase. In this case, diffusion gradient forms between biota surface and bulk medium. Thus, the uptake will be determined by not only the free ion but all the species that contribute to the diffusion flux. The discussion here also indicates that the bioavailable part of a contaminant is not predetermined or unchangeable. Contaminants of different fractions have the potential, although varied, to be bioavailable, depending on the actually rate of biouptake and supply. 
Briefly, if an organism removes contaminant more slowly than it can be supplied by diffusion, there is negligible depletion at the cell membrane; this is the uptake limiting condition, and the equilibrium models applies. If the flux of biouptake is larger than diffusional supply, the concentration of the free ion is depleted at the biota-medium interface, which induces a resupply from the complexes in the soil solution and from the solid phase. This situation is called supply limitation, which is a clear parallel with the principle of DGT, the dynamic method. As previously mentioned, DGT binding gel has a high affinity for free ions to target analyte and acts as a zero sink, thus inducing diffusion throughout the diffusion layer (i.e., the hydrogel and filter membrane), before their being captured on the binding gel. The magnitude of the flux measured by DGT is determined by the concentration in solution, the rate of diffusion and the rate of resupply from both the solid phase and complexes in solution. These labile species are related to the time available for complexes to dissociate in soil solution and fractions to desorb from solid phase. This time scale depends on mainly thickness of diffusive layer, but the thickness of binding layer also plays a part. Under supply limiting condition, similar diffusion layer thicknesses mean similar diffusive time scales (Fig. 5). A diffusion layer thickness of DGT in the mm-range, agrees well with the diffusion layer thicknesses of plant roots, organs of macroorganisms and biofilms under supply controlled conditions (Fig. 5), making fluxes of contaminants to these organisms and to DGT devices be similar. Therefore, DGT is able to provide important information for understanding contaminant uptake mechanisms and bioavailability.

Fig. 5 Thickness of the diffusion layer for analytical samplers/sensors and supply controlled biouptake processes and the corresponding diffusional time scale. Conceptually, DGT and voltammetric microelectrodes are similar, both measuring the diffusion flux toward a device. Adapted from van Leeuwen HP et al. (2005). Dynamic speciation analysis and bioavailability of metals in aquatic systems. Environmental Science and Technology 39:8545–8556, with permission from American Chemical Society.

Diffusive Gradients in Thin-Films (DGT)

119

Note that whereas DGT measured fluxes might correlate well with plant uptake fluxes, they are unlikely to be exactly the same, because of: (i) different moisture contents (if organisms not grown hydroponically); (ii) effect of deployment time on CDGT in some cases; (iii) different geometries (DGT window is planar, plant root is cylindrical and microorganism is spherical) and (iv) plant induced processes (e.g., root exudation) in rhizosphere. The comparisons among the relationships of concentrations of a contaminant in plant versus the results from dynamic based measurements (e.g., DGT) and equilibrium based methods (e.g., chemical extraction) and models (e.g., FIAM, BLM), provide key evidences for deducing information about the biouptake mechanisms. Based on existing observations from hydroculture and soil experiments, the uptake of Cd and Zn by plants is strongly limited by diffusion at environmentally relevant concentrations, whereas for Ni, the uptake is on the border between supply and plant control. For example, soil-based experiments suggested that uptake of Ni by hyperaccumulator (Thlaspi goesingense) and non-hyperaccumulator (Thlaspi arvense) is limited by supply and plant, respectively, whereas for Cd, supply controls the uptake by both kinds of plants. Recent study also revealed that As uptake by hyperaccumulator (Pteris vittata) is supply controlled. A study published in 2012 observed the contribution of nanoparticulate P species loaded in synthetic Al2O3 nanoparticle solution to both uptake of P by plant (Brassica napus) and DGT measurements, suggesting supply-controlled P uptake by plant roots from nutrient solution.

DGT as a Predictor of Bioavailability of Contaminants to Terrestrial Plants In this section, terrestrial plants are specifically referred to as higher plants which uptake water and solutes through roots from soils. Typically, there are two scenarios for contaminant uptake by roots from soils: (i) plant demand induced internalization flux of a contaminant is low and diffusion flux in the soil is high, so there is little depletion of solute at root surface induced by uptake, which usually happens when plants are grown under toxic conditions; (ii) plant demand flux is higher than soil could supply, corresponding to supply limitation with continuous solute depletion at the root surface (Fig. 6). In principal, DGT is likely to best mimic plant uptake process under the latter scenario by acting as a high-demand sink, inducing depletion of the free ion near the DGT/plant-soil interface. In practice, DGT has been proved to be a good predictor of plant uptake or phytoavailability of contaminants under supply limitation. Undeniably, under plant limitation, good correlations between plant uptake and DGTmeasured concentrations may still be observed. This is probably caused by the co-variations in concentrations (total, dissolved, free, DGT-labile) from different methods, especially for the cases where a single soil is amended with contaminants at different levels through spiking. When evaluating the labile concentrations of contaminants using DGT, both the exchange and diffusion process in soil solution and the dynamic balance process of resupply to soil solution from solid phase are considered. To better understand these processes, it is necessary to expound what DGT measures in soils. Generally, inorganic contaminant (e.g., toxic metals) in soil solution is

Fig. 6 Two scenarios for contaminant uptake by roots from soils: (a) plant control; (b) diffusion control. The width of the arrows indicate the relative sizes of diffusive fluxes.

120

Diffusive Gradients in Thin-Films (DGT)

composed of mineral colloids, organic complexes (may also be colloids), inorganic complexes, and free ion, whereas in the solid phase is represented by tightly bound and exchangeable fractions (Fig. 7). DGT measures mainly the free ion and simple inorganic complexes, which dissociate fast enough and have similar diffusion coefficients as that of free ion. It also measures part of organic complexes which are labile. These labile organic complexes are usually dominated by complexes with fulvic and humic acids, whose sizes are large compared to free ions but very small in the colloidal range. However under typical deployment time (hours to days), DGT does not measure most colloids as they diffuse much more slowly than free ions in the diffusion layer. The DGT-induced local depletion of contaminant concentration in solution causes release of contaminant from the readily exchangeable fraction in the solid phase, which also contributes to the quantity measured by DGT. As a result, DGT measurement integrates all soil properties into one single key parameter, that is, labile concentration or flux. When DGT is deployed in a soil, which is usually wetted to saturation (soil slurry), it perturbs the chemical equilibria by lowering the concentration in soil solution immediately adjacent to the device. A key point is that DGT continuously removes and accumulates contaminants, so that supply processes from the soil must be considered dynamically. To express contaminant concentration that is effectively available from both soil solution and solid-phase, effective concentration, CE, is proposed (Eq. 9). If plant uptake also locally lowers soil contaminant concentrations, CE is expected to relate directly to plant uptake. The ratio between CDGT and CE, that is, Rdiff, depends on the amount of water added during soil slurry preparation and the resulting porosity, tortuosity and diffusive coefficients in soil, and is usually around 0.1 for a 24-h deployment time. So CDGT and CE are generally strongly correlated if the dataset doesn’t cover soils with contrasting porosities. Since 1999, substantial studies have related DGT-measured concentration to accumulated concentration in plant. Relationships between the concentrations of inorganic contaminants in plant tissue and (a) the DGT measurement, either CDGT or CE, and (b) other assessments, have been obtained for a range of soils. By and large, DGT provides a better prediction of plant uptake compared to conventional chemical extraction or soil solution sampling techniques. Generally, strong correlations between DGT measurements and concentrations of cation metals (e.g., Cd, Cu, Pb, Zn, and Ni) in terrestrial plants (such as wheat, potato, lettuce, lupin and sorghum), especially in the roots, were found in pot trials with soils covering a wide range of soil properties. For example, in a pot trail with 29 different soils covering a large range of Cu concentrations, uptake of Cu in the above-ground plant (Lepidium heterophyllum) tissue was linearly related and highly correlated with DGT measurements (CE) but was more scattered and nonlinear with respect to free ion activity, EDTA extraction or soil solution concentrations. Note that concentrations of contaminants in the above-ground tissue depend on not only the uptake from roots but also the upward translocation within the plant. 
As a result, while DGT measurements may correlate well with concentrations in the whole plant or in the below-ground roots, the relationship with above-ground tissue (shoot or grain) concentrations may be less pronounced. For instance, poor correlations between Cd concentrations in paddy rice grains and DGT measurements (and also other determinations) were obtained across a large dataset of soils. It should be pointed out that, compared to some conventional extraction techniques, the database for soil monitoring using DGT is still small, calling for more studies. From DGT deployment and DIFS modeling, soil parameters such as the labile distribution coefficient (Kdl), response time (Tc) and desorption rate constant (k−1) can be obtained. These parameters reflect particular aspects of the dynamic processes of contaminant uptake by plants. To study the rhizosphere characteristics of the As hyperaccumulator Pteris vittata for phytoextracting As from contaminated soils, an experiment was designed to distinguish rhizosphere and bulk soils. It was found that the response time (Tc) of the rhizosphere soil was only one-third of that of the bulk soil, due almost entirely to a slower rate of release (k−1). This work and several subsequent studies on other hyperaccumulators demonstrate that DGT is a robust tool for studying the efficiency of phytoextraction and the potential resupply from bioavailable pools after phytoextraction has ceased. A recent study proposed using the difference between R (Eq. 6) and Rdiff (the R − Rdiff value) as another soil test parameter, as it correlated well with the DGT measurement and best reflected the resupply of P from the soil solid phase.
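As a minimal numerical sketch of these relationships, the snippet below computes the time-averaged interfacial concentration CDGT from the mass accumulated by the device (the standard DGT equation, CDGT = M·Δg/(D·A·t), introduced earlier in this article) and then derives CE = CDGT/Rdiff as described above. All numerical values are illustrative assumptions, not data from any cited study:

```python
# Minimal sketch of the core DGT calculations (hypothetical example values).
# C_DGT = M * dg / (D * A * t)   time-averaged concentration at the device surface
# C_E   = C_DGT / R_diff         effective concentration from solution + solid phase

M = 2.0e-9        # mass accumulated in the binding gel (g), from elution and analysis
dg = 0.094        # diffusion layer thickness: gel + filter (cm)
D = 6.09e-6       # diffusion coefficient of the ion in the gel (cm^2 s^-1)
A = 3.14          # exposure window area (cm^2)
t = 24 * 3600     # deployment time (s)

C_DGT = M * dg / (D * A * t)   # g cm^-3
R_diff = 0.1                   # typical value for a 24-h soil deployment (see text)
C_E = C_DGT / R_diff

print(f"C_DGT = {C_DGT * 1e9:.3f} ug/L")   # 1 g cm^-3 = 1e9 ug/L
print(f"C_E   = {C_E * 1e9:.3f} ug/L")
```

With these example inputs the device reports a solution-phase concentration of roughly 0.1 µg/L and an effective concentration about ten times higher, illustrating why CE and CDGT track each other closely whenever Rdiff stays near 0.1.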

Fig. 7 A simplified illustration of what is measured by DGT (purple) and by a typical soil solution measurement (blue). Adapted from Zhang and Davison (2015). Use of diffusive gradients in thin-films for studies of chemical speciation and bioavailability. Environmental Chemistry 12:85–101, with permission from CSIRO.


DGT has also been used, although still to a limited extent, to estimate toxicity threshold values (50% effect concentrations, EC50) and thus predict the toxicity of metals to plants. As stated above, under toxic conditions plant uptake is probably not limited by diffusion, and DGT is not expected to give good predictions. Nevertheless, DGT may still be a good predictor of toxicity owing to its correlation with other measurements (e.g., soil solution concentration). For example, an investigation of Cu toxicity to tomato (Lycopersicon esculentum) shoot growth in 18 different soils revealed that the variation in EC50 thresholds for Cu between soils decreased in the order free ion activity > total concentration > soil solution > DGT measurement (CE), suggesting that DGT-measured concentrations improved the prediction of toxicity. In studies of the toxicity of Mo to the shoot yield of oilseed rape (Brassica napus L.), red clover (Trifolium pratense L.), ryegrass (Lolium perenne L.) and tomato (Lycopersicon esculentum L.), and of Zn to barley seedling growth in field-contaminated soils, the differences in EC50 thresholds based on DGT, although larger than those based on soil solution (Mo) or CaCl2 extraction (Zn), were smaller than those based on other measurements. In contrast to inorganic contaminants, studies of the bioavailability of organic contaminants to terrestrial plants using DGT are rare. The limited studies focus on herbicides and methylmercury. The bioavailabilities of herbicides (glyphosate, atrazine and its metabolites) to maize (Zea mays), Chinese pennisetum (Pennisetum alopecuroides), wheat (Triticum aestivum) and lupins (Lupinus angustifolius) have been studied. Generally, biouptake correlated better with concentrations from DGT measurements than with other methods (e.g., water extraction, solvent extraction). Specifically, DGT was demonstrated to be a good indicator for assessing the atrazine degradation pathway and thus predicting the bioavailability of total atrazine (atrazine and its metabolites, e.g., hydroxyatrazine) to the roots and shoots of maize. For methylmercury, a study was designed to measure its uptake by rice (Oryza sativa L.) grown in a contaminated paddy field and to compare the results with DGT measurements made using probes inserted in the soil. Uptake fluxes from soil to rice plants via the roots showed a strong positive relationship with the DGT-measured fluxes, and it was suggested that DGT could be a useful monitoring tool for assessing the risk of methylmercury-contaminated rice.
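The EC50 values discussed here are obtained by fitting dose-response curves to plant growth data. As a hedged illustration with synthetic data and a generic log-logistic model (not the model used in any specific study cited above), an EC50 can be extracted as follows:

```python
# Fitting a log-logistic dose-response curve to recover an EC50 (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, y0, ec50, slope):
    """Shoot yield as a function of exposure concentration c."""
    return y0 / (1.0 + (c / ec50) ** slope)

# Synthetic shoot-yield data (arbitrary units) over a Cu concentration gradient
conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100, 500])
yields = np.array([100, 98, 95, 80, 62, 30, 18, 5])

(p_y0, p_ec50, p_slope), _ = curve_fit(log_logistic, conc, yields, p0=[100, 10, 1])
print(f"Estimated EC50 = {p_ec50:.1f} (same units as conc)")
```

The spread of such fitted EC50 values across soils, when the exposure metric is changed from free ion activity to CE, is what the comparisons above quantify.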

DGT as a Predictor of Bioavailability of Contaminants to Aquatic Plants

In both water and soil scenarios, the BLM and FIAM models are widely used to evaluate the bioavailability of contaminants, especially toxic metals and metalloids. Research relating DGT to bioavailability in water environments is far less extensive than that in soil environments. It is not surprising that, for a specific contaminant, bioavailability evaluations based on DGT measurements (comparisons of uptake/accumulation concentrations in plants versus DGT-measured concentrations/fluxes) are not always consistent and are sometimes even contrasting, varying with plant species and external conditions. The underlying reason is the discrepancy between the real biouptake processes and the diffusion/binding processes mimicked by DGT. DGT has been adopted to predict the bioavailability of contaminants (mainly toxic metals) to phytoplankton. DGT and hollow fiber permeation liquid membrane (which measures free ion concentration) were used to predict metal (Cd, Cu, Ni, and Pb) uptake by the green microalga Chlorella salina in Black Sea and artificial seawaters. The biouptake fluxes were very strongly correlated with free ion concentrations and were smaller than DGT-measured fluxes, suggesting that uptake was plant-controlled. In another study, a significant relationship was found between the contents of Pb and Cd in macroalgae (Padina pavonica L.) collected from five contaminated coastal sites in south-western Sardinia (Italy) and DGT-labile concentrations in seawater, but for Cu no significant relationship was found. This suggests that levels of Pb and Cd in algal tissues are controlled by the abundance of DGT-labile metal species in the ambient seawater, whereas for Cu, metal species not captured by DGT also contribute to biouptake. Since DGT has been widely used to study the distribution and bioavailability of contaminants in sediments, it has also been extended to predict the accumulation, toxicity and bioavailability of contaminants to sediment-dwelling plants. In such studies, DGT measurements are usually made either by deploying a DGT device (Fig. 2E) horizontally on the sediment surface or by inserting a DGT probe (Fig. 2F) vertically into the sediment. In a study investigating Cu accumulation and toxicity in the aquatic plant Myriophyllum aquaticum after exposure to Cu-spiked sediments, biouptake correlated better with Cu concentrations measured using DGT devices than with other metal concentrations (total dissolved, free ion). However, the correlation was relatively weak and the relationship nonlinear, suggesting that uptake was not supply limited, owing to saturation of the uptake processes as well as toxicity. In another study, the DGT technique was evaluated for predicting metal bioaccumulation in common reed (Phragmites australis) growing in contaminated river sediments. Bioaccumulation of Cr, Cu, Zn, and Cd was strongly and positively correlated with concentrations measured in the sediments using DGT probes and with total metal concentrations, but the correlation for Ni was negative, indicating that uptake of Ni was inhibited by other metals. Caution therefore needs to be exercised when predicting metal bioaccumulation using DGT in environments where multiple metals are present. Emergent and submerged macrophytes are widely distributed in lakes worldwide. The roots of macrophytes affect the biogeochemical cycling of nutrients and metals at the SWI through both abiotic and biotic processes.
In two recent studies, the transfer, uptake and bioavailability of P and metals (Cu, Cd, Zn, and Pb) in the rhizosphere of macrophytes (Zizania latifolia and Myriophyllum verticillatum) cultivated in rhizoboxes in Lake Erhai (China) were evaluated by the DGT technique and the DIFS model. Uptake by roots correlated better with the DGT measurement (CDGT using DGT devices) than with other P/metal concentrations (porewater, sediment total). The resupply ability (R), the labile pool in the solid phase (Kdl) and the kinetic rate constant (k−1) controlled the dynamic exchange at the porewater–solid interface in the rhizosphere. Sediment pH, dissolved organic carbon in porewater and the ratio of dissolved organic P to dissolved total P affected the phytoavailability of metals (e.g., Cu and Cd) and P in the rhizosphere. It was concluded that DGT-rhizobox-DIFS is a reliable approach for revealing the dynamic processes of P and metals in the rhizosphere and for evaluating the phytoremediation ability and mechanisms of macrophytes for contaminants in lake sediments.


DGT provides not only a labile concentration or flux of a contaminant, but also 1D and 2D spatial distribution profiles of concentration or flux at millimeter or sub-millimeter scale. Using 2D DGT, a round-shaped distribution of labile P with DGT flux minima in the rhizosphere of the submerged macrophyte Vallisneria natans was visually observed. Further analysis indicated that the decrease of labile P fluxes in the rhizosphere was due to enrichment of P by iron plaque formed on the root surfaces. In other studies, direct confirmation was provided that continuous radial oxygen loss (ROL) from roots of the submerged macrophyte V. spiralis induced changes in redox conditions and thus played a major role in regulating P availability within the rhizosphere. The dynamics of ROL and P availability were monitored using a 2D DGT-planar optode (PO) hybrid sensor that simultaneously measures O2 concentrations and P fluxes. As mentioned in Table 2, by combining imaging techniques (2D DGT for S²⁻, PO for O2, and confocal fluorescence in situ hybridization) with microbial community profiling, the role of ROL from actively growing root tips in protecting seagrasses (Halophila ovalis and Zostera muelleri) from S²⁻ intrusion was clearly deciphered. ROL not only abiotically oxidized S²⁻ in the rhizosphere of young roots, but also influenced the abundance and spatial distribution of sulfate-reducing and S²⁻-oxidizing bacteria. By combining several 2D DGT probes (for P, S²⁻, Fe, and Ca) and PO sensors (for pH and O2), it was verified that P and Fe mobilization in the rhizosphere of the seagrass Cymodocea serrulata was enhanced via root-induced local acidification, leading to dissolution of carbonates and release of phosphate, and via local stimulation of microbial S²⁻ production, causing reduction of insoluble Fe(III) oxyhydroxides to dissolved Fe(II) with concomitant phosphate release into the rhizosphere porewater. These nutrient mobilization mechanisms had a direct link to root-derived ROL and to the secretion of dissolved organic carbon from the below-ground tissue into the rhizosphere.

DGT as a Predictor of Bioavailability of Contaminants to Other Organisms

In the last 15 years, DGT has gradually been extended to link to the biological uptake and response of aquatic animals and terrestrial earthworms. Coupling the DGT technique with bioassay methods can yield valuable information about the speciation processes and potential bioavailability of different fractions of contaminants in natural waters, sediments and soils. To summarize, related DGT studies have focused on analyzing the labile concentrations or fluxes of P and almost 20 metals, and the corresponding bioassay work involves mainly fauna belonging to microcrustaceans, fish, benthic invertebrates, sea urchins, bivalves, shrimps, mosquitoes, worms, gastropods, and earthworms. When contaminants are taken up by fauna under kinetically or thermodynamically controlled conditions, stress may be imposed on the organism, which in turn causes biotic responses, such as changes in growth, lethality, reproduction rate, and physiological indexes. On some occasions, these biotic responses correlate well with the labile concentrations of contaminants measured by DGT. For example, a strong relationship was observed between DGT-measured Cu concentration and induced lethality in the bivalve Tellina deltoidalis in sediments with different characteristics, and between DGT-measured Cd concentration and metallothionein activity in the bivalve Corbicula fluminea in Cd-spiked sediments. Contaminants taken up by an organism accumulate in its organs, tissues or even the whole body. If the uptake rate of contaminants exceeds the excretion rate, bioaccumulation occurs. In theory, the DGT technique shares this time-integrating characteristic. Several studies have reported strong correlations between Cu concentrations from DGT measurements and Cu accumulation levels in biota, such as the bivalve Saccostrea glomerata from coastal waters in Australia, the hepatopancreas of transplanted shrimp (Litopenaeus vannamei) from coastal waters in China, the caged bivalve Perna viridis and native bivalve Polymesoda expansa from mangroves in Singapore, and the benthic worm Tubifex sp. after 7 days of exposure to sediments from floodplains in the Netherlands. Similarly, DGT gave reasonable predictions of Cd bioaccumulation by the microcrustacean Gammarus pulex in mineral waters spiked with organic ligands and Ca, and of the accumulation of CH3Hg in the bivalve Macoma balthica grown in reconstituted seawater. Biomonitoring involves the use of organisms to assess environmental contamination, such as that of the surrounding water. In principle, it can be done qualitatively, by observing and noting changes in organisms, or quantitatively, by measuring the accumulation of chemicals in organism tissues. To obtain organisms carrying information on local pollution, wild specimens can be collected from target sites, or organisms can be transplanted to the environment in cages and retrieved after a certain period of exposure. To facilitate biomonitoring, DGT has increasingly been introduced as a complementary technique for in situ chemical measurement. A recent study revealed that high concentrations of Cu determined using DGT in estuarine waters in Australia were in accordance with the high Cu contents observed in the tissues of three wild fish species with different feeding behaviors (Arrhamphus sclerolepis, Acanthopagrus australis, and Mugil cephalus).
Many more studies have been conducted using transplanted organisms because (i) biomonitoring can then be performed even in places with no potential bioindicator species and (ii), more importantly, it becomes possible to control some biotic factors, such as sex, age, exposure time and site of exposure. Significant correlations between DGT-measured concentrations of metals (e.g., Cd and Pb) and metal contents in the tissues of the transplanted bivalve Mytilus galloprovincialis have been found; compared to the bivalves, the DGT technique appeared more efficient at discriminating differences in contamination between sites. Consistently, the levels of Co and Mn in the transplanted bivalve Dreissena polymorpha exposed to river water affected by dredging activity in France increased with the labile concentrations of metals measured by DGT. Organisms and the environment affect each other. By definition, bioturbation is the reworking of soils and sediments by organisms, including burrowing and the ingestion and defecation of sediment grains. Bioturbation activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. With the development of 2D DGT, high-resolution


(sub-millimeter to millimeter) chemical information can be acquired. This powerful tool has great potential for studying the mobility of contaminants in highly heterogeneous sediments and across the SWI, and such information from DGT measurements can be linked with bioturbation activity in sediments. High fluxes of labile P were observed at depths where the bioturbation activity of tubificid worms was pronounced. The lability of metals (Cd, Ni, Pb, and Zn) measured by DGT in waters and sediments from coastal areas in Australia was higher when the samples were exposed to a high-bioturbation treatment (with Tellina deltoidalis and the amphipod Victoriopisa australiensis) than in the control (no bioturbation) and low-bioturbation (T. deltoidalis only) treatments. In contrast, where bioturbation caused oxygen input into anaerobic sediments, the lability and bioavailability of P and As captured by DGT decreased; in the presence of larval activity, an inhibitory effect on the formation of labile P, As, and CH3Hg was identified from DGT measurements. Much work has coupled the DGT technique with bioassay methods to study the bioavailability of contaminants in waters and sediments, with much less work in soil settings. A very recent study set out to investigate the biological responses of the earthworm Eisenia fetida to soil Cd, based on metal bioavailability measurements including DGT. E. fetida was cultured for 14 days in three selected Chinese soils amended with different levels of Cd (0.1–40 mg kg⁻¹). Biological responses of the antioxidant system were more highly correlated with Cd concentrations evaluated using bioavailability-based methods than with total concentrations. Based on these biological responses, Cd concentrations measured using DGT provided a narrow range of lowest-observed-effect concentrations, indicating the potential use of DGT measurements in soil quality management and in setting soil remediation standards. Integrative approaches coupling bioassays and DGT measurements have proven effective for assessing the bioavailability of contaminants in the environment. Based on existing knowledge, however, it would be unconvincing, and even wrong, to conclude that DGT provides a generally applicable tool for assessing the responses of fauna in sediments and soils. As mentioned earlier, DGT is expected to correlate well with the bioavailable fractions of contaminants when diffusion-limited conditions prevail, but on other occasions DGT may not be a good predictor of contaminant bioavailability. DGT does not assess the bioavailability of contaminants arising from dietary exposure (e.g., sediment ingestion). The highly heterogeneous nature of soils and sediments and the presence of diverse compounds make the prediction of bioavailability using DGT even more difficult. Moreover, the uptake of and response to contaminants by organisms are situational and species-specific; the labile fraction that is bioavailable to one species may not be available to another. Therefore, the precise representation of all mechanisms and processes involved in the uptake of contaminants by aquatic and terrestrial fauna is a challenging task for any single technique. DGT is no exception, but it does mimic some key processes that may be dominant in some situations, especially in dynamic environments.

Concluding Remarks

In the last two decades, numerous studies have shown that DGT is an effective and simple tool for assessing the potential bioavailability of contaminants in waters, soils and sediments. In principle, if uptake of a contaminant by organisms is controlled by diffusion or supply processes in the soil or sediment, DGT gives good predictions. Conversely, comparisons among the relationships of biota uptake versus DGT measurements and other assessments (e.g., chemical extractions, FIAM and BLM models) provide key evidence for deducing the biouptake mechanism, that is, whether uptake is limited by supply or by the organism itself. As the relationships between contaminant accumulated by DGT and by organisms depend on the organism species, the medium studied, and the contaminant and its chemical form, DGT is not an infallible predictive tool. Until now, most studies have focused on inorganic contaminants, with the relationships between plant uptake and DGT measurements being varied but generally positive. The relationships between inorganic contaminant uptake by fauna and accumulation in DGT are somewhat complex, especially if dietary exposure (e.g., sediment ingestion) also makes a contribution. Nevertheless, DGT does mimic some key processes that may be dominant in some situations, especially in dynamic environments. Coupling the dynamic DGT technique with equilibrium-based chemical methods/models and bioassays will provide a holistic view of contaminant bioavailability in environmental media. DGT has also been adopted to study the toxicity of contaminants to organisms. Although studies in this area are still limited, the accessible datasets indicate that DGT measurements may provide a narrower range of toxicity thresholds than total concentrations and free ion activities. This sheds light on evaluating the health of a watershed, soil or sediment using DGT, but further study embracing much larger datasets is needed. In the last several years, great progress has been made in using DGT for sub-millimeter, high-resolution imaging of contaminants in heterogeneous media, especially near the SWI and plant roots (the rhizosphere). This application well demonstrates the merit, and to some extent the irreplaceability, of DGT. The combination of DGT with other diffusion-based 2D methods (e.g., planar optodes, soil zymography) for imaging various key solutes related to contaminant mobility and cycling at the micro-scale is appealing and worthy of further investigation. A large proportion of bioavailability studies using DGT have been performed in laboratory pot or mesocosm experiments. To promote knowledge transfer from science to environmental management and to improve regulatory work, more should be done to connect chemical analysis by DGT to biological responses, especially under field conditions, so that the boundaries of applicability can be established and practical standard protocols proposed. In principle, DGT can predict the uptake of organic contaminants by organisms under diffusion limitation. Such investigations remain in their infancy but are worth pursuing, as recent pioneering studies have demonstrated. Meanwhile, comparisons between DGT and other diffusion-based passive samplers for bioavailability studies will attract attention from the wider scientific community.


Acknowledgments

This work was funded by the National Natural Science Foundation of China (41807353) and the China Postdoctoral Science Foundation (2016M601770 and 2017T100350).

Further Reading

Davison, W., 2016. Diffusive gradients in thin-films for environmental measurements. Cambridge University Press, Cambridge.
Davison, W., Fones, G.R., Grime, G.W., 1997. Dissolved metals in surface sediment and a microbial mat at 100-μm resolution. Nature 387, 885–888.
Davison, W., Zhang, H., 1994. In situ speciation measurements of trace components in natural waters using thin-film gels. Nature 367, 546–548.
Davison, W., Zhang, H., 2012. Progress in understanding the use of diffusive gradients in thin films (DGT): Back to basics. Environmental Chemistry 9, 1–13.
Degryse, F., Smolders, E., Zhang, H., Davison, W., 2009. Predicting availability of mineral elements to plants with the DGT technique: A review of experimental data and interpretation by modeling. Environmental Chemistry 6, 198–218.
Eismann, C.E., Menegário, A.A., Gemeiner, H., Williams, P.N., 2019. Predicting trace metal exposure in aquatic ecosystems: Evaluating DGT as a biomonitoring tool. Exposure and Health. https://doi.org/10.1007/s12403-018-0280-3.
Galceran, J., Puy, J., 2015. Interpretation of diffusion gradients in thin films (DGT) measurements: A systematic approach. Environmental Chemistry 12, 112–122.
Guan, D.X., Li, Y.Q., Yu, N.Y., et al., 2018. In situ measurement of perfluoroalkyl substances in aquatic systems using diffusive gradients in thin-films technique. Water Research 144, 162–171.
Guan, D.X., Williams, P.N., Luo, J., et al., 2015. Novel precipitated zirconia-based DGT technique for high-resolution imaging of oxyanions in waters and sediments. Environmental Science & Technology 49, 3653–3661.
Menegário, A.A., Yabuki, L.N.M., Luko, K.S., Williams, P.N., Blackburn, D.M., 2017. Use of diffusive gradient in thin films for in situ measurements: A review on the progress in chemical fractionation, speciation and bioavailability of metals in waters. Analytica Chimica Acta 983, 54–66.
Oburger, E., Schmidt, H., 2016. New methods to unravel rhizosphere processes. Trends in Plant Science 21, 243–255.
Santner, J., Larsen, M., Kreuzeder, A., Glud, R.N., 2015. Two decades of chemical imaging of solutes in sediments and soils: A review. Analytica Chimica Acta 878, 9–42.
Stockdale, A., Davison, W., Zhang, H., 2009. Micro-scale biogeochemical heterogeneity in sediments: A review of available technology and observed evidence. Earth-Science Reviews 92, 81–97.
Zhang, H., Davison, W., 1995. Performance characteristics of diffusion gradients in thin films for the in situ measurement of trace metals in aqueous solution. Analytical Chemistry 67, 3391–3400.
Zhang, H., Davison, W., 2015. Use of diffusive gradients in thin-films for studies of chemical speciation and bioavailability. Environmental Chemistry 12, 85–101.

Dioxins☆
Prashant S Kulkarni, Defence Institute of Advanced Technology (DU), Pune, India
© 2019 Elsevier B.V. All rights reserved.

Introduction

Dioxins are a class of structurally and chemically related polyhalogenated aromatic hydrocarbons that mainly includes the polychlorinated dibenzo-p-dioxins (PCDDs, or dioxins), the dibenzofurans (PCDFs, or furans), and the "dioxin-like" biphenyls (DL-PCBs). They are nonpolar, water-insoluble, lipophilic, and stable chemicals. Dioxins are unintentional byproducts of several chemical processes and usually occur as mixtures of congeners. Their presence in incinerator fly ash samples was first reported in 1977, and they had come to public attention in 1976, when an explosion at the ICMESA factory in Seveso, Italy, deposited these chemicals over an area of 2.8 km2. The chlorinated dibenzo-p-dioxins and dibenzofurans are structurally similar tricyclic aromatic compounds. There are 75 possible positional congeners of PCDDs and 135 possible PCDF congeners. Only seven of the 75 possible PCDD congeners, and 10 of the 135 possible PCDF congeners, those with chlorine substitution in the 2,3,7,8 positions, have dioxin-like toxicity. Likewise, there are 209 possible PCB congeners, only 12 of which have dioxin-like toxicity. These dioxin-like PCB congeners have four or more chlorine atoms and are sometimes referred to as coplanar PCBs, since their rings can rotate into the same plane. The physical and chemical properties of each congener vary according to the degree and position of chlorine substitution. Fig. 1 and Table 1 depict the basic structural formulas of PCDDs, PCDFs, and PCBs together with the numbering convention for the positions on the benzene rings where chlorine or other halogen atoms can be substituted. The isomer 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) has been called the most toxic synthetic compound known to humankind.

Toxicity and Risk Assessment of Dioxins

Because general population exposure to dioxins occurs as exposure to a mixture of different congeners, effects due to specific individual congeners are difficult to determine. The effects are, however, mediated through the interaction of dioxins with the aryl hydrocarbon (Ah) receptor present inside the cell. The Ah receptor is a member of the basic helix-loop-helix (bHLH) family of transcription factors (proteins that bind to specific DNA sequences and thereby control the transfer, or transcription, of genetic information from DNA to RNA). Ah receptor ligands have generally been classified into two categories, synthetic and naturally occurring. The first ligands to be discovered were synthetic: members of the halogenated aromatic hydrocarbons (dibenzo-p-dioxins, dibenzofurans, and biphenyls) and polycyclic aromatic hydrocarbons (3-methylcholanthrene, benzopyrene, benzanthracenes, and benzoflavones). Naturally occurring compounds identified as Ah receptor ligands include derivatives of tryptophan, such as indigo and indirubin; tetrapyrroles, such as bilirubin; and the arachidonic acid metabolites lipoxin A4 and prostaglandin G. The Ah receptor is prone to binding halogenated aromatic hydrocarbons, including dioxins and polychlorinated biphenyls (PCBs), which can cause changes in gene expression that affect cell growth, form and function. The essential steps in the mechanism of dioxin action begin with binding of the ligand (e.g., TCDD) to the Ah receptor in the cytoplasm, where the receptor is associated with two molecules of the chaperone heat-shock protein Hsp90 and the Ah receptor-interacting protein. Dioxins require three or four lateral chlorine atoms on the dibenzo-p-dioxin or dibenzofuran backbone to bind this receptor. Binding results in the release of Hsp90, followed by translocation of the receptor to the nucleus. Association with the Ah receptor nuclear translocator (Arnt) protein in the nucleus turns the receptor complex into its ligand-activated form. This heterodimer complex then binds to specific DNA sequences (dioxin responsive elements) adjacent to the CYP1A1 gene, leading to DNA bending, chromatin disruption, increased promoter accessibility, and increased rates of transcription initiation of the CYP1A1 gene, with subsequent accumulation of cytochrome P450 1A1-specific mRNA. Fig. 2 depicts a simple schematic model of the action of dioxins inside the cell. Dioxins can thus induce a broad spectrum of biological responses, including induction of gene expression for cytochromes P450 (e.g., CYP1A1 and CYP1A2), disruption of normal hormone signaling pathways, reproductive and developmental defects, wasting syndrome and immune suppression, liver damage, and cancer; these responses depend on species, strain, age and sex. In brief, inappropriate modulation of gene expression represents the initial step in a series of biochemical, cellular and tissue changes that result in the observed toxicity. The variation in toxicity among the dioxins, and in their binding affinities for the AhR, is about 10,000-fold, with TCDD being the most potent congener.

☆ Change History: April 2019. Prashant S. Kulkarni prepared the update. (1) The section "Sampling and Analysis of Dioxins" has been added to the chapter. (2) Sections updated: Introduction, Sampling and Analysis of Dioxins, Further Reading. This is an update of P.S. Kulkarni, J.G. Crespo, C.A.M. Afonso, Dioxins, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 83–92.



Fig. 1 Chemical structures of (A) PCDDs, (B) PCDFs, and (C) PCBs, showing the ring-position numbering at which chlorine or other halogen atoms can substitute.

Table 1  Numbers of possible PCDD, PCDF and PCB congeners for each degree of halogen substitution

Halogen substitution   PCDD   PCDF   PCB
Mono                      2      4     3
Di                       10     16    12
Tri                      14     28    24
Tetra                    22     38    42
Penta                    14     28    46
Hexa                     10     16    42
Hepta                     2      4    24
Octa                      1      1    12
Nona                      0      0     3
Deca                      0      0     1
Total                    75    135   209

Fig. 2 A schematic model of the action of dioxins in the cell: ligand (TCDD) binding to the cytoplasmic AhR-Hsp90 complex, transformation and translocation to the nucleus, formation of the TCDD-AhR-Arnt complex, and subsequent CYP1A1 transcription, protein synthesis and cell proliferation.

As the toxicity of dioxins is mediated through the aryl hydrocarbon receptor, a toxic equivalency factor (TEF) is used, assuming that the effects are additive and act via a common mechanism. The toxicity of dioxins is expressed as toxic equivalent quantities (TEQs), with the most toxic congener, TCDD, rated as 1.0 and the less toxic congeners as fractions of this. The TEF system was initiated for dioxins and furans in 1988 under the NATO/CCMS scheme and was adopted internationally; these values are termed International TEFs (I-TEFs). Many of the other PCDDs and PCDFs and certain PCBs are less potent than TCDD, but vary considerably in their respective concentrations. Each congener can be assigned a potency value relative to TCDD (its TEF). When a TEF is multiplied by the congener's concentration, a toxic equivalency (TEQ) value is obtained. In the early 1990s, WHO added TEFs for PCBs.
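As a minimal illustration of the TEQ arithmetic, the sketch below sums each congener's concentration multiplied by its TEF. The TEFs are taken from the WHO 2005 column of Table 2; the sample concentrations are hypothetical placeholders, not measured data:

```python
# Minimal TEQ calculation sketch: TEQ = sum(TEF_i * C_i) over all congeners.
# TEFs are WHO 2005 values (see Table 2); concentrations are hypothetical (pg/g).
WHO_2005_TEF = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,4,7,8-PeCDF": 0.3,
    "OCDD": 0.0003,
    "3,3',4,4',5-PeCB": 0.1,
}

sample_pg_per_g = {            # hypothetical measured congener concentrations
    "2,3,7,8-TCDD": 0.5,
    "1,2,3,7,8-PeCDD": 1.2,
    "2,3,4,7,8-PeCDF": 4.0,
    "OCDD": 250.0,
    "3,3',4,4',5-PeCB": 10.0,
}

teq = sum(WHO_2005_TEF[c] * conc for c, conc in sample_pg_per_g.items())
print(f"Total TEQ = {teq:.3f} pg WHO-TEQ/g")   # 0.5 + 1.2 + 1.2 + 0.075 + 1.0 = 3.975
```

In this example the OCDD contribution to the TEQ is tiny despite its dominant mass concentration, which is exactly the behavior the TEF weighting is designed to capture.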

Table 2  Toxic equivalency factors (TEFs) for dioxins

Congener                 TEF, WHO 1998   TEF, WHO 2005
PCDD
2,3,7,8-TCDD             1               1
1,2,3,7,8-PeCDD          1               1
1,2,3,4,7,8-HxCDD        0.1             0.1
1,2,3,7,8,9-HxCDD        0.1             0.1
1,2,3,6,7,8-HxCDD        0.1             0.1
1,2,3,4,6,7,8-HpCDD      0.01            0.01
OCDD                     0.0001          0.0003
PCDF
2,3,7,8-TCDF             0.1             0.1
2,3,4,7,8-PeCDF          0.5             0.3
1,2,3,7,8-PeCDF          0.05            0.03
1,2,3,4,7,8-HxCDF        0.1             0.1
1,2,3,7,8,9-HxCDF        0.1             0.1
1,2,3,6,7,8-HxCDF        0.1             0.1
2,3,4,6,7,8-HxCDF        0.1             0.1
1,2,3,4,6,7,8-HpCDF      0.01            0.01
1,2,3,4,7,8,9-HpCDF      0.01            0.01
OCDF                     0.0001          0.0003
PCB
3,4,4′,5-TCB             0.0001          0.0003
3,3′,4,4′-TCB            0.0001          0.0001
3,3′,4,4′,5-PeCB         0.1             0.1
3,3′,4,4′,5,5′-HxCB      0.01            0.03
2,3,3′,4,4′-PeCB         0.0001          0.00003
2,3,4,4′,5-PeCB          0.0005          0.00003
2,3′,4,4′,5-PeCB         0.0001          0.00003
2′,3,4,4′,5-PeCB         0.0001          0.00003
2,3,3′,4,4′,5-HxCB       0.0005          0.00003
2,3,3′,4,4′,5′-HxCB      0.0005          0.00003
2,3′,4,4′,5,5′-HxCB      0.00001         0.00003
2,3,3′,4,4′,5,5′-HpCB    0.0001          0.00003

The coplanar polychlorinated biphenyls have lower potency, but their concentrations are often much higher than those of TCDD, so their relative contribution to the total TEQ is potentially sizable. The seven dioxin congeners, 10 furan congeners (all chlorinated in at least the 2,3,7,8 positions) and the 12 PCBs that exhibit "dioxin-like activity" have been assigned TEFs (see Table 2). Thus, the toxic contributions of the PCDDs, PCDFs and certain PCBs can be compared. In 1998 and 2005, WHO expert meetings derived consensus TEFs for both human and wildlife risk assessment. The most toxic dioxin isomer, TCDD, has been detected in soils and sediments, and in the adipose tissue of livestock and fish in residential and industrial areas. People are exposed primarily through foods contaminated with dioxins as a result of the accumulation of these substances in the food chain, particularly high-fat foods such as dairy products, eggs, animal fats, and some fish. Several adverse health effects have been associated with dioxins, including soft tissue sarcomas, lymphomas, skin lesions (chloracne), stomach cancer, and cardiovascular, immune, developmental, and neurological effects. TCDD has an acute LD50 (the dose that kills 50% of exposed animals) ranging from about 10 to more than 9600 μg/kg in rats, depending on strain. Apart from the toxicity of TCDD and its presence in the environment, many researchers have shown the compound to be highly resistant to biodegradation; part of this resistance may be due to its poor bioavailability. The physical properties controlling the environmental transport of TCDD are its water solubility (19.3 ng/L), octanol–water partition coefficient (1.4 × 10⁶), vapor pressure (7.4 × 10⁻¹⁰ Torr at 25°C), and molecular weight (321.97 g/mol). With its low vapor pressure and aqueous solubility, strong sorption to soils, and hydrophobicity, the mobility of TCDD in the soil environment is low. TCDD in sediment and soil tends to degrade slowly, both biologically and chemically, and has a strong potential to bioaccumulate within ecosystems. A number of countries and organizations have studied various approaches to the health risk assessment of dioxins as carcinogenic promoters and have defined tolerable daily intakes (TDIs) based on the No Observed Adverse Effect Level (NOAEL) derived from animal studies. In assessing the risk of 2,3,7,8-TCDD, the USEPA arrived at a virtually safe dose of 6 fg/kg body weight per day. More recent health risk assessments, carried out by the Health Council of the Netherlands in 1996, WHO in 1998, and the European Commission (Scientific Committee on Food) in 2000, are based on developmental effects initiated during gestation and/or lactation. These assessments rest on the developmental effects and/or carcinogenicity of 2,3,7,8-TCDD and provide protection from other toxic effects as well. In response to these risk evaluations, the member states of the European Union have set an emission limit of 0.1 ng I-TEQ/m3, primarily for waste incineration plants, and a tolerable daily intake of 1–4 pg I-TEQ/kg body weight per day.
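To make the TDI comparison concrete, the following sketch estimates a daily dietary intake in pg I-TEQ per kg of body weight and compares it with the 1–4 pg I-TEQ/kg bw/day range quoted above. The food concentrations and consumption rates are hypothetical placeholders, not survey data:

```python
# Hypothetical daily-intake estimate against the EU TDI range of 1-4 pg I-TEQ/kg bw/day.
foods = {
    # food: (dioxin content in pg I-TEQ per g fresh weight, daily consumption in g)
    "dairy": (0.02, 250),
    "fish":  (0.50, 50),
    "eggs":  (0.03, 40),
    "meat":  (0.04, 150),
}
body_weight_kg = 70.0

intake_pg = sum(conc * grams for conc, grams in foods.values())
intake_per_kg = intake_pg / body_weight_kg
print(f"Estimated intake: {intake_per_kg:.2f} pg I-TEQ/kg bw/day")
print("Within TDI range (<= 4)?", intake_per_kg <= 4.0)
```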


Sources of Dioxins

The presence of dioxins and dioxin-like compounds in the environment arises principally from anthropogenic sources. They are released to the environment in a variety of ways and in varying quantities depending on the source. The major identified sources of environmental release can be grouped into four categories: incineration, combustion, industrial, and reservoir processes.

Incineration Sources

Municipal Solid Waste Incinerators: Dioxins are largely produced by municipal solid waste incineration processes. The emission of dioxins into the environment can be explained mainly by two principal surface-catalyzed processes: (i) formation from precursors and (ii) formation by de novo synthesis. Several past studies demonstrated the presence of significant quantities of dioxins and dioxin precursors in municipal solid waste, at around 50 ng I-TEQ/kg.
Hospital Waste Incinerators: Hospital waste includes human organs, bandages, blood tubes, test tubes, needles, syringes, tissue cell cultures, and other plastic materials. Hospital waste incinerators are numerous and burn waste of high chlorine content, and are hence an important source of dioxin emissions.
Hazardous Waste Incinerators: The harmful products of industrial chemical processes are called hazardous waste. Such waste can be explosive, highly flammable, infectious, corrosive, mutagenic, irritant, toxic, or carcinogenic. The practice of incinerating hazardous waste separately began several years ago.
Sewage Sludge Incinerators: Wastewater treatment generates a solid residue with high organic and toxic metal contents called sewage sludge. The limitations facing landfilling and recycling and the planned ban on sea disposal have led to the use of incineration for the disposal of sewage sludge.

Combustion Sources

Cement Kilns: The switch to burning hazardous waste as fuel for cement kilns has created problems for individuals and organizations. About 16% of these facilities burn hazardous waste as an auxiliary fuel; limited data suggest that dioxin levels in the clinker dust and stack emissions of these kilns may be significantly higher than in kilns that do not burn hazardous waste.
Wood Burning: A number of studies have found dioxins in the emissions and ash/soot from wood fires in nonindustrial situations. According to the European Emission Inventory, wood combustion is at present one of the most important sources of dioxin emissions to air; the dioxin emission from wood burning is reported to be about 945 g I-TEQ/year.
Diesel Vehicles: Researchers from Sweden and Norway have studied dioxin emissions from diesel vehicles. As such studies depend on the fuel used in a particular country, more studies are required to reach a conclusive estimate.
Crematoria: Cremation can provide a ready source of organic material and chlorine, and crematoria are hence a possible source of dioxin emissions. Inventory estimates rate this source at 0.3% of European output and 0.24% of US output.
Coal-Fired Utilities: Although their dioxin emissions are much lower than those from wood burning, coal-fired utilities are numerous and large, and their high stacks mean that they can affect very large areas.
Uncontrolled Fires in Landfill Sites: Landfill fires can be a significant source of dioxin emissions into air, water, and soil. The degree of risk depends in part on the contents buried in the landfill, its geography, and the nature of the fire.

Industrial Sources

Pulp and Paper Mills: The manufacture of bleached pulp and paper has in the past resulted in dioxin releases to water, land, and paper products. These compounds can be formed through the chlorination of naturally occurring phenolic compounds, such as those present in wood pulp. The waste generated from one pulp mill in China was reported to contain dioxins at a concentration of 300 pg I-TEQ/L.
Metals Industry: Metallurgical processes such as high-temperature steel production, smelting operations, and scrap metal recovery furnaces are typical sources of dioxins. Processes in the primary metals industry, such as the sintering of iron ore, have also been identified as potential sources. In several countries the annual release of dioxins from this sector is estimated at 500–4000 g I-TEQ.
Chemical Manufacturing: Dioxins can be formed as byproducts of the manufacture of chlorinated compounds such as chlorinated phenols, phenoxy herbicides, chlorinated aliphatic compounds, chlorinated catalysts, and halogenated diphenyl ethers. Although the manufacture of many chlorinated phenolic intermediates and products was terminated in the late 1970s in the United States, production continued around the world until 1990, and the continued, limited use and disposal of these compounds can still result in the release of dioxins into the environment.


Reservoir Sources

The persistent and hydrophobic nature of these compounds causes them to accumulate in soils, sediments, landfill sites, vegetation and organic matter. The dioxin compounds in these "reservoirs" can be redistributed and recirculated in the environment by dust or sediment resuspension and transport. The major reservoir sources include the following.
Biological processes: The action of microorganisms on chlorinated phenolic compounds results in the formation of dioxins under certain environmental conditions.
Photochemical processes: Dioxins such as OCDD (1,2,3,4,6,7,8,9-octachlorodibenzo-p-dioxin) and HpCDD (1,2,3,4,6,7,8-heptachlorodibenzo-p-dioxin) are formed by photolytic radical reactions of pentachlorophenol.
Accidental sources: The incidents at Seveso, Italy, and Yusho, Japan, can be considered accidental releases of dioxins into the atmosphere. Forest fires and volcanoes also come under this category.
Miscellaneous sources: These include the formation of dioxins in FBC (fluidized bed combustion) boilers, thermal oxygen cutting of scrap metal at demolition sites, power generation, PVC in house fires, Kraft liquor boilers, laboratory waste, drum and barrel reclaimers, tire combustors, carbon reactivation furnaces, scrap electric wire recovery facilities, etc.

Sampling and Analysis of Dioxins

Extraction and Clean-Up Techniques

Several methods for extracting and determining dioxins in different matrices have been studied.
Air samples: For extraction of dioxins from air samples, it has been suggested to pass the sample through a 1:1 mixture of hexadecane and dichloromethane. To eliminate any undissolved particulate matter, the resulting solution is passed through a multilayer silica gel column and then through an activated carbon column. The adsorbed dioxins can then be eluted using a dichloromethane–hexadecane mixture, with all samples stored cold prior to analysis.
Water samples: For water samples, it has been proposed to filter the samples and pass them through a column containing a hydrophobic resin, such as XAD-2, which traps the dissolved dioxins. After drying, a 1:1 solution of dichloromethane and hexadecane elutes the dioxins, and the final solution is collected and stored cold prior to analysis.
Soil and dust samples: Solvent extraction is used for soil and dust samples; large solids, such as pebbles and twigs, must first be removed manually. The soil samples are then placed in a sieve and shaken to eliminate larger particulate matter, and mixed with a 1:1 mixture of acetone and hexadecane; the resulting mixture is filtered and stored cold until analysis.
Sludge samples: Sludge extracts are transferred to hexane and acid-treated with concentrated sulfuric acid. After evaporation, the extracts are scrubbed in a multilayered silica gel column, followed by a basic alumina column, and finally a PX-21 active carbon column.
Contaminated soil samples: The analysis of dioxin-contaminated soil is particularly demanding, and advanced soil mineralogy methods have to be used. Extraction can be performed by various methods; for example, a pressurized liquid extraction strategy with sulfuric acid-impregnated silica has been used. The measured absorbance is inversely proportional to the amount of dioxins present, which is what makes the technique workable, and the method requires calculation of the toxicity equivalence of the test according to the provisions of the US Environmental Protection Agency. The rapid extraction technique has an extraction efficiency of ≥81%, which indicates that dioxins can be extracted even when the air-drying step is omitted.
Food samples: Other reported methods exist for food, blood plasma, and animal tissues, in which extraction and clean-up are performed in several steps: an initial extraction with a C18-bonded silica cartridge, followed by clean-up with a dual cartridge composed of a bonded benzenesulfonic acid cartridge in series with a silica cartridge, and a final step incorporating a Florisil cartridge. Other studies have reported the analysis of milk samples that were Soxhlet-extracted and then cleaned up automatically using a Power-Prep system and/or by gel permeation chromatography, alumina clean-up and porous graphitized carbon chromatography. In general, the extraction and clean-up procedures vary with the matrix.

Quantification and Detection Techniques

Chemical analysis: Traditionally, toxicity levels of PCDDs/Fs have been determined by high-resolution gas chromatography/high-resolution mass spectrometry (HRGC/HRMS) analysis of congener concentrations, reported as TEQs. The total TEQ of a mixture is the sum of the TEF of each individual congener multiplied by its concentration. Common parameters for GC/MS analysis include an injection temperature of 300°C, with the column temperature ramped from 120°C to 330°C. Identification of PCDDs/Fs is typically achieved by isotopic dilution, whereby at least two ions per congener are identified. Techniques incorporating gas chromatography coupled to tandem mass spectrometry (GC-QITMS/MS) provide excellent sensitivity and linearity. In the case of low-resolution GC/MS, two mass-selected ions from the molecular cluster are monitored in single ion monitoring (SIM) mode. In both techniques, congeners are identified on the basis of isotopic ratio and retention time. Owing to their widespread presence in the environment, PCDDs and PCDFs are now routinely measured by the environmental agencies of several countries around the world. In 2000 the European Commission (EC) began to propose legislation to regulate Maximum Residual Levels (MLs) for PCDDs, PCDFs and DL-PCBs in foodstuffs and feed products, as well as guidelines for analytical methods to support and implement continuous monitoring of food and feed. Recently, the European Regulations laying down methods of sampling and analysis for the European Union official control of levels of PCDDs/Fs in food and feed were amended (Regulation Nos. 589/2014 and 709/2014); a major update is the recognition of gas chromatography triple quadrupole mass spectrometry (GC-QQQ-MS/MS) as a confirmatory tool for checking compliance with maximum levels (MLs). Several methods have been reported by the United States Environmental Protection Agency (USEPA) for analyzing dioxins and furans in environmental compartments. A study performed in the United States computed the emission index using the EPA database, with the annual emission reported in ng TEQ/yr, using HRGC/HRMS operated in positive electron ionization mode with a mass resolution above 10,000, following the guidelines set by the EPA. In addition, an interesting publication reported quantitative measurements of dioxins and furans using only infrared absorption laser spectroscopy in a direct absorption mode, making it the first study to report on the potential use of IR spectroscopy. The nature of the sample determines the most appropriate method: sludge, liquid, air, and soil samples are best handled by different methods according to their physical nature, although the findings are conventionally presented in a common, readily understandable format once the results have been interpreted.
Bioassay: The quantitative chemical analysis of dioxins highlighted above requires sophisticated methods, and the cost varies with the type of sample. The analysis of PCDD/Fs can therefore also be carried out by a rapid, cheap, and reliable complementary technique: the bioassay. This is a bioanalytical tool based on the ability of key biological molecules (e.g., antibodies, receptors, enzymes) to recognize a unique structural property of dioxin-like compounds, or on the ability of cells or organisms to show a specific response to them. Many bioanalytical methods are in actual use, for example, chemical-activated luciferase gene expression (CALUX), ethoxyresorufin-O-deethylase (EROD), and enzyme-linked immunosorbent assay (ELISA). CALUX and EROD are based on the signaling routes of dioxin toxicity, and ELISA is based on enzyme immunoassay. The CALUX bioassay is the most sensitive and reliable of these screening methods and has been successfully applied to determine dioxin TEQs in various environmental and biological matrices, for example, fish oil and feed, human serum, fly ash, soil, sewage sludge and human breast milk. Such screening methods allow more analyses at lower cost; in the case of a positive screening test, the results can be confirmed by more complex chemical analysis.
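The isotopic dilution identification mentioned under chemical analysis above can be made concrete with a short sketch: each native congener's concentration follows from the ratio of its signal to that of a 13C-labeled analogue spiked at a known amount. The response factor and peak areas below are hypothetical, and the function is a generic illustration of the calculation rather than a reproduction of any specific USEPA method:

```python
# Generic isotope-dilution quantification sketch (hypothetical numbers).
# C_native = (A_native / A_label) * spike / (RRF * sample mass)
def isotope_dilution(a_native, a_label, spike_pg, rrf, sample_g):
    """Concentration of a native congener (pg/g) from HRGC/HRMS peak areas.

    a_native -- peak area of the native congener
    a_label  -- peak area of the 13C-labeled internal standard
    spike_pg -- amount of labeled standard spiked before extraction (pg)
    rrf      -- relative response factor from calibration
    sample_g -- sample mass (g)
    """
    return (a_native / a_label) * spike_pg / (rrf * sample_g)

# Example: 2,3,7,8-TCDD quantified against a 13C12-labeled TCDD standard
conc = isotope_dilution(a_native=12500, a_label=98000, spike_pg=2000,
                        rrf=1.05, sample_g=10.0)
print(f"2,3,7,8-TCDD: {conc:.1f} pg/g")   # ~24.3 pg/g
```

Because the labeled standard experiences the same extraction and clean-up losses as the native congener, the ratio-based calculation corrects for recovery automatically, which is the main attraction of the approach.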

Methods for Treatment of Dioxins

Dioxins enter the environment mainly through flue gases originating from incineration and combustion processes, through the fly ash generated by those processes, and through soil contaminated by industrial and reservoir sources. A brief state-of-the-art survey of dioxin treatment methods is given below.

Treatment of Flue Gases

Incineration and combustion processes release large amounts of flue gases, which are one of the bulk sources of dioxin emissions to the environment. The formation of dioxins in the flue gases of an incinerator system occurs via precursors and de novo synthesis at temperatures of 200–400°C. A schematic diagram of a typical incinerator system is shown in Fig. 3. The concentration of dioxins in flue gases varies from 1 to 500 ng I-TEQ/m3. The following methods have been adopted to reduce dioxin emissions.
Particulate matter collection: Particle-bound dioxins can be eliminated with a dust collector. At temperatures below 200°C, the collection of particle-bound dioxins outweighs de novo synthesis. Removal of particle-bound dioxins from the waste gas of an iron ore sintering plant with a cloth filter yielded a dioxin reduction of up to 73%. Fabric filters and electrostatic precipitators (ESPs) have good efficiency for the removal of particle-bound dioxins and are currently used as dust collectors in incineration processes. With a combined system, dioxin removal rates of 90%–92% can be achieved.
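Where several gas-cleaning stages operate in series, as in the train of Fig. 3, the overall removal can be approximated by combining the single-stage efficiencies, assuming the stages act independently. The stage efficiencies below are hypothetical, chosen only to land in the 90%-plus range quoted above:

```python
# Overall removal across gas-cleaning stages in series: eta = 1 - prod(1 - eta_i).
# Stage efficiencies are hypothetical illustrations, not measured values.
stages = {
    "electrostatic precipitator": 0.73,   # cf. the cloth-filter figure quoted above
    "activated-carbon injection": 0.65,
}

penetration = 1.0
for name, eta in stages.items():
    penetration *= (1.0 - eta)   # fraction of dioxins passing each stage

overall = 1.0 - penetration
print(f"Overall dioxin removal: {overall:.1%}")   # 90.6% for these example stages
```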

Fig. 3 A schematic diagram of the incinerator system: (1) bunker waste, (2) boiler, (3) electrostatic precipitator, (4) spray absorber or dry sorbent injection, (5) bag filter or fabric filter, (6) fly ash for treatment, (7) wet scrubber, (8) AC (activated carbon) unit, and (9) chimney. In the original diagram, gas temperatures fall from 1100–1200°C at the boiler to 55–60°C at the chimney.


Scrubbers or spray absorbers: Scrubbers followed by electrostatic precipitators have been used for many years in waste incinerators to reduce dioxin emissions. The absorbent (lime slurry) is atomized in a spray tower; the gas is absorbed first by the liquid phase and then by the solid phase. The lime slurry mixes with the combustion gases within the reactor, and the neutralizing capacity of the lime reduces the proportion of acid gas constituents (e.g., HCl and SO2) in the reactor. It has also been observed that with the addition of coke made from bituminous coal, in quantities of up to 500 mg/m3, a much higher dioxin collection efficiency of ≥90% can be achieved.
Sorbent or flow injection process: The flow injection process is generally based on the injection of finely grained coke, derived from anthracite or bituminous coal and mixed with limestone, lime or inert material, into the waste gas flow at a temperature of about 120°C. The material is suspended homogeneously in the flow and subsequently settles in a layer on the surface of the cloth filter. The inert material, which is added in an amount of more than 80%, serves to take up the heat developed by the exothermic adsorption reactions and also helps to prevent ignition of the coke. The use of natural and synthetic zeolites has also been found to be a good alternative. Flow injection processes are used in a number of waste incineration plants in Europe and the United States for the collection of dioxins, HCl, HF, and SO2. Because of the necessarily high amounts of inert material, the residues left by the process are considerable. With this process, a dioxin removal efficiency of 99% can be achieved.
Fluidized-bed process with adsorbent recycling: In this process, the flue gas passes through a grate from the bottom and forms a fluidized bed of bituminous coal coke and inert material at a temperature of about 100–120°C. Limestone or lime can be used as the inert material, and the proportion of coke can be higher than in the flow injection process. The adsorbent is separated from the flue gas in a dust collector and recirculated to the fluidized bed. The adsorbent can usually be recycled many times, so other acid components, such as HCl, HF, and SO2, can also be collected. The advantages of the fluidized-bed process lie in the long residence times of the adsorbent and in better utilization of the sorbent, owing to the more favorable mass and heat transfer conditions and the longer solids retention time in the system.
Fixed-bed or moving-bed processes: These processes use the same adsorbent as the fluidized-bed process, but the coke moves slowly from top to bottom while the waste gas flows in the opposite direction. The activated coke takes up contaminants during its entire residence time in the reactor, which may be several thousand operating hours. The period during which an effective exchange of matter takes place is about 10 times longer in fixed-bed or moving-bed processes than in flow injection or fluidized-bed processes. The difference between the two is that in the fixed-bed process the bed of activated coke in the cross-flow adsorber is not moved while adsorption takes place, and the spent coke is withdrawn and replaced by new coke, whereas in moving-bed reactors the coke bed travels continuously. A very high dioxin separation efficiency of more than 99% can be achieved with the moving-bed process.
Catalytic decomposition of dioxins: The method of selective catalytic reduction used for NOx gases can also be applied to dioxin remediation. The available evidence shows that the catalysts used for selective reduction of NOx in flue gas suppressed the formation of dioxins by 85%, demonstrating that a single, effectively designed catalyst can remove both nitrogen oxides and dioxins. The catalysts are mostly composed of oxides of Ti, V, and W. Additionally, oxides of Pt and Au supported on silica-boria-alumina have been found to be effective for the destruction of dioxins at 200°C. The advantage of selective catalytic reduction (SCR) over the other methods is that it eliminates the complicated disposal problems of residual matter.
Electron irradiation processes: This is a new process for destroying dioxin compounds in flue gas. The method has the following features: (i) no possibility of secondary pollution, because the dioxins are directly decomposed, in contrast to recovery methods using a filter; (ii) no need for temperature control; and (iii) a very simple process, allowing easy installation in existing incinerators. It involves gas-phase degradation of dioxin molecules by OH radicals formed by the action of ionizing radiation on the bulk components of the gas. The benefits of this process are that the decomposition products are only organic acids and that energy consumption is low.

Treatment of Fly Ash

The incineration of hospital waste, hazardous waste, sewage sludge, and municipal solid waste produces solid residues known as fly ash, which contains dioxins and heavy metals. The dioxin concentration in fly ash varies from 100 to 5000 ng/kg. In many countries, environmental protection legislation classifies municipal solid waste incineration fly ash as hazardous material, and further treatment is required before it is disposed of in landfills. The following methods have been used for the destruction of dioxins in fly ash.

Thermal treatment: Thermal treatment is a process in which heat is applied to the waste in order to sanitize it. Its primary function is to convert the waste to a stable and usable end product and to reduce the amount requiring final disposal in landfills. Dioxins present in fly ash can be decomposed by thermal treatment under suitable conditions: destruction of more than 95% of the dioxins has been obtained using thermal treatment equipment such as electric ovens, coke-bed melting furnaces, rotary kilns with electric heaters, sintering in LPG-fired furnaces, and plasma melting furnaces.

Nonthermal plasma: The application of nonthermal plasma to the destruction of dioxins has several advantages over conventional control devices. It performs effectively and economically at very low concentrations, operates at ambient temperature, and requires little maintenance. Moreover, it requires no auxiliary fuel and eliminates both disposal problems and sensitivity to poisoning by sulfur- or halogen-containing compounds.


It has been observed that different isomer compounds show different removal efficiencies, and that the higher the toxicity of the compound, the higher the destruction efficiency: among all the congeners contained in fly ash, the isomer 2,3,7,8-TCDD, which has the highest toxicity, shows the highest destruction efficiency, up to 81%.

UV irradiation (photolysis): Photocatalytic degradation of dioxins using semiconductor films such as TiO2, ZnO, CdS, and Fe2O3 under UV or solar light is a highly promising method, as it operates at ambient temperature and pressure with low-energy photons. The process uses light to generate conduction band (CB) electrons and valence band (VB) holes (e− and h+), which are able to initiate redox reactions on semiconductors. TiO2 has been the predominantly used semiconductor photocatalyst; its VB holes are powerful oxidants that initiate the degradation of a wide variety of organic compounds. Studies of the photocatalytic degradation of highly chlorinated dioxins found that degradation rates decrease with the number of chlorine substituents and increase with the light intensity and the TiO2 coating weight. The products obtained after completion of the process were CO2 and HCl.

Chemical reaction: The chemical reagent method uses a reagent and a medium to decompose polychlorinated aromatic compounds. In past years, research focused mainly on the removal and destruction of dioxins, and incineration was favored over other methods. Nevertheless, interest in recovering reusable materials (e.g., the transformer oils in which PCBs are mostly present) and the need to treat contaminated products with low concentrations of PCBs have renewed interest in dechlorination methods. The dehalogenation methods mostly use a low-valent metal, such as an alkali metal in alcohol, Mg, or Zn, in acidic or basic solution. This decomposition route is one of the most environmentally friendly and economical detoxification methods with respect to energy demand and reagent safety.

Hydrothermal treatment: As large amounts of fly ash are generated annually, there is continuing interest in establishing ways in which it may be used (e.g., in cement manufacturing), and hydrothermal treatment can be of great use in such cases. It is a physico-chemical process based on the T/RH/t relation (temperature, relative humidity, time): fly ash is put into water or a solution and subjected to hydrothermal treatment at high pressure and temperature. An effective solution for dioxin decomposition was found to be NaOH-containing methanol; fly ash containing 1100 ng/g total dioxins, subjected to hydrothermal treatment with this solution at 300°C for 20 min, was found to retain only 0.45 ng/g total dioxins (see the efficiency sketch at the end of this section). It has been suggested that the process is superior to purely thermal treatment at the same temperature and that the regenerated fly ash can be used in the cement industry.

Supercritical water oxidation (SCWO): A waste treatment process using supercritical water, the phase that exists above the critical temperature (647.3 K) and critical pressure (22.12 MPa), has proved to be an effective novel route for dioxin remediation. The process has been applied to the decomposition of dioxins in fly ash with oxidizers such as air, pure oxygen gas, and hydrogen peroxide. With the reaction performed at a temperature of 673 K and a pressure of 30 MPa for 30 min, using supercritical water and hydrogen peroxide, the decomposition yield of dioxins was found to be 99.7%.
Recently, a hybrid process for the destruction of dioxins in fly ash has been proposed, in which the dioxins are extracted from the ash with a supercritical fluid (CO2), concentrated by adsorption on activated carbon, and destroyed by SCWO.
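The destruction figures quoted in this section all reduce to the same percent-removal arithmetic. A minimal sketch, applied to the hydrothermal numbers reported above (1100 ng/g in, 0.45 ng/g out):

```python
def destruction_efficiency(c_in: float, c_out: float) -> float:
    """Percent of the initial dioxin content destroyed or removed."""
    return 100.0 * (1.0 - c_out / c_in)

# Hydrothermal treatment figures from the text: 1100 -> 0.45 ng/g total dioxins.
print(f"{destruction_efficiency(1100.0, 0.45):.2f}%")  # 99.96%
```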

Remediation of Soil and Sediment

Environmental problems created by forest fires, oil tanker accidents, oil spillage from cars and trucks, leaky containers, industrial accidents, and poorly disposed-of wastes contribute to the contamination of soil. Large quantities of soil and sediment around the world are contaminated with dioxins and require an appropriate remediation method. The following onsite (in situ) and offsite (ex situ) methods can be used for the remediation of soil and sediment.

Radiolytic degradation: Ionizing radiation in the form of high-energy electron beams and gamma rays is a potential nonthermal destruction technique. Theoretical and some empirical assessments suggest that these high-energy sources may be well suited to transforming dioxins into innocuous products. Gamma radiolysis has been shown to be effective in degrading PCDDs and PCBs in organic solvents and in disinfecting wastewaters. Studies of byproducts and theoretical target theory calculations indicate that TCDD destruction proceeds through reductive dechlorination. It has also been found that adding promoters (e.g., activated carbon) to the toxicants increases the percentage of destruction under electron beam radiation.

Base-catalyzed dechlorination: The base-catalyzed decomposition (BCD) process is a chemical dehalogenation process. It involves the addition of an alkali or alkaline earth metal carbonate, bicarbonate, or hydroxide to the contaminated medium. BCD is initiated in a medium-temperature thermal desorber (MTTD) at temperatures ranging from 315°C to 426°C. Alkali is added to the contaminated medium in proportions ranging from 1% to about 20% by weight, and a hydrogen donor compound is added to the mixture to provide hydrogen ions for the reaction if these are not already present in the contaminated material. The BCD process then chemically detoxifies the chlorinated organic contaminants by removing chlorine from the contaminants and replacing it with hydrogen. For example, PCB- and dioxin-contaminated oils have been remediated with Na/NH3, as have PCB-contaminated soils and sludges from contaminated sites.

Subcritical water treatment: Water held in the liquid state above 100°C by applied pressure is called subcritical water. It has properties similar to organic solvents, can act as a benign reaction medium, and has been used to extract PCBs and other organic pollutants from soil and sediment. The use of zero-valent iron (ZVI) for the reductive dechlorination of PCDDs and the remediation of contaminated soils, with subcritical water as both reaction medium and extractive solvent, has been studied by some researchers: with iron powder as the matrix, the higher chlorinated congeners were almost completely reduced to below tetra-substituted homologs. Zero-valent iron has become accepted as one of the most effective means of environmental remediation.


It is inexpensive, easy to handle, and effective in treating a wide range of chlorinated compounds and heavy metals, and it has been widely applied in situ, ex situ, or as part of a controlled treatment process in wastewater, drinking water, soil amendment stabilization, and mine tailing applications.

Thermal desorption: Thermal desorption is a separation process frequently used to remediate Superfund sites. It is an ex situ remediation technology that uses heat to physically separate petroleum hydrocarbons from excavated soils. Thermal desorbers are designed to heat soils to temperatures sufficient to cause constituents to volatilize and desorb (physically separate) from the soil. Although they are not designed to decompose organic constituents, thermal desorbers can, depending on the specific organics present and the temperature of the desorber system, cause some of the constituents to decompose completely or partially. The vaporized hydrocarbons are generally treated in a secondary treatment unit (e.g., an afterburner, catalytic oxidation chamber, condenser, or carbon adsorption unit) prior to discharge to the atmosphere: afterburners and oxidizers destroy the organic constituents, while condensers and carbon adsorption units trap organic compounds for subsequent treatment or disposal.

In situ photolysis: Under proper conditions, dioxins can undergo photolysis by sunlight; the approach is cost effective and less destructive to the site. An organic solvent mixture is added to the contaminated soil, and time is then allowed for dioxin solubilization, transport, and photodegradation: the surface of the soil is sprayed with a low-toxicity organic solvent and the dioxins are allowed to photodegrade under sunlight. Several researchers have used this approach, finding that dioxins on the soil surface decomposed rapidly after being sprayed with organics such as isooctane, hexane, and cyclohexane, and that solar-induced photolytic reactions can be a principal mechanism for transforming these chemicals into less toxic degradation products. Convective upward movement of the dioxins as the volatile solvents evaporated was the major transport mechanism in these studies; the effectiveness of the process therefore depends on a balance between two rate-controlling factors, convective transport to the surface and sunlight availability for photodegradation.

Solvent and liquefied gas extraction: Extraction is a physico-chemical means of separating organic contaminants from soil and sediment, thereby concentrating and reducing the volume of contaminants that must be destroyed. It is an ex situ process and requires the contaminated soil to be excavated and mixed with the solvent; eventually it produces relatively clean soil and sediment that can be returned to the site. The US Environmental Protection Agency (EPA) evaluated a pilot-scale solvent extraction process that uses liquefied propane to extract organic contaminants from soils and sediments. Approximately 1000 pounds of soil, with an average polychlorinated biphenyl (PCB) concentration of 260 mg/kg, was obtained from a remote Superfund site. Results showed that PCB removal efficiencies varied between 91.4% and 99.4%, with the propane-extracted soils retaining low PCB concentrations (1.8–19.0 mg/kg). Overall extraction efficiency was found to depend on the number of extraction cycles used.
Steam distillation: Steam distillation is a distillation in which the volatile constituents of a liquid mixture are vaporized at a lower temperature (below the boiling points of either of the pure liquids) by introducing steam directly into the charge. It is an ideal way to separate volatile compounds from nonvolatile contaminants in high yield. Steam distillation is effective in combination with microwave energy for treating contaminated soils and sediments. Microwave treatments can be adjusted to individual waste streams: depending on the soil, the contaminants, and their concentrations, treatment can be conducted in several steps until the desired clean-up level is reached, and all contaminants could be removed to nondetectable or trace levels. Steam distillation was found to be effective for the removal of 2,7-dichlorodibenzo-p-dioxin (DCDD) from DCDD-applied soil: the DCDD concentration in the original soil (250 mg per 50 g of soil) decreased to less than 5% after steam distillation for only 20 min. These results suggest that steam distillation could be a new remedial method for soils contaminated with dioxins.

Mechanochemical (MC) treatment: In this technology, mechanical energy is transferred from the milling bodies to the solid system through shear stresses or compression, depending on the device used. A significant part of the milling energy is converted into heat, and a minor part is used to induce breaks, stretches, and compression at the micro- and macroscopic level or to drive a reaction. MC degradation can easily be performed using ball mills, which are readily available in different sizes (treatment of materials up to several tons is possible) and constructions. The pollutants are eliminated directly inside the contaminated material, regardless of the complex structure and stability of the pollutant. The method has high potential for disposing of organic wastes at any desired location, with flexible operation, because it uses a portable facility composed of a mill and a washing tank with a filter. Although the method needs a dechlorinating reagent such as CaO in the grinding operation, it does not require any heating. To support the use of MC dechlorination, it would be useful to have a correlation between the dechlorination rate of the organic waste and the grinding (MC) conditions, so that optimum conditions in a scaled-up MC reactor can be determined. The method offers several economic and ecological benefits: ball milling requires only a low energy input; because of the strikingly benign reaction conditions, toxic compounds can be converted to defined and usable products; and no harmful emissions to the environment are expected. This has opened up the development of novel, innovative ex situ dioxin remediation and decontamination processes.

Biodegradation: Bioremediation is a treatment process that uses microorganisms such as fungi and bacteria to degrade hazardous substances into nontoxic substances. The microorganisms break down the organic contaminants into harmless products, mainly carbon dioxide and water. Once the contaminants are degraded, the microbial population declines because it has exhausted its food source. The extent of biodegradation is highly dependent on the toxicity and initial concentrations of the contaminants, their biodegradability, the properties of the contaminated soil, and the type of microorganism selected. There are two main types of microorganisms: indigenous and exogenous.
The former are microorganisms found already living at a given site.


To stimulate the growth of these indigenous microorganisms, the proper soil temperature, oxygen, and nutrient content may need to be provided. If the biological activity needed to degrade a particular contaminant is not present in the soil at the site, microorganisms from other locations, whose effectiveness has been tested, can be added to the contaminated soil; these are called exogenous microorganisms.

See also: Dioxins: Health Effects; Electronic Waste and Human Health; Estrogenic Chemicals and Cardiovascular Disease; Monetary Valuation of Trace Pollutants Emitted Into Air by Industrial Facilities; Persistent Organohalogen Pollutants and Phthalates: Effects on Male Reproductive Function; Prenatal Exposure to Polycyclic Aromatic Hydrocarbons (PAH).

Further Reading

Alcock, R.E., Jones, K.C., 1996. Dioxins in the environment: A review of trend data. Environmental Science and Technology 30, 3133–3143.
Buekens, A., Huang, H., 1998. Comparative evaluation of techniques for controlling the formation and emission of chlorinated dioxins/furans in municipal waste incineration. Journal of Hazardous Materials 62, 1–33.
Davy, C.W., 2004. Legislation with respect to dioxins in the work place. Environment International 30, 219–233.
European Commission, 1994. PCDD/PCDF emission limits from municipal waste incineration plants. Journal of European Commission 34, 1365–1385.
Kerkvliet, N.I., 2002. Recent advances in understanding the mechanisms of TCDD immunotoxicity. International Immunopharmacology 2, 277–291.
Kulkarni, P.S., Afonso, C.A.M., Crespo, J.P., 2008. Dioxin sources and current remediation technologies: A review. Environment International 34, 139–153.
L'Homme, B., Scholl, G., Eppe, G., Focant, F., 2015. Validation of a gas chromatography triple quadrupole mass spectrometry method for confirmatory analysis of dioxins and dioxin-like polychlorobiphenyls in feed following new EU Regulation 709/2014. Journal of Chromatography A 1376, 149–158.
Mandal, P.K., 2005. Dioxin: A review of its environmental effects and its aryl hydrocarbon receptor biology. Journal of Comparative Physiology B: Biochemical, Systemic and Environmental Physiology 175, 221–230.
Mitrou, P.I., Dimitriadis, G., Raptis, S.A., 2001. Toxic effects of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related compounds. European Journal of Internal Medicine 12, 406–411.
Reiner, E.J., Clement, R.E., Okey, A.B., Marvin, C.H., 2006. Advances in analytical techniques for polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and dioxin-like PCBs. Analytical and Bioanalytical Chemistry 386, 791–806.
Schecter, A., Birnbaum, L., Ryan, J.J., Constable, J.D., 2006. Dioxins: An overview. Environmental Research 101, 419–428.
Tuppurainen, K., Halonen, I., Ruokojarvi, P., Tarhanen, J., Ruuskanen, J., 1998. Formation of PCDDs and PCDFs in municipal waste incineration and its inhibition mechanisms: A review. Chemosphere 36, 1493–1511.
Van den Berg, M., Birnbaum, L., Bosveld, A.T.C., et al., 1998. Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environmental Health Perspectives 106, 775–792.
Van den Berg, M., Birnbaum, L.S., Denison, M., et al., 2006. The 2005 World Health Organization reevaluation of human and mammalian toxic equivalency factors for dioxins and dioxin-like compounds. Toxicological Sciences 93, 223–241.
Weber, R., 2007. Relevance of PCDD/PCDF formation for the evaluation of POPs destruction technologies: Review on current status and assessment gaps. Chemosphere 67, S109–S117.

Web-Based Resources

USEPA, 1994a. Health assessment document for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds. EPA/600/BP-92/001c. Estimating exposure to dioxin-like compounds, EPA/600/6-88/005Cb. Office of Research and Development, Washington, DC.
USEPA, 1994b. Combustion Emission Technical Resource Document (CETRED). Report No. EPA 530-R-94-014, Washington, DC.
USEPA, 1998. The inventory of sources of dioxins in the United States. EPA/600/P-98/002Aa.

Dioxins: Health Effects

AJ Schecter, University of Texas School of Public Health, Dallas, TX, United States
JA Colacino, University of Michigan School of Public Health, Ann Arbor, MI, United States
LS Birnbaum, National Institute of Environmental Health Sciences/National Institutes of Health, Research Triangle Park, NC, United States

© 2019 Elsevier B.V. All rights reserved.

Abbreviations

AhR: Aryl hydrocarbon receptor
CDC: Centers for Disease Control and Prevention
DL: Dioxin-like
HRGC-HRMS: High-resolution gas chromatography–high-resolution mass spectrometry
NATO CCMS: North Atlantic Treaty Organization Committee on the Challenges of Modern Society
PCB: Polychlorinated biphenyl
PCDD: Polychlorinated dibenzo-p-dioxin
PCDF: Polychlorinated dibenzofuran
PCQ: Polychlorinated quaterphenyl
ppt: Parts per trillion
TCDD: Tetrachlorodibenzo-p-dioxin
TEF: Toxic equivalency factor
TEQ: Toxic equivalent

Polychlorinated dibenzo-p-dioxins (PCDDs) and dibenzofurans (PCDFs) are a class of chemical compounds that are largely of anthropogenic origin. In conjunction with the polychlorinated biphenyls (PCBs), which were intentionally produced, these compounds are known as dioxin-like (DL) compounds, or more simply "dioxins." Dioxins can be inadvertently synthesized as unwanted contaminants during industrial processes, including chlorine-based bleaching of paper or pulp, and during uncontrolled combustion. They can contaminate phenoxy herbicides such as 2,4,5-trichlorophenoxyacetic acid, which was half of the herbicidal ingredient in Agent Orange, used extensively during the Vietnam War. Dioxins have also been found to contaminate chlorophenols and can be produced in PCB transformer fires. Small amounts of dioxins have been deliberately synthesized for scientific research.

The chemical structure of PCDDs consists of two benzene rings connected by a third, middle ring that contains two oxygen atoms in the para positions, with four to eight chlorine atoms attached (Fig. 1). There are 75 PCDD compounds, or congeners. PCDDs are toxic only if they contain chlorines at the 2, 3, 7, and 8 positions; as a general rule, toxicity tends to decrease as more chlorine atoms are present. The 135 PCDF congeners are very similar, chemically and toxicologically, to the dioxins; their structure differs in that the center ring contains only one oxygen atom, the other being replaced by a carbon–carbon bond. Some, but not the majority, of the 209 congeners of PCBs have toxicological properties similar to those of dioxins. PCBs consist of two connected benzene rings (a biphenyl) with no oxygen atoms. Since the beginning of the industrial age, increased levels of the 7 toxic dioxins, 10 PCDFs, and 12 DL PCBs have usually been found in humans. Brominated dioxins and dibenzofurans are believed to have toxicity similar to that of their chlorinated counterparts and are therefore also important chemicals from an environmental health standpoint.

Characteristic levels and patterns of dioxin congeners in human tissues are associated with the levels of industrialization and contamination in a given country. The presence of dioxins and dibenzofurans in the general population was first documented in the 1980s by Schecter and Tiernan as well as Rappe and Masuda. Dioxins are very persistent in the environment as well as bioaccumulative, and they undergo biomagnification up the food chain. The half-life of the most toxic of the 75 chlorinated dioxins, tetrachlorodibenzo-p-dioxin (TCDD), is usually 2–4 weeks in rodents; in humans, however, the half-life has been estimated to be 7–11 years, with wide variation between individuals.


Change History: December 2018. O.A. Ogunseitan reviewed the manuscript for currency of the contents. Confirmed that WHO consensus positions remain current as of 2018. This article has been reviewed by the Environmental Protection Agency’s Office of Research and Development, and approved for publication. Approval does not signify that the contents necessarily reflect the views and policies of the US government nor does mention of trade names or commercial products constitute endorsement or recommendation for use. This is an update of A.J. Schecter, J.A. Colacino, L.S. Birnbaum, Dioxins: Health Effects, Encyclopedia of Environmental Health, Editor(s): J.O. Nriagu, Elsevier, 2011, Pages 93–101.



Fig. 1 Chemical structure of (A) 2,3,7,8-TCDD, the most toxic dioxin; (B) OCDD, the least toxic dioxin; and (C) PCB 209, the decachlorinated biphenyl, a less toxic but ubiquitous congener.

Pharmacokinetic studies have demonstrated that the half-life of dioxin is dose dependent: a higher concentration of dioxin leads to a faster rate of elimination. Body composition has also been shown to affect elimination rates, with a higher proportion of body fat leading to increased persistence of dioxins. The elimination half-life of other dioxin congeners and DL compounds varies widely, from as low as 6 months for some PCDFs to as high as 20 years for others.

Since the 1980s, the gold standard for the detection and quantification of dioxin and DL congeners has been congener-specific high-resolution gas chromatography–high-resolution mass spectrometry (HRGC-HRMS). This method was first used in the 1970s to detect TCDD in human milk and in fish exposed to TCDD-contaminated Agent Orange in southern Vietnam. In the 1980s, HRGC-HRMS was used to identify specific congeners of dioxins and PCDFs in human milk, blood, and adipose tissue. The specificity and sensitivity of HRGC-HRMS measurements have improved greatly in recent years with new methods of extraction and chemical cleanup and the use of known chemical standards. Most established dioxin researchers worldwide now use this method to assess dioxin exposure, including the Centers for Disease Control and Prevention (CDC), the United States Air Force, and the Agency for Toxic Substances and Disease Registry (ATSDR). The World Health Organization (WHO) has certified a relatively small number of laboratories worldwide that use HRGC-HRMS to analyze dioxin levels in blood. Using this method, all human tissues examined to date have been shown to have detectable levels of dioxin and PCDF congeners. Other less expensive and more rapid screening methods, including bioassays and immunoassays, have also been employed to estimate total DL activity in environmental and biological samples. However, HRGC-HRMS currently remains the only viable method for measuring specific congeners in a sample, which is essential for linking sources to exposures.

The total dioxin toxic equivalent (TEQ) value was established as an international approach to expressing the toxicity of a mixture of dioxins; the underlying factors are assigned using a weight-of-evidence evaluation of all the scientific data and expert judgment. The first known laboratory demonstration of dioxin and dibenzofuran congener toxicity was performed at the New York State Department of Health, using soot from the Binghamton State Office Building as well as a mixture of the congeners found in the building. Later, the state of California assigned a conservative, health-protective value of 1.0 to 2,3,7,8-TCDD and all other dioxin and dibenzofuran congeners with chlorines in the 2, 3, 7, and 8 positions. The Environmental Protection Agency (EPA) then assigned toxic equivalency factors (TEFs) to dioxins and dibenzofurans, first in 1987, followed by international values from the North Atlantic Treaty Organization Committee on the Challenges of Modern Society (NATO-CCMS) in 1988. In 1998, the WHO published consensus TEFs and revised these values in 2005; the revised values remained current as of 2018. The TEQ of a mixture of dioxins is determined by multiplying the measured level of each "dioxin" congener by the TEF assigned to that congener and then summing the products. The total dioxin toxicity of the mixture is the sum of the TEQs from the PCDDs, PCDFs, and DL PCBs.
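A minimal sketch of that calculation in Python. The TEF values below are a handful of the WHO 2005 values, shown only for illustration; the sample concentrations are hypothetical, and a real analysis would cover all congeners with assigned TEFs.

```python
# Subset of WHO 2005 toxic equivalency factors, for illustration only.
WHO_2005_TEF = {
    "2,3,7,8-TCDD":    1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "OCDD":            0.0003,
    "2,3,7,8-TCDF":    0.1,
    "PCB-126":         0.1,
}

def total_teq(levels_ppt: dict) -> float:
    """Total TEQ (ppt lipid): sum of concentration x TEF over all congeners."""
    return sum(WHO_2005_TEF[congener] * conc for congener, conc in levels_ppt.items())

# Hypothetical serum lipid measurements (ppt):
sample = {"2,3,7,8-TCDD": 2.0, "OCDD": 400.0, "2,3,7,8-TCDF": 1.5}
print(f"TEQ = {total_teq(sample):.2f} ppt")  # 2.0 + 0.12 + 0.15 = 2.27 ppt
```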
Analyses of sediment cores from lakes and rivers, and of human blood and adipose tissue across decades, suggest that human dioxin exposure was low in the early 20th century, rose to a peak in the 1960s and 1970s, and has declined since. Because of the low and declining concentration of dioxins in humans, a relatively small and short-lived exposure from food, work, or the environment can be difficult to detect, even with a method as sensitive as HRGC-HRMS.


However, high levels of exposure, which are sometimes seen in chemical workers, can cause elevated levels of dioxin in blood, milk, and other lipid-containing tissues that can be measured at least up to 35 years after the initial exposure. Some US Vietnam War veterans who were exposed to Agent Orange during the conflict were shown in health studies to have serum lipid 2,3,7,8-TCDD (TCDD) levels of up to 600 ppt many years after leaving Vietnam, with an estimated peak of more than 3000 ppt at the time of exposure. Today, the general US population usually has serum lipid TCDD levels of only 1–2 ppt or less and a total TEQ of about 20 ppt, which is age associated. In areas of Vietnam that were sprayed with Agent Orange or where the herbicide leaked from storage tanks, TCDD levels as high as 1,000,000 ppt have been found in soil and sediment samples three to four decades after the initial Agent Orange use took place. Elevated levels of TCDD have also been measured in Vietnamese civilians from contaminated areas, as well as in Vietnamese food and wildlife.
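Peak-exposure estimates of this kind are typically obtained by back-extrapolating a measured level through first-order elimination. The sketch below is a simplified illustration, not the actual pharmacokinetic model of the veterans' studies: it assumes a constant half-life (7.5 years, within the 7–11 year range quoted above, whereas the text notes elimination actually varies with dose and body composition) and a hypothetical time since exposure.

```python
def back_extrapolate(measured_ppt: float, years_elapsed: float,
                     half_life_years: float = 7.5) -> float:
    """Estimated serum lipid TCDD (ppt) at exposure, assuming first-order decay."""
    return measured_ppt * 2.0 ** (years_elapsed / half_life_years)

# E.g., 600 ppt measured an assumed 18 years after exposure:
print(f"{back_extrapolate(600.0, 18.0):.0f} ppt")
# ~3170 ppt, consistent with the >3000 ppt peak estimate above
```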

Health Effects of Dioxins

All of the dioxin and DL compounds exert their physiological effects via high-affinity binding to the intracellular ligand-activated transcription factor AhR (aryl hydrocarbon receptor). This receptor belongs to a family of proteins that has been shown to be highly evolutionarily conserved throughout the animal kingdom, and related proteins have also been observed in plants. It is important to note, however, that only vertebrates express the specific aryl hydrocarbon-binding receptor that requires ligand binding for dimerization and activation. The activated form of AhR has been shown to interact with other regulatory proteins, including cell cycle control proteins, cellular kinases, and various proteins involved in the apoptosis pathway. Studies in cells or transgenic mice with a constitutively active form of AhR, or in AhR-knockout mice, suggest that AhR is a key regulator of homeostasis and normal development. AhR-knockout mice are resistant to the toxic effects of dioxins.

Many vertebrates, from fish to mammals, including humans, show similar toxicological outcomes when exposed to dioxins. These effects include an increase in cancers, through tumor induction or promotion. Studies in rodents conducted in the late 1970s and early 1980s first established the carcinogenicity of TCDD. These studies formed the basis of the conclusion that TCDD alone was able to induce tumors in both rats and mice at multiple sites, including the liver, thyroid, and lung; tumor development at sites distant from the point of administration was also noted. In fact, TCDD has been shown to be carcinogenic in all 19 animal studies conducted in mice, rats, hamsters, and fish. A more recent study has shown that DL compounds administered to female rats individually and in mixtures can induce carcinogenicity in a dose-additive fashion. Epidemiological evidence also shows statistically significant increases in lung cancer, prostate cancer, skin cancer, and all cancers combined in occupational cohorts with high TCDD exposures. A recent evaluation of a cohort of inadvertently exposed individuals from Seveso, Italy, found elevated mortality from lymphatic and hematopoietic cancer, with suggestions of elevated mortality from rectal and lung cancer. TCDD has been classified as a known human carcinogen by the International Agency for Research on Cancer, the US EPA, the US Department of Health and Human Services, and other government agencies in the United States and abroad.

Dioxin exposure has also been linked to various immune system alterations, including immunosuppression. Experiments on laboratory rodents showed that a single low-level dose of TCDD was sufficient to suppress both cell-mediated and antibody-mediated immune responses. Rodents exposed to TCDD also showed increased susceptibility to disease, increased severity of disease symptoms, and increased infection-related mortality. The immunosuppressive effects of TCDD are dose dependent and can occur at levels lower than those required for acute toxicity. Long-term epidemiological studies of exposed cohorts have found immune suppression, including decreased plasma IgG levels, following high-level dioxin exposures. Perinatal exposure of nursing infants to PCBs has been linked to an increased likelihood of childhood ear infections and chickenpox, and to reduced allergy symptoms, suggesting immune system suppression. Developmental and reproductive disorders have also been noted following exposure to dioxins.
Animal studies on a variety of mammals (mice, rats, hamsters, guinea pigs, rabbits, and monkeys) showed that in utero or lactational exposure to low levels of TCDD and related chemicals resulted in structural malformations, reduced viability of offspring, growth retardation, and functional alterations. Higher levels of maternal exposure to TCDD during pregnancy resulted in prenatal mortality in all laboratory animal species studied. Evidence suggests that individuals exposed prenatally and while young are more susceptible to the toxic effects of dioxins than adults. Reproductive and developmental effects in humans are more difficult to quantify, because of the background levels of multiple chemicals to which all human infants are exposed and the inherent ethical issues involved in dosing human subjects with known toxic compounds. In humans exposed to TCDD from Agent Orange in Vietnam, increased levels of dioxin were observed in semen samples. Babies born to women affected by rice oil contaminated with PCBs and polychlorinated dibenzofurans (PCDFs) in the Yusho and Yucheng incidents suffered reported developmental toxicity, including prenatal mortality and low birth weight. Follow-up studies on the cohort of Yucheng children revealed decreased height and muscle growth in children born to exposed mothers. In addition, the nervous system is often targeted during developmental exposure, leading to impaired learning, behavioral alterations, and hearing impairment. Alterations in the sex ratio of babies born to affected mothers were observed in Seveso, Italy, following a large-scale dioxin exposure. Observational studies of this population suggest that male fetuses may be more susceptible than female fetuses to TCDD-induced prenatal mortality, although the exact mechanism of this action remains unclear. Additional data from Seveso suggest that fathers who were less than 20 years old when exposed to TCDD are more likely to sire female offspring.


Dioxins have been shown to be very potent endocrine disruptors and can act on the endocrine system at multiple points: by binding to or altering the number of hormone receptors, by affecting both the synthesis and breakdown of hormones, and by interfering with the transport of hormones in the blood. A decrease in circulating testosterone was observed both in an occupational cohort exposed to dioxins and in the Operation Ranch Hand veterans. At higher levels of dioxin exposure, perturbation of thyroid homeostasis is observed. In rodents, dioxin exposure has been linked to decreases in circulating thyroxine (T4) levels, which lead to an increase in circulating thyroid-stimulating hormone (TSH); although T4 levels can eventually return to normal, TSH levels remain elevated. Elevated maternal TSH levels have been associated with decreased IQ scores in children. Members of the Ranch Hand cohort have been shown to exhibit increased TSH levels correlating with increasing dioxin exposure.

Endometriosis has also been epidemiologically linked to dioxin exposure in humans. In rhesus monkeys, TCDD exposure was shown to increase both the incidence and severity of endometriosis in a dose-dependent manner, and in nude mice the growth of injected human endometrial cells was enhanced by exposure to dioxins. Cohort studies involving surgical diagnosis of endometriosis have found a positive correlation between dioxin exposure and development of the disease; other studies, however, have failed to duplicate these findings, showing no significant increase in endometriosis following dioxin or PCB exposure. A recent review has pointed out that inconsistent definitions of disease status and controls have made this relationship difficult to determine. Owing to the inconsistency of the results, further research into the effect of dioxins on the development of endometriosis is required.

Dioxin exposure has also been linked to the development of diabetes mellitus in members of the United States Air Force who served in the Vietnam War, as well as in exposed members of the Seveso cohort. In vitro studies show that dioxin can decrease the uptake of glucose in human cells. Epidemiological studies have described an increased incidence of short-term memory loss and peripheral neuropathy following exposure to both PCBs and dioxins. Neuronal signaling has been shown to be affected by PCBs, leading to alterations in central and peripheral nervous system function: PCBs can induce an increase in intracellular calcium concentration, and the resulting shift in the calcium gradient can alter neurotransmitter function, increase oxidative stress, and even cause cell death. Dioxins can also affect cell signaling in the nervous system, with the susceptibility of neurons depending on their activity and maturation.

Some of the best characterized and most easily observed pathologies resulting from high-dose dioxin exposure are skin disorders. In the late 1800s and early 1900s, a red skin rash, or erythema, followed by an acneiform eruption was first noted in groups of exposed chemical workers. This was later characterized as chloracne: acne caused by exposure to chlorinated or brominated synthetic organic compounds. Chloracne following dioxin exposure was recently and perhaps most famously seen in Ukrainian President Viktor Yushchenko, following a deliberate poisoning with TCDD (Fig. 2). Chloracne has also been observed in various exposed cohorts, including persons from Seveso, Italy, and those exposed to contaminated rice oil in the Yusho and Yucheng incidents.
It is important to note that chloracne is a relatively insensitive and rare pathology that usually requires blood lipid dioxin levels above 8000–10,000 ppt; the absence of chloracne does not rule out a high-level exposure to dioxins. In the Seveso population, many individuals with blood lipid dioxin levels exceeding 8000 ppt never developed the condition, and of those who did, most were children. In some cohorts chloracne can persist for years or decades; in the Seveso population, however, most cases resolved within 1 year of exposure. In addition to chloracne, hyperpigmentation of the skin can occur following dioxin exposure, as seen in adults and, frequently, in the "cola-colored" babies born after the Yusho and Yucheng incidents. Low levels of dioxin exposure can also lead to aberrant dental development, including oral pigmentation and missing permanent teeth, as observed in animals as well as in young children accidentally exposed.

Fig. 2 President Viktor Yushchenko of Ukraine before and after developing chloracne following dioxin poisoning with 2,3,7,8-TCDD. Courtesy of the Associated Press.


Additionally, hyperpigmentation of the nails, dilation of the hair follicular orifice, and hyperkeratosis have been observed following dioxin exposure.

The occupational literature has provided insight into other acute health effects in populations exposed to large doses of dioxins. In highly exposed German chemical workers, an increase in death from ischemic cardiovascular events has been described; this association has recently been supported by an analysis of multiple cohorts showing a clear dose-related increase in cardiovascular disease with dioxin exposure. Additionally, liver damage, an increase in blood lipid levels, and other forms of short-term toxicity have been described in the human literature, and the occupational literature reports headaches, nausea, fatigue, and decreased libido and sexual ability following exposure to dioxins and DL chemicals.

The acute health effects of high-level dioxin exposures were first characterized in animal studies. Dioxins, even in very small doses, were shown to cause death in many laboratory animal and wildlife species; this lethal effect at very small doses has led TCDD to be called "the most toxic man-made chemical." Typically, death in laboratory animals is preceded by a wasting syndrome whose duration varies by species, ranging from 2 to 4 weeks in rodents to 6–8 weeks in nonhuman primates. This wasting involves a dramatic and steady loss of body weight that cannot be attributed to reduced feeding. The median lethal dose, or LD50, of TCDD has not been determined for humans. For guinea pigs, the experimentally determined LD50 is approximately 1 µg per kg of body weight, whereas for hamsters it is approximately 1000 µg per kg. From serum dioxin levels measured after high-level poisoning episodes, it is known that the LD50 for humans is certainly higher than that for guinea pigs. Although the lethal dose of dioxins varies across species, developmental toxicity, neurotoxicity, and other adverse effects are observed at similar doses in multiple vertebrate species.

Although the sources of dioxins are largely industrial, the route of exposure for the general population is almost exclusively the consumption of animal foods. Dioxins are fat soluble and are therefore found in animal-based food products such as meat, fish, and dairy products, so reducing the intake of animal fat can lower dioxin intake. Skim milk has no dioxin content, whereas whole milk, with lipid levels of 2% or higher, contains dioxins; similarly, low-fat yogurt has a much lower concentration of dioxins than regular-fat ice cream. The method of cooking can also affect the final amount of dioxins consumed: a marked decrease in dioxin content is observed in meat or fish that is broiled so that the fat is allowed to drip off. Fruits, vegetables, and grains typically have extremely low levels of dioxins because of their low lipid content; accordingly, long-term vegans exhibit a lower dioxin body burden than the general animal-product-consuming population. An average adult living in the United States typically has a daily TEQ intake of less than 1 pg per kg of body weight, whereas an exclusively nursing infant has a much higher daily intake of 35–53 pg per kg. However, owing to the infant's rapid growth and higher elimination rate, the infant body burden usually does not exceed that of the general adult population by more than threefold.
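To put those per-kilogram intake rates side by side in absolute terms, here is a trivial sketch using assumed body weights (a 70 kg adult and a 5 kg infant; both weights are illustrative assumptions, not figures from this article):

```python
def daily_teq_intake_pg(rate_pg_per_kg: float, body_weight_kg: float) -> float:
    """Absolute daily TEQ intake in pg, given a per-kg rate."""
    return rate_pg_per_kg * body_weight_kg

adult  = daily_teq_intake_pg(1.0, 70.0)   # <1 pg/kg/day ceiling, assumed 70 kg adult
infant = daily_teq_intake_pg(44.0, 5.0)   # midpoint of 35-53 pg/kg/day, assumed 5 kg infant
print(f"adult: <{adult:.0f} pg/day; nursing infant: ~{infant:.0f} pg/day")
# -> adult: <70 pg/day; nursing infant: ~220 pg/day
```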
Various attempts to reduce elevated human dioxin body burdens have been made, with little to no clinical success. These include dietary administration of mineral oil, activated charcoal, rice bran, cholestyramine, and the fat substitute Olestra®. Attempts at cutaneous elimination by the dermal application of petroleum jelly have likewise met with little success.

Historical Episodes of Dioxin Exposure

Although animal and in vitro studies are informative about the biological mechanisms of toxic dioxin action, the older occupational literature and past exposure events paint a much clearer picture of the acute health effects of dioxin intake in humans. These unfortunate incidents provide lessons not only about adverse health outcomes but also about methods to prevent future dioxin exposures.

Agent Orange in Vietnam

Between 1962 and 1971, the United States sprayed a variety of phenoxy herbicides over southern Vietnam to destroy the thick jungle in which enemy troops could hide, as well as the food crops supporting those troops. The most commonly sprayed of these herbicides was Agent Orange, a 1:1 mixture of 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), the latter of which was contaminated with TCDD, the most toxic dioxin. Commonly used paths of enemy troop movement, such as the Ho Chi Minh Trail, rice crops used to support both enemy troops and civilians, and areas adjacent to base camps were all sprayed by the US military. Those at highest risk of dioxin exposure were Vietnamese soldiers and civilians serving or living in areas known to have been sprayed, and certain US Vietnam veterans, especially those who actively participated in Operation Ranch Hand, the military designation for the deforestation mission. Elevated dioxin levels have been found in the fat, blood, and milk of exposed Vietnamese and of US Vietnam veterans decades after the war. Samples of breast milk from nursing Vietnamese women taken in the midst of the conflict revealed the highest levels of TCDD in breast milk ever reported, up to 1850 ppt in the lipid portion of the milk; by comparison, human breast milk normally has 1–5 ppt TCDD in the lipid portion. Soil samples taken from areas heavily sprayed with Agent Orange were found to contain TCDD levels up to 1,000,000 ppt, compared with normal worldwide background levels of 1–10 ppt. Recent studies of Vietnamese civilians living in areas used as air bases during wartime, where Agent Orange was stored, have found that very high levels of TCDD, up to 400 ppt lipid, still exist in the blood of some individuals. Similarly, elevated levels of TCDD are currently found in wildlife in the same region, including ducks and fish.


It is hypothesized that contamination of the soil and sediment leads to bioaccumulation and bioconcentration of TCDD in these animals near the top of the region's aquatic food chain, and the majority of the contamination of people living in the affected areas appears to come from their food. The major focus of Agent Orange research has now shifted to remediation and cleanup of the areas with the greatest dioxin contamination.

Seveso, Italy

On 10 July 1976, a reactor producing 2,4,5-trichlorophenol (TCP) at the ICMESA Givaudan-Hoffmann-LaRoche plant near Milan, Italy, underwent an exothermic reaction that raised the temperature and pressure inside the reactor beyond its limits. Safety devices were unable to stop the reaction, and under the high heat and pressure an unknown amount of TCDD and other dioxins and DL chemicals was produced. The overheated fluid burst through a pipe into the open air above the plant, expelling approximately 2900 kg of organic matter, including at least 600 kg of sodium trichlorophenate and an estimated 600 g of TCDD. The chemicals that settled out of the resulting toxic cloud were found on the ground as far as 6 km south of the ICMESA facility. Within days, the effects of the accident were observed on local wildlife around the plant, with plant foliage, animals, and birds being seriously affected or dying. Blood samples taken from inhabitants within several months of the accident showed greatly elevated TCDD levels, with a median value of 447 ppt across 296 samples. Blood samples collected from the same areas in 1993–94 showed continuing elevated TCDD levels, with medians of 63 ppt in female inhabitants and 73.3 ppt in male inhabitants. The most widespread health effect observed shortly after the incident was chloracne, especially in children under 15 years of age; the distribution of chloracne in the community mirrored the TCDD contamination pattern, with the highest rates in the area immediately downwind of the ICMESA plant. Peripheral neuropathy was also observed in the exposed population, at a rate three times that of the unexposed control population. Long-term health effects in the population exposed during the Seveso incident include an increase in cancer incidence, specifically lymphatic and hematopoietic cancers, with suggestions of increased mortality from several other cancers, including rectal and lung cancer. Additionally, increased rates of mortality from heart disease, chronic obstructive pulmonary disease, and diabetes mellitus have been observed, and decreases in sperm quality have been noted in men who were infants when exposed.

Yusho and Yucheng Rice Oil Poisonings

In 1968, a mass poisoning, referred to as Yusho, occurred in western Japan, mainly in the Fukuoka and Nagasaki prefectures. The poisoning was caused by ingestion of rice oil contaminated with PCBs, PCDFs, and polychlorinated quaterphenyls (PCQs). Samples of the contaminated rice oil analyzed later contained PCB levels of approximately 1000 ppm and PCDF levels of around 5 ppm. The majority of the patients affected by consumption of the contaminated oil reported their illness in the 9-month span between February 1968, when the oil was released to the market, and October 1968, when the Yusho epidemic was reported to the public. The most common symptoms reported were hyperpigmentation (dark brown coloring) of the nails, skin, and mucous membranes, acneiform (chloracne) eruptions, increased eye discharge, increased sweating of the palms, and a general feeling of weakness; fewer patients reported visual disturbances, headaches, numbness in the limbs, and hearing problems. The official number of individuals affected in the poisoning incident was 1870. Babies born to women in the Yusho cohort exhibited hyperpigmented skin at birth, a condition that faded after 2–3 months.

An incident similar to the Yusho poisoning occurred in Taiwan 11 years later. This exposure was referred to as Yucheng, which, like Yusho, translates to "oil disease." The route of rice oil contamination in the Yucheng incident was the same as in Yusho, the rice oil being contaminated by a Japanese-manufactured mixture of PCBs. The outbreak was first noticed by a local Taiwanese health bureau in May 1979, when a group of students and staff at the Hwei-Ming School for Blind Children reported skin diseases with symptoms including acne, hyperpigmentation of the nails and skin, and hypersecretion of the sebaceous glands. At first, local clinicians did not identify the acneiform eruptions as chloracne; once the outbreak was traced back to rice oil consumption, however, the Taiwan Department of Health contacted Japanese scientists who had been involved in the Yusho investigation. By February 1983, 2061 victims of the incident had been identified. Investigations into the PCB and PCDF contamination of the rice oil in the Yucheng incident revealed contamination levels about one-tenth of those seen in the Yusho incident. However, because the contaminated oil remained on the Taiwanese market much longer, the average amount of oil consumed by the Taiwanese patients was 10 times that of the Japanese patients, and therefore the two cohorts consumed similar amounts of PCBs (see the sketch below). The health effects seen in the Yucheng cohort were very similar to those in the Yusho cohort, with the addition of developmental effects such as decreased growth in schoolchildren who had consumed the contaminated oil. Investigations into the long-term effects of these exposures have revealed that girls born to mothers exposed during the rice oil poisonings reported abnormal menstruation and showed increased levels of serum FSH and estradiol. Adverse effects on pregnancy outcomes in exposed women have been noted in Yusho, and a prolonged time to pregnancy and reduced fertility have been observed in Yucheng. Mortality from chronic liver disease and cirrhosis, as well as from lupus, was found to be increased in the exposed population.
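The dose equivalence asserted above is simply the product of concentration and amount of oil consumed; a one-line check with stylized numbers (relative intakes normalized to 1, not measured consumption data):

```python
# Ingested PCB dose scales as concentration x amount of oil consumed.
yusho_dose   = 1000.0 * 1.0   # ~1000 ppm, relative oil intake of 1
yucheng_dose = 100.0 * 10.0   # one-tenth the concentration, ten times the intake
print(yusho_dose == yucheng_dose)  # True: roughly equal total PCB doses
```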


Binghamton State Office Building, New York

In February 1981, an electrical surge caused overheating and leakage of a large electrical transformer in the Binghamton State Office Building in New York State. An estimated 180–200 gallons of transformer fluid (Pyranol) leaked from the transformer, which originally contained approximately 1060 gallons of fluid. The fluid contained a mixture of 65% PCBs and 35% tri- and tetrachlorinated benzenes. Cleanup workers and firefighters were exposed to a mixture of PCBs, PCDFs, and PCDDs following the incident. Surgical biopsies of fat tissue from eight exposed workers showed a high mean level of dioxins and dibenzofurans (approximately 1047 ppt wet weight). This incident is significant because it marked the first time that congener-specific dioxin and dibenzofuran measurements were made in exposed US workers and the general population.

Severe Intoxications

Individual case studies of highly exposed individuals have been useful in determining health outcomes following severe poisonings. In one case, in March 1998, a 30-year-old woman was admitted to the Department of Dermatology at the University of Vienna Medical School. The patient reported acute centrofacial inflammation and acne, which had begun shortly after she moved into a new office space in a textile research building. In the weeks after the initial admission, hundreds of cysts developed on the patient's face, eyelids, genitals, trunk, and limbs. An astute dermatologist at the medical school suspected chloracne, and subsequent tests revealed a TCDD level of 144,000 ppt blood lipid, the highest ever reported in a human being. Besides the dermatological disorders, the patient reported nausea, vomiting, and loss of appetite; she also experienced fatigue and pain in her extremities, and remained clinically ill for 2 years after the initial poisoning. A second case, a colleague of the first who worked in the same office at the textile plant, presented at the same hospital with similar but much milder skin disorders. TCDD blood testing revealed a blood lipid level of 26,000 ppt. Other than the skin disorders, this patient exhibited marginally elevated cholesterol and lipase, an elevated number of B-lymphocytes, and a decreased percentage of NK cells; her levels of thyroid-stimulating hormone (TSH) and prolactin were elevated on a single measurement, but other thyroid and sex hormone levels remained normal. Both patients have been monitored closely for long-term health effects of the high-level dioxin exposure, and the dioxin levels in both have gradually declined.

Dioxins and Challenges in Occupational Medicine

Small exposures to dioxins are unlikely to manifest clinically and are therefore extremely difficult for physicians to detect. Epidemiological evidence and the toxicological literature suggest that dioxins can cause population-wide adverse effects, and epidemiological studies indicate that dioxin exposure in humans can cause long-term chronic effects such as cancer at dose levels similar to those that cause cancer in laboratory animals. However, a maximum increased risk of 1/1000 of dioxin-related cancer death would be very difficult to detect, because cancer is a very common disease and a leading cause of death.

Physicians face major challenges in recognizing, diagnosing, and treating dioxin exposures, especially in patients without a well-defined exposure history. Apart from chloracne, dioxin exposure shows no pathognomonic lesion of the kind seen with other occupational hazards, such as asbestosis, mesothelioma, and lung cancer with asbestos exposure, or angiosarcoma of the liver with vinyl chloride. Because of this, dioxin exposure is frequently treated as a public health or epidemiological issue rather than a clinical one. The employees most likely to have been occupationally exposed to dioxins are chemical and incinerator workers.

Proper diagnosis of heavy dioxin exposure can be very difficult. Chloracne is the hallmark outcome of exposure to synthetic halogenated organic compounds such as dioxins; however, it is rarely observed even at high levels of exposure, especially in adult populations, and a lack of chloracne does not mean that exposure did not occur. By observing similar trends in large worker or civilian populations, as in the Yusho and Yucheng rice oil poisoning incidents, an observant physician can be alerted to the correct diagnosis. In this regard, communication between the treating physician, the company physician, the health department, and the environmental safety officer can be very useful. If dioxin contamination of a worksite is suspected, environmental samples can be taken of soot, ash, or chemical residues at the facility, and dioxin levels in these samples can be compared with those in exposed workers' serum using the gold standard HRGC-HRMS test to detect the various dioxin congeners. However, these tests are very expensive, around $1200 for a congener-specific analysis run by an experienced dioxin laboratory, and the analysis is rarely covered by health insurance and therefore is not ordered routinely. The biological screening tests that exist are useful for identifying highly exposed environments or people, but they require confirmation.

If elevated levels of dioxin are found in a patient's sample, it remains important for the physician to keep in mind other possible causes of the medical abnormalities that have been reported after dioxin exposure, such as skin rashes, aberrant liver enzyme levels, headaches, and nausea; other serious medical conditions could exist that require diagnosis and treatment. If high-level dioxin exposure is established as the cause of the symptoms, no definitive method of reducing the elevated dioxin levels is available, so the physician must treat the symptoms, as reduction of the body burden of dioxins is not currently feasible. A medical specialist in occupational exposure to dioxins can be invaluable in advising patients and treating physicians on what to expect following dioxin exposure.
Serial measurements can confirm that dioxin levels are approaching population background levels over time and allow detection of any ongoing dioxin exposure. There is currently no test to identify which individuals may be more sensitive to dioxin exposure.


Considerable research on the health effects and routes of exposure of dioxin and dioxin-like (DL) chemicals is currently being conducted. In the future, these new understandings will influence the treatment and detection of dioxin exposure, as well as elucidate the population effects of these toxic and persistent chemicals. As a result of regulations throughout the world, the levels of dioxins have decreased significantly over the past 20 years. Given the extreme toxicity of this class of compounds, it is imperative that human exposure continues to decrease in the future.

See also: Dioxins; Electronic Waste and Human Health; Estrogenic Chemicals and Cardiovascular Disease; Monetary Valuation of Trace Pollutants Emitted Into Air by Industrial Facilities; Persistent Organohalogen Pollutants and Phthalates: Effects on Male Reproductive Function; Prenatal Exposure to Polycyclic Aromatic Hydrocarbons (PAH).

Further Reading

Baughman, R., Meselson, M., 1973. An analytical method for detecting TCDD (dioxin): Levels of TCDD in samples from Vietnam. Environmental Health Perspectives 5, 27–35.
Birnbaum, L.S., Staskal, D.F., Diliberto, J.J., 2003. Health effects of polybrominated dibenzo-p-dioxins (PBDDs) and dibenzofurans (PBDFs). Environment International 29 (6), 855–860.
Birnbaum, L.S., Tuomisto, J., 2000. Non-carcinogenic effects of TCDD in animals. Food Additives and Contaminants 17, 275–288.
Eadon, G., Kaminsky, L., Silkworth, J., et al., 1986. Calculation of 2,3,7,8-TCDD equivalent concentrations of complex environmental contaminant mixtures. Environmental Health Perspectives 70, 221–227.
Environmental Protection Agency (EPA), 2003. Exposure and human health reassessment of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds: National Academy of Sciences (NAS). http://www.epa.gov/ncea/pdfs/dioxin/nas-review/ (Accessed 28 July 2009).
Eskenazi, B., Mocarelli, P., Warner, M., et al., 2002. Serum dioxin concentrations and endometriosis: A cohort study in Seveso, Italy. Environmental Health Perspectives 110 (7), 629–634.
Flesch-Janys, D., Berger, J., Gum, P., et al., 1995. Exposure to polychlorinated dioxins and furans (PCDD/F) and mortality in a cohort of workers from a herbicide-producing plant in Hamburg, Federal Republic of Germany. American Journal of Epidemiology 142, 1165–1175.
Kociba, R.J., Keyes, D.G., Beyer, J.E., et al., 1978. Results of a two-year chronic toxicity and oncogenicity study of 2,3,7,8-tetrachlorodibenzo-p-dioxin in rats. Toxicology and Applied Pharmacology 46 (2), 279–303.
Pavuk, M., Schecter, A.J., Akhtar, F., 2001. Serum TCDD levels and thyroid system effects among US Air Force veterans. Organohalogen Compounds 52, 201–205.
Pirkle, J.L., Wolfe, W.H., Patterson, D.G., et al., 1989. Estimates of the half-life of 2,3,7,8-tetrachlorodibenzo-p-dioxin in Vietnam veterans of Operation Ranch Hand. Journal of Toxicology and Environmental Health 27, 165–171.
Schecter, A.J., Birnbaum, L.S., Ryan, J.J., Constable, J.D., 2006. Dioxins: An overview. Environmental Research 101, 419–428.
Schecter, A.J., Dai, L.C., Thuy, L.T.B., et al., 1995. Agent Orange and the Vietnamese: The persistence of elevated dioxin levels in human tissue. American Journal of Public Health 85 (4), 516–522.
Schecter, A.J., Gasiewicz, T. (Eds.), 2003. Dioxins and Health, 2nd edn. Wiley, Hoboken, NJ.
Schecter, A.J., Tiernan, T.O., 1985. Occupational exposure to polychlorinated dioxins, polychlorinated furans, polychlorinated biphenyls, and biphenylenes after an electrical panel and transformer accident in an office building in Binghamton, NY. Environmental Health Perspectives 60, 305–313.
Van den Berg, M., Birnbaum, L., Denison, M., et al., 2006. The 2005 World Health Organization reevaluation of human and mammalian toxic equivalency factors for dioxins and dioxin-like compounds. Toxicological Sciences 93 (2), 223–241.

Disabling Environmentsq
S French and J Swain, Northumbria University, Newcastle upon Tyne, United Kingdom
© 2019 Elsevier B.V. All rights reserved.

Introduction

The concept of environmental health is broad in scope, as is evident in other articles in this encyclopedia. It is a segment of public health concerning the impact of people on their environment and the impact of the environment on them. Core areas include pollution, housing, and health and safety. The social aspects of environmental health are reflected in the power and social control governing access to health services and to the so-called public environment of transport, housing, service provision, and amenities, such as parks and reserves. The concept of disabling environments encompasses the analysis of the structural, environmental, and attitudinal barriers that marginalize and segregate disabled people, with enforced dependency and institutionalization. Changing disabling environments involves shifts in environmental design away from special solutions and adaptations and toward greater accessibility and inclusivity in mainstream design, often under the notions of inclusive or universal design. It engages with the possibilities for a more inclusive society.

This article first examines understandings of disability. The individual model has been the dominant understanding. Tyler et al., for instance, concentrate on the role of environmental health in causing impairment. They stated in a study in 2008:

Prenatal and childhood environmental exposures are an underrecognized primary cause of intellectual and other developmental disabilities. In addition, individuals with established disabilities are vulnerable to further harm from subsequent environmental exposures. In individuals with communicative impairment or limited ability to independently escape from hazards, these subsequent exposures, too, may occur undetected or untreated.

Within the individual model, it is the individual's impairment or limited ability that is the problem. It can be argued that the individual approach created "little incentive to accommodate the needs of disabled persons in mainstream society." The social model, on the other hand, has underpinned the growing realization that disability arises not within the individual, due to impairment or incapability, but is the result of environments and services that fail to take account of disabled people, their views, needs, and aspirations. The social model, generated by the experiences of disabled people themselves, provides a framework to address the disabling barriers and power relations creating disability. In this article, therefore, the views and experiences of disabled people are documented.

The discussion then turns to health inequalities. Certain groups within society, such as women, older people, people from ethnic minorities, and disabled people, are disadvantaged partly because of their overrepresentation in the lower socioeconomic groups. This section of the article will examine the meaning of health inequalities and their implications for understanding environmental health. The discussion is then broadened to consider disabling environments. This topic is, by definition, extensive and could include transport, work environments, leisure, education, accessibility of information, and public amenities. Legislation and regulations, such as the Disability Discrimination Act, are particularly important here in the creation of a more inclusive society and environment. Housing is the particular focus of this article. It has been known at least since Victorian times that the quality of housing affects people's physical and mental health, and this is no less true today. The article concludes by returning to the possibilities for challenging power relations and promoting the inclusion of the voices of disabled people in challenging the dominant social aspects of environmental health.

Understanding Disability

In this section, the two central models of disability touched on in the preceding text will be examined, the individual model and the social model, to illustrate the ways in which underlying ideas and concepts can shape social policy and practice. Within every society, there are competing models of disability, with some being more dominant than others at different times. In earlier centuries, for example, models of disability were based on religion. Although often in conflict, models of disability may gradually influence and modify each other. The models put forward by powerful groups within society, such as the medical profession, tend to dominate the models of less powerful groups, such as disabled people themselves.

q

Change History: December 2018. S. French and J. Swain have updated the text throughout the article. This is an update of S. French, J. Swain, Disabling Environments, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 102–109.

Encyclopedia of Environmental Health, 2nd edition, Volume 2

https://doi.org/10.1016/B978-0-12-409548-9.11676-8


It is essential to explore these models of disability, for attitudes and behavior toward disabled people, policy, professional practice, and the running of institutions are based, at least in part, on them. As Oliver stated in a study in 1993:

The 'lack of fit' between able-bodied and disabled people's definitions is more than just a semantic quibble for it has important implications both for the provision of services and the ability to control one's life.

The Individual Model of Disability

The most widespread view of disability at the present time, at least in the Western world, is based on the assumption that the difficulties disabled people experience are a direct result of their individual impairments. Thus, the blind person who falls down a hole in the pavement does so because he or she cannot see it, and the person with a motor impairment fails to get into the building because of his or her inability to walk. Problems are thus viewed as residing within the individual. Individualistic definitions of disability certainly have the potential to do serious harm. The medicalization of learning disability, whereby people were institutionalized and abused, is one example. Another is the practice of oralism, whereby deaf children were prevented from using sign language and punished for doing so. Goble, in a 2008 study, explores the origins and nature of what has been called "institutional abuse."

None of these arguments imply that considering the individual needs of disabled people is wrong; the argument is that the individual model has tended to view disability only in those terms, focusing almost exclusively on attempts to modify people's impairments and return or approximate them to "normal." The effect of the physical, attitudinal, and social environment on disabled people has been largely ignored or regarded as relatively fixed, which has maintained the status quo and kept disabled people in their disadvantaged state within society. Thus, the onus is on disabled people to adapt to a disabling environment.

The Social Model of Disability

The social model views disability not in terms of the individual's impairment, but in terms of environmental, structural, and attitudinal barriers that impinge upon the lives of disabled people and that have the potential to impede their inclusion and progress in many areas of life, including employment, education, and leisure, unless they are minimized or removed. The social model of disability has arisen from the thinking, writings, and growing cultural identity of disabled people themselves. The following definition of impairment and disability is that of the Union of the Physically Impaired Against Segregation (UPIAS), provided in 1976, which was an early radical group in the Disabled People's Movement. Its major importance is that it breaks the link between impairment and disability.

Impairment

Lacking part or all of a limb, or having a defective limb, organ, or mechanism of the body.

Disability

The disadvantage or restriction of activity caused by a contemporary social organization that takes no or little account of people who have physical impairments and thus excludes them from participation in the mainstream of social activities. Physical disability is therefore a particular form of social oppression.

The word "physical" is now frequently removed from this definition so as to include people with learning difficulties and users of the mental health system. This, and similar definitions, break the connection between impairment and disability, which are viewed as separate entities with no causal link. In recent years, however, it has been recognized that the body is more than a biological entity. Just as height, weight, age, and physique have social and cultural dimensions and consequences, so too does impairment. Disability is viewed within the social model in terms of barriers. There are three types of barriers, which all interact:

1. Structural barriers: These refer to the underlying norms, mores, and ideologies of organizations and institutions that are based on judgments of "normality" and that are sustained by hierarchies of power.
2. Environmental barriers: These refer to physical barriers within the environment, for example, steps, holes in the pavement, and lack of resources for disabled people, such as lack of Braille and sign language interpreters. They also refer to the ways things are done, which may exclude disabled people, for example, the way meetings are conducted and the time allowed for tasks.
3. Attitudinal barriers: These refer to the adverse attitudes and behavior of people toward disabled people.

It can be seen that the social model of disability locates disability not within the individual disabled person but within society. Finkelstein argued in a 1998 study that nondisabled people would be equally disabled if the environment were not designed with their needs in mind, for example, if the height of doorways only accommodated wheelchair users. Human beings fashion the world to suit their own capabilities and limitations, and disabled people want no more than that.


Health Inequalities

This section will examine the meaning of health inequalities in relation to disabled people. To do this, it is necessary to consider what is meant by "health." In 1984, the World Health Organization (WHO) defined health as:

… the extent to which an individual or group is able, on the one hand, to realise aspirations and satisfy needs; and, on the other hand, to change or cope with the environment. Health is, therefore, seen as a resource for everyday life, not the objective of living; it is a positive concept emphasising social and personal resources as well as physical capacities.

It can be argued that unless one feels good about oneself and has meaning in one's life, such as going to work, raising a family, learning new skills, visiting friends, or pursuing hobbies and interests, one cannot be fully healthy. The majority of research reports over the years demonstrate that mortality, morbidity, and life expectancy are strongly correlated with socioeconomic class, with those in the lower social classes being at a considerable disadvantage. People of the lowest socioeconomic status are at far higher risk, not only of physical illness and early death, but also of accidents, premature births, mental illness, and suicide. Smith and Goldblatt report in a 2004 study that in 1997 the infant mortality rate in Britain was 1.5 times higher for babies born into the lowest social class than for those born into the highest social class, and that there was a 5-year difference in longevity between the two social classes. Furthermore, of the 66 major causes of death, 62 were found to be more prevalent in the lowest two social classes. A similar pattern has been found with regard to accident rates among children. House found in a 2001 study that all risk factors for health (behavioral, social, psychological, and environmental) increase with low socioeconomic status, however it is measured, whether by income, education, housing, or occupation. Although Britain has become healthier and wealthier over the years, health inequalities persist; in fact, the gap between the richest and poorest sectors of society has widened. Most disabled people are of lower socioeconomic status and are thus more likely to experience ill-health.

There are many influences on all aspects of one's health. Dahlgren and Whitehead, in a study in 1995, depict these as layers piled on top of each other. At the bottom of the pile are biological factors. These include sex, age, and the genes inherited from one's parents. Many diseases become more common as one grows older (e.g., cancer and cardiovascular disease); some diseases are specific to men or women (e.g., prostate and ovarian cancer), whereas others are genetic or congenital in origin (e.g., cystic fibrosis and congenital heart disease).

The second layer concerns personal behavior. This includes whether or not one smokes cigarettes or eats too much, the amount of exercise one takes, and how much stress one is under. Most policy initiatives from government have focused on this layer, where attempts have been made to change people's behavior in order to improve their health. This emphasis on personal behavior has been criticized. Asthana and Halliday stated in a study in 2006: "… The government's strategy suggests an implicit assumption that health inequalities can be reduced without changing overall levels of inequality." This has implications for disabled people, who have been expected to change themselves and accommodate themselves to existing structures.

The next layer concerns social and community influences. The people around an individual, including family members, neighbors, colleagues, and friends, can influence his or her health by giving meaning to his or her life and providing assistance and support in times of illness, difficulty, and stress. Organizations such as the church and self-help groups may also be important. Conversely, these people can have a detrimental effect on one's health by neglect, abuse, or failing to take account of one's needs. Feelings of isolation can lead to poor physical and mental health.
Eberstadt and Satel noted in a study in 2004 that people who are socially isolated die at twice the rate of those who are well connected and that they are prone to depression, which can lead to poor health habits and risk-taking behavior. Conversely, positive social relations are linked to good health. Berkman and Melchior pointed out in a study in 2006 that social networks provide opportunities for support, access, social engagement, and social and economic advancement, allowing individuals to participate in work, community, and family life. Social networks can, however, also lead to discrimination, hostility, and exclusion. Disabled people are likely to experience social isolation and discrimination because the barriers within society (environmental, attitudinal, and structural) make it difficult or impossible for them to participate as full citizens. This, in turn, is likely to impact adversely on their physical and mental health.

Living and working conditions comprise the next layer of influence. It is well known, for example, that the type of house in which one lives and the environment at work can affect one's health. Work pressure or noisy neighbors may cause depression and anxiety that can lead to physical ill-health, and physical hazards such as dampness, poor architectural design, and dangerous work practices can cause disease and injury. Living in deprived neighborhoods also increases the risk of ill-health and mortality, whatever the individual's personal situation. It is well known that unemployment is correlated with poor health and high mortality rates, but employment can also have adverse effects on one's health. Low-salaried workers have the worst physical and psychosocial environment at work, which, in turn, can lead to poor physical and mental health. Siegrist and Theorell stated in a study in 2006 that people in jobs with high demand and low control are particularly likely to experience stress, although low demand and low control is also stressful, particularly if it is linked to low levels of support. High effort and low reward is also stressful and likely to lead to ill-health; in this situation, there is a lack of reciprocity in the arrangement, which is likely to give rise to negative emotions.


Certain people, including disabled people, are more likely to accept work of this type due to lack of opportunity and choice. Disabled people are also far more likely than nondisabled people to be unemployed, which may have an adverse effect on their health.

The outermost layer affecting one's health concerns general socioeconomic, cultural, and environmental conditions. This includes the economic state of the country, the level of employment, the tax system, the degree of environmental pollution, and prevailing attitudes, for example, toward ethnic minorities and disabled people. It is at this level that government can be particularly influential by implementing policy and passing legislation to bring about wide social change, for instance, seat belt legislation, restrictions on cigarette smoking, and equality legislation such as the Disability Discrimination Act (1995). It is important to realize that the health of a country does not equate to its wealth but rather to how fairly the wealth is distributed. Asthana and Halliday noted in a study in 2006 that longevity rises in societies that are more equal and socially cohesive, especially when infectious diseases have been controlled. They contend that psychosocial stress is related to feelings of relative disadvantage and to subordinate status, which, in turn, can lead to mental and physical ill-health. Siegrist and Marmot stated in a study in 2006:

An exclusive emphasis on the physical life circumstances associated with low income fails to recognise the importance of a broad spectrum of psychosocial influences on health that interact with material conditions and that may be decisive in explaining the social gradient among populations with standards of living above a certain threshold of poverty and deprivation.

However, Dahl et al. pointed out in a study in 2006 that health inequalities are not consistently better in social democratic countries than in those with conservative and liberal regimes. It is clear that the levels of influence on one's health that Dahlgren and Whitehead describe in their 1995 study interact and influence each other. If the economic state of the country is favorable, for example, people are likely to have more disposable income, which may improve their health by allowing them to buy good-quality food and housing of a better standard, engage in leisure pursuits, give their children more opportunities, and enjoy relaxing holidays to reduce stress. Similarly, if a person is attempting to give up drugs, success is more likely if community support is strong and if government is willing to act by establishing and financing supportive policies. It is also important to note that social adversity can influence one's behavior. People in deprived circumstances have the fewest choices, including those concerning their health. Smoking or excessive alcohol consumption, for instance, may serve to reduce stress caused by poor income, discrimination, or lack of support. Ewles and Simnett, in a 2003 study, believed that:

We cannot assume that individual behaviour is the primary cause of ill health … There is a danger that focusing on the individual detracts attention from the more significant (and, of course, politically sensitive) determinants of health, such as the social and economic factors of racism, relative deprivation, poverty, housing and unemployment.

Asthana and Halliday point out in a study in 2006 that structural causes of ill-health are difficult to measure and are still poorly understood. The stresses in people's lives tend to be cumulative, with adverse physical, social, and psychological conditions, which often start in early infancy or before birth, leading to biological, psychological, and social disadvantage. Power and Kuh, in a study in 2006, point to insecure attachment, institutionalization, and lack of emotional support as damaging factors. Disabled people are at high risk of experiencing these conditions. Asthana and Halliday stated in a study in 2006:

During infancy and early childhood, neglect, abuse and social deprivation can … produce a cascade of neurobiological events that in turn effect emotional, behavioural, cognitive and physiological development.

Turning to policy, despite the individualistic stance of most health policies, in the United Kingdom the government has taken some heed of the social determinants of health in, for example, the provision of child tax credits, initiatives such as Sure Start, and attempts to improve education. However, Asthana and Halliday, in a 2006 study, note a shift from a concern with macrosociological factors, as seen in the treasury review Tackling Health Inequalities (2002), to a concern with changing behavior, which is evident in the white paper Choosing Health: Making Healthy Choices Easier.

It can be seen from this account that the health of disabled people is particularly at risk. Most disabled children are born into families of low socioeconomic status; low-birth-weight babies, for instance, are particularly likely to have impairments. Disabled people are less likely to gain a good education and are more likely to be unemployed or in low-paid, monotonous work. They must contend with an inaccessible and sometimes hostile environment, in terms, for instance, of inaccessible transport and buildings, which, in turn, makes the establishment of supportive social networks, including friendships and intimate relationships, more difficult. Furthermore, these factors accumulate over the disabled person's life. A focus on the environment, which, through disabled people's own efforts, has started to take effect, is the surest way to improve the health of disabled people in the broadest sense.


Environmental Access

It cannot be emphasized strongly enough that inclusion in society and environmental health go far beyond the design of domestic dwellings. Imrie stated in a study in 2004:

One of the most significant problems for disabled people relates to physical obstacles and barriers within the built environment. Many commercial and public buildings are inaccessible to wheelchair users, while few buildings provide appropriate design features to enable people with a range of sensory impairments to move around with confidence and ease. Accessible public transport is a rarity while most housing lacks basic adaptations or design features to facilitate independent living for disabled people … As some have argued this is tantamount to an infringement of disabled people's civil liberties.

The early pioneers of community living were well aware that accessible housing in isolation would not be sufficient. Personal assistance in the home may be required, and to become fully involved in the community, an accessible environment is essential in terms of transport, public buildings, and information. Appropriate attitudes and behavior, and flexible social structures that, for example, allow disabled people to participate in education and paid employment, are also essential. Imrie stated in a study in 1998:

Western cities are characterised by a design apartheid where building form and design are inscribed into the values of an 'able-bodied' society … This has led some commentators to regard the built environment as disablist, that is projecting 'able-bodied' values which legitimate oppressive and discriminatory practices against disabled people purely on the basis that they have physical and mental impairments.

This section of the article is based on interviews conducted with disabled people who have had significant experiences with housing. The purpose was not to provide a representative sample of service users but to gather some “real-world” experiences that, it is hoped, will illustrate some of the issues relevant to environmental health from the disabled person’s viewpoint. The interviews also illustrate, with specific examples, many of the issues discussed in the preceding text and found in disability studies literature. What, then, might be important to disabled people in relation to where they live? Perhaps not surprisingly, many issues were similar and, in a general sense, would be significant for many nondisabled people too. Location, for example, can matter for many different reasons. David told the authors: Location, where it is, is very important to me. I like to have a degree of accessibility in and around where I live, so the site needs to be accessible, and then I would say that about quarter of a mile around would be nice to be accessible, though it’s not top of my list because I have the car. So obviously proximate parking, or building adjacent covered parking, because to function, especially in the winter months, I need it to be right bang on my doorstep.

As a wheelchair user, access within the house starts for David with plenty of room to move around: Space, because I use the chair. Lots of space. I find that most adapted premises are short on space, unless they are purpose built for a wheelchair user they are inadequate.

Access to the whole of the property, including the garden, is also important and, as David indicates, can have an impact on family life: One of the things that often falls short in terms of access for me is the garden. If it’s there I can’t get to it. And I certainly think the way the grant schemes are structured at the moment, that’s a shortfall, particularly for things like child care – if you can’t get to your garden you can’t supervise or care for your children adequately.

Dawn lives with her partner and seven children from previous marriages, two of whom are disabled. They have recently moved, and consideration of the whole family was crucial for Dawn. Access is clearly very significant, particularly as it provides a context for relationships within the family:

It's important that the whole family have access around the entire house. That's the biggest priority. We have just moved house and the thing that was imperative was that everyone could get access to every room, that includes the laundry room, the cupboards. Obviously for us having M with mobility difficulties, and balance, it means that there has got to be circulation space … It's single storey. The reason for that is simply that M has access without having to shout for anybody. It is terribly intrusive to have to ask somebody to escort you if you feel you would like to go on your own and M does like to wander round on his own.


A major theme in the interviews was that a house is not simply a place to live, but a “home” with all the psychological and social connotations this holds. Housing issues for disabled people, as for nondisabled people, are certainly more than the building or place. Home has a variety of meanings for the person who lives in it and is not entirely a separate entity from that person: Home can play a part in making manifest a personal identity and affect the construction of social relations (hence the current popularity of home decorating programs). Having a home, and having the choice to stay within it, is of the utmost importance to most people. Norman, talking of older people, stated in a study in 1998: It is not sufficiently realised that the loss of one’s home – however good the reasons for losing it – can be experienced as a form of bereavement and can produce the same grief reaction as the loss of a close relative.

Even if the home is not entirely suitable physically, many people still prefer to stay where they are because of the memories and associations that surround it. Barbara is a woman losing her sight in old age. The notion of home, together with associated relationships, is clearly apparent in the following exchange:

Sally: Have you made any changes to your house since you had problems with your sight?

Barbara: No I haven't made any changes at all because I've lived here so long I know the number of stairs to go up and down. The two steps we have in the passage don't bother me because I know where they are. It might be a different problem if I was moving to a new place to live.

Sally: Would it put you off moving?

Barbara: I wouldn't want to move from here because I like the house and we've got it nice and warm and it's convenient – not too far from the shops. Crossing the road is a difficulty but I'm fortunate that I have a husband who always accompanies me but it must be very, very difficult for somebody on their own.

The notion of home is linked with many personal and social understandings, including comfort, security, love, caring, quality of life, and lifestyle, although it can, of course, be associated with the lack of these qualities. Central to this is choice and control, or the lack of choice and control. Home is the place that one makes one's own, the expression of self, starting with the choice of where one lives. Choice is, of course, always limited, and disability can play a major part in such limitations. Choice is important for Arlene, a woman with multiple impairments and a powerchair user, but her experience illustrates what it is like to have no choice:

I had no choice in the area where I had to live when I became disabled. It was a choice of living here or living in hospital. This house was found for me and adapted while I spent a year in hospital. I hadn't been in this area before and I didn't know anybody. So not only was I facing the fact that I was going to be disabled, and that was a new experience, I had no social network round here. I came into sort of an alien environment, they didn't want a disabled resident to live here and I wasn't told this. I have approached councillors and said 'Get me out of here' on numerous occasions, and they've said 'Well we can put you in a pensioner's bungalow but it is too small for your needs' … The housing situation is also that I've got such an array of adaptations now that to re-house me would cost them a lot of money and they are not prepared to do that.

The limitation of choice experienced by Arlene is not restricted to bricks and mortar: They actually got a petition up to stop a disabled person moving in here. So I came in, said hello to my neighbours, and was told we don’t want you . like you shouldn’t be in the building, you should be in an institution. They put me in a situation where I faced harassment. They hadn’t explored the environment I was going to be living in. They also caused problems because they asked able-bodied people where my ramp should be situated, rather than asking me, and even to this day, it’s 13 years since I moved here, my ramp is at the back of the building and the able-bodied people come in at the front. Up ‘til about two years ago I had no lighting coming in at the rear entrance because it’s down past garages. They didn’t have a street light there so it was jet black .. The tenants, even after 13 years, have caused problems .. I had to seek advice from a solicitor. I got a warning letter about my conduct as a tenant from the council saying that I was slamming doors within the flat and it’s an open plan flat – there’s only one sliding door and the other one’s automatic. So they hadn’t checked anything out. They complained about my district nurses coming in the morning, they come in at 8.30, and they complained about the noise the nurses made coming into the building. So the council, instead of telling them to get lost, carpeted the outside of the flat – it’s the only block of flats here to have any carpeting – and there was also in the letter of complaint about the fact that my wheelchair left trailing marks, as I came in the back door, on the carpet.

An understanding that the whole environment needs to be accessible has led to the concept of universal design, which has at its core the principle of designing for all people, disabled, nondisabled, young, and old, and in such a way that environments are flexible and adjustable. The original principles of universal design were developed throughout the 1990s at the Centre for Universal Design at North Carolina State University. Storey documented in 2001 the following six principles, each of which is accompanied by further guidelines:

• Make it easy to understand.
• Make it easy to operate.
• Communicate with the user.
• Design for user error.
• Accommodate a range of methods of use.
• Allow space for access.

O’Brien stated in a study in 2006: Universal design also promotes holistic and inclusive thinking about the totality of human need in society.

The dissatisfaction with the notion of "special needs" housing has led to the concept of Lifetime Homes, promoted by the Joseph Rowntree Foundation. Lifetime Homes are built with many standard features, such as a downstairs toilet and sufficient turning space for a wheelchair, and are built to be easily adjusted as circumstances change, allowing, for example, the fitting of a stair lift. Stewart et al. stated in a study in 1999:

Lifetime houses can be thought of as universalist in that anyone could occupy them and in consequence they neither stigmatise nor create dependency, whilst the decision to adapt fully can still be related to individual needs and circumstances.

Conclusion

A truly holistic approach to the development of environmental health takes account of independent living, from the viewpoint of disabled people; accessible and effective health and social care provision; social inclusion; equality of opportunity; and inclusive environmental design, housing, transport, and public amenities. There are clear examples of the development of policy and practice frameworks, including the Disability Discrimination Act, the Prime Minister's Strategy Unit (2005) report Improving the Life Chances of Disabled People, the Department of Health's 2005 booklet Independence, Well-Being and Choice, and, as seen in the preceding text, universal design. Yet inequalities and discrimination continue to limit the day-to-day lives of disabled people. They are embedded in the power relations between health and social care professionals and disabled service users, and between policy makers and disabled people. In relation to the concept of universal design, for instance, it can be argued that a major flaw is that it ignores the political and social dimensions of inclusion. As Imrie pointed out in a study in 2004:

Its principles are apolitical in that there is little explicit recognition of the relationship between the social, technical, political and economic processes underpinning building and design.

If disabled people are to be truly included in the community, then a profound transformation of society, in all its aspects, is required. It is clear that for policy makers, thorough consultation with disabled people is essential but, until there is sufficient will to make society inclusive to all disabled people, it is naive to imagine that development of environmental health will be anything more than a nominal gesture. Similarly, if the impact of service providers is to move beyond tokenism, they need to heighten their awareness of disability from the viewpoint of disabled people, work in partnership with disabled people in removing disabling barriers, recognize the expertise of disabled people, and use their professional power to facilitate disabled people in their struggle for full participative citizenship.

See also: Environmental Justice: An Overview; Environmental Noise; Medical Anthropology; Overview of How Ecosystem Changes Can Affect Human Health.

Further Reading

Asthana, S., Halliday, J., 2006. What Works in Tackling Health Inequalities? Polity Press, Bristol.
Berkman, L.F., Melchior, M., 2006. The shape of things to come: How social policy impacts social integration and family structure to produce population health. In: Siegrist, J., Marmot, M. (Eds.), Social Inequalities in Health: New Evidence and Policy Implications. Oxford University Press, Oxford.
Dahl, E., Fritzell, J., Lahelma, E., et al., 2006. Welfare state regimes and health inequalities. In: Siegrist, J., Marmot, M. (Eds.), Social Inequalities in Health: New Evidence and Policy Implications. Oxford University Press, Oxford.
Dahlgren, G., Whitehead, M., 1995. Policies and strategies to promote social equity in health. In: Benzeval, M., Judge, K., Whitehead, M. (Eds.), Tackling Inequalities in Health: An Agenda for Action. The King's Fund, London.
Department of Health, 2005. Independence, Wellbeing and Choice. The Stationery Office, London.
Eberstadt, N., Satel, S., 2004. Health and the Income Inequality Hypothesis. AEI Press, Washington.


Ewles, L., Simnett, I., 2003. Promoting Health: A Practical Guide, 5th edn. Bailliere Tindall, London.
Finkelstein, V., 1998. Emancipating disability studies. In: Shakespeare, T. (Ed.), The Disability Reader: Social Science Perspectives. Cassell, London.
French, S., Swain, J., 2006. Housing: The user's perspective. In: Clutton, S., Grisbrooke, J., Pengelly, S. (Eds.), Occupational Therapy in Housing: Building on Firm Foundations. Whurr Publishers, London.
Goble, C., 2008. Institutional abuse. In: Swain, J., French, S. (Eds.), Disability on Equal Terms. Sage, London.
Green, J., 2001. Children and accidents. In: Davey, B. (Ed.), Birth to Old Age: Health in Transition. Open University Press, Buckingham.
Imrie, R., 1998. Oppression, disability and access in the built environment. In: Shakespeare, T. (Ed.), The Disability Reader: Social Science Perspectives. Cassell, London.
Imrie, R., 2004. From universal to inclusive design in the built environment. In: Swain, J., French, S., Barnes, C., Thomas, C. (Eds.), Disabling Barriers–Enabling Environments, 3rd edn. Sage, London.
Norman, A., 1998. Losing your home. In: Allott, M., Robb, M. (Eds.), Understanding Health and Social Care: A Reader. Sage, London.
O'Brien, P., 2006. Access standards: Evolution of inclusive housing. In: Clutton, S., Grisbrooke, J., Pengelly, S. (Eds.), Occupational Therapy in Housing: Building on Firm Foundations. Whurr Publishers, London.
Oliver, M., 1993. Re-defining disability: A challenge to research. In: Swain, J., Finkelstein, V., French, S., Oliver, M. (Eds.), Disabling Barriers–Enabling Environments. Sage, London.
Power, C., Kuh, D., 2006. Life course development of unequal health. In: Siegrist, J., Marmot, M. (Eds.), Social Inequalities in Health: New Evidence and Policy Implications. Oxford University Press, Oxford.
Prime Minister's Strategy Unit, 2005. Improving the Life Chances of Disabled People. Strategy Unit, London.
Siegrist, J., Marmot, M., 2006. Social inequalities in health: Basic facts. In: Siegrist, J., Marmot, M. (Eds.), Social Inequalities in Health: New Evidence and Policy Implications. Oxford University Press, Oxford.
Siegrist, J., Theorell, T., 2006. Socio-economic position and health: The role of work and employment. In: Siegrist, J., Marmot, M. (Eds.), Social Inequalities in Health: New Evidence and Policy Implications. Oxford University Press, Oxford.
Stewart, J., Harris, J., Sapey, B., 1999. Disability and dependency: Origins and futures of 'special needs' housing for disabled people. Disability & Society 14 (1), 5–20.
Storey, M.F., 2001. Principles of universal design. In: Preiser, W., Ostroff, E. (Eds.), Universal Design Handbook. McGraw-Hill, New York.
Tyler, C., White-Scott, S., Ekvall, S., Abulafia, L., 2008. Environmental health and developmental disabilities: A life span approach. Family & Community Health 31 (4), 287–304.
Union of the Physically Impaired Against Segregation, 1976. Fundamental Principles of Disability. Union of the Physically Impaired Against Segregation, London.

Disentangling Physical, Chemical, Nutritional and Social Environmental Influences on Asthma Disparities: The Promise of the Exposomeq
RJ Wright and RO Wright, Icahn School of Medicine at Mount Sinai, New York, NY, United States; and Institute for Exposomic Research, New York, NY, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

EC Elemental carbon
GIS Geographic information system
HUD Housing and Urban Development
ICAS Inner-City Asthma Study
IgE Immunoglobulin E
MTO Moving to Opportunity study
ND Neighborhood disadvantage
PAH Polycyclic aromatic hydrocarbon
PM2.5 Fine particulate matter
SES Socioeconomic status

Introduction

Worldwide trends in asthma prevalence and the associated morbidity have been on the rise in recent decades, although the increase has been far from uniform. In the United States (US), these trends have disproportionately impacted nonwhite children living in urban areas and, more broadly, children living in poverty. Particular racial and ethnic minority groups and persons of lower socioeconomic status (SES) experience greater asthma morbidity than their white, non-Hispanic, and more affluent counterparts. Despite advances in our understanding of asthma pathophysiology and more effective asthma treatments, hospitalizations and death rates have increased, again primarily in these high-risk populations. More recent evidence suggests that the epidemiology of asthma is still more complex. Urban residence has been associated with increased asthma risk regardless of race/ethnicity. Racial/ethnic disparities seem to exist independent of SES in some studies, while others find that racial/ethnic disparities exist only among the very poor. Low SES, ethnic minority group status, and residence in the inner-city urban environment are closely intertwined in the United States, making it particularly challenging to determine the relative importance of these demographic characteristics, particularly using existing research paradigms.

Physical Environmental Determinants

To date, attempts to explain these disparities have been nested in our current understanding of asthma risk, that is, risks related to physical environmental factors. These are briefly discussed in the following text, with more detailed reviews cited in the "Further Reading" section.

Indoor Environment: Allergens

The relationship between asthma and hypersensitivity to aeroallergens has been documented in both cross-sectional and prospective studies. Hypersensitivity to environmental allergens is present in many children and young adults with asthma, and exposure to allergens appears to be involved in the initial development of asthma as well as the exacerbation of existing disease. It has been demonstrated that allergen concentrations in urban homes vary widely and are associated with race/ethnicity and SES.

q

Change History: November 2018. RJ Wright (original author) and RO Wright (assisted with adding text on exposomics) are responsible for making updated edits. The title was updated to include exposome framework. The section titled "Need for an Exposomic Framework" is a new addition as well as Figure 1. Added references including Carraro S et al.; Andra SS et al.; Wood BL, et al.; Rubin LP; Khoury MJ et al.; Wright RO; Brunst KJ et al; Rosa MJ et al.; and Lee A et al. Other sections were minimally changed adding reference to the exposome. This is an update of R.J. Wright, M.J. Sternthal, Physical and Social Environmental Influences on Asthma and Asthma Disparities, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 511–515.

Encyclopedia of Environmental Health, 2nd edition, Volume 2

https://doi.org/10.1016/B978-0-12-409548-9.11743-9


In the northeastern United States, high levels of cockroach allergen in particular have been associated with lower SES, African–American race, and urban residence. Studies also demonstrate that rat and mouse allergens are commonly found in urban housing and suggest that increased asthma morbidity may be associated with rodent sensitization.

Air Pollution

The increase in respiratory allergic diseases in urban areas has also been linked to air pollution. Laboratory studies confirm the epidemiological evidence that inhalation of some pollutants adversely affects lung function in asthmatics. The most abundant outdoor air pollutants in urban areas with high levels of vehicle traffic are fine particulate matter (PM2.5), nitrogen dioxide (NO2), and ozone. Although NO2 does not exert consistent effects on lung function, ozone and PM2.5 impair lung function and lead to increased airway responsiveness and bronchial obstruction in predisposed subjects. In addition to acting as irritants, airborne pollutants modulate the allergenicity of antigens carried by airborne particles. Moreover, air pollutants such as diesel exhaust emissions are thought to modulate the immune response by increasing immunoglobulin E (IgE) synthesis, thus facilitating allergic sensitization in subjects and the subsequent development of asthma. Indoor air pollution is also linked to greater asthma morbidity. Although studies have shown little correlation between indoor and outdoor air pollutants, when increased exposure occurs in both environments, additive or synergistic effects can occur. The degree to which differential exposure to air pollution explains the social patterning of asthma is increasingly being explored. Evolving computational methods that allow scientists to consider multiple pollutants concurrently (e.g., mixtures) will inform disparities research even further, as illustrated in the sketch below.
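As a concrete illustration of what such multipollutant ("mixtures") analyses involve, the Python sketch below fits one regression containing several co-occurring, correlated pollutants, so that each coefficient estimates the effect of one pollutant with the others held fixed. All data and variable names here are synthetic and hypothetical; dedicated mixtures methods (e.g., weighted quantile sum regression or Bayesian kernel machine regression) extend this basic idea considerably.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic, correlated exposures (z-scored): traffic pollutants co-vary.
pm25 = rng.normal(size=n)
no2 = 0.6 * pm25 + 0.8 * rng.normal(size=n)
ozone = rng.normal(size=n)
X = np.column_stack([pm25, no2, ozone])

# Synthetic symptom score driven jointly by the whole mixture.
y = 0.5 * pm25 + 0.3 * no2 + 0.2 * ozone + rng.normal(size=n)

# Joint model: coefficients are per-pollutant effects adjusted for the
# co-pollutants, which separate single-pollutant models cannot provide.
model = LinearRegression().fit(X, y)
print(dict(zip(["PM2.5", "NO2", "O3"], model.coef_.round(2))))

Because correlated pollutants track shared sources (and shared social disadvantage), the joint model, unlike a series of single-pollutant models, avoids attributing one pollutant's effect to its correlated neighbor.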

Cigarette Smoke

The respiratory health effects of smoking have been well documented. Maternal cigarette smoking is associated with higher risk of asthma prevalence in early childhood, and with higher risk of asthma morbidity, wheeze, and respiratory infection in children of all ages. Tobacco smoke exposure in utero is associated with increased airway resistance/obstruction in infancy and childhood, but its influence on allergic, as opposed to irritant, airway inflammation remains uncertain. It has also been suggested that mite sensitization is more common among smoke-exposed children. Cigarette smoke is the greatest contributor to indoor air pollution in US samples.

Social Environmental Determinants

Although physical characteristics of neighborhood and housing environments such as air pollution, dampness, dust, and the presence of pests contribute to variations in asthma risk within populations, these factors alone do not fully account for the social distribution of childhood asthma. Rather, evidence suggests that the social patterning of asthma reflects differential exposure to pathogenic factors in both the physical and social environment. Although a number of theoretical models explaining how social conditions influence physical health outcomes such as asthma have been proposed, a psychosocial stress model may offer the greatest promise in this regard. Moreover, much of the association between SES and health disparities may be determined by increased exposure to acute and chronic stress, compounded by overburdened or absent social supports, psychological morbidity (i.e., anxiety and depression), and lack of control over one's life. Studies have linked stressors at both the individual and family level (e.g., caregiving stress) to the expression of asthma.

More recently, asthma researchers have adopted broader ecological views on health that recognize that individual-level health risks and behaviors have multilevel determinants; that is, the degree of chronic stress may also be significantly influenced by the characteristics of the communities in which one lives. One specific type of chronic stress that has been investigated in relation to urban children's development is neighborhood disadvantage (ND), characterized by the presence of a number of community-level stressors including poverty, unemployment/underemployment, limited social capital or social cohesion, substandard housing, and high rates of exposure to crime and violence. In the United States, many urban communities exhibit high levels of ND, which can be characterized in a number of ways. For example, studies are beginning to explore the health effects of living in a violent environment, with a chronic pervasive atmosphere of fear and the perceived threat of violence. Ongoing work by Wright and colleagues has identified violence exposure as a prevalent concern among residents of Boston communities that, in turn, influences asthma morbidity, a relationship since examined across a number of US urban communities.

Community violence serves as one example of how distal social processes operate to impact the health of individuals living in a particular neighborhood. Social capital is strongly correlated with violent crime rates, which impact community resilience by undermining social cohesion. Thus, high rates of violence and crime within a community and society are not only chronic psychosocial stressors, but also indicators of compromised collective wellbeing and nonoptimal social relations, or social cohesion. Empirical evidence suggests that exposure to violence may contribute to the burden of asthma morbidity for the urban poor. The Moving to Opportunity (MTO) study sponsored by Housing and Urban Development (HUD) suggests that there may be an important link between asthma and violence. The MTO study randomized families from high-poverty areas (census tracts with more than 40% poverty) and public housing to receive vouchers to pay for rental housing from private landlords in census tracts with less than 10% poverty. It found that families with children with asthma who moved to apartments in better neighborhoods rated their children's asthma as better, independent of other risk factors.
Qualitative fieldwork in the Boston cohort in the initial
phases of the MTO study using in-depth interviews with community residents indicated that stress around community violence and worry about safety were important to their health and were their biggest motivation for wanting to move. Although the initial hypothesis around the benefits of moving participants from high-poverty to low-poverty neighborhoods centered on quality housing and reduced exposure to indoor allergens, qualitative data collection redirected the focus of the quantitative survey to include the domains of violence, crime, safety, and health.

In a population-based study in Boston, lifetime exposure to violence was ascertained retrospectively through a parental-report interview questionnaire administered to 416 caregivers and their children, who were followed longitudinally for respiratory health outcomes, including asthma. Preliminary analyses suggest a link between high lifetime exposure to community violence and an increased risk of asthma, wheeze syndromes, and prescription bronchodilator use among these inner-city children. Wright and colleagues also demonstrated an association between higher levels of community violence and increased caretaker-reported asthma symptoms in a study of 851 children aged 5–12 years and their caretakers enrolled in the Inner-City Asthma Study (ICAS). The caretakers reported community violence prevalence, other negative life events, perceived stress, unwanted thoughts and memories (rumination), caretaker behaviors (e.g., keeping children indoors, smoking, and medication adherence), as well as a number of sociodemographic factors (e.g., income, employment, race/ethnicity, and housing quality/dilapidation). Increased frequency of exposure to violence in the communities predicted a greater number of asthma symptom days among the children in a graded fashion, even after control for socioeconomic factors and housing dilapidation. This association was partly attenuated by controlling for perceived stress and behavioral differences, although the overall trend remained significant.

Caretakers reporting high-level violence in these analyses were also more likely to ruminate. Ongoing rumination may impact problem-solving skills, erode perceived control, and decrease motivation to manage ongoing challenges, including management of a chronic illness such as asthma. Caregivers who use ruminative coping strategies may experience greater stress and psychological comorbidity, especially depression, which, as noted previously, may more directly influence the child. The caretaker's psychological adjustment may affect the child's asthma morbidity by contributing to a less effective parenting style, inappropriate utilization of health care services, and poor medication adherence. Still other research suggests that we need to broaden our view to consider stress experienced over the mother's lifecourse, not just the stress she experiences more immediately around or during pregnancy, when characterizing links between prenatal stress and asthma risk in her child.

Need to Consider the Social and Physical Environment Together

As noted in recent consensus statements by the Institute of Medicine and by the National Academy of Sciences together with the National Institute of Environmental Health Sciences (NIEHS), advancing our understanding of disparities in environmental health requires that future studies attend to physical environmental hazards and social conditions together. When physical factors are considered only at the individual level, it has been difficult to fully explain how exposure to them contributes to asthma disparities. The role of neighborhood social context in shaping individual exposure and vulnerability to a host of harmful health effects has received increasing attention within both health and sociological fields. Health risks and resources appear to be spatially and socially distributed across neighborhoods, with asthma-inducing pathogenic risk factors concentrated in poor, segregated neighborhoods. Given that ethnic minorities are far more likely to reside in these neighborhoods, contextual factors specific to such neighborhoods may therefore explain some portion of the disparities in asthma. Conceptual advances and developing statistical methodologies, often adapted from other areas of scholarship (e.g., social sciences, economics, and geography) as reviewed by Wright and Subramanian, are beginning to be applied in asthma epidemiology to facilitate such a multilevel framework. Understanding the more distal social influences that determine the distribution of relevant asthma toxicants may better inform future prevention and intervention strategies.

For example, smoking behaviors are also socially patterned. Smoking can be viewed as a strategy to cope with negative affect or stress. Indeed, smoking has been associated with a variety of stressors and types of disadvantage, including unemployment, minority group status, family disorder, and violence, as well as depression, schizophrenia, and other psychological problems. Stress in particular is associated with earlier age of onset of smoking, smokers' reported desire for a cigarette, and being unsuccessful at quitting. These relationships among stress and smoking may be considered from a neighborhood perspective as well. Studies have demonstrated effects of neighborhood social factors on smoking behavior. It has been hypothesized that neighborhood SES may be related to increased social tolerance and norms supporting behavioral risk factors such as smoking. Among adult African–Americans, smoking prevalence is higher than among whites. Evidence from the 1987 General Social Survey suggests that stress may be one factor promoting increased prevalence of smoking in African–American communities.

Although the overall prevalence of cigarette smoking in the US declined from 40% in 1965 to 16.8% in 2014, the decline has been less pronounced among those with lower education. Cigarette smoking varies by ethnicity and national origin, and cigarette companies have targeted minorities in an attempt to increase smoking where rates have traditionally been low. More worrisome is the fact that, after several years of substantial decline among adolescents in four ethnic minority groups, in the 1990s smoking prevalence increased among African–American and Hispanic youth. Successful smoking cessation is more difficult among pregnant women and mothers dealing with the circumstances surrounding socioeconomic disadvantage.
Moreover, environmental tobacco smoke exposure of the children at greatest risk of adverse asthma outcomes (e.g., children of low-income families) may come from caregivers other than the mother or parents (e.g., grandparents and day care providers), and successful interventions must take into account all early childhood sources of environmental tobacco smoke.


A similar case can be made for anthropometric factors (low birth weight, prematurity, and obesity) that contribute to asthma. Underweight and obesity, which paradoxically may have similar origins in fetal life, may both be risk factors for childhood wheezing illnesses or asthma. Prematurity and low birth weight (adjusted for gestational age) and later obesity can be influenced by maternal smoking, maternal–fetal nutrition, infection, and maternal psychological as well as physical stress. These risk factors for impaired fetal growth may, in turn, be more prevalent in socioeconomically disadvantaged groups. Exigencies of urban living and socioeconomic disadvantage may also contribute to obesity. Although obesity may be a primary contributor to asthma risk, it may be more informative to consider the distal social circumstances contributing to obesity that may influence risk. The decision to keep children indoors more because of fear of community violence (decreasing activity and increasing indoor allergen exposures) and the lack of access to playgrounds or healthy foods may in part explain the association between obesity and asthma, for example.

Geographic variation in the distribution of environmental pollution has increasingly been a focus of research over the past 15 years. There is evidence that some diesel exhaust components can vary substantially across an urban area as a function of traffic volume and type and road and housing characteristics. For example, in a pilot study in Harlem, New York, elemental carbon (EC) levels ranged by a factor of 4 across sites in close proximity to one another, while levels of PM2.5 were quite similar. EC levels (measured as black smoke) near major roads in the Netherlands were 2.6 times greater than levels at background sites, versus a factor of 1.3 for PM2.5. Similarly, polycyclic aromatic hydrocarbon (PAH) concentrations may differ by a factor of 3 between measurements on a street and those in a park in some urban sites, with traffic contributing an estimated 80% of ambient concentrations. Ultrafine particle concentrations have also been strongly correlated with traffic patterns. Other research has found that the percentage of children living in block groups with high traffic density increases with decreasing median family income for all racial and ethnic groups except whites, and that children of color are about three times more likely to live in high-traffic areas compared to white children. Increased exposure to traffic-related air pollution, including PM2.5 and nitrates, contributes to increased asthma risk and lower levels of lung function in preschool- and early school-aged children.

Distal factors that determine where one lives may result in differential exposure to both physical and social toxins based on race/ethnicity and economic status. For example, Williams, Sternthal, and Wright reviewed evidence pointing to residential segregation, the physical separation of races by enforced residence in restricted areas, as a central determinant of racial and ethnic disparities in socioeconomic circumstances at the neighborhood level and thus a potential fundamental cause of racial disparities in asthma in the United States. Research reveals that segregation may affect health in general, and asthma in particular, in multiple ways. It is a key determinant of racial differences in SES, producing a concentration of poverty and social isolation, and creating pathogenic conditions in social and physical residential environments.
Segregation may also lead to poor residential conditions such as crowding, which may predispose to viral illnesses (a known asthma-exacerbating factor), and deteriorating housing stock, which could increase exposure to indoor aeroallergens. Research demonstrates that segregated inner-city areas have higher rates of air pollution. Segregation is also associated with increased social disorder and violence.

Existing evidence that psychological stress and other physical environmental toxins operate through overlapping mechanisms suggests that interactive or synergistic effects may be important to understand. For example, air pollution exposures have been linked to disruption of neuroimmune responses and autonomic reactivity (particularly increased parasympathetic tone) even in young healthy subjects. Psychological stress may have similar influences on these systems. Moreover, air pollutants may generate reactive oxidative species to influence health through oxidative stress pathways similar to psychological stressors. It is thus plausible that the biologically compromised system(s) related to ongoing stress experiences may be more vulnerable to subsequent environmental toxins, and vice versa.

Evidence for the combined effects of physical and social environmental factors on asthma risk continues to grow. For example, Wright and colleagues examined the joint effects of lifetime community violence exposure (conceptualized as a chronic social stressor) and traffic-related air pollution on childhood asthma risk in an urban Boston cohort. Geographic information system (GIS)-based models leveraging satellite data to inform pollution levels were developed to retrospectively estimate residential exposures to traffic-related NO2. The hypothesis that chronic stress may enhance the individual's susceptibility to air pollution in childhood asthma etiology has been explored; this can be justified given the potential spatial covariance across the exposures and because stress and pollutants may operate through common physiological pathways (e.g., oxidative stress and enhanced inflammation). After adjusting for a number of individual-level confounders including gender, SES, race/ethnicity, tobacco smoke exposure, and history of lower respiratory tract illnesses, there was an elevated risk of asthma associated with a one standard deviation increase in NO2 exposure specifically among children who also were above the median for community violence exposure.

Similar hypotheses could be explored related to the psychological impact of other environmental conditions that are distributed differently based on one's race/ethnicity or SES. For example, housing conditions may result in differential exposure to physical environmental risk factors (i.e., aeroallergens) as well as having an emotional dimension resulting in increased psychological stress. Given that psychological stress may disrupt key physiological systems leading to altered immune functioning, stress may potentiate the effect of household allergens such that children who are exposed to both may be more likely to become sensitized to allergens and be at greater risk for asthma. This may help explain observations that do not seem to be simply due to differential exposure to specific allergens. For example, in a national sample of US children, ethnic minority children (particularly African–Americans) were significantly more likely than white children to be sensitized to allergens relevant to asthma.
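The kind of stress-by-pollution analysis just described can be sketched in a few lines. The sketch below is not the published model; the input file and variable names are hypothetical, and it simply shows a logistic regression with a product (effect-modification) term of the sort such studies estimate.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort table: one row per child, with a binary asthma
# indicator, standardized NO2 exposure (no2_z), an above-median community
# violence flag (violence_hi), and individual-level confounders.
df = pd.read_csv("urban_cohort.csv")

# The no2_z:violence_hi coefficient captures whether the NO2-asthma
# association differs between high- and low-violence-exposure children.
model = smf.logit(
    "asthma ~ no2_z * violence_hi + male + ses + smoke_exposure + lri_history",
    data=df,
).fit(disp=0)
print(model.summary())

A positive, statistically significant interaction coefficient would be consistent with the finding described above, in which the NO2-asthma association was concentrated among children above the median for violence exposure.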
Moreover, it seems from recent data that children in lower-SES communities are more likely to be responsive to multiple allergens and that they become sensitized even when exposed to relatively low levels of allergens in their homes. Although impoverished households are more likely to be reservoirs for allergens in higher concentrations than more affluent settings, this does not explain the observed differences completely. Although it is possible that certain allergens, such as cockroach, mouse, or rat, are more potent sources of allergic or nonallergic airway inflammation, it also may be the case that cooccurring environmental factors that are likewise disproportionately distributed in lower-income and segregated communities (e.g., pollutants, toxicants, and psychological stress) increase vulnerability to the effects of these exposures in sensitized individuals.

Future studies need to examine the links among ND, minority group status, low levels of social capital, violence exposure, and other social influences (and the heightened stress that they may elicit) as risk factors for childhood asthma analogous to physical environmental exposures (e.g., allergens, tobacco smoke, and air pollution). Such studies are likely to further our understanding of the increased asthma burden on populations of children living in poverty in urban areas or other disadvantaged communities.
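Such studies typically pair child-level outcomes with neighborhood-level stressors in a multilevel model. A minimal sketch of that approach follows; the dataset and variable names are hypothetical, and a real analysis would involve far more careful specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per child, each assigned to a neighborhood.
df = pd.read_csv("asthma_neighborhoods.csv")

# Linear mixed model: symptom days regressed on family-level SES and a
# neighborhood-level disadvantage score, with a random intercept per
# neighborhood to absorb within-neighborhood correlation.
model = smf.mixedlm(
    "symptom_days ~ family_ses + nbhd_disadvantage",
    data=df,
    groups=df["neighborhood"],
).fit()
print(model.summary())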

Need for an Exposomic Framework

Our understanding of the environmental influences on respiratory disease programming is growing increasingly complex, with a multitude of environmental and microbial exposures (e.g., ambient pollutants, smoking, psychological stress, diet, indoor/outdoor allergens, viral infections, chemical toxins) playing a role. Moreover, the underlying pathogenesis of multi-factorial diseases such as asthma, with variable onset, severity, and natural history, reflects development-specific exposures and individual responses to these exposures influenced by underlying genetic predisposition. To date, the field of environment and child respiratory programming, particularly starting in utero, has largely focused on single exposure–health effect relationships, with a few studies considering two-way interactions between environmental factors. Moreover, while the increasing availability of high-throughput technologies enabling profiling of the genome, transcriptome, epigenome, and microbiome on a system-wide (omics) scale has revealed genetic factors and networks that advance our understanding to some extent, it is clearly recognized that disease causation reflects interactions between an individual's genetic susceptibility and his/her environment. Unlike the genome, which is static, relevant exposures, as well as our responses to them, change over time. For example, the respiratory system and related regulatory phenomena (e.g., immune function, autonomic nervous system, neuroendocrine systems) develop sequentially starting in utero, with specific processes occurring in a timed cascade. Thus, toxin effects depend both on dose and on timing of exposure. Programming effects result from toxin-induced shifts in a host of molecular, cellular, and physiological states and their interacting systems. Few, if any, exposures (social, physical, or chemical) impact a single system. Moreover, pregnant women and the developing child are not exposed to a single chemical, nutritional, or social factor, but to complex mixtures. Social or nonchemical stressors also covary and interact with chemical stressors.

This complexity has fostered the concept of the exposome, a framework of disease programming that considers multiple external exposures (both detrimental and beneficial) as well as consequences of exposures, conceptualized as the internal environment, indexed via physiological response biomarkers considered on comparable omics scales accounting for exposure timing. The scientific community increasingly recognizes the need for traditional targeted environmental biomonitoring coupled with untargeted discovery of unknown exposures as critical tools in understanding chemical factors in exposomic research. A broader public health exposome framework that also incorporates domains including the natural, built, social, and policy environments will be needed to achieve meaningful understanding of asthma disparities. In order to demonstrate the exposome framework in this context, we focus on the pregnancy exposome in programming the onset of asthma and other respiratory outcomes in the developing fetus/child (Fig. 1). The exposome is ideally defined as the totality of environmental exposure from conception to death, complementing the genome. The pregnancy period is a key starting point to describe the exposome, due to heightened sensitivity and potential lifetime impact on respiratory health and disease in the developing fetus.
New and emerging technologies make it increasingly possible to apply the exposome concept in population-based studies, with the practical understanding that even a partial characterization will bring major advances to understanding disease etiology and variability. Many of the tools to measure environment on an omics scale already exist, and recent advances in analytical chemistry, GIS-based modeling and informatics, and the same scientific and cultural developments that made smart phones ubiquitous now make the goal of estimating the exposome possible. The exposome includes an external domain, measured by methods including geospatial modeling, questionnaires, and targeted and untargeted biomonitoring of external exposures, while the internal domain is commonly assessed through molecular omics platforms, as seen in Fig. 1. A major breakthrough in being able to recreate prenatal exposures on an omics scale is the development of a novel tooth biomarker by Arora and colleagues (see Andra, Austin, Wright, and Arora in the Further Reading). The internal domain, in part, reflects the biological response to the external domain. Moreover, new statistical frameworks are required to integrate and assess exposome–health effects; these, too, are emerging. A handful of studies have started to move towards an exposome approach in assessing the effects of multiple exposures during pregnancy on child development, primarily taking a targeted approach. Future research will continue to implement new and emerging tools and methods (traditional environmental/response biomonitoring, omics-based approaches for untargeted discovery of environmental exposures, remote sensing and GIS-based spatial methods, personal exposure devices, mobile apps, statistical tools for combined exposures) to characterize early-life exposure, beginning in pregnancy, to a wide range of chemical (using targeted and untargeted approaches), nonchemical (e.g., social context and social stressors), nutritional, and physical external environmental factors as well as the internal environment, thus developing an exposome approach.
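One common entry point to exposome-scale analysis is an exposome-wide association study (ExWAS), in which each exposure is screened against the outcome one at a time and the results are corrected for multiple testing. The sketch below assumes a hypothetical cohort file in which exposure columns share an "exp_" prefix; it is illustrative only, not a method from the studies cited here.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical cohort: one row per child; exposures might span metals,
# phthalates, air pollutants, stress scores, etc.
df = pd.read_csv("pregnancy_exposome.csv")
exposures = [c for c in df.columns if c.startswith("exp_")]

rows = []
for exp in exposures:
    # One logistic model per exposure, adjusted for core covariates.
    fit = smf.logit(f"wheeze ~ {exp} + maternal_age + child_sex", data=df).fit(disp=0)
    rows.append({"exposure": exp, "beta": fit.params[exp], "p": fit.pvalues[exp]})

results = pd.DataFrame(rows)
# Benjamini-Hochberg false discovery rate correction across all tests.
results["q"] = multipletests(results["p"], method="fdr_bh")[1]
print(results.sort_values("q").head(10))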

Fig. 1 Integrated approach to advancing exposomic research on asthma disparities.

Summary

Evidence suggests that the social patterning of asthma reflects differential exposure to pathogenic factors in both the physical and social environment. The social environment may contribute to asthma risk through more distal social factors (e.g., segregation) that determine differential exposure to relevant asthma pathogens, and more proximately by contributing to the experience of psychological stress that is increasingly linked to asthma expression. One also needs to better understand how the physical and psychological demands of living in a relatively deprived environment may potentiate an individual's susceptibility to cumulative exposures across these domains. The likelihood of multiple mechanistic pathways with complex interdependencies must be considered when examining the integrative influence of social and physical environmental toxins on asthma expression. Because these factors tend to cluster in the most socially disadvantaged, this line of research may better inform our understanding of the etiology of growing health disparities. Design of future epidemiological studies and effective intervention programs will need to address environmental toxicants and social stress jointly to impact public health most effectively. The exposome framework promises to better disentangle these complexities.

See also: Air Pollution and Development of Children’s Pulmonary Function; Asthma: Environmental and Occupational Risk Factors; Automobile Exhaust: Detrimental Effects on Pulmonary and Extrapulmonary Tissues and Offspring.

Further Reading

Andra, S.S., Austin, C., Wright, R.O., Arora, M., 2015. Reconstructing pre-natal and early childhood exposure to multi-class organic chemicals using teeth: Towards a retrospective temporal exposome. Environment International 83, 137–145.
Brunst, K.J., Rosa, M.J., Jara, C., Lipton, L.R., Lee, A., Coull, B.A., Wright, R.J., 2017. Impact of maternal lifetime interpersonal trauma on children's asthma: Mediation through maternal active asthma during pregnancy. Psychosomatic Medicine 79 (1), 91–100.
Carraro, S., Scheltema, N., Bont, L., Baraldi, E., 2014. Early-life origins of chronic respiratory diseases: Understanding and promoting healthy ageing. European Respiratory Journal 44, 1682–1696.
Gold, D.R., Wright, R.J., 2005. Population disparities in asthma. Annual Review of Public Health 26, 89–113.
Khoury, M.J., Iademarco, M.F., Riley, W.T., 2016. Precision public health for the era of precision medicine. American Journal of Preventive Medicine 50 (3), 398–401.
Lee, A., Leon Hsu, H.H., Mathilda Chiu, Y.H., Bose, S., Rosa, M.J., Kloog, I., Wilson, A., Schwartz, J., Cohen, S., Coull, B.A., Wright, R.O., Wright, R.J., 2018. Prenatal fine particulate exposure and early childhood asthma: Effect of maternal stress and fetal sex. Journal of Allergy and Clinical Immunology 141 (5), 1880–1886.
Rosa, M.J., Lee, A.G., Wright, R.J., 2018. Evidence establishing a link between prenatal and early life stress and asthma development. Current Opinion in Allergy and Clinical Immunology 18 (2), 148–158.
Rubin, L.P., 2016. Maternal and pediatric health and disease: Integrating biopsychosocial models and epigenetics. Pediatric Research 79, 127–135.
Williams, D.R., Sternthal, M., Wright, R.J., 2009. Social determinants: Taking the social context of asthma seriously. Pediatrics 123, S174–S184.
Wood, B.L., Miller, B.D., Lehman, H.K., 2015. Review of family relational stress and pediatric asthma: The value of biopsychosocial systemic models. Family Process 54, 376–389.
Wright, R.J., 2006. Health effects of socially toxic neighborhoods: The violence and urban asthma paradigm. Clinics in Chest Medicine 27 (3), 413–421.
Wright, R.J., Fisher, E., 2003. Putting asthma into context: Community influences on risk, behavior, and intervention. In: Kawachi, I., Berkman, L. (Eds.), Neighborhoods and Health. Oxford University Press, New York, NY, pp. 233–262.
Wright, R.J., Subramanian, S.V., 2007. Advancing a multi-level framework for epidemiological research on asthma disparities. Chest 132, 757S–769S.
Wright, R.O., 2017. Environment, susceptibility windows, development, and child health. Current Opinion in Pediatrics 29, 211–217.

Drinking Water Treatment and Distribution Systems: Their Role in Reducing Risks and Protecting Public Health
Robert M Clark, Environmental Engineering and Public Health Consultant, Cincinnati, OH, United States
© 2019 Elsevier B.V. All rights reserved.

Introduction

In the United States and throughout the world, drinking water utilities face the challenge of providing potable water to their consumers despite the many factors that can result in the degradation of water quality before it is delivered. Frequently, raw water is derived from surface or ground water sources that may be subject to naturally occurring or accidental contamination (Gullick et al., 2003; ILSI, 1999). Treated water may also be transmitted through a network of corroded or deteriorating pipes.

The first municipal water utility in the United States was established in Boston in 1652 in order to provide domestic water and fire protection (Hanke, 1972). Many water supplies in the United States were subsequently constructed in cities, primarily for fire prevention, but most were eventually adapted to serve commercial and residential properties with water. By 1860, there were 136 water systems in the United States, and most of these systems supplied water from springs low in turbidity and relatively free from pollution (Baker, 1948). However, by the end of the 19th century waterborne disease had become recognized as a serious problem in industrialized river valleys, which led to the more routine use of water treatment prior to distribution. The successful application of chlorine as a disinfectant was first demonstrated in England in 1908, and Jersey City (NJ) initiated the use of chlorine for water disinfection in the United States. This approach subsequently spread to other locations, and soon the rates of common epidemics such as typhoid and cholera dropped dramatically. For example, effective water treatment resulted in a decline in the typhoid death rate in Pittsburgh, PA from 158 deaths per 100,000 in the 1880s to 5 per 100,000 in 1935 (Fujiwara et al., 1995). Similarly, both typhoid case and death rates for the City of Cincinnati declined more than tenfold during the period 1898–1928 due to the use of sand filtration, disinfection via chlorination, and the application of drinking water standards (Clark et al., 1985). Fig. 1 shows the decline in the typhoid death rate for the Cincinnati Water Works, and Fig. 2 shows a similar decline in the case rate during this period.

Such dramatic reductions in waterborne disease outbreaks were brought about by the application of drinking water standards and of "multiple barriers" of protection. The multiple-barrier concept includes the use of conventional treatment (e.g., sand filtration) in combination with disinfection to provide safe and aesthetically acceptable drinking water. Residual disinfectant levels serve to protect water quality within a distribution system prior to its delivery (Clark et al., 1991a,b,c). It is generally accepted that water treatment in the United States has been a major contributor to ensuring the nation's public health.

Fig. 1 Typhoid death rate for the Cincinnati Water Works (deaths per 100,000, 1895–1930). Filtration began November 1, 1907; full-time chlorination began in 1915.

Change History: March 2019. R Clark updated text, section headings, and references. This is an update of R. Clark, Drinking Water Distribution Systems: Their Role in Reducing Risks and Protecting Public Health, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 158–166.


Fig. 2 Typhoid case rate for the Cincinnati Water Works (cases per 100,000, 1895–1930). Filtration began November 1, 1907; full-time chlorination began in 1915.

Development of Legislation and Regulations

Since the late 1890s, concern over waterborne disease and uncontrolled water pollution has regularly translated into legislation at the state and federal level. The first federal water quality-related regulation was promulgated in 1912 under the Interstate Quarantine Act of 1893; it prohibited the then-common practice on interstate railroads of providing a common drinking water cup to be shared among passengers. Several sets of federal drinking water standards were issued prior to 1962, but they also applied only to interstate carriers (Clark, 1978; Grindler, 1967). By the 1960s, each of the states and trust territories had established its own drinking water regulations, although there were many inconsistencies among them. As a consequence, reported waterborne disease outbreaks declined from 45 per 100,000 in 1938–40 to 15 per 100,000 in 1966–70. However, the annual number of waterborne disease outbreaks ceased to fall around 1951 and may have increased slightly after that time, leading, in part, to the passage of the Safe Drinking Water Act (SDWA) of 1974 (Clark, 1978).

On December 16, 1974, the U.S. Congress passed the SDWA, which authorized the EPA to promulgate the first set of federal regulations that would "protect health to the extent feasible, using technology, treatment techniques, and other means, which the Administrator determines are generally available (taking costs into consideration)" (SDWA, 1974). As a result, a set of regulations was promulgated in 1975 which became effective June 24, 1977. These were known as the National Interim Primary Drinking Water Regulations (NIPDWR). The NIPDWR established enforceable Maximum Contaminant Levels (MCLs) for 10 inorganic contaminants, six organic contaminants, turbidity, coliform, radium-226, radium-228, gross alpha activity, and man-made radionuclides. The NIPDWR also established monitoring and analytical requirements for determining compliance.

EPA has promulgated many rules and regulations as a result of the SDWA that require drinking water utilities to meet specific guidelines and numeric standards for water quality. Some of the rules that specifically target water quality within the distribution system are the Lead and Copper Rule (LCR), the Surface Water Treatment Rule (SWTR), the Total Coliform Rule (TCR), and the Disinfectants/Disinfection By-Products Rule (D/DBPR). The LCR established monitoring requirements for lead and copper within tap water samples. The SWTR establishes the minimum required detectable disinfectant residual and the maximum allowed heterotrophic bacterial plate count within the distribution system. The TCR requires monitoring of the distribution system for total coliforms, fecal coliforms, and/or Escherichia coli. The D/DBPR addresses the maximum disinfectant residual and the concentration of disinfection byproducts, such as total trihalomethanes and haloacetic acids, in the distribution system (Panguluri et al., 2005).

A significant "new" regulation promulgated under the SDWA is the Revised Total Coliform Rule (RTCR). It was signed by the US Environmental Protection Agency (USEPA) administrator on December 20, 2012. The RTCR affects every water system in the United States, and systems were required to be in compliance by April 1, 2016 (Roberson, 2013). Development of the regulation took more than 10 years, and it is a highly significant change in monitoring for drinking water system contamination.
The RTCR represents a change from the 1989 TCR, which focused on public notification, to a requirement to conduct assessments that look for any potential problems that contribute to total coliform occurrence. If no obvious cause can be found, then no further action is required. If any sanitary defects are found, then corrective action must be taken. If a system incurs an E. coli maximum contaminant level violation, then it must correct any sanitary defects found. The requirement for public notification based only on the presence of total coliforms is eliminated. The RTCR also establishes a health goal (i.e., MCLG) and an MCL for E. coli, which is a more specific indicator of fecal contamination. The RTCR replaces the MCLG and MCL for total coliforms with a treatment technique that requires assessment and corrective action (Roberson, 2013).

In the United States, disinfection has become an essential part of the drinking water treatment train and is considered to be one of the major public health advances of the 20th century. Chlorine and chloramines are most often used because they are very effective disinfectants and residual concentrations can be maintained in the water distribution system. Some utilities (in the U.S. and Europe) use ozone and chlorine dioxide as oxidizing agents for primary disinfection prior to the addition of chlorine or chloramines for residual disinfection. The Netherlands, for example, identifies ozone as the primary disinfectant, along with common use of chlorine dioxide, but typically uses no chlorine or other disinfectant residual in the distribution system (Connell, 1998).

While disinfectants are effective in controlling many microorganisms, they can react with naturally occurring organic (and/or inorganic) matter (NOM) in the treated and/or distributed water to form potentially harmful disinfection byproducts (DBPs). To minimize the formation of DBPs, the USEPA has promulgated regulations that specify maximum residual disinfectant level goals (MRDLGs) for chlorine (4 mg per liter [mg L⁻¹] as chlorine), chloramines (4 mg L⁻¹ as chlorine), and chlorine dioxide (0.8 mg L⁻¹ as chlorine dioxide). In addition, MCLs for the DBPs total trihalomethanes (TTHMs) and haloacetic acids (HAA5) have been established as 0.080 and 0.060 mg L⁻¹, respectively. The TTHMs include chloroform, bromodichloromethane, dibromochloromethane, and bromoform. The HAA5 include monochloroacetic acid, dichloroacetic acid, trichloroacetic acid, monobromoacetic acid, and dibromoacetic acid. In order to meet these requirements, utilities may need to remove DBP precursor material from water prior to disinfection by applying appropriate treatment techniques, or modify their disinfection process (Panguluri et al., 2005).

An example of the evolution of federal drinking water regulation since the passage of the SDWA in 1974 is presented in Fig. 3, and Table 1 summarizes the SDWA rules and regulations governing distributed drinking water through 2005. Most of these regulations resulted from the passage of the Act in 1974 and the various Amendments promulgated up to and including 1996. Since 1996 the Act has been amended several times; these Amendments include P.L. 107-188 (2002), P.L. 111-380 (2011), P.L. 113-64 (2013), P.L. 114-45 (2015), P.L. 114-98 (2015), and P.L. 114-322 (2016) (Tiemann, 2017).

In addition to the rules and regulations promulgated under the SDWA, security has become an issue for the water utility industry (Clark et al., 2011). Security of water systems is not a new issue; the potential for natural, accidental, and purposeful contamination of water supplies has been the subject of many studies. For example, in May 1998, President Clinton issued Presidential Decision Directive (PDD) 63, which outlined a policy on critical infrastructure protection, including the nation's water supplies. However, it was not until after September 11, 2001, that the water industry focused on the vulnerability of the nation's water supplies to security threats. In recognition of these issues, President George W. Bush signed the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (Bioterrorism Act) into law in June 2002 (PL107-188). Under the requirements of the Bioterrorism Act, community water systems (CWSs) serving more than 3300 people are required to prepare vulnerability assessments and emergency response plans. CWSs are public water systems (PWSs) that supply water to the same population throughout the year.

Fig. 3 An example of the evolution of federal drinking water regulations (1974–2005).


Table 1    Selected rules and regulations dealing with distribution systems (not inclusive) (Panguluri et al., 2005; Roberson, 2013)

SDWA: Gives EPA the authority to establish national primary and secondary drinking water regulations (MCLs and MCLGs).
NIPDWR: Adopted at the passage of the SDWA; required that representative coliform samples be collected throughout the distribution system.
TTHM: Established a standard for TTHMs of 0.1 mg L⁻¹.
86SDWAA: Established the MCLG concept.
TCR: Regulates coliform bacteria, which are used as surrogate organisms to indicate whether treatment is effective and whether system contamination is occurring.
SWTR: Requires using chlorine or some other disinfectant.
LCR: Monitoring for compliance is based entirely on samples taken at the consumer's tap.
ICR: Provided data to support the interim and long-term enhanced SWTR and the Stage 2 DBP rule.
96SDWAA: Has many provisions dealing with distribution systems, including the role that surface water quality can play in influencing the quality of distributed water.
IESWTR: Provisions to enhance protection from pathogens, including Cryptosporidium, intended to prevent increases in microbial risk while large systems comply with the DBPR1.
DBPR1: Lowered the standard for TTHMs from 0.1 to 0.08 mg L⁻¹; applies to all community water supplies in the U.S. and requires monitoring and compliance at selected points in the distribution system.
LT1ESWTR: Provisions to enhance protection from pathogens, including Cryptosporidium, and to prevent increases in microbial risk for systems serving fewer than 10,000 people while they comply with the DBPR1.
RTCR: Revises the TCR to focus on conducting assessments that look for any potential problems contributing to total coliform occurrence.
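As a rough illustration of how the numeric DBP standards above are applied, the sketch below (with hypothetical monitoring values) averages four quarters of results at each sampling location and flags annual averages above the TTHM and HAA5 MCLs. Actual compliance determinations under the DBP rules involve additional detail; this is only a simplified screen.

# Simplified screen of quarterly DBP monitoring results (mg/L) against MCLs.
MCL = {"TTHM": 0.080, "HAA5": 0.060}

quarterly_results = {
    "Site A": {"TTHM": [0.065, 0.091, 0.102, 0.070], "HAA5": [0.041, 0.052, 0.066, 0.038]},
    "Site B": {"TTHM": [0.032, 0.048, 0.051, 0.037], "HAA5": [0.025, 0.031, 0.040, 0.022]},
}

for site, analytes in quarterly_results.items():
    for analyte, values in analytes.items():
        annual_avg = sum(values) / len(values)  # running annual average at this site
        status = "EXCEEDS MCL" if annual_avg > MCL[analyte] else "ok"
        print(f"{site} {analyte}: annual average {annual_avg:.3f} mg/L ({status})")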

Effectiveness of SDWA Regulations

Although waterborne disease outbreaks and other water quality issues continue to occur in the US, the passage of the SDWA has been very effective in protecting the public health of American drinking water consumers. The SDWA has resulted in the reduction or elimination of exposure to drinking water contaminants ranging from potentially carcinogenic disinfection byproducts to neurotoxic contaminants such as lead. Fig. 4 shows the number of waterborne outbreaks associated with drinking water in the US between 1971 and 2010. Before the passage of the SDWA, and even shortly after its passage, waterborne outbreaks continued to be reported; in fact, this growing number of outbreaks was a major part of the rationale for passage of the SDWA. In the 1980–83 period, as the various regulations were being considered and then promulgated, the number of reported outbreaks stabilized and then began to drop, as shown in Fig. 4. During that period there was much discussion between the various state drinking water regulatory agencies and the USEPA regarding actions needed to control waterborne disease. Of particular note are first the rise and then the decrease in parasitic disease outbreaks. The USEPA, in collaboration with the relevant state agencies and the drinking water utilities, implemented the surface water treatment rule (SWTR) and the total coliform rule (TCR), which were finally promulgated on June 29, 1989. However, even after the enactment of the Safe Drinking Water Act Amendments of 1996 (August 6), there has continued to be a low level of waterborne disease outbreaks. This illustrates the need for continued vigilance. Most recently there has been an increase in the number of outbreaks associated with Legionella spp. (Hilborn et al., 2013). Nevertheless, the SDWA provides an outstanding example of the successful collaboration of local authorities (drinking water utilities), state agencies, and the Federal government in protecting the health and welfare of the American public.

Water System Diversity in the US

Water utilities in the United States vary in size, ownership, and type of operation. The SDWA defines public water systems as consisting of community water supply (CWS) systems, transient non-community water supply (TNCWS) systems, and non-transient non-community water supply (NTNCWS) systems. CWS systems provide service to year-round residents and range in size from those that serve as few as 25 people to those that serve several million. TNCWS systems serve areas such as campgrounds or gas stations where people do not remain for long periods of time. A NTNCWS system serves primarily non-residential customers but must serve at least 25 of the same people for at least 6 months of the year (for example, schools, hospitals, and factories that have their own water supply).

It is estimated that there are 152,713 water systems in the United States that meet the federal definition of a public water system (U.S. EPA, 2011). Thirty-three percent (52,838) of these systems are categorized as CWS systems, 55% are categorized as TNCWS systems, and 12% (19,375) are NTNCWS systems. Overall, public water systems (PWSs) serve 299 million residential and commercial customers. Although the vast majority (82%) of PWSs serve fewer than 10,000 people, almost three quarters of all Americans get their water from community water supplies serving more than 10,000 people. Some water systems deliver water to other water supplies, rather than directly to customers. CWS systems are defined as "consecutive systems" if they receive their water from another community water supply through one or more interconnections (Fujiwara et al., 1995).


Fig. 4 Number of waterborne disease outbreaks associated with drinking water (N = 851) by year and etiology, United States, 1971–2010. Source: Hilborn, E.D., Wade, T.J., Hicks, L., Garrison, L., Adam, E., Mull, B., Yoder, J., Roberts, V., Gargano, J.W. (2013). Surveillance for waterborne disease outbreaks associated with drinking water and other non-recreational water, United States, 2009–2010. Morbidity and Mortality Weekly Report 35: 714–720.

Surface water is the primary source for 22% of the CWS systems, while 78% rely on ground water. Ninety-seven percent of the non-community water supply systems (both transient and non-transient) are served by groundwater. Many systems use multiple sources of supply, such as a combination of groundwater and surface water. In a grid/looped system, the mixing of water from different sources can have a detrimental influence on water quality, including taste and odor (Clark et al., 1988, 1991a,b). Some utilities in the US own large areas of the watersheds from which their water source is derived, while other utilities depend on water pumped directly from major rivers such as the Mississippi or Ohio Rivers, and therefore own little if any watershed land. The SDWA was amended in 1986 and again in 1996 to emphasize source water protection in order to prevent microbial contaminants from entering drinking water supplies (Borst et al., 2001). Owning or controlling its watershed provides an opportunity for a drinking water utility to exercise increased control over its source water quality (Peckenham et al., 2005).

The water supply industry in the United States has a long history of local government control, with varying degrees of oversight and regulation by state and federal government. Water supply systems serving cities and towns are generally administered by city departments or counties (public systems) or by investor-owned companies (private systems). Public systems are predominately owned by local municipal governments, and they serve approximately 78% of the total population that relies on CWS. Approximately 82% of urban water systems (those serving more than 50,000 persons) are publicly owned. There are approximately 33,000 privately owned water systems that serve the remaining 22% of people served by CWS. In the larger size categories, private systems are usually investor-owned and can include many small systems as part of one large organization. In the small- and medium-sized categories, privately owned systems tend to be owned by homeowners associations or developers. Finally, there are also state-chartered public corporations, quasi-governmental units, and municipally owned systems that operate differently than traditional public and private systems. These systems include special districts, independent non-political boards, and state-chartered corporations (Fujiwara et al., 1995). Table 2 summarizes the size of, and population served by, public water systems in the United States (U.S. EPA, 2011).

Table 2    Public water system inventory data (U.S. EPA, 2011)

Very small (500 or less): CWS 28,462 systems (55%) serving 4,763,672 (2%); NTNCWS 15,461 systems (85%) serving 2,164,594 (35%); TNCWS 80,347 systems (97%) serving 7,171,054 (57%); total 124,270 systems
Small (501–3300): CWS 13,737 systems (27%) serving 19,661,787 (7%); NTNCWS 2,566 systems (14%) serving 2,674,694 (43%); TNCWS 2,726 systems (3%) serving 2,630,931 (21%); total 19,029 systems
Medium (3301–10,000): CWS 4,936 systems (10%) serving 28,737,564 (10%); NTNCWS 132 systems (1%) serving 705,320 (11%); TNCWS 92 systems (0%) serving 514,925 (4%); total 5,160 systems
Large (10,001–100,000): CWS 3,802 systems (7%) serving 108,770,014 (36%); NTNCWS 18 systems (0%) serving 441,827 (7%); TNCWS 13 systems (0%) serving 334,715 (3%); total 3,833 systems
Very large (>100,000): CWS 419 systems (1%) serving 137,283,104 (46%); NTNCWS 1 system (0%) serving 203,000 (3%); TNCWS 1 system (0%) serving 2,000,000 (16%); total 421 systems
Totals: CWS 51,356 systems serving 299,216,141; NTNCWS 18,178 systems serving 6,189,435; TNCWS 83,179 systems serving 12,651,625; total 152,713 systems

(Percentages are of each system type's total number of systems and total population served.)

Distribution System Design and Operation

Distribution system infrastructure is the major asset of most water utilities, even though most of the components are either buried or located inconspicuously. Drinking water distribution systems are designed to deliver water from a source (usually a treatment facility) in the required quantity and quality, and at satisfactory pressure, to individual consumers in a utility's service area. Drinking water infrastructure generally consists of storage reservoirs/tanks and a network of pipes, pumps, valves, and other appurtenances, collectively referred to as the drinking water distribution system (Walski et al., 2003).


Each of the major drinking water distribution system components is described briefly below.

Storage Tanks/Reservoirs

Tanks and reservoirs are used to provide storage capacity to meet fluctuations in demand, to provide reserves for fire-fighting use and other emergency situations, and to equalize pressures in the distribution system. The most frequently used storage facility is the elevated tank, but other types of tanks and reservoirs include in-ground tanks and open or closed reservoirs. Construction materials include concrete and steel. An issue that has drawn a great deal of interest is the problem of water turnover within storage facilities, since much of the water volume in storage tanks is dedicated to fire protection. Unless utilities make a deliberate effort to exercise (fill and draw) their tanks, or to downsize them where possible, tanks can experience both water aging and mixing problems. The latter can lead to stratification and/or large stagnant zones within the water volume and can lead to a deterioration of water quality.

Pipe Network

The pipes or "mains" that carry water from the source (such as a treatment plant) to the consumer are often categorized as transmission/trunk, distribution, and service mains. Transmission/trunk mains usually convey large amounts of water over long distances, such as from a treatment facility to a storage tank within the distribution system. Distribution mains are typically smaller in diameter than the transmission mains and generally follow city streets. Service mains are pipes that carry water from the distribution main to the building or property being served. Even a medium-sized water utility may have thousands of miles of pipes constructed from various types of materials, ranging from new, lined or plastic pipes to unlined pipes that are more than 50 years old. Over time, biofilms and tubercles attached to pipe walls can result in both loss of carrying capacity and a significant loss of disinfectant residual, thereby adversely affecting water quality (Clark and Tippen, 1990). There is concern that excess capacity can lead to long residence times and thus contribute to deterioration in water quality.

Valves

There are two general types of valves in a distribution system: isolation valves and control valves. Isolation valves are used to isolate sections of the distribution system for maintenance and repair; control valves are used to regulate flow or pressure within the system.

Pumps

Pumps are used to impart energy to the water in order to boost it to higher elevations or to increase pressure. Routine maintenance, proper design and operation, and testing are required to ensure that they will meet their specific objectives.

Hydrants and Other Appurtenances

Hydrants are primarily a part of the fire-fighting infrastructure of a water system. Although water utilities usually have no legal responsibility for fire flow, developmental requirements often include fire flows, and thus distribution systems are designed to support needed fire flows where practical (AWWA, 1998).


Basic Design and Operation Philosophy

A detailed understanding of how water is used is critical to understanding water distribution system design and operation, because the manner in which industrial and residential customers use water drives the overall design and operation of a water distribution system. Generally, water use varies both spatially and temporally. Besides customer consumption, a major function of most distribution systems is to provide adequate standby fire-flow capacity (Fair and Geyer, 1971). For this purpose, fire hydrants are installed in areas that are easily accessible by fire fighters and are not obstacles to pedestrians and vehicles. The ready-to-serve requirements for fire-fighting are governed by the National Fire Protection Association (NFPA), which establishes standards for the fire-fighting capacity of distribution systems (NFPA, 2003).

Conservative design philosophies, aging water supply infrastructure, and increasingly stringent drinking water standards have resulted in concerns over the viability of drinking water systems in the U.S. Questions have been raised over the structural integrity of these systems as well as their ability to maintain water quality from the treatment plant to the consumer. The Clean Water and Drinking Water Infrastructure Gap Analysis (US EPA, 2002), which identified potential funding gaps between projected needs and spending from 2000 through 2019, estimated a potential 20-year funding gap for drinking water capital and operations and maintenance ranging from $45 billion to $263 billion, depending on spending levels. Based on current spending levels, the U.S. faces a shortfall of $11 billion annually to replace aging facilities and comply with safe drinking water regulations. Federal funding for drinking water in 2005 remained level at $850 million.

Gideon, Missouri

In late 1993, the town of Gideon, Missouri, experienced a waterborne Salmonella outbreak that sickened an estimated 650 people and caused 7 deaths (Hrudey and Hrudey, 2004). At the time of the outbreak, Gideon had a population of 1100. In early November, the town water system had experienced a major taste and odor event. In response, the water system was systematically flushed on November 10. The first cases of acute gastroenteritis were reported on November 29 and diagnosed as Salmonella typhimurium. However, the outbreak investigation later revealed that diarrhea cases in Gideon started around November 12, with a peak incidence around November 20. By early December, there was a 250% increase in absenteeism in the Gideon schools and a 600% increase in anti-diarrheal medication sales. Over 40% of nursing home residents suffered from diarrhea, and seven people died (Angulo et al., 1997). At the request of the Missouri Department of Health (MDOH) and the CDC, the U.S. EPA conducted a field study in early January of 1994 (Clark et al., 1996). The study utilized water quality modeling to reach the conclusion that the contamination source was bird droppings in the city's largest municipal tank. The tank's hatches had severely deteriorated, leaving the surface of the water open to contamination by roosting birds.

Walkerton, Ontario, Canada

The first documented outbreak of Escherichia coli O157:H7 and Campylobacter spp. bacterial gastroenteritis associated with a municipal water supply in Canada occurred in the small rural town of Walkerton, Ontario (population 1261) in May 2000 (Grayman et al., 2004). At the time of the outbreak, the town's drinking water was supplied by three wells (Wells 5, 6, and 7), which fed a common distribution system. In order to understand the factors that caused the outbreak, a water quality model of the Walkerton water distribution system (WDS) was developed. Results of this study clearly supported the hypothesis that Well 5 was likely the only well involved in the Walkerton E. coli/Campylobacter waterborne outbreak. The results also suggested that an extreme rainfall event, which occurred just prior to the peak of the outbreak, may have played a significant role in the propagation of the contaminants. The primary cause of the contamination event, however, was human negligence: the Well 5 chlorinator was not working prior to the outbreak, and the responsible operator knew it but neither reported nor corrected the problem.

Flint, Michigan

Flint, Michigan, which is near Detroit, experienced a major violation of the Lead and Copper Rule in April 2014 when Flint changed its water source. It switched from treated Detroit Water and Sewerage Department water (Lake Huron and the Detroit River) to the Flint River, which is highly corrosive. Treatment plant operators failed to apply corrosion inhibitors to the water, which resulted in a series of problems that culminated in lead contamination and created a serious public health danger. Lead leached from lead service lines and household solder, leading to extremely elevated levels of the heavy metal neurotoxin. As a result, between 6000 and 12,000 children have been exposed to drinking water with high levels of lead and may experience a range of serious health problems. It is estimated that the percentage of Flint children with elevated blood-lead levels may have risen from about 2.5% in 2013 to as much as 5% in 2015 (Hanna-Attisha et al., 2015). The water change is also a possible cause of an outbreak of Legionnaires' disease in the county that killed 10 people and affected another 77 (Al Hajal, 2016).

Dover Township, New Jersey

In August 1995, the New Jersey Department of Health (now the New Jersey Department of Health and Senior Services [NJDHSS]) determined that the childhood cancer incidence rate in Dover Township (and the Toms River section) was higher than expected. This determination covered all malignant cancers, including brain cancer, central nervous system cancer, and leukemia. Consequently, NJDHSS made a formal request for an evaluation by the Agency for Toxic Substances and Disease Registry (ATSDR). NJDHSS and ATSDR developed a joint Public Health Response Plan (PHRP) describing actions they would take to investigate this unexpected increase (NJDHSS, 2003). The PHRP included several items, including the identification of potential environmental exposure pathways associated with two National Priorities List (NPL) sites in Dover Township. Two public water supply well fields (Parkway and Holly) located in the vicinity of the NPL sites were identified as potential routes of exposure. These well fields also served areas that had statistically higher childhood cancer rates. Follow-up studies revealed the presence of a previously unidentified compound, styrene acrylonitrile (SAN), in the groundwater at the Parkway well field, which could be traced to the Reich Farm NPL site. A search of historical records also revealed contamination (primarily semivolatile organic compounds [SVOCs]) of the Holly well fields that could be traced to the Ciba-Geigy NPL site.

A hypothesis was developed that the higher cancer incidence rate was related to higher exposure to public water supplies with documented contamination (the Parkway and Holly well fields). ATSDR developed a water distribution model for the study area using the EPANET software to help NJDHSS test this hypothesis. This network model was used to simulate historical characteristics of the water distribution system serving Dover Township from 1962 to 1996. A problem with the approach taken by ATSDR was the lack of historical contaminant-specific data during most of the period covered by the epidemiologic study. Therefore, the modeling effort focused on estimating the percentage of water that an exposed individual might have received from each well that supplied water to the study area. Using the resulting percentages of water derived from the different sources, health scientists developed exposure indices for each subject in the study area. The results from the case-control study showed an association between prenatal exposure to contaminated community water and leukemia in female children (NJDHSS, 2003). For example, female leukemia cases were found to be five times more likely to have occurred in children born to mothers exposed during the prenatal period to a high percentage of Parkway well water than were control children. The control children were those living in the study area but not exposed to water from the contaminated well fields. Some of the innovations documented by the Dover Township historical reconstruction analysis were (Maslia et al., 2000, 2001):

• Water distribution system modeling and source tracing could be used to quantify exposure on a monthly basis for all locations historically served by the distribution system.
• Sensitivity analyses indicated that operating system changes did not appreciably change the proportionate contribution of water to Dover Township locations.
• The association between exposure and disease was based on being able to integrate distribution system modeling and epidemiologic analyses.
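Source tracing of the kind used in the Dover Township reconstruction can be sketched with EPA's EPANET engine, here driven through the open-source wntr Python package. The network file and node name below are hypothetical stand-ins, not the actual ATSDR model.

import wntr

# Load a hypothetical EPANET input file describing the network.
wn = wntr.network.WaterNetworkModel("distribution_network.inp")

# Configure a trace simulation: quality is reported as the percent of
# water at each node that originated at the designated source node.
wn.options.quality.parameter = "TRACE"
wn.options.quality.trace_node = "well_field_1"  # hypothetical source node
wn.options.time.duration = 7 * 24 * 3600        # one-week simulation

sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

# Time-averaged percent of water derived from the traced source at each
# node, analogous to the proportionate-contribution estimates above.
source_share = results.node["quality"].mean()
print(source_share.sort_values(ascending=False).head())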

Other Water Quality Issues

Some important distribution system water quality concerns are: maintenance of proper disinfectant levels; minimization of DBP formation; turbidity, taste, color, and odor issues; distribution tank mixing and utilization; main repair and pressure stabilization; flow management; and cross-connection control and back-flow prevention. It should be noted that water quality goals can be difficult to achieve and can be contradictory. For example, an important goal is to maintain a positive disinfectant residual in order to protect against microbial contamination. However, DBPs such as TTHMs will increase as water moves through the network as long as a disinfectant residual and NOM are available. Other DBPs (e.g., HAA5) are degraded biologically when free chlorine or chloramines are nearly absent.
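One reason these goals conflict is that the disinfectant residual decays with water age while DBP formation continues. Distribution system water quality models commonly represent residual loss with first-order bulk decay; the sketch below uses that standard form with purely illustrative values (the decay coefficient is site-specific and is not a value from this article).

```python
import math

# Sketch of first-order bulk decay of a chlorine residual with water
# age, C(t) = C0 * exp(-k*t) -- the standard form used in
# distribution-system water quality models. Values are illustrative.
c0 = 1.0   # residual leaving the plant, mg/L
k = 0.5    # bulk decay coefficient, 1/day (site-specific)

for age_days in (0.5, 1, 2, 4):
    c = c0 * math.exp(-k * age_days)
    print(f"water age {age_days:>3} d: residual {c:.2f} mg/L")
# Long residence times (oversized tanks, dead ends) can leave too
# little residual for microbial protection while DBPs keep forming.
```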

A problem of growing concern is the effect that pressure transients can have on water quality.

Pressure Transients

Pressure transient regimes are inevitable because all systems will, at some time, be started up, switched off, or undergo rapid flow changes such as those caused by hydrant flushing. They will also likely experience the effects of human errors, equipment breakdowns, earthquakes, or other risky disturbances (Boulos et al., 2005, 2006; Wood et al., 2005). LeChevallier et al. (2003) reported the existence of low and negative pressure transients in a number of distribution systems. Gullick et al. (2004) studied intrusion occurrences in live distribution systems and observed 15 surge events that resulted in a negative pressure. Friedman et al. (2004) confirmed that negative pressure transients can occur in the distribution system and that the intruded water can travel downstream from the site of entry. In fact, soil and water samples were recently collected adjacent to drinking water pipelines and tested for the occurrence of total and fecal coliforms, Clostridium perfringens, Bacillus subtilis, coliphage, and enteric viruses (Karim et al., 2003). The study found indicator microorganisms and enteric viruses in more than 50% of the samples examined.
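The magnitude of such surges can be illustrated with the classical Joukowsky relation for an abrupt velocity change; this is standard transient-flow theory, not a result from the studies cited above, and the values below are illustrative.

```python
# Minimal sketch of the Joukowsky relation, dp = rho * a * dv,
# for an abrupt flow change (standard transient-flow theory).
rho = 1000.0   # water density, kg/m^3
a = 1000.0     # pressure-wave speed in the pipe, m/s (typically 300-1400)
dv = 1.5       # abrupt change in flow velocity, m/s (e.g., valve slam)

dp_pa = rho * a * dv   # surge pressure change, Pa
print(f"Surge: {dp_pa/1e5:.1f} bar ({dp_pa/6895:.0f} psi)")
# ~15 bar (~218 psi) -- large enough that the down-surge that follows
# can drive local pressure negative and allow intrusion at leaks.
```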

Examples of Intentional Contamination

The President's Commission on Critical Infrastructure Protection (PDD 63, 1998; PCCIP, 1997) identified several features of U.S. drinking water systems that are particularly vulnerable to intentional contamination or even terrorist attack. According to Gleick (2006), attacks on water supply systems have been recorded as long as 4500 years ago. Hickman (1999) showed that significant harm to public health could be caused by introducing chemical or biological agents into drinking water supplies and the distribution system. Hickman concluded that "any adversary with access to basic chemical, petrochemical, pharmaceutical, biotechnological or related industry can produce chemical or biological weapons" (Hickman, 1999); in other words, internet access and a small amount of money are sufficient to acquire the capability. Hickman identified tanks, reservoirs, and the distribution system as key vulnerabilities. Burrows and Renner (1999) identified a list of biological agents that could be used to contaminate water supplies efficiently. Clark and Deininger (2001) combined the work of Hickman and of Burrows and Renner to highlight how the release of biological organisms into the distribution system could significantly affect public health. Allmann and Carlson (2005) showed how commercially available distribution system modeling tools could be used to study intentional contamination events and demonstrated that service connections and fire hydrants were likely the most vulnerable components of the water system. The following two case studies are examples of intentional contamination events in a water system. It is noteworthy that in the first example the perpetrators were able to culture the bacterium in their own laboratory. The second example illustrates that a small amount of a pesticide can be strategically placed to cause a significant amount of damage and loss of service.


The Dalles, Oregon, United States

In 1984, the Rajneeshee religious cult, using vials of the highly toxic bacterium S. typhimurium [S. enterica serovar Typhimurium], attempted to contaminate a water supply tank and salad bars in a number of area restaurants in The Dalles, Oregon. Their intent was to cause mass casualties or widespread panic. The attack resulted in a community outbreak of salmonellosis in which at least 751 cases were documented in a county that typically reports fewer than 5 cases per year. It is not clear whether the water distribution system was chlorinated or what role, if any, disinfectant played in mitigating the consequences of the contamination event. The cult apparently cultured the organisms in its own laboratories (Clark and Deininger, 2000; Gleick, 2006).

Pittsburgh, Pennsylvania, United States

In 1980, an unknown perpetrator introduced chlordane into the Pittsburgh, PA, distribution system at an isolated valve location on a large distribution main (Welter et al., 2009). The event affected an area of the distribution system serving approximately 10,500 people. Eight or more gallons of commercial-grade chlordane were estimated to have been introduced into the system. The highest measured concentration of chlordane was 144,000 µg L⁻¹, and the average concentration across the area was estimated to be about 100 µg L⁻¹, or about 50 times the maximum contaminant level (MCL) permitted for chlordane in drinking water (Welter et al., 2009). The event was first discovered and reported to the utility by customers experiencing taste and odor problems with their tap water (Welter et al., 2009). The utility sought to contain the event by closing valves in order to prevent the contamination from reaching a storage tank. Restoration plans were developed and implemented after the contamination was believed to be contained. Water usage was restored in 1 month, but 9 months of flushing and monitoring were required before the water was released for unrestricted use. Some residential appliances and selected pipes had to be replaced (Welter et al., 2009). Although utility and public health officials initially considered shutting down the water system, the problems associated with having no water for sanitation or fire-fighting were deemed too critical (Welter et al., 2009). Drinking water was brought in and distributed throughout the contaminated area, especially for residences experiencing high concentrations of chlordane. Residents with sensitive skin were offered the opportunity to bathe outside the contaminated area. Health authorities established progressively lower action levels during the course of the restoration to ensure that customer exposure was minimized. Monitoring continued for months after the system had been restored to unrestricted use (Welter et al., 2009).
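As a consistency check (the MCL value itself is not stated in the text): the current USEPA MCL for chlordane is 2 µg L⁻¹, so the estimated average concentration corresponds to the roughly 50-fold exceedance reported:

\[
\frac{100\ \mu\mathrm{g\,L^{-1}}}{2\ \mu\mathrm{g\,L^{-1}}} = 50
\]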

Cybersecurity

Growth in the use of the Internet throughout the world has dramatically changed the way that both private- and public-sector organizations communicate and conduct business (Clark et al., 2011). The move from proprietary technologies to more standardized and open software solutions, together with the increased number of connections between SCADA systems and office networks, has made SCADA systems more vulnerable to attack (Panguluri et al., 2011). SCADA systems are widely used in US drinking water utilities. The security of some SCADA-based systems has come into question because they are seen as potentially vulnerable to cyberattacks. In particular, security researchers are concerned about:

• Lack of concern about security and authentication in the design, deployment, and operation of some existing SCADA networks
• The belief that SCADA systems have the benefit of security through obscurity because they use specialized protocols and proprietary interfaces
• The belief that SCADA networks are secure because they are physically secured
• The belief that SCADA networks are secure because they are disconnected from the Internet.

Water systems may also be vulnerable to attacks against supporting electrical utilities. Clarke and Knake (2010) cite an example in which a power failure combined with a programming glitch in a widely used SCADA system slowed utility responses to a falling tree, which created a power surge in Ohio. The surge resulted in a power outage that encompassed 8 states, 2 Canadian provinces, and 50 million people. The Cleveland water system was left without electricity, causing its pumps to fail and placing the utility in a near crisis. A hacker attack launched against an electrical system in Brazil had similar results. An attack that threatened public health and safety was carried out on the Maroochy Shire Council's sewage control system in Queensland, Australia (Weiss, 2014). Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. Sewage flooded a nearby park, contaminated an open surface-water drainage ditch, and flowed into a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Monitoring of the system logs revealed that the malfunctions were the result of cyberattacks. It was found that the attacks were made by a disgruntled employee of the company that had installed the SCADA system.

US Experience

The water utility industry has been active in a number of ways to improve cybersecurity (AWWA, 2015). For example, the Virginia Department of Health, in collaboration with the United States Environmental Protection Agency (USEPA) Region 3, has undertaken an evaluation of cybersecurity practices in 24 utilities of varying sizes and characteristics (Manalo et al., 2015). In California, various water districts have formed a committee to take the lead in promoting awareness of cybersecurity throughout the state's public water utilities (Johnson and Edwards, 2007). A major cybersecurity problem occurred in Boca Raton, FL, which has a medium-sized water and wastewater facility. The utility experienced a series of cybersecurity incidents resulting in plant shutdowns. Eventually, the SCADA system locked up and caused the water plant to shut down, and it took 8 h to re-establish control of the system. There was no monitoring system for the network traffic, so it was difficult to diagnose the source of the problem. It was ultimately concluded that the network had experienced a data storm. In the end, the utility was able to update the SCADA system without losing any of the system's functionality (Horta, 2007).

Recommendations From the NRC Report

As mentioned previously, the National Research Council (NRC) of the National Academy of Sciences (NAS) convened a committee on "Drinking Water Distribution Systems: Assessing and Reducing Risks" in 2004. The committee released a report of its findings on September 8, 2006, entitled "Drinking Water Distribution Systems: Assessing and Reducing Risks." The report suggests that the distribution system is the remaining component of public water supplies yet to be adequately addressed in national efforts to eradicate waterborne disease. The report recommends that the USEPA focus on working with representatives from states, water systems, and local jurisdictions to establish the elements that constitute an acceptable cross-connection control program, and that existing plumbing codes be consolidated into one uniform national code. The report further recommends that:

• Storage facilities should be inspected on a regular basis.
• Better sanitary practices are needed during installation, repair, replacement, and rehabilitation of distribution system infrastructure.
• Water residence times in pipes, storage facilities, and premise plumbing should be minimized.
• Positive water pressure should be maintained.
• Distribution system monitoring and modeling are critical to maintaining hydraulic integrity.
• Microbial growth and biofilm development in distribution systems should be minimized.
• Residual disinfectant choices should be balanced to meet the overall goal of protecting public health.
• Standards for materials used in distribution systems should be updated to address their impact on water quality, and research is needed to develop new materials that will have minimal impacts.
• Although it is difficult and costly to perform, condition assessment of buried infrastructure should be a top priority for utilities.
• Cross-connection control should be in place for all water utilities.
• Where feasible, surge protection devices should be installed.
• Prior to distribution, the quality of treated water should be adjusted to minimize deterioration of water quality.

The NRC devoted a considerable portion of the report to premise plumbing and its role in protecting water quality. The report recommends that:

• Communities should squarely address the problem of Legionella, both via changes to the plumbing code and via new technologies.
• Communities should better assess cross connections in the premise plumbing of privately owned buildings, and inspections for cross connections and other code violations should be required at the time of property sale.
• EPA should create a homeowner's guide and website that highlight the nature of the health threat associated with premise plumbing and the mitigation strategies that can be implemented to reduce the magnitude of the risk.

The report also identified the following research gaps:

• Distribution system ecology is poorly understood, making risk assessment via pathogen occurrence measurements difficult.
• There is inadequate investigation of waterborne disease outbreaks associated with distribution systems, especially in premise plumbing.
• Epidemiology studies that specifically target the distribution system component of waterborne disease are needed.
• External and internal corrosion should be better researched and controlled in standardized ways.
• Research is needed to better understand how to analyze data from on-line, real-time monitors in a distribution system.
• Research is needed that specifically addresses potential problems arising from premise plumbing.
• Current microbial monitoring is limited in its ability to indicate distribution system contamination events, such that new methods and strategies are needed.
• A rigorous standardized set of network model development and calibration protocols should be developed.
• Additional research, development, and experimental applications in data integration are needed so that distribution system models can be used in real-time operation.

Summary

In the United States, drinking water treatment and the distribution of water under positive pressure have been major contributors to the Nation's public health. Distribution system infrastructure is a major asset of most water utilities. It serves many important functions in a community, such as promoting economic growth, supporting public safety, and protecting public health. In order for a community to grow and prosper, it must have the physical infrastructure to provide basic services such as water supply. There is major interest in the United States in addressing the issue of aging infrastructure. Investment in water supply treatment and distribution has saved countless lives from waterborne diseases; however, maintaining this record of success will require significant attention to aging and failing water supply infrastructure in the future (Allen et al., 2018). In addition to the economic implications of an adequate water supply, water systems play a critical role in supporting public safety through the provision of fire protection capacity. Frequently, insurance rates in a community are tied to the fire protection capability of the water system. Water systems also play a key role in protecting a community's public health by providing safe drinking water to consumers.

Acknowledgments

The author would like to acknowledge the contributions of Mr. Jeffrey Swertfeger of the Greater Cincinnati Water Works and Mr. Robert Janke of the US Environmental Protection Agency.

References

Al Hajal, K., 2016. "87 cases, 10 fatal, of Legionella bacteria found in Flint area; connection to water crisis unclear". The Flint Journal, Michigan Live. http://www.mlive.com/news/detroit/index.ssf/2016/01/legionaires_disease_spike_disc.html. Retrieved January 30, 2016.
Allen, M., Clark, R.M., Cotruvo, J.A., Grigg, N., 2018. Drinking water and public health in an era of aging distribution infrastructure. Public Works Management & Policy 23 (4), 301–309.
Allmann, T.P., Carlson, K.H., 2005. Extended summary: Modeling intentional distribution system contamination and detection. Journal-American Water Works Association 97 (1), 58.
American Society of Civil Engineers (ASCE), 2005. Report card for America's infrastructure, drinking water. ASCE, Reston, VA.
American Water Works Association (AWWA), 1998. Distribution system requirements for fire protection. In: AWWA Manual M31. AWWA, Denver, CO.
American Water Works Association (AWWA), 2015. G430-14 security practices for operation and management. AWWA, Denver, CO.
American Water Works Association Research Foundation (AwwaRF), 2002. Impacts of fire flow on distribution system water quality, design, and operation. AwwaRF, Denver, CO.
Angulo, F.J., Tippen, S., Sharp, D.J., Payne, B.J., Collier, C., Hill, J.E., et al., 1997. A community waterborne outbreak of salmonellosis and the effectiveness of a boil water order. American Journal of Public Health 87 (4), 580–584.
Baker, M.H., 1948. The quest for pure water. The American Water Works Association/Lancaster Press, Lancaster, PA.
Blackburn, B.G., Craun, G.F., Yoder, J.S., Hill, V., Calderon, R.L., Chen, N., et al., 2004. Surveillance for waterborne-disease outbreaks associated with drinking water–United States, 2001–2002. MMWR 53 (SS-8), 23–45.
Borst, M., Krudner, M., O'Shea, L., Perdek, J.M., Reasoner, D., Royer, M.D., 2001. Source water protection: Its role in controlling disinfection by-products (DBPs) and microbial contaminants. EPA/600/R-01/110. In: Clark, R.M., Boutin, B.K. (Eds.), Controlling disinfection by-products and microbial contaminants in drinking water. EPA Office of Research and Development, Washington, DC.
Boulos, P.F., Karney, B.W., Wood, D.J., Lingireddy, S., 2005. Hydraulic transient guidelines for protecting water distribution systems. Journal of the American Water Works Association 97 (5), 111–124.
Boulos, P.F., Lansey, K.E., Karney, B.W., 2006. Comprehensive water distribution systems analysis handbook for engineers and planners, 2nd edn. MWH Soft Pub, Broomfield, CO. 660 pp.
Burrows, W.D., Renner, S.E., 1999. Biological warfare agents as threats to potable water. Environmental Health Perspectives 107 (12), 975–984.
Camper, A.K., Brastrup, K., Sandvig, A., Clement, J., Spencer, C., Capuzzi, A.J., 2003. Impact of distribution system materials on bacterial regrowth. Journal of American Water Works Association 95 (7), 107–121.
Clark, R.M., 1978. The safe drinking water act: Implications for planning. In: Holtz, D., Sebastian, S. (Eds.), Municipal water systems: The challenge for urban resources management. Indiana University Press, Bloomington, IN, pp. 117–137.
Clark, R.M., 2011. U.S. water and wastewater critical infrastructure. In: Clark, R.M., Hakim, S., Ostfeld, A. (Eds.), The handbook for securing water and wastewater systems. Springer, New York.
Clark, R.M., 2015. The USEPA's distribution system water quality modelling program: A historical perspective. Water and Environment Journal 29, 320–330.
Clark, R.M., Deininger, R.A., 2000. Protecting the nation's critical infrastructure: The vulnerability of U.S. water supply systems. Journal of Contingencies & Crisis Management 8 (2), 73–80.
Clark, R.M., Deininger, R.A., 2001. Minimizing the vulnerability of water supplies to natural and terrorist threats. In: Proceedings of the American Water Works Association's IMTech conference, Atlanta, GA, 8–11 April 2001, pp. 1–20.
Clark, R.M., Tippen, D.L., 1990. Water supply. In: Corbitt, R.A. (Ed.), Standard handbook of environmental engineering. McGraw-Hill Publishing, New York, pp. 5.173–5.220.
Clark, R.M., Goodrich, J.A., Ireland, J.C., 1985. Costs and benefits of drinking water treatment. Journal of Environmental Systems 14 (1), 1–30.
Clark, R.M., Grayman, W.M., Males, R.M., 1988. Contaminant propagation in distribution systems. Journal of Environmental Engineering, ASCE 114 (2), 929–943.
Clark, R.M., Ehreth, D.J., Convery, J.J., 1991a. Water legislation in the US: An overview of the safe drinking water act. Toxicology and Industrial Health 7 (5/6), 43–52.
Clark, R.M., Grayman, W.M., Goodrich, J.A., 1991b. Water quality modeling: Its regulatory implications. In: Proceedings of the AWWARF/EPA conference on water quality modeling in distribution systems, Cincinnati, OH.
Clark, R.M., Grayman, W.M., Goodrich, J.A., Deininger, R.A., Hess, A.F., 1991c. Field testing of distribution water quality models. Journal of the American Water Works Association 83 (7), 67–75.
Clark, R.M., Grayman, W.M., Males, R.M., Hess, A.F., 1993a. Modeling contaminant propagation in drinking water distribution systems. Journal of Environmental Engineering, ASCE 119 (2), 349–364.
Clark, R.M., Goodrich, J.A., Wymer, L.J., 1993b. Effect of the distribution system on drinking water quality. Journal of Water Supply Research and Technology-AQUA 42 (1), 30–38.
Clark, R.M., Geldreich, E.E., Fox, K.R., Rice, E.W., Johnson, C.H., Goodrich, J.A., et al., 1996. Tracking a Salmonella serovar Typhimurium outbreak in Gideon, Missouri: Role of contaminant propagation modeling. Journal of Water Supply Research and Technology-AQUA 45 (4), 171–183.
Clark, R.M., Hakim, S., Ostfeld, A., 2011. Securing water and wastewater systems: An overview. In: Clark, R.M., Hakim, S., Ostfeld, A. (Eds.), Handbook of water and wastewater systems protection. Springer Science+Business Media, New York, NY, pp. 1–25.
Clark, R.M., Panguluri, S., Nelson, T.D., Wyman, R.D., 2017. Protecting drinking water utilities from cyberthreats. Journal of the American Water Works Association 109, 50–58.


Clarke, R.A., Knake, R.K., 2010. Cyber war: The next threat to national security and what to do about it. HarperCollins Publishers, New York.
Connell, G.F., 1998. European water disinfection practices parallel U.S. treatment methods. Drinking Water and Health Quarterly. http://www.clo2.com/reading/waternews/european.html.
Craun, G.F., Calderon, R., 2001. Waterborne disease outbreaks caused by distribution system deficiencies. Journal of the American Water Works Association 93 (9), 64–75.
Fair, G.M., Geyer, J.C., 1971. Water supply and waste-water disposal. Wiley, NY.
Fox, K.R., Lytle, D.A., 1996. Milwaukee's crypto outbreak: Investigation and recommendations. Journal-American Water Works Association 88 (9), 87–94.
Friedman, M., Radder, L., Harrison, S., Howie, D., Britton, M., Boyd, G., et al., 2004. Verification and control of pressure transients and intrusion in distribution systems. AWWA Research Foundation, Denver, CO.
Fujiwara, M., Manwaring, J.M., Clark, R.M., 1995. Drinking water in Japan and the United States: Conference objectives. In: Clark, R.M., Clark, D.A. (Eds.), Drinking water quality management. Technomic Publishing, Lancaster, PA.
Geldreich, E.E., Nash, H.D., Reasoner, D.J., Taylor, R.H., 1972. The necessity of controlling bacterial populations in potable water: Community water supply. Journal of the American Water Works Association 64, 596–602.
Geldreich, E.E., Fox, K.R., Goodrich, J.A., Rice, E.W., Clark, R.M., Swerdlow, D.L., 1992. Searching for a water supply connection in the Cabool, Missouri disease outbreak of Escherichia coli O157:H7. Water Research 26 (8), 1127–1137.
Gleick, P.H., 2006. Water and terrorism. Water Policy 8, 481–503.
Grayman, W.M., Clark, R.M., Harding, B.L., Maslia, M.L., Aramini, J., 2004. Reconstructing historical contamination events. In: Mays, L. (Ed.), Water supply systems security. McGraw-Hill, NY, pp. 10.1–10.55.
Grigg, N.S., 2005. Assessment and renewal of water distribution systems. Journal of the American Water Works Association 97 (2), 58–68.
Grindler, B.J., 1967. Water and water rights: A treatise on the laws of water and allied problems: Eastern, western, federal, vol. 3. The Allan Smith Company, Indianapolis, IN.
Gullick, R.W., Grayman, W.M., Deininger, R.A., Males, R.M., 2003. Design of early warning monitoring systems for source waters. Journal of the American Water Works Association 95 (11), 58–72.
Gullick, R.W., LeChevallier, M.W., Svindland, R.C., Friedman, M.J., 2004. Occurrence of transient low and negative pressures in distribution systems. Journal of the American Water Works Association 96 (11), 52–66.
Hanke, S.H., 1972. Pricing urban water. In: Mushkin, S. (Ed.), Public prices for public products. The Urban Institute, Washington, DC, pp. 283–306.
Hanna-Attisha, M., LaChance, J., Sadler, R.C., Champney Schnepp, A., 2015. Elevated blood lead levels in children associated with the Flint drinking water crisis: A spatial analysis of risk and public health response. American Journal of Public Health 106 (2), 283–290.
Hickman, D.C., 1999. A chemical and biological warfare threat: USAF water systems at risk. Future Warfare Series No. 3. Air University, U.S. Air Force Counterproliferation Center, Maxwell AFB, Alabama, p. 36. http://www.au.af.mil/au/awc/awcgate/cpc-pubs/hickman.htm. Accessed 23 April 2013.
Hilborn, E.D., Wade, T.J., Hicks, L., Garrison, L., Adam, E., Mull, B., Yoder, J., Roberts, V., Gargano, J.W., 2013. Surveillance for waterborne disease outbreaks associated with drinking water and other non-recreational water–United States, 2009–2010. Morbidity and Mortality Weekly Report 35, 714–720.
Horta, R., 2007. Final report: The City of Boca Raton: A case study in water utility cybersecurity. Journal AWWA 99 (3), 48.
Hrudey, S.E., Hrudey, E.J., 2004. Safe drinking water: Lessons from recent outbreaks in affluent nations. IWA Publishing, London.
International Life Sciences Institute (ILSI), 1999. In: Brosnan, T.M. (Ed.), Early warning monitoring to detect hazardous events in water supplies. ILSI, Washington, D.C.
Janke, R., Tryby, M.E., Clark, R.M., 2014. Protecting water supply critical infrastructure: An overview. In: Securing water and wastewater systems: Global experiences (Protecting critical infrastructure: Volume 2). Springer International Publishers, Switzerland.
Johnson, S., Edwards, D., 2007. Why water and wastewater utilities should be concerned about cyber security. Journal AWWA 99 (9), 89.
Karim, M.R., Abbaszadegan, M., LeChevallier, M.W., 2003. Potential for pathogen intrusion during pressure transients. Journal of the American Water Works Association 95 (5), 134–146.
Kirmeyer, G., Richards, W., Smith, C.D., 1994. An assessment of water distribution systems and associated research needs. AWWARF, Denver, CO.
LeChevallier, M.W., Babcock, T.M., Lee, R.G., 1987. Examination and characterization of distribution system biofilms. Applied and Environmental Microbiology 53, 2714–2724.
LeChevallier, M.W., Gullick, R.W., Karim, M.R., Friedman, M., Funk, J.E., 2003. The potential for health risks from intrusion of contaminants into distribution systems from pressure transients. Journal of Water and Health 1 (1), 3–14.
Lee, S.H., Levy, D.A., Craun, G.F., Beach, M.J., Calderon, R.L., 2002. Surveillance for waterborne-disease outbreaks in the United States, 1999–2000. MMWR 51 (SS-8), 1–49.
MacKenzie, W.R., Hoxie, N.J., Proctor, M.E., Gradus, M.S., Blair, K.A., Peterson, D.E., Kazmierczak, J.J., Addiss, D.G., Fox, K.R., Rose, J.B., Davis, J.P., 1994. A massive outbreak in Milwaukee of Cryptosporidium infection transmitted through the public water supply. New England Journal of Medicine 331, 161–167.
Manalo, C., Noble, T., Miller, K., Ferro, C., 2015. Control systems cybersecurity: Lessons learned from Virginia assessments. Journal AWWA 107 (12), 60. https://doi.org/10.5942/jawwa.2015.107.0174.
Maslia, M.L., Sautner, J.B., Aral, M.M., 2000. Analysis of the 1998 water-distribution system serving the Dover Township area, New Jersey: Field-data collection activities and water-distribution system modeling. Agency for Toxic Substances and Disease Registry, Atlanta, GA.
Maslia, M.L., Sautner, J.B., Aral, M.M., Gillig, R.E., Reyes, J.J., Williams, R.C., 2001. Historical reconstruction of the water-distribution system serving the Dover Township area, New Jersey: January 1962–December 1996. Agency for Toxic Substances and Disease Registry, Atlanta, GA.
Maul, A., El-Shaarawi, A.H., Block, J.C., 1985a. Heterotrophic bacteria in water distribution systems–I. Spatial and temporal variation. The Science of the Total Environment 44, 201–214.
Maul, A., El-Shaarawi, A.H., Block, J.C., 1985b. Heterotrophic bacteria in water distribution systems–II. Sampling design for monitoring. The Science of the Total Environment 44, 215–222.
National Fire Protection Association (NFPA), 2003. In: Cotes, A.E. (Ed.), Fire protection handbook, 19th edn. NFPA, Quincy, MA.
National Research Council (NRC), 2006. Drinking water distribution systems: Assessing and reducing risks. National Academies Press, Washington, D.C.
New Jersey Department of Health and Senior Services (NJDHSS), 2003. Case-control study of childhood cancers in Dover Township (Ocean County), New Jersey. NJDHSS, Division of Epidemiology, Environmental and Occupational Health, Trenton, NJ.
Okun, D., 1996. Distributing reclaimed water through dual systems. Journal of the American Water Works Association 89 (11), 52–64.
Panguluri, S., Grayman, W.M., Clark, R.M., 2005. Distribution system water quality report: A guide to the assessment and management of drinking water quality in distribution systems. EPA Office of Research and Development, Cincinnati, OH.
Panguluri, S., Phillips, W.R., Ellis, P., 2011. Cyber security: Protecting water and wastewater infrastructure. In: Handbook of water and wastewater systems protection. Springer Science+Business Media LLC, New York.
Peckenham, J.M., Schmitt, C.V., McNelly, J.L., Tolman, A.L., 2005. Linking water quality to the watershed: Developing tools for source water protection. Journal of the American Water Works Association 97 (9), 62–69.
Pierson, G., Martel, K., Hill, A., Burlingame, G., Godfree, A., 2001. Methods to prevent microbiological contamination associated with main rehabilitation and replacement. AWWARF, Denver, CO.
Presidential Decision Directive (PDD) 63, 1998. Protecting America's critical infrastructure. The William J. Clinton Presidential Library. http://www.clintonlibrary.gov/pdd.html. Accessed 17 April 2013.
President's Commission on Critical Infrastructure Protection (PCCIP), 1997. Critical foundations: Protecting America's infrastructures. PCCIP, Washington, DC. http://www.fas.org/sgp/library/pccip.pdf.


Public Law 111–380, Jan. 4, 2011, 124 Stat. 4131. 111th Congress, "An Act to amend the Safe Drinking Water Act to reduce lead in drinking water".
Public Law 113–64, Dec. 20, 2013, 127 Stat. 668. 113th Congress, "Community Fire Safety Act of 2013".
Public Law 113–121, Dec. 5, 2016, 128 Stat. 24. 115th Congress, "Water Infrastructure Improvements for the Nation Act of 2016".
Public Law 114–45, Aug. 7, 2015, 129 Stat. 473. 114th Congress, "Drinking Water Protection Act of 2015".
Public Law 114–98, Dec. 12, 2015, 129 Stat. 2199. 114th Congress, "Grassroots Rural and Small Community Water Systems Assistance Act".
Public Law 107–188, June 12, 2002, 116 Stat. 594. 107th Congress, "Public Health Security and Bioterrorism Preparedness and Response Act of 2002".
Roberson, J.A., 2013. DC beat: To(tal) coliform or not to(tal) coliform–that is the question. Journal of the American Water Works Association 105 (3), 12–16.
Safe Drinking Water Act (SDWA), 1974. Public Law 93-523.
Tiemann, M., 2017. Safe Drinking Water Act (SDWA): A summary of the act and its major requirements. Congressional Research Service 7-5700, RL31243. www.crs.gov.
U.S. Environmental Protection Agency (U.S. EPA), 2011. Fiscal year 2011 ground water and drinking water statistics. EPA 816-R-13-003. Office of Water, Washington, D.C.
US Environmental Protection Agency (US EPA), 2002. The clean water and drinking water infrastructure gap analysis. EPA Office of Water, Washington, D.C.
Walski, T.M., Chase, D.V., Savic, D.A., Grayman, W.M., Beckwith, S., Koelle, E., 2003. Advanced water distribution modeling and management. Haestad Press, Waterbury, CT, pp. 1–4.
Water Research Centre, 1976. Deterioration of bacteriological quality of water during distribution. Notes on Water Research No. 6.
Weiss, J., 2014. Industrial control system (ICS) cyber security. In: Clark, R.M., Hakim, S. (Eds.), Securing water and wastewater systems: Global experiences (Protecting critical infrastructure: Volume 2). Springer International Publishing, Switzerland.
Welter, G., LeChevallier, M., Cotruvo, J., Moser, R., Spangler, S., 2009. Guidance for decontamination of water system infrastructure. Water Research Foundation, Report No. 2981.
Wood, D.J., Lingireddy, S., Boulos, P.F., 2005. Pressure wave analysis of transient flow in pipe distribution systems. MWH Soft Pub, Pasadena, CA.
Zhang, W., DiGiano, F.A., 2002. Comparison of bacterial regrowth in distribution systems using free chlorine and chloramine: A statistical study of causative factors. Water Research 36 (6), 1469–1482.

Further Reading

PDD 63, 1998. Critical infrastructure protection. The White House, Washington, D.C., May 22, 1998.

Drinking Water Nitrate and Human Health☆
Mary H Ward, National Cancer Institute, NIH, DHHS, Rockville, MD, United States
Jean D Brender, Texas A&M University, College Station, TX, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
CI Confidence interval
CNS Central nervous system
IARC International Agency for Research on Cancer
IUGR Intrauterine growth retardation
MCL Maximum contaminant level
MetHb Methemoglobin
NHL Non-Hodgkin lymphoma
NOC N-Nitroso compounds
NTD Neural tube defect
OR Odds ratio
RR Relative risk
SB Spontaneous abortion

Background–Human Exposure to Ingested Nitrate

Since the mid-1920s, human activities have doubled the natural rate at which nitrogen is deposited onto land. Most important have been the production and application of nitrogen fertilizers, the combustion of fossil fuels, and the replacement of natural vegetation with nitrogen-fixing crops such as soybeans. Nitrate, the final oxidation product of organic nitrogen, is an essential plant nutrient; however, nitrate in excess of plant requirements leaches into ground and surface waters. Approximately half of all applied nitrogen drains from agricultural fields to contaminate surface water and groundwater. As a result of human inputs of excess nitrogen, levels of nitrate have increased in many water supplies around the world. In the United States, public water supplies are required to maintain nitrate levels below the maximum contaminant level (MCL) of 10 mg L⁻¹ nitrate-nitrogen (about 45 mg L⁻¹ as nitrate). The population using private water supplies can have considerably higher exposure because these supplies are not regulated and are often located in agricultural areas with high nitrogen inputs. The US Geological Survey assessed available private well measurements in US aquifers sampled in 1991–2004 and found that 8% exceeded the 10 mg L⁻¹ MCL for nitrate-nitrogen (N). Almost all public water supplies have nitrate levels below the MCL. However, in the past few decades, nitrate levels have risen to levels approaching the MCL in some public supplies located in agricultural areas. Similarly, in the European Union (EU), public water supplies are largely below the World Health Organization guideline of 50 mg L⁻¹ nitrate; however, private wells in some countries have nitrate concentrations that exceed the recommended level by 10–15 times. Nitrogen fertilizer is the main contributing factor in agricultural areas, whereas nitrogen from human waste is an important source in urban areas lacking centralized water and sanitation systems.

The US maximum contaminant level and the World Health Organization guideline for nitrate in public drinking water supplies were promulgated to protect infants from developing methemoglobinemia (also called "blue baby syndrome"), an acute health condition. Chronic health effects of nitrate in drinking water have not been as well studied. Ingested nitrate contributes to the endogenous formation of N-nitroso compounds (NOC), many of which are potent animal carcinogens and teratogens. Many factors modify this process in vivo. When nitrate levels in water supplies are below the regulatory limit of 10 mg L⁻¹ nitrate-N, the majority of ingested nitrate comes from vegetables. Vegetables are also a source of antioxidants such as vitamins C and E, which inhibit endogenous nitrosation, thus reducing exposure to potentially harmful NOC. To adequately evaluate the risk associated with nitrate ingestion, human health studies must account for the potentially different effects of dietary and water sources of nitrate.
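The two reporting conventions used here (nitrate-nitrogen versus the nitrate ion) differ only by the ratio of molar masses, which is why 10 mg L⁻¹ as nitrate-nitrogen corresponds to "about 45" mg L⁻¹ as nitrate. As a worked conversion (molar masses of roughly 62.0 g mol⁻¹ for NO₃⁻ and 14.0 g mol⁻¹ for N):

\[
c_{\mathrm{NO_3^-}} = c_{\mathrm{NO_3\text{-}N}} \times \frac{62.0}{14.0} = 10\ \mathrm{mg\,L^{-1}} \times 4.43 \approx 44.3\ \mathrm{mg\,L^{-1}}
\]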

☆ Change History: April 2018. Mary H. Ward and Jean D. Brender updated the Abstract; the Background–Human Exposure to Ingested Nitrate; Adverse Pregnancy Outcomes; Cancer; Diabetes, Thyroid Effects, Age-Related Macular Degeneration, and Cardiovascular Health; References; and Further Reading. This is an update of Mary H. Ward and Jean D. Brender, Drinking Water Nitrate and Health, in Encyclopedia of Environmental Health, edited by J.O. Nriagu, Elsevier, 2011, pages 167–178.

Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11245-X


Nitrate ingestion via drinking water has been associated with an increased risk of certain cancers, adverse reproductive outcomes, diabetes, and thyroid conditions in epidemiologic studies. In addition to the endogenous formation of NOC, there are two other primary mechanisms by which ingested nitrate from drinking water may have detrimental effects on health. Nitrate at high doses can competitively inhibit iodine uptake and induce hypertrophic changes in the thyroid, as demonstrated in animal studies. Additionally, nitrite reacts with hemoglobin to form methemoglobin, which reduces the oxygen-carrying capacity of the blood. Any adverse health effects from nitrate ingestion are likely due to a complex interaction of the amount of nitrate ingested, dietary intakes of other constituents that may exacerbate or mitigate the formation of harmful compounds from ingested nitrate, and medical and/or genetic conditions that may increase host susceptibility.

Approximately 5% of ingested nitrate is converted to nitrite as a result of absorbed nitrate being secreted into the saliva and then converted to nitrite by the bacteria in the mouth (the oral microbiome). The reaction of nitrite with other compounds can result in the endogenous formation of NOC. Extensive experimental evidence, including controlled feeding studies in humans, indicates that NOC are formed by the chemical reaction of nitrosatable amines or amides with nitrite in the acidic environment of the stomach. Nitrosation can also occur as a result of bacterial action in the gut or an infected urinary tract. Compounds that react with nitrite include drugs that contain secondary or tertiary amines or amides; certain foods and beverages (fish, meat, cereals, spices, coffee, tea, beer, and wine); cosmetics; tobacco products; and agricultural chemicals. Dietary intakes of red and processed meat result in the formation of fecal NOC, because the heme iron in red meat stimulates nitrosation.

Nitrate ingestion also results in the production of nitric oxide, a bioactive compound that plays a role in vasodilatation and in defense against periodontal bacteria and other pathogens. Some experimental evidence points to potential beneficial effects of nitrate ingestion on cardiovascular health, as demonstrated by a human study showing a short-term lowering of diastolic blood pressure after dietary supplementation with nitrate. To date, there have been no epidemiologic studies of cardiovascular disease incidence and drinking water nitrate ingestion; future studies would be useful for characterizing the range of human health effects (beneficial and detrimental) of ingestion of elevated nitrate concentrations in drinking water supplies. For any individual health outcome, few epidemiologic studies of nitrate ingestion have addressed the complexities of nitrosation as part of the study design, limiting the ability to draw conclusions about risk. The results of epidemiologic studies of acute and chronic health effects related to ingestion of nitrate in drinking water are discussed below.

Methemoglobinemia

As already noted, approximately 5% of ingested nitrate is converted to nitrite. Methemoglobin is formed when nitrite oxidizes the ferrous iron in hemoglobin to the ferric form. Methemoglobinemia can occur when methemoglobin exceeds approximately 10% of circulating hemoglobin and interferes with the oxygen-carrying capacity of the blood.
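The underlying redox chemistry is often summarized by the overall stoichiometry below (a standard textbook formulation, not given in this article), in which nitrite oxidizes oxygenated hemoglobin (HbFe²⁺O₂) to methemoglobin (HbFe³⁺):

\[
4\,\mathrm{HbFe^{2+}O_2} + 4\,\mathrm{NO_2^-} + 4\,\mathrm{H^+} \longrightarrow 4\,\mathrm{HbFe^{3+}} + 4\,\mathrm{NO_3^-} + \mathrm{O_2} + 2\,\mathrm{H_2O}
\]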
For several reasons, infants less than 4 months of age are particularly susceptible to developing methemoglobinemia. Young infants have a higher gastric pH than adults, with a resultant proliferation of intestinal flora that can reduce ingested nitrate to nitrite. Compared with adults, they also have a greater fluid intake relative to body weight, residual circulating fetal hemoglobin that is more rapidly oxidized to methemoglobin than adult hemoglobin, and lower levels of the enzyme cytochrome b5 reductase, which converts methemoglobin back to hemoglobin.

In 1945, Comly reported methemoglobinemia in infants who were fed formula prepared with well-water with high nitrate levels. A subsequent survey conducted in the United States by the American Public Health Association compiled case reports of infant methemoglobinemia. In this survey, no cases of methemoglobinemia were observed when drinking water concentrations were less than 10 mg L⁻¹ nitrate-nitrogen, and most cases were associated with nitrate-nitrogen levels greater than 30 mg L⁻¹. This work provided the basis of the current regulatory limits and guidelines for nitrate in drinking water (i.e., 10 mg L⁻¹ nitrate-nitrogen in the United States and 50 mg L⁻¹ as nitrate in the European Union).

The scientific community has disagreed about the appropriateness of the current limits set for nitrate in drinking water. In the original studies, cases of methemoglobinemia were always associated with wells contaminated with human and/or animal excrement along with high concentrations of nitrate. Therefore, some experts suggest that methemoglobinemia resulted from the presence of bacteria in the water rather than from the nitrate per se. Other factors besides bacterial contamination and nitrate that can increase the risk of methemoglobinemia include intake of foods with high nitrate content, enteric infections (without any apparent exposure to exogenous methemoglobin-forming agents), and certain drugs such as local anesthetics (benzocaine), nitrofurans, sulfones, acetaminophen, amyl nitrate, Dapsone, nitroglycerine, nitroprusside, pyridium, and sulfanilamide.

Studies examining the relationship between nitrate levels in drinking water and methemoglobin levels in infants have produced mixed results, possibly because of factors other than nitrate that were not measured and that may have affected methemoglobin levels. However, several well-designed studies demonstrated an association between consumption of well-water with high nitrate and high methemoglobin levels and shed some light on the importance of cofactors. In an epidemiologic study conducted in South West Africa, Super and colleagues noted that 33% of infants had methemoglobin values exceeding 3.0% if they lived in a region with water nitrate exceeding 20 mg L⁻¹ nitrate-N, compared with 13% of infants who lived in regions with water nitrate less than or equal to 20 mg L⁻¹. The proportion of methemoglobin values greater than 3.0% was reduced, regardless of nitrate region, if the infant received daily vitamin C supplementation. Using a nested case–control study design in a Romanian population, Zeman and colleagues observed that case-children with methemoglobinemia were more likely than control-children to be exposed to higher levels of nitrate through formula and tea made with water, to have breast-fed for a shorter duration, and to have frequent episodes of diarrhea. In the same study, case-children were less likely than control-children to receive vitamin supplementation. In a study of 411 Moroccan children, Sadeq and colleagues found that children who drank well-water with a nitrate concentration greater than 50 mg L⁻¹ (as nitrate) were 1.8 times more likely (95% CI 1.2–2.6) to have methemoglobinemia than those drinking well-water with a nitrate concentration at or below 50 mg L⁻¹, and 1.6 times more likely (95% CI 1.2–2.2) to have this condition than municipal water drinkers.

In summary, high nitrate levels in well-water have been linked to methemoglobinemia, although co-exposures to factors affecting nitrite formation appear to be important. It is noteworthy that few cases of methemoglobinemia have occurred in the United States since the MCL of 10 mg L⁻¹ for nitrate-nitrogen was promulgated. Future studies on the interaction of factors that lead to methemoglobinemia will help identify the conditions under which exposure to nitrate in drinking water poses a risk of methemoglobinemia.
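Measures such as "1.8 times more likely (95% CI 1.2–2.6)" above are odds ratios with confidence intervals computed from a 2×2 exposure-by-outcome table. The following sketch uses entirely hypothetical counts (not data from the Sadeq study or any other study cited here) with the standard Woolf log-OR method:

```python
import math

# Sketch: odds ratio and 95% CI from a 2x2 table. Counts are
# hypothetical, for illustration only.
#                 exposed  unexposed
cases    = [60, 40]   # a, b
controls = [45, 55]   # c, d

a, b = cases
c, d = controls

odds_ratio = (a * d) / (b * c)                # cross-product ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of ln(OR) (Woolf method)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# Prints: OR = 1.83, 95% CI 1.05-3.21
```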

Adverse Pregnancy Outcomes

Results from studies of human populations have suggested that elevated levels of nitrate in drinking water, in some instances below the US MCL, may increase the risk of adverse pregnancy outcomes including spontaneous abortion, fetal death, prematurity (delivery before 37 weeks gestation), intrauterine growth retardation (fetal weight below the 10th percentile of predicted weight for gestational age), low birth weight (birth weight less than 2500 g regardless of gestational age), congenital malformations, and neonatal and infant mortality. The observed effects of nitrate, nitrite, and NOC on reproductive outcomes in animal models offer some biological plausibility for the potential adverse effects of these compounds on pregnancy outcomes. Various NOC have been observed to be mutagenic and teratogenic in animal models; these agents cause DNA alkylation in target organs, resulting in abnormal development. Malformations associated with exposure to NOC in animal models include craniofacial defects, skeletal abnormalities, microcephalus, exencephaly, hydrocephalus, spina bifida, and gastroschisis. In this section, the effects of ingestion of nitrate via drinking water on human pregnancy outcomes are discussed.

Spontaneous Abortion

In a cohort study published in 1961, Schmitz noted higher average methemoglobin levels among women with threatened abortion during the first trimester (mean 0.56 g dL⁻¹) or who aborted (mean 0.58 g dL⁻¹) than among women with normal pregnancies (mean 0.38 g dL⁻¹). Since then, numerous studies have examined the association between drinking water nitrate and adverse reproductive outcomes, although there are only a few studies for any particular adverse reproductive outcome. Table 1 summarizes these studies by population, study design, exposure assessment, and findings; the studies are sorted by outcome and year of publication. An investigation of a cluster of spontaneous abortions in Indiana (United States) revealed that the wells supplying drinking water for the households of affected women were contaminated with high levels of nitrate (19–26 mg L⁻¹ nitrate-nitrogen). In contrast, nitrate levels in wells serving the households of women in the area who gave birth to full-term, live-born infants were below the MCL (1.6–8.4 mg L⁻¹). A hospital-based case–control study by Aschengrau and colleagues in Massachusetts (United States) evaluated nitrate levels in public water supplies (0.1–5.5 mg L⁻¹ vs. no detectable nitrate) and found a significantly reduced risk of spontaneous abortion among women using supplies with detectable nitrate levels.

Preterm Birth and Reduced Birth Weight

In a cross-sectional study in Namibia, Super and colleagues found no association between living in high-nitrate regions (well-water nitrate > 20 mg L⁻¹) and the prevalence of premature birth or low birth weight among 486 infants. Tabacova and associates examined the relationship between methemoglobin levels in cord blood and adverse pregnancy outcomes among 51 Bulgarian women who were exposed to excessive amounts of oxidized nitrogen compounds via ambient air, drinking water, and food. Mean cord blood methemoglobin was four times higher among infants with preterm births and 1.5 times higher in low birth weight babies compared with babies of normal gestation and birth weight. In a study conducted by Bukowski and associates in Prince Edward Island (Canada), a dose–response relationship was detected between the average nitrate levels in public water supplies across geographic regions defined by women's residential postal codes and the pregnancy outcomes of intrauterine growth retardation and preterm birth. Nitrate levels (measured as nitrate-nitrogen) ranged from nondetectable to 37.5 mg L⁻¹, with the highest exposure category having a median of greater than or equal to 5.4 mg L⁻¹. In a historic cohort study conducted in France, investigators examined the relation between nitrate and atrazine in community water systems and preterm and small-for-gestational-age (SGA) births. Exposure to the second tercile of drinking water nitrate without detectable levels of atrazine was associated with SGA (OR 1.74, 95% CI 1.10, 2.75). However, higher nitrate in these water systems was not significantly associated with preterm birth. Using drinking water and birth data from four Midwestern states in the United States, Stayner and colleagues noted a significant linear exposure–response relationship between nitrate in finished water, averaged over the 9 months prior to birth, and both very low birth weight and very preterm births (rate ratios per 1 ppm increase in water nitrate of 1.17 [95% CI 1.08, 1.25] and 1.08 [95% CI 1.02, 1.15], respectively).
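To interpret such per-unit rate ratios (an illustration, not an additional result from the study): if the fitted exposure–response model is log-linear, the rate ratio for a larger contrast is the per-ppm rate ratio raised to the corresponding power. For very low birth weight, a hypothetical 5 ppm difference in water nitrate would imply

\[
\mathrm{RR}_{5\,\mathrm{ppm}} = \left(\mathrm{RR}_{1\,\mathrm{ppm}}\right)^{5} = 1.17^{5} \approx 2.19
\]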

Table 1 Epidemiologic studies of drinking water nitrate and reproductive outcomes

Aschengrau et al. (1989), United States. Hospital case–control study, Massachusetts; outcomes ascertained 1976–78. Exposure: matched maternal residence at the pregnancy outcome to results of tap water samples taken in the respective city as part of routine analysis. Outcomes: SBs through 27 weeks gestation. Findings: OR of 0.5 for SB with exposure to water nitrate levels of 0.1–5.5 mg L⁻¹ relative to nondetectable levels.

Grant et al. (1996), United States. Cluster investigation, Indiana; outcomes ascertained 1991–93. Exposure: wells tested for nitrate (as nitrate-nitrogen) after the cluster was reported. Outcomes: SBs. Findings: water nitrate above the US EPA MCL for women with SBs.

Super et al. (1981), South West Africa. Cross-sectional study, South West Africa/Namibia; year of outcome ascertainment not reported. Exposure: water sample taken from the well in use at the time of the home visit. Outcomes: spontaneous premature labor, size of infant at birth. Findings: no association between water from high-nitrate regions and prematurity or size of infant.

Tabacova et al. (1998), Bulgaria. Hospital cohort study, South Central Bulgaria, in an area polluted with nitrogen oxides and by nitrates in drinking water; year of outcome ascertainment not reported. Exposure: analyzed MetHb in maternal and cord blood as a biomarker of individual internal exposure to oxidized nitrogen compounds. Outcomes: birth weight, preterm birth, Apgar score. Findings: mean cord blood MetHb 4 times higher in preterm births and 1.5 times higher in low birth weight births than in normal births; maternal MetHb approximately twofold higher in mothers of infants in fetal distress than in mothers of normal babies.

Bukowski et al. (2001), Canada. Population-based case–control study, Prince Edward Island; outcomes ascertained 1991–94. Exposure: residential postal code at time of delivery linked to a nitrate-level exposure map (as nitrate-nitrogen). Outcomes: IUGR, premature birth. Findings: dose–response relation between regional nitrate level and ORs for IUGR and prematurity; ORs of 2.0 for both outcomes at a median nitrate of 4.3 mg L⁻¹.

Migeot et al. (2013), France. Historic cohort study, Deux-Sevres; outcomes ascertained 2005–09. Exposure: water samples for atrazine metabolites and nitrate drawn from the respective community water systems and averaged for each trimester of pregnancy. Outcomes: SGA. Findings: exposure to the second tercile of nitrate without atrazine metabolites associated with SGA (OR 1.74, 95% CI 1.10, 2.75); the association diminished with the presence of atrazine metabolites.

Albouy-Liaty et al. (2016), France. Historic cohort study, Deux-Sevres; outcomes ascertained 2005–10. Exposure: water samples for atrazine metabolites and nitrate drawn from the respective community water systems and averaged for each trimester of pregnancy. Outcomes: preterm birth. Findings: with adjustment for neighborhood deprivation, no statistically significant association between exposure to atrazine/nitrate mixtures during the second trimester and preterm birth; no association between high vs. low nitrate exposure without atrazine in drinking water and preterm birth.

Stayner et al. (2017), United States. Ecologic study, Indiana, Iowa, Missouri, and Ohio; outcomes ascertained 2004–08. Exposure: estimates for each birth based on county-specific weighted average monthly estimates of nitrate (as nitrate-nitrogen) in finished water. Outcomes: preterm birth, low birth weight. Findings: significant linear exposure–response relationship between nitrate averaged over the 9 months prior to birth (analysis restricted to counties with at least 20% well use) and both very low birth weight and very preterm births (rate ratios per 1 ppm increase of 1.17 [95% CI 1.08, 1.25] and 1.08 [95% CI 1.02, 1.15], respectively).

Population-based, case–control study Mount Gambier region, South Australia

1951–79

Address at delivery linked to sources of water and data on nitrate

Congenital malformations

Arbuckle et al. (1988) Canada

Population-based, case–control study New Brunswick

1973–83

Collected and analyzed a water sample at maternal residence at time of index birth (as nitrate)

Congenital malformations of CNS

Ericson et al. (1988) Sweden

Population-based, case–control study All deliveries in Sweden Population-based, case–control study All live born deliveries in Massachusetts

1976–77

NTDs

Aschengrau et al. (1993) United States

Hospital case–control study Massachusetts

1977–80

Croen et al. (2001) United States

Population-based, case–control study California live births, stillbirths, and terminations

1989–91

Earliest known maternal address linked to water nitrate results Address at the date of conception matched to water nitrate samples of respective water utilities that were drawn on the date closest to the conception date Matched maternal residence during pregnancy or outcome to results of tap water sample taken in respective city as part as part of routine analysis Periconceptional addresses linked to water utilities and nitrate databases (as nitrate)

Cedergren et al. (2002) Sweden

Population-based, retrospective cohort study Infants born in Ostergotland County Population-based, case–control study Texas counties along Texas-Mexico border

1982–96

Linked address at periconception or early pregnancy to water supplies

Any congenital cardiac defect

1995–2000

Usual periconceptional drinking-water source tested for nitrates (as nitrate)

NTDs

Zierler et al. (1988) United States

Brender et al. (2004) United States

1980–83

Elevated ORs for any congenital malformation (2.8, 95% CI 1.6, 5.1), defects of CNS (3.5, 95% CI 1.1, 14.6), and musculoskeletal system (2.9, 95% CI 1.2, 8.0) if primarily drank groundwater; elevated ORs for congenital malformations associated with nitrate levels ≥5 mg L−1 relative to <5 mg L−1. Water nitrate above 45 mg L−1 (relative to 45 mg L−1 or less) associated with anencephaly (OR 4.0, 95% CI 1.0, 15.4) but not with spina bifida; increased risk of anencephaly with water nitrate levels below the US EPA MCL among groundwater drinkers only (5–15 mg L−1: OR 2.1; 16–35 mg L−1: OR 2.3; 36–67 mg L−1: OR 6.9); dietary nitrate and nitrite not associated with NTDs. Weak positive association (OR 1.2, 95% CI 0.97, 1.4) between water nitrate ≥2 mg L−1 and cardiac malformations

NTDs

OR of 1.9 (95% CI 0.8, 4.6) for NTDs if water nitrate ≥3.52 mg L−1. Increased water nitrate associated with spina bifida (OR 7.8) but not with anencephaly (OR 1.0). Drinking water nitrate modified the association between nitrosatable drug exposure and NTDs


Scragg et al. (1982) and Dorsch et al. (1984) Australia


Epidemiologic studies of drinking water nitrate and reproductive outcomes (cont'd): Study design; Regional description

Year of outcome ascertainment

Mattix et al. (2007) United States

Ecologic study Indiana

1990–2002

Winchester et al. (2009) United States

Ecologic study United States

1996–2002

Waller et al. (2010) United States

Population-based, case–control study Washington State

1987–2006

Brender et al. (2013) and Weyer et al. (2014) United States

Population-based, case–control study Iowa and Texas

1997–2005

Holtby et al. (2014) Canada

Population-based, case–control study Kings County, Nova Scotia

1988–2006

Exposure description a

Reproductive outcomes included

Summary of findings

Monthly abdominal wall defect rates linked to monthly surface water nitrate results from US Geological Survey (USGS) data Rates of combined and specific birth defects (for each last menstrual period [LMP] month) compared with monthly surface water nitrate results from USGS data

Abdominal wall defects

No significant correlation noted between nitrate levels in surface water and monthly abdominal wall defect rates

22 categories of birth defects

Calculated distance between maternal residence and closest site of increased amounts of agrichemicals including nitrate (as nitrate-nitrogen) and nitrite (as nitrite-nitrogen) in surface water as determined by the USGS Maternal addresses linked to public water utilities and nitrate results; nitrate intake from bottled water estimated with survey and laboratory testing; nitrate from private wells predicted through modeling; nitrate (as nitrate) ingestion estimated from reported water consumption

Gastroschisis

Nitrate (mg L−1, log-transformed) associated with the birth defect category "other congenital anomalies" in the simple logistic model (one agrichemical predictor) (OR 1.149, 95% CI 1.120, 1.178) and the multiple agrichemical predictor model (OR 1.177, 95% CI 1.143, 1.212). Gastroschisis in offspring not significantly associated with maternal residential proximity to surface water with elevated nitrate (>10 mg L−1) or nitrite (>1 mg L−1)

Maternal addresses at delivery linked to municipal water supplies and respective median nitrate (as nitrate-nitrogen) concentrations; nitrate in rural wells estimated from historic sampling and kriging

Congenital heart defects Limb deficiencies NTDs Oral cleft defects

Congenital malformations combined

Using the lowest tertile of nitrate ingestion from drinking water as the referent group, case-mothers of babies with spina bifida were 2.0 times more likely (95% CI 1.3, 3.2) than control-mothers to ingest ≥5 mg nitrate daily from drinking water; case-mothers of babies with limb deficiencies, cleft palate, and cleft lip were, respectively, 1.8 (95% CI 1.1, 3.1), 1.9 (95% CI 1.2, 3.1), and 1.8 (95% CI 1.1, 3.1) times more likely than control-mothers to ingest ≥5.42 mg of nitrate daily from drinking water. No positive associations noted for conceptions during 1987–97; for conceptions 1998–2006, elevated ORs for birth defects with 1–5.56 mg L−1 and >5.56 mg L−1 drinking water nitrate, respectively (referent of <1 mg L−1), and risk of neonatal death.

Several population-based case–control studies reported on the relationship between drinking water nitrate concentrations and neural tube defects (NTDs). Among women in Sweden, the average drinking water nitrate level did not differ appreciably between women who gave birth to offspring with NTDs (mean nitrate: 4.9 mg L−1) and control women (5.1 mg L−1). Croen and colleagues in California (United States) estimated maternal drinking water nitrate exposure during the periconceptional period by linking residential histories to public water supply monitoring results. Women whose public water supplies had nitrate above 45 mg L−1 (relative to 45 mg L−1 or less) were four times more likely to have offspring with anencephaly; however, they observed no association with risk of spina bifida. Furthermore, increased risks for anencephalic offspring were observed with nitrate levels below the MCL for groundwater drinkers. They also estimated intake of dietary nitrate, nitrite, and N-nitroso compounds during the 3 months before conception and found no association between higher intake of these compounds and NTD risk. Among Mexican–American women in Texas (United States), drinking water nitrate concentrations were measured in the drinking water source used during the periconceptional period where feasible. Women with a drinking water nitrate level (as nitrate) of 3.5 mg L−1 or greater were 1.9 times more likely to have an NTD-affected pregnancy than women with drinking water nitrate below 3.5 mg L−1. Exposure to drugs that can react with nitrite to form NOCs was also assessed, and the association with nitrosatable drug exposure was greatly increased among women who were concurrently exposed to higher drinking water nitrate during the pregnancy. The relation between nitrate in surface water and abdominal wall defects has been investigated among several US populations. In an ecologic study conducted among Indiana births, Mattix and colleagues found no significant correlation between nitrate levels measured in surface water and monthly abdominal wall defect rates. In a population-based, case–control study of births in Washington State, gastroschisis in offspring was not significantly associated with maternal residential proximity to surface water with elevated nitrate (>10 mg L−1) or nitrite (>1 mg L−1). Winchester and his study team also examined birth defects in relation to maternal residential proximity to surface water nitrate for 22 categories of specific birth defects and all birth defects combined. Surface water nitrate was significantly associated with the birth defect category of "other congenital anomalies" in both the simple logistic model (one agrichemical predictor) and the multiple agrichemical predictor model.
The authors did not specify in the published paper which anomalies were included in the "other congenital anomalies" category. In a population-based, case–control study among Iowa and Texas births that were part of the US National Birth Defects Prevention Study, Brender and colleagues linked maternal addresses to public water utilities and monitoring results. Because they had interview data available on drinking water sources and consumption patterns in early pregnancy, they also took into account bottled water consumption (through a survey of nitrate in bottled water) and private well use (nitrate levels predicted through modeling). Unique to this study, daily intake of nitrate from drinking water was calculated from the drinking water sources and daily water consumption that women reported. Using the lowest tertile of nitrate ingestion from drinking water as the referent group, case-mothers of babies with spina bifida, limb deficiencies, cleft palate, and cleft lip were significantly more likely than control-mothers to have estimated daily nitrate ingestion from water in the highest tertile; no significant associations were noted with congenital heart defects. Because the investigators also examined nitrosatable drug exposure in the study population, they assessed whether water nitrate modified those drug associations. Higher water nitrate intake did not increase associations



between prenatal nitrosatable drug use and birth defects, but higher intake of total nitrites (food nitrite + 5% [water + food nitrate]) significantly increased drug associations with cleft lip, cleft palate, limb deficiencies, and single ventricle. Holtby examined the association between nitrate in municipal water supplies and private wells and congenital malformations combined in a case–control study conducted in Nova Scotia (Canada). While no positive associations were noted for conceptions during 1987–97, ORs for this association were elevated for conceptions during 1998–2006, with less than 1 mg L−1 drinking water nitrate serving as the referent category. The authors suggested that the differences in associations by time period might have been due to improved nitrate exposure classification starting in 1998 and/or the unmasking of the nitrate/birth defects association after fortification of grain products with folic acid. In conclusion, the results of studies that examined the relation between drinking water nitrate and risk of spontaneous abortions, stillbirths, premature birth, and intrauterine growth retardation have been inconsistent. These inconsistencies could indicate no true effect of water nitrate on these reproductive outcomes at the levels evaluated. On the other hand, different results across studies might be due to differing time periods for which exposure was assessed, varying levels of water nitrate, or differences in exposure to other cofactors. It is notable that five out of six studies found a positive relationship between water nitrate ingestion during pregnancy and NTDs or central nervous system defects combined, and exposure levels were generally less than 10 mg L−1 nitrate-nitrogen (45 mg L−1 as nitrate). Future studies on drinking water nitrate and adverse pregnancy outcomes should focus on the complex interactions between this exposure and intake of nitrosatable compounds, compounds that inhibit nitrosation such as vitamin C, and host factors that might increase nitrosation.
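To make the exposure arithmetic used in these studies concrete, the following minimal sketch (Python; all values are hypothetical, not data from the studies discussed) computes a daily nitrate-ingestion estimate from reported water sources, in the spirit of the Brender approach, and then the total-nitrite measure defined above (food nitrite plus 5% of water and food nitrate):

```python
# Hypothetical water sources and intake values -- illustrative only.
sources = [
    {"name": "home tap (public utility)", "nitrate_mg_per_L": 4.2, "liters_per_day": 1.0},
    {"name": "bottled water",             "nitrate_mg_per_L": 0.3, "liters_per_day": 0.5},
]
# Daily nitrate from water = sum over sources of concentration x volume.
water_nitrate_mg = sum(s["nitrate_mg_per_L"] * s["liters_per_day"] for s in sources)

food_nitrate_mg = 60.0   # hypothetical dietary nitrate intake, mg/day
food_nitrite_mg = 0.8    # hypothetical dietary nitrite intake, mg/day
# Total nitrite = food nitrite + 5% of (water nitrate + food nitrate),
# reflecting the assumption that ~5% of ingested nitrate is reduced to nitrite.
total_nitrite_mg = food_nitrite_mg + 0.05 * (water_nitrate_mg + food_nitrate_mg)

print(f"Water nitrate ingestion: {water_nitrate_mg:.2f} mg/day")  # 4.35 mg/day
print(f"Total nitrite measure:   {total_nitrite_mg:.2f} mg/day")  # 4.02 mg/day
```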

Cancer

Human biomonitoring studies demonstrate that ingestion of nitrate via drinking water contributes to the formation of NOC. Most NOC are potent animal carcinogens, causing tumors at many sites. Several NOC formed endogenously in humans from dietary precursors are considered probable human carcinogens by the International Agency for Research on Cancer (IARC). In a 2006 review of the evidence for the carcinogenicity of ingested nitrate and nitrite, an IARC Working Group concluded that "ingested nitrate or nitrite under conditions that result in endogenous formation of N-nitroso compounds is probably carcinogenic to humans" (Group 2A). There was no separate evaluation for nitrate or nitrite per se, because nitrite is produced endogenously from nitrate and the conditions leading to endogenous formation of N-nitroso compounds are frequently present in the normal human stomach (nitrite and nitrosatable amines or amides in an acidic environment). The Working Group reviewed the epidemiologic studies of ingested nitrate and nitrite through mid-2006. The evidence for dietary nitrite ingestion and cancer was considered limited, based on epidemiologic studies of stomach and esophageal cancer; the epidemiologic evidence for drinking water nitrate was considered inadequate. Most of the early epidemiologic studies of drinking water nitrate and cancer were ecologic in design, linking incidence or mortality rates to drinking water nitrate levels for large groups of people at the town or county level. This study design is useful for generating hypotheses about disease risk factors, but studies with individual exposure information are needed in order to establish causality. This is particularly true for the evaluation of nitrate exposure via drinking water because of the complex process by which nitrate intake forms potentially carcinogenic N-nitroso compounds. Study designs that assess individual exposure include case–control studies, which compare exposure in a group with a disease or condition with exposure in a group without it, and cohort studies, which track a group of people with a common characteristic for a period of time. The early ecologic studies focused on stomach cancer mortality, and most used drinking water nitrate measurements concurrent with the time period of cancer mortality. Results were mixed, with some studies showing positive associations, many showing no association, and a few showing inverse associations. More recent ecologic studies of stomach cancer in Slovakia, Spain, and Hungary, with historical measurements and exposure levels near or above the MCL, have found positive correlations with stomach cancer incidence or mortality. The Slovakian study also found significantly elevated incidence rates for NHL and colon cancer among men and women exposed to public supply nitrate levels of 4.5–11.3 mg L−1 nitrate-N; there was no association with bladder and kidney cancer incidence. In the Spanish study, there was a positive correlation between nitrate levels in public supplies and prostate cancer mortality, but no relation with bladder and colon cancer. Thus, the studies are most consistent for high nitrate exposures and stomach cancer; however, interpretation is limited by the ecologic study design.
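For orientation, the odds ratio (OR) reported by the case–control studies summarized below compares the odds of exposure among cases with the odds among controls. A toy calculation (all counts invented for illustration), with an approximate confidence interval from the standard error of the log-OR (Woolf method):

```python
import math

# Hypothetical 2x2 case-control table: exposure = water nitrate above a cutoff.
exposed_cases, unexposed_cases = 40, 60
exposed_controls, unexposed_controls = 25, 75

odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# 95% CI via the standard error of log(OR): sqrt of the summed reciprocal counts.
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                      + 1 / exposed_controls + 1 / unexposed_controls)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}, {hi:.2f})")  # OR = 2.00 (95% CI 1.09, 3.66)
```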
Over the past 30 years, individual-based studies have been conducted in the United States, Germany, Spain, and the Netherlands, where nitrate levels show a range of concentrations across public water supplies but were almost always below the MCL of 10 mg L−1 nitrate-N (Table 2). A cohort study of older women in Iowa (United States) published in 2001 found a 2.8-fold and 1.8-fold increased risk of bladder and ovarian cancers, respectively, associated with the highest quartile (>2.46 mg L−1 nitrate-N) of the long-term average nitrate level at the residence where women lived at enrollment. The investigators observed significant inverse associations for uterine and rectal cancer and no significant associations for non-Hodgkin lymphoma, leukemia, melanoma, and cancers of the breast, colon, pancreas, kidney, and lung. Analyses of cancers of the ovary, bladder, kidney, breast, and thyroid were published after additional follow-up of the cohort. With over 10 additional years of cancer ascertainment, risks of bladder and ovarian cancer remained increased. The association between water nitrate ingestion and ovarian cancer was stronger among women with lower vitamin C intakes. Smoking, but not vitamin C intake, modified the association between nitrate in water and bladder cancer; increased risk was apparent only in current smokers. Breast cancer risk was not associated with drinking water nitrate ingestion overall but was increased among women with higher folate intake. Kidney cancer risk was increased among women whose average exposure

Table 2

Case–control and cohort studies of drinking water nitrate (NO3-N) and cancer (2001–17)

First author (year) Country Mueller et al. (2001) and Mueller et al. (2004) 6 countries

Weyer et al. (2001) United States

Study design Years Regional description

Exposure description

Population-based case–control; Incidence, 1976–94; Los Angeles County, San Francisco Bay Area, Seattle–Puget Sound, United States (2001); Paris, France; Milan, Italy; Valencia, Spain; Winnipeg, Canada (2004, pooled with US study). Dipstick measurements of nitrate and nitrite in the pregnancy water supply among women who had not moved (185 cases, 341 controls); population excluding bottled water users (131 cases, 241 controls). Prospective cohort study; Incidence, 1989–98; Iowa. Average nitrate level (1955–88) in public water supplies for residence at enrollment >10 years (highest quartile: >2.46 mg L−1)

Population-based case–control Incidence, 1986–89 Iowa

Average nitrate level in public water supplies 1960–87 (highest quartile men: 3.1 mg L−1; women: 2.4 mg L−1); years of exposure >10 mg L−1

De Roos et al. (2004) United States

Population-based case–control Incidence, 1986–89 Iowa

Average nitrate level in public water supplies 1960–87 categorized into four levels (lowest: <5 mg L−1); years of exposure >5 and >10 mg L−1

Coss et al. (2004) United States

Population-based case–control Incidence, 1986–87 Iowa Case–control study Incidence, 1988–95 Cape Cod, Massachusetts Population-based case–control Incidence, 1989–93 66 counties in Eastern Nebraska Population-based case–control Incidence, 1998–2000 Iowa

Average nitrate level in public supplies 1960–87 (highest quartile: >2.8 mg L−1); years of exposure >7.5, >10 mg L−1 Average nitrate level in public supply wells after 1972; years exposed >1 mg L−1 nitrate-N

Brody et al. (2006) United States Ward et al. (2005) United States Ward et al. (2006) United States

Average nitrate level in public water supplies 1960–86 Nitrate in public water supplies among those with estimates for >70% of person-years >1960 (181 cases, 142 controls); nitrate measurements for private well users at time of interviews (1998–2000; 54 cases, 44 controls)

Summary of findings a,b

Childhood (<15 years) malignant brain tumors

>11 mg L−1 NO3-N vs. ND: OR = 1.0 (CI: 0.4, 2.2); excluding bottled water users, OR = 1.5 (CI: 0.6, 3.8). >1.5 mg L−1 NO2-N vs. ND: OR = 2.1 (CI: 0.6, 7.4); excluding bottled water users, OR = 5.2 (CI: 1.2, 3.3); well water for entire pregnancy (vs. public supply) increased risk in Canada and Seattle

Bladder, breast, colon, kidney, Public supply average >2.46 mg L−1 NO3-N vs. 10 mg L−1 Colon rectum No association with average level, years >5 and 10 mg L−1; significantly elevated risk among subgroups with below-median vitamin C intake or above-median meat intake and 10 or more years >5 mg L−1 Pancreas No significant associations with quartiles of public water supply average nitrate or number of years >7.5 or 10 mg L−1 Breast Public water supply average >1.2 mg L−1 NO3-N vs. 5.0 mg L−1 NO3-N vs. ND OR = 0.8 (CI: 0.2, 2.5); public supply: average >2.9 mg L−1 NO3-N vs.
>10 years and ipsilateral use OR (95% CI)

Cases

0.9 (0.3–2.8)a 3.9 (1.6–9.5) 1.8 (1.1–3.1)

5 12 23

2.6 (0.9–7.9) 1.6 (1.1–2.2)i

6 121

OR, odds ratio: a measure of risk showing how many times more often cancer cases were exposed to mobile phones in the past compared to controls. The Interphone Studies: Denmark (Christensen et al., 2005), France (Hours et al., 2007), Germany (Schuz et al., 2006), Japan (Takebayashi et al., 2008), United Kingdom (Hepworth et al., 2006), Israel (Sadetzki et al., 2007), Norway (Klaeboe et al., 2007), Sweden (Lönn et al., 2005), Finland (Shrestha et al., 2015), five North European countries combined (Schoemaker et al., 2005 and Lahkola et al., 2007, the latter with further cases recruited). a ≥6 years. b ≥46 months. c >25 years. d 15–20 years. e Life-long cumulative duration ≥896 h. f ≥18,360 calls. g >5.2 years. h >8 years. i Cumulative no. of calls >5479. j 7.4 years or more. k Stated as "duration of use."


>10 years use OR (95% CI)

Tumor type and study


Table 1



The findings are summarized below according to tumor type:
Glioma (tumor of brain tissue): In Sweden, 1.6- and 1.2-fold increased risks have been found after short-term exposure to digital mobile phones. For 10 or more years of exposure, risks are elevated 3.5- and 2.4-fold for analogue phones, and 3.6- and 2.8-fold for digital phones. For 10 years or more of ipsilateral use, increased risks of 1.8-fold for analogue and 2.3-fold for digital cell phones have been found. According to the combined analysis of the North European country studies, a 1.4-fold increase in risk has been detected for 10 years or more of ipsilateral use. In the CEFALO study on childhood brain tumors, in a subgroup analysis of participants with operator data available, brain tumor risk was related to the duration of the child's mobile phone subscription but not to the amount of use. The CERENAT study, with data collected between 2004 and 2006, found a significant 2.1-fold risk of glioma in the last quintile of cumulative number of calls. Hardell's most recent case-control study, with cases diagnosed between 2007 and 2009 and thus providing more cases with longer exposure than earlier studies, found significantly increased ORs of 1.8 and 1.6 for ever use of analogue and digital 2G mobile phones, respectively, while the risks were higher (3.3 and 2.1) for latencies of >25 years and 15–20 years, respectively. In the same study, ipsilateral use was associated with higher risks compared to contralateral mobile and cordless phone use.
Meningioma (tumor of the membranes covering the brain): In studies from Sweden, 2.1- and 1.6-fold elevated risks have been found for 10 years or more of exposure to analogue mobile phones. An OR of 2.6 was found in the CERENAT study for a life-long cumulative duration of ≥896 h on the phone.
Acoustic neuroma (tumor of the nerve for hearing): In studies from Sweden in 2002 and 2005, increased risks of 3.0- and 9.9-fold were found for short-term exposure to analogue phones, respectively. Another study from Sweden found a 2.3-fold elevated risk for digital and a 1.4-fold risk for analogue phones; in the same study, the risk was 3.1-fold for 10 years or more of exposure. Yet another study from Sweden found a significant 3.9-fold increase in risk for 10 years or more of ipsilateral use, while the North European countries' combined analysis revealed a 1.8-fold elevated risk.
Parotid gland tumors (tumors of the largest salivary gland): An increased risk of 1.6 related to heavy use on the ipsilateral side was detected in a study from Israel.
Pituitary gland tumors: A reduced odds ratio of 0.4 was seen in a study among regular mobile phone users compared to never/nonregular users, possibly reflecting methodological limitations. Other studies had not detected increased risks until a recent study from China found an elevated risk of 7.6 for ever use among cases diagnosed between 2006 and 2010. The pituitary gland could be considered to lie at a location with less penetration of EMF waves compared to the sites of the other cancers mentioned above, according to skull penetration models (Fig. 2).
The publication of the Interphone studies has opened several discussions in this field. These studies have been criticized for methodological problems and bias. Other authors have stated that the time lag allowed for tumor development was not long enough, that studies finding no risk were misleading, and that the emphasis on potential risks was insufficient.
In the final combined analysis of the Interphone studies, an elevated risk for ipsilateral and temporal lobe glioma is mentioned for heavy users, with the comment that, due to the limitations of the study, this relationship might not be causal and that long-term impacts on heavy users should be evaluated by future studies. Indeed, even for tobacco, which contains so many carcinogens, a time lag of 20, 30, or sometimes 40 years

Fig. 2 The penetration of electromagnetic radiation from a cell phone based on age. https://www.mobilesafety.com.au/mobile-phone-radiationabsorption-rates/ (accessed August 8, 2018).



passes between starting smoking and the diagnosis of a cancer, including the latency from the onset of the cancer to its diagnosis, which is estimated at 13.6 years for the lung and even longer, 21.9 years, for the brain. Evidence from other types of epidemiologic studies not included in the table is as follows: In a case-case study on acoustic neuromas, tumor volume increased significantly with increasing cumulative hours on the mobile phone (r² = 0.144, P = .002), and regular mobile phone users had significantly larger tumors than nonregular users (8.1 ± 10.7 cm³ vs. 2.7 ± 3.8 cm³, P < .001), indicating a possible link between mobile phone use and tumor growth. In a nationwide cohort study from Denmark, 355,701 private mobile phone subscribers from 1987 to 1995 were followed until 2007, and incidence rate ratios (IRR) were calculated for skin cancers. After a follow-up period of at least 13 years, the IRRs for basal cell carcinoma and squamous cell carcinoma were around 1.0. Among men, the IRR for melanoma of the head and neck was 1.2 (0.7–2.2), and the corresponding IRR for the torso and legs was 1.2 (0.9–1.5). Although relying on smaller numbers, a similar nonsignificant risk pattern was seen among women. The Million Women Study, a prospective cohort from the United Kingdom, examined 791,710 middle-aged women to explore the relation between mobile phone use and the incidence of intracranial central nervous system (CNS) tumors and other cancers. The risk among ever versus never mobile phone users was 1.0 (0.9–1.1) for all intracranial CNS tumors, with no significantly increased risk for specified CNS or 18 other site-specific cancers. For >10-year users compared with never users, the glioma risk was 0.8 (0.6–1.1) and the meningioma risk 1.1 (0.7–1.8). For acoustic neuroma, there was a significantly increased, 2.5-fold (1.1–5.6) risk for exposure over 10 years, the risk increasing with duration of use (trend among users, P = .03). A recent meta-analysis on glioma evaluating 11 studies found a significant relationship between >5 years of mobile phone use and glioma risk, with a pooled OR of 1.4 (1.1–1.6). Fig. 3 shows an increasing trend in the incidence especially of benign/borderline tumors of the brain and other nervous system (SEER 18 areas), along with the increase in U.S. mobile-cellular subscriptions per 10 inhabitants. All acoustic neuromas and approximately 90% of meningiomas are benign, while gliomas can be either benign or malignant. A trend analysis from the U.S. concluded that glioma incidence rates remained generally constant between 1992 and 2008. Similarly, an ecological trend analysis from Australia between 1982 and 2012 found significant elevations in brain cancer incidence only in the age group >70 years, but this trend had started in 1982, before the introduction of mobile phones, possibly due to improved diagnostic facilities. Another comprehensive time-series analysis, from England between 1985 and 2014, found no evidence of an increase in malignant glioma, glioblastoma multiforme, or malignant neoplasms of the parietal lobe; however, malignant neoplasms of the temporal lobe, the lobe closest to the ear and thus to mobile phones, increased faster than expected.
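The incidence rate ratio (IRR) used in the Danish cohort study is simply the ratio of the incidence rates in exposed and unexposed person-time. A toy calculation with made-up counts (not the study's data):

```python
# Hypothetical cases and person-years for subscribers vs. non-subscribers.
cases_exposed, person_years_exposed = 24, 2_000_000
cases_unexposed, person_years_unexposed = 100, 10_000_000

# Incidence rates per 100,000 person-years in each group.
rate_exposed = cases_exposed / person_years_exposed * 1e5
rate_unexposed = cases_unexposed / person_years_unexposed * 1e5
irr = rate_exposed / rate_unexposed
print(f"IRR = {irr:.2f} "
      f"({rate_exposed:.2f} vs {rate_unexposed:.2f} per 100,000 person-years)")
# IRR = 1.20 (1.20 vs 1.00 per 100,000 person-years)
```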

Fig. 3 Recent trends in cancer incidence rates and mobile-cellular telephone subscriptions in the United States, 2000–15. (Data sources: SEER Recent Trends in Incidence Rates, 2000–15 (both sexes, all ages, all races, SEER 18 areas), NIH, NCI, Surveillance, Epidemiology, and End Results Program (SEER), SEER*Explorer, accessed August 7, 2018, https://seer.cancer.gov/explorer/application.php and ITU Country ICT Data (until 2016), mobile-cellular subscriptions, accessed August 8, 2018, https://www.itu.int/en/ITU-D/Statistics/Documents/statistics/2018/Mobile_cellular_2000-2016.xls, with rates/ratios adapted to appropriate denominators to ease visualization of the data on a scale between 0 and 15.)



Points to Be Considered When Reading Studies on the Impacts of Mobile Phones on Health

There are several sources of bias, especially in studies on mobile phones and cancer. The most important bias in the first studies on brain tumors was the too-short duration of exposure to mobile phones for cancer development, mentioned above, which precluded the detection of a relationship. A study evaluating all mobile phone and brain tumor case-control studies up to March 2009 found that, among the 11 single-country studies of Interphone, 15 ORs were under 1.0 versus only two ORs greater than 1.0 indicating increased risk. According to the author, either cell phones are protective or there are important design errors in the Interphone study protocol. According to IARC, 3.5 million Euros were received from GSM firms and 3.85 million Euros from EU funds to conduct the Interphone studies. On the other hand, Hardell et al. from Sweden, who conducted research financially independent of the industry, have found many increases in risk related to mobile and wireless phone use. Morgan identified 11 design flaws in the Interphone studies, 8 of them tending to underestimate the risk of brain tumors, such as a 0% prevalence of 10 or more years of use in three of the studies (Norway, France, Germany), or the higher nonresponse rate among the control groups due to refusals by people not using mobile phones, leading to control groups with a higher prevalence of mobile phone use than the general population. If mobile phones cause cancer after a long latent period, mobile phone use today might be causing many cancers in the future; but as there were far fewer users in the years in which these studies were conducted, mobile phones might have caused very few of the cancers evaluated. The induction period of radiation-related cancers is usually more than 10 years. Dr. Lai's evidence-based warning on financing bias, already mentioned above, should be considered as well.

Conclusions

Mobile phones could interfere with the functioning of our cells, tissues, and organs and might be responsible for some of the general, nonspecific symptoms observed daily by many people, with some studies showing a dose-response. More research is needed to arrive at more specific conclusions. As for cancer, although many studies have not yet found significantly increased risks, the presence of significantly increased risks, mostly after 10 years of exposure and on the ipsilateral side, may be a warning of an association with cancer over a longer time lag. Further studies with more prolonged durations of exposure are therefore needed; however, as the number of unexposed people has in the meantime decreased dramatically, cumulative exposure measures might be useful in such studies. Although epidemiologic studies might find associations, their confirmation by laboratory tests and experimental studies, and thus biologic plausibility, is also important for the precision of the conclusions. Until the scientific evidence about the risks is clear, it would be wise to take precautionary measures in daily life, such as using wired earphones, choosing mobile phones with lower SAR values, limiting the use of mobile phones inside vehicles or when very far from base stations, and, in general, minimizing their use to necessary situations and using alternatives such as fixed telephones and computers with cabled internet where accessible.

Possible Health Effects of Mobile Phone Base Stations

Introduction

Wireless technology has seen unprecedented growth around the world and has become widespread. In addition to the increasing number of subscribers, the amount of data transferred via new technologies is growing. Consequently, the number of base stations is increasing steadily, especially in urban centers. In mobile communication, the whole coverage area is divided into small sections called cells. At the center of each cell is a base station that allows us to communicate. The base stations are connected to one another in a network structure, and this network carries the call request from any mobile phone to the relevant user. Mobile phones and base stations are linked to each other through electromagnetic waves. This cellular structure allows multiple users to communicate at the same time. However, the connection capacity of each base station is limited, and the increasing number of calls and growing data transmission create a need for new base stations day by day. Moreover, the greater the number of subscribers communicating within the area covered by a base station, the lower the performance of data transmission. As the speed of data transmission becomes more and more important, GSM operators need more base stations to fulfill their commercial service commitments. In a review on the health impacts of base stations, it was reported that findings about adverse effects on health are not yet sufficient; but the absence of evidence does not mean that there is no risk. It was noted that studies should focus on children and adolescents and should be done prospectively. Base stations around public spaces such as playgrounds, parks, market places, and schools have also become a subject of debate among public officials responsible for protecting public health, and there is conflicting information in public opinion on the health effects of base stations. Studies examining the health impacts of electromagnetic radiation (EMR) have reported a wide range of effects, such as various health symptoms, cancer, and changes in hormone and neurotransmitter levels. The goal of this review is to compile current scientific studies evaluating the health impacts of base stations, to emphasize the effects on human health, and to contribute to meeting the need for scientific knowledge on this subject.


Characteristics of studies that examine the health effects of base stations

Study

Year of publication

Journal

Setting

EMF measurement

Santini et al.

2002

Pathologie Biologie

France

Santini et al.

2003

Pathologie Biologie

Navarro et al.

2003

Eger et al.

Sample

Results

−

Total 530: 270 men, 260 women

France

−

Electromagnetic Biology and Medicine

Spain

+

Total 530: 270 men, 260 women 101

2004

Umwelt Medizin Gesellschaft

Germany

−

967

Wolf and Wolf

2004

International Journal of Cancer Prevention

Israel

+

622 cases 1222 controls

Hutter et al.

2006

Austria

+

365

Regel et al.

2006

Occupational and Environmental Medicine Environmental Health Perspectives

Switzerland

+

117 healthy subjects 33 sensitive, 84 nonsensitive

Abdel-Rassoul et al.

2007

Neurotoxicology

Egypt

+

85 cases 80 controls

Eltiti et al.

2007

Environmental Health Perspectives

United Kingdom

+

56 cases 120 controls

Riddervold et al.

2008

Bioelectromagnetics

Denmark

+

40 adolescents 40 adults

Depression, memory loss, dizziness, loss of libido were found to be significantly more frequent up to 100 m of the base station, compared to >300 m. Headache, sleep disturbance, discomfort, irritability were more frequent up to 200 m, only fatigue was significantly more frequent in 200–300 m of the base station More symptoms were detected among people living 0.99 NR

TTHMs DCAAs

mg/L mg/L

34. Rodrigues et al.

2007

Laboratory

21 (+15)

NR

mg/L

35. Hong et al.

2007

Laboratory

36. Uyak et al.

2007

Field

37. Chaib and Moschandreas

2008

Laboratory

72 72 72 NR NR NR

>0.87d > 0.94d > 0.86d 0.88 0.61 0.75

TTHMs CHCl3 BDCMs DBCMs CHBr3 TTHMs TBrTHMs CHCl3 TTHMs HAAs TTHMt

mg/L mg/L mg/L

mg/L mg/L mg/L

mg/L mg/L mg/L mg/L mg/L mg/L

e^(−0.325 Br) · e^(0.0145 (Cl1 × pH)) · Cl1^(2.32) · e^(8.46 P) · e^(−2.31 pH)
K = 6.62 (pH)^−0.13 · (Br + 1)^0.10 · (Cl1)^−0.75
Ai = G + e^ai · (pH)^bi · (Cl1)^ci · (P + G)^di · e^(ei·Br + fi·Br² + gi·Br), i = 1, 2, …, 13
P = m·Br / (m·Br + m·Cl1), where (m·Br) = moles of Br ion and (m·Cl1) = moles of initial chlorine
0.26 (Ch-a) + 1.57 (pH) + 28.74 [Br] − 66.72 [Br]² − 43.63 (S) + 1.13 (Sp) + 2.62 (T·S) − 0.72 (T·D)
0.32 (Ch-a) + 0.68 (pH) + 2.51 (D) + 1.93 (Sp) − 22.1 (S) + 1.38 (T·S) − 0.12 (T·D)
0.37 (Ch-a) + 0.32 (pH) + 16.16 [Br] − 29.82 [Br]² + 1.88 (D) + 5.17 (S) − 0.37 (T·S) − 0.12 (T·D)
Linear regression as a function of various THM species
Single linear and nonlinear regression models for the water of each utility (as a function of water temperature, TOC, chlorine dose, and contact time)
a1·D·{1 − f·e^(−kr·t) − (1 − f)·e^(−ks·t)}
Many linear and multiple linear regression models were developed
Many regression models were developed and discussed
exp(0.33 pH − 0.02 pH² + 0.12 t − 0.004 t²)
exp(0.44 pH − 7.53 (log pH) − 1.1 D + 0.20 D²)
0.33 pH − 0.02 pH² + 0.48 t + 0.09 D
0.98 (log pH) + 1.1 (log t) − 0.01 (t × D) + 1.59 (log D)
4.53 (t)^0.127 (D)^0.595 (TOC)^0.596 (Br)^0.103 (pH)^0.66
0.0071 (TOC + 3.2)^1.314 (4 pH)^1.496 (D − 2.5)^−0.197 (T + 10)^0.724
Various nonlinear regression models were developed based on the kinetics of chlorine decay
TTHM_O + xi·d + yi·d²
Various nonlinear regression models were developed based on the kinetics of chloramine decay
16 + 1.6 (FA) + 0.1 (D) + 0.3 (T) − 0.8 (FA)(T) − 1.2 (FA)² − 2.8 (D)²
3.5 + 0.8 (FA) + 0.02 (D) + 0.07 (T) − 0.3 (T)²
4.5 + 0.7 (FA) + 0.04 (D) − 0.08 (D)² + 0.4 (T)²
4.0 + 0.4 (FA) + 0.05 (D) + 0.01 (T) − 0.7 (FA)² − 1.0 (D)²
4.0 − 0.2 (FA) + 0.03 (D) + 0.09 (T) − 0.6 (FA)(T) − 0.5 (FA)² − 0.8 (D)²
0.042 [(t)^0.258 (D/DOC)^0.194 (pH)^1.695 (T)^0.507 (Br)^−0.218]
0.00043 [(t)^0.295 (pH)^3.154 (T)^0.421 (Br)^−0.184]
0.177 [(t)^0.21 (D/DOC)^0.221 (pH)^1.374 (T)^0.532 (Br)^−0.184]
0.916 [(t)^0.174 (D)^0.654 (pH)^1.322 (SUVA)^0.712]
0.916 [(t)^0.172 (D)^0.351 (pH)^−1.248 (SUVA)^−0.469]
TTHM_a + (3.46 + 0.54 T) sin(2πt/24)


Author

2008

Field

180

0.72

39. Semerjian et al. 40. Chowdhury and Champagne

2009 2009

Field/laboratory Field/laboratory

53–160 NR

0.11–0.70 NR

41. Chen and Westerhoff

2010

Field/laboratory

42. Shakhawat et al.

2011

Field

210 210 207 207 NR

0.87 0.88 0.84 0.84 0.77–0.96

43. Brown et al. 44. Platikanov et al. 45. Zhang et al.

2011 2012 2013

Field/laboratory Field Laboratory

NR 162 NR

NR NR NR

46. Ziv Ohar and Avi Ostfeld 47. Peleato and Andrews

2014 2015

Laboratory Field/laboratory

NR NR

NR NR

48. Ged et al. 49. Fischer

2015 2015

Field/laboratory

NR

0.52–0.80

50. Islam et al.

2016

Field

22

51. Abokifa et al.

2016

Laboratory

NR

0.724 0.677 NR

mg/L mg/L mg/L

5.2 (DOC_R)^0.322 (DOC_T)^0.761 (Pre-Cl2)^0.206 (Post-Cl2)^0.184 (T)^0.204
2138 (DOC_R)^0.38 (DOC_T)^0.774 (Pre-Cl2)^0.102 (pH_R)^−2.6 (T)^0.204
Twelve regression models were developed and discussed
k = 0.0011 e^(0.0407 T)
C_hw = C_w e^((k_hw − k_w) t)
1147 × (DOC)^0.0 × (UV)^−0.83 × (Br + 1)^0.27
1805 × (DOC)^0.11 × (UV)^1.22 × (Br + 1)^−2.19
1151 × (DOC)^−0.17 × (UV)^0.89 × (Br + 1)^−0.60
189 × (DOC)^−0.57 × (UV)^0.73 × (Br + 1)^−2.42
Eight linear and nonlinear regression models were developed for each THM_PP/HWT and HAA_PP/HWT
k_tc (C1 − C_O)
Various linear and nonlinear regression methods were used to develop models
C_X = C_m (e^(−k2 t) − e^(−k1 t))

mg/L mg/L

TTHM(t) = Tf·x(t) + Mo
C1 + C2 × (D) + C3 × NOM (concentration)

TTHMs HAAs

mg/L

Critical evaluation of various existing DBP models using common data sets
1.0149 (T)^0.6525 (ta)^0.1712 (NTU)^0.3545 (Alk)^0.1271 exp(0.2123 pH) (F)^−0.2704

THMs HAAs THMs ΔTHM/Δt

mg/L

TTHMs HAAs TTHMs THMs THM (FP) CHCl3 (FP) HAA (FP) DCAA (FP) THMsPP/HWT HAAsPP/HWT TTHMs THMs THMs HAAs TTHMs THMs HAAs

mg/L mg/L mg/L mg/L

mg/L

mg/L

0.6845 (Cl)^1.5673 (Coag dose)^−0.0797 (ta)^0.1686 (T)^0.2231 (Fc)^−0.4625
1.02 × (10)^−1.53 (T)^0.47 (pH)^4.55 (UV254) (10)^0.62 (T)^−0.81 (FRC)^0.8447
Y_H · k_Cl · Cl_b · X_b

Nomenclature: TTHMs, total trihalomethanes; TTHM_O, initial TTHM concentration; TTHM_t, TTHM concentration at any time of the day; TTHM_a, average TTHM concentration in a given day; THM_PP, total trihalomethanes for plumbing pipes; THM_HWT, total trihalomethanes for hot water tank; CHCl3, chloroform; BDCMs, bromodichloromethane; DBCMs, dibromochloromethane; CHBr3, bromoform; HAA_PP, haloacetic acids for plumbing pipes; HAA_HWT, haloacetic acids for hot water tank; MCAAs, monochloroacetic acid; DCAAs, dichloroacetic acid; TCAAs, trichloroacetic acid; CAAs, chloroacetic acid; MBAAs, monobromoacetic acid; DBAAs, dibromoacetic acid; UV, UV absorbance at 254 nm (cm−1); TOC, total organic carbon (mg/L); NVTOC, non-volatile organic carbon (mg/L); DOC, dissolved organic carbon (mg/L); POX, purgeable organic halide (mg/L); NPOX, nonpurgeable organic halide (mg/L); T, water temperature (°C); Flu, fluorescence (%); D, chlorine dose (mg/L); f, fraction of the chlorine demand attributed to rapid reactions; C_O, residual chlorine at the treatment plant after chlorination (mg/L); Cl1, initial chlorine concentration; C1, initial residual chlorine (mg/L); a, parameter depending on the location at which chloroform is predicted; a1, TTHM yield coefficient; ε, random error; b, water dispersion parameter in the water distribution system; kr and ks, first-order rate constants for rapid and slow reactions, respectively; Br, bromide ion (mg/L); t, reaction time (h); S, dummy variable (summer); Sp, dummy variable (spring); G = 1 for chlorinated compounds and G = 0.0001 otherwise; Ch-a, chlorophyll-a (mg/m³); ai, bi, ci, di, ei, fi, gi, constants depending on the type of DBP (see Clark et al., 2001); NF, dummy variable near or far; Y, year of sampling expressed by binary numbers; NR, not reported; xi, yi, regression constants derived for each month i as a function of temperature T (Chaib and Moschandreas, 2006); d, distance traveled in km; FA, hydrophobic fulvic acid; DOC_R and DOC_T, dissolved organic carbon for raw and treated water (mg/L); Pre- and Post-Cl2, pre- and post-chlorine dose (mg/L); pH_R, pH of raw water; SUVA, specific UV absorbance; x(t), chlorine demand at time t; Tf, parameter relating TTHM formation to chlorine demand; Mo, TTHM concentration at time t = 0; k_tc, indicator of the TTHM productivity of the water (mg/L TTHM per mg/L chlorine); Cl_b, free chlorine concentration in the bulk solution (mg Cl/L); X_b, bulk biomass concentration (mg C/L); k_Cl, second-order reaction rate (L/mg C s); Y_H, THM formation yield as a fraction of chlorine demand (mg THM/mg Cl); C_X, formation of disinfection by-product during chlorination (mg/L); C_m, maximum formation of disinfection by-product during chlorination (mg/L); k1, k2, rate constants (h−1); k, THM growth rate at T_k (min−1); C_w, THM concentration in cold water (mg/L); C_hw, THM concentration in heated water (mg/L); k_w, THM formation rate in cold water (min−1); k_hw, THM formation rate in heated water (min−1); C1/C2/C3, constants determined via regression; FRC, free residual chlorine (mg/L); Alk, alkalinity (mg/L as CaCO3); ta, water age (hours); Coag dose, coagulant dose (mg/L); Fc/F, fluoride concentration (mg/L); NTU, turbidity; FP, formation potential. a More than one model presented. b Temperature (T) is in degrees K. c Time in minutes. d Values are R²_adj.


38. McBean et al.


Summary of regression models for other DBPs


Table 5

Year

Data source

n

R2

Output

Units

Predictive models

1. Ozekin; Ozekin et al.

1994 1998

Laboratory

NR

NR

Bromate (BrO3)

mg/L

1.55 × 10^−6 (DOC)^−1.26 (pH)^5.82 (O3)^1.57 (Br)^0.73 (t)^0.28; for temperatures other than 20°C, the bromate concentration can be modified by the following relationship

2. Siddiqui et al.

1994

Laboratory

1996

Field

0.94 0.78 0.95 0.88 0.64 0.68 0.87–0.97

CHBr3 CHBr3 TOBr Bromate Bromate Bromate Bromate (BrO3)

mg/L

3. Song et al.

70 30 70 54 22 173 119–239

4. Korn et al.

2002

Laboratory

112 112

0.95 0.95

Chlorite Chlorate

5. Sohn et al.

2004

Laboratory

mg/L

2007 2007 2010

0.77 0.98

Bromate TOBr

6. Jarvis et al. 7. Civelekoglu et al. 8. Chiang et al.

85 98

[BrO3]_T = [BrO3]_20°C × (1.035)^(T − 20)
7.3 (DOC)^1.33 (pH)^−1.25 (O3)^0.771 (Br)^1.56 (T)^0.909 (24-h predictions)
2.68 (DOC)^1.28 (pH)^−1.31 (O3)^0.742 (Br)^1.55 (T)^0.956 (t)^0.353
5.1 (DOC)^1.07 (pH)^1.05 (O3)^0.766 (Br)^1.53 (T)^1.08 (24-h predictions)
1.5 × 10^−3 (DOC)^−0.74 (pH)^−2.26 (O3)^0.64 (Br)^0.61 (θ)^2.03
1.5 (DOC)^−0.75 (pH)^−2.25 (p + 1)^1.31 (Br)^0.60
0.26 (DOC)^0.86 (pH)^3.27 (DO3)^0.22 (Br)^0.67 (t + 1)^0.25 (for 0 < t < 1 h)
13 different linear regression models (one per fraction and per water source) for bromate as a function of bromide, DOC, ammonia nitrogen, ozone dose, inorganic carbon, and reaction time
exp(0.346 − 0.07 log(pH) − 0.025 log(T) − 0.597 log(C + 1) − 0.136 log(t + 1) − 0.0038 log(NPOC·UV254) + 0.293 log(T)·log(C + 1) + 0.393 log(pH)·log(C + 1) + 0.67 log(NPOC·UV254)·log(C + 1) − 0.161 log(NPOC·UV254)·log(t + 1))
exp(−1.99 + 0.62 log(pH) − 0.09 (T) + 0.698 log(C + 1) − 0.104 log(t + 1) + 0.046 log(NPOC·UV254) + 0.389 log(T)·log(C + 1) + 0.346 log(C + 1)·log(t + 1) + 0.486 log(NPOC·UV254)·log(C + 1) − 0.119 log(NPOC·UV254)·log(t + 1))
Many regression models were developed and discussed

Laboratory Field/Laboratory

64 NR

0.12–0.84 >0.83

mg/L mg/L

Review paper
Six multiple regression models were developed and discussed
ΔDBP/Δt = K_F-DBP [DOC] − K_H-DBP [DBP]

9. Chen and Westerhoff

2010

Field/laboratory

166 134

0.62–0.66 0.77–0.80

Bromate HANs HKs HNMs HAN (FP) NDMA (FP)

10. Hong et al.

2015

Field

33

11. Leavey-Roback et al.

2016

Laboratory/field

NR

0.974 0.96 NR

T-HNM(Cl2) T-HNM(NH2Cl2) NDMA

12. Zhu and Zhang

2016

Laboratory

NR

>0.9

TOX

mg/L mg/L

mg/L

mg/L

mg/L ng/L

17.05 × (DOC)^0.15 × (UV254)^0.72 × (Br + 1)^3.78 × (DON + 1)^0.67
1.65 × (DOC)^0.87 × (UV254)^0.00 × (Br + 1)^3.22
0.06 × (DOC)^3.00 × (UV254)^−1.42 × (Br + 1)^−0.33 × (DON + 1)^−0.06
0.09 × (DOC)^2.81 × (UV254)^−1.36 × (Br + 1)^0.52
(10)^5.267 (DON)^−6.645 (Br)^0.737 (DOC)^−5.537 (D)^0.333 × (t)^−0.165
(10)^−2.481 (NH2Cl)^0.451 (NO2)^−0.382 (Br)^0.630 × (t)^0.640 (T)^0.581
35.8 (UV254) + 1 (sucralose conc. in ug/L PI) − 2.6 (polyDADMAC dose in mg/L as active ingredient) − 7.5 (log prechlorination time in minutes) + 4.4 (GAC use = 1, no GAC use = 0) − 2.6 (pH × NH2Cl) − 1 (Cl2/N weight ratio) − 10.4 (biofilter use = 1, no biofilter use = 0) + 38.8
Two kinetic models were developed using various reactions to investigate the formation of halogenated DBPs in chlorination and chloramination

Nomenclature: NR, not reported; DOC, dissolved organic carbon (mg/L); O3, transferred ozone dose (mg/L); Br, bromide ion concentration (mg/L); t, reaction time (min); UV254, UV absorbance at 254 nm (cm−1); C, chlorine dioxide concentration (mg/L); NPOC, non-purgeable organic carbon (mg/L); T, temperature (°C); DO3, dissolved ozone concentration (mg/L); TOBr, total organic bromine (mg/L); p, peroxone ratio (H2O2/O3); θ (q), ozonation temperature (°C); NDMA_DS, N-nitrosodimethylamine in the distribution system; PolyDADMAC, a known NDMA precursor; GAC, granular activated carbon; D, chlorine dose (mg/L); DOC/DON, organic matter concentration; DON + 1, models with DON factor; HAN, haloacetonitrile; FP, formation potential; HKs, haloketones; TCNM, trichloronitromethane; K_F, pseudo-first-order rate constant for formation; K_H, pseudo-first-order rate constant for hydrolysis.
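As a worked illustration of one entry in Table 5, the temperature adjustment quoted for the Ozekin bromate model, [BrO3]_T = [BrO3]_20°C × (1.035)^(T − 20), can be applied as follows (the baseline bromate value is a hypothetical input, not a measured result):

```python
# Apply the Table 5 temperature correction to a bromate prediction made at 20 C.
bro3_at_20C = 8.0  # ug/L, hypothetical model output at 20 degrees C
for temp_C in (10, 20, 30):
    bro3 = bro3_at_20C * (1.035 ** (temp_C - 20))
    print(f"T = {temp_C:2d} C -> predicted bromate = {bro3:.2f} ug/L")
# T = 10 C -> 5.67 ug/L; T = 20 C -> 8.00 ug/L; T = 30 C -> 11.28 ug/L
```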


Author



Regression-based models are often evaluated using classical statistical criteria (e.g., the coefficient of determination (R²), the correlation coefficient (r), and the mean absolute error (MAE) between measured and predicted levels). Authors often judge the predictive ability of models on these criteria alone, without referring to the specific conditions or circumstances under which the models were applied. In addition, most models are validated with the same data source used for their calibration rather than with external databases; this prevents generalization of the findings and reduces the reliability of the proposed models. Moreover, only a few studies have performed sensitivity analyses on their independent (explanatory) variables, and most studies do not include or address uncertainties in DBP models. Critical discussions of these models, their main advantages, and their major outcomes can be found in various references (see "Further Reading"). A common criticism of these models concerns the type of (laboratory-scale) data used, which does not consider the realistic variations in water temperature and chlorine disinfectant levels (which can be substantial over a year) in water utilities. Therefore, the use of these models to directly predict seasonal variations in DBP formation under full-scale conditions still has limitations. Models based on full-scale data, for their part, are often developed from a limited number of observations, which limits their use for decision-making purposes.
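A minimal sketch of the external-validation practice argued for above: calibrate a simple linear model on one data set and report MAE and R² on a separate data set. All data are synthetic and the variables and coefficients are illustrative assumptions, not values from any cited model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic calibration data: two predictors (e.g., TOC and chlorine dose).
X_cal = rng.uniform(1, 10, size=(60, 2))
y_cal = 3.0 * X_cal[:, 0] + 1.5 * X_cal[:, 1] + rng.normal(0, 2, 60)
# Separate "external" data set for validation.
X_ext = rng.uniform(1, 10, size=(30, 2))
y_ext = 3.0 * X_ext[:, 0] + 1.5 * X_ext[:, 1] + rng.normal(0, 2, 30)

# Ordinary least squares fit (with intercept) on the calibration set only.
coef, *_ = np.linalg.lstsq(np.c_[X_cal, np.ones(len(X_cal))], y_cal, rcond=None)
pred = np.c_[X_ext, np.ones(len(X_ext))] @ coef

# Report performance on the external set, not on the calibration data.
mae = np.mean(np.abs(y_ext - pred))
r2 = 1 - np.sum((y_ext - pred) ** 2) / np.sum((y_ext - y_ext.mean()) ** 2)
print(f"External MAE = {mae:.2f}, external R2 = {r2:.2f}")
```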

Artificial Neural Networks (ANNs)

An ANN is a modeling technique inspired by the human nervous system that allows learning by example from representative data describing a physical phenomenon or a decision process. A unique feature of ANNs is that they are able to establish empirical relationships between independent and dependent variables and to extract subtle information and complex knowledge from representative data sets. These relationships can be established without assumptions about any mathematical representation of the phenomena. ANN models provide certain advantages over regression-based models, including their capacity to deal with noisy data. ANNs consist of a layer of input nodes and a layer of output nodes, connected by one or more layers of hidden nodes. Input layer nodes pass information to hidden layer nodes by firing activation functions, and hidden layer nodes fire or remain dormant depending on the evidence presented. The hidden layers apply weighting functions to the evidence, and when the value of a particular node or set of nodes in the hidden layer reaches some threshold, a value is passed to one or more nodes in the output layer. ANNs must be trained with a large number of cases (data). Application of ANNs is not possible for rare or extreme events, where data are insufficient to train the model. ANNs do not allow the incorporation of human expertise (expert opinion) to substitute for quantitative evidence. ANNs can incorporate uncertainties by estimating the likelihood of each output node; however, the assumptions under which each output is most probable are unknown (i.e., neural networks are black boxes). Hidden layer nodes have no real physical meaning, so output cannot be mapped easily to process. ANNs also require a fixed route of inference, where the decision on which information to process (inputs) and how to classify it (outputs) is made in advance. To deal with nonlinearity in DBP formation, ANNs have been used to develop predictive models for DBP formation in drinking water. They can be used for optimization, control, and evaluating changes in DBP formation. Moreover, they can support decision-making related to the design and operation of drinking water facilities to meet regulatory requirements.
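The following sketch shows the kind of small feed-forward network described above, trained on synthetic data standing in for water quality inputs (TOC, chlorine dose, temperature, contact time) and a TTHM-like response. It is illustrative only, not a validated DBP model; the architecture and data are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: TOC (mg/L), chlorine dose (mg/L), temperature (C), contact time (h).
X = rng.uniform([1, 0.5, 5, 1], [8, 4, 30, 48], size=(300, 4))
# Synthetic nonlinear "TTHM" response standing in for real monitoring data.
y = 2.0 * X[:, 0] ** 0.9 * X[:, 1] ** 0.3 * (X[:, 2] / 20) ** 0.5 * (X[:, 3] / 24) ** 0.25

# Scale inputs, train on the first 250 samples, evaluate on the held-out 50.
scaler = StandardScaler().fit(X[:250])
model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
model.fit(scaler.transform(X[:250]), y[:250])
print("Held-out R2:", round(model.score(scaler.transform(X[250:]), y[250:]), 2))
```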

Other Soft Computing-Based Models

The term soft computing describes an array of emerging techniques such as fuzzy logic, probabilistic and evidential reasoning, rough sets, and genetic algorithms (it also includes ANNs). All these techniques are essentially heuristic; they provide rational, reasoned-out solutions for complex real-world problems. Recently, evidential reasoning and fuzzy-based methods have been successfully used for the modeling of DBPs. In addition, an assessment tool for health-related water quality risk has been created using a Bayesian network modeling approach. The developed tool can be adapted to other water supply systems to provide guidance to water utility managers for long-term effective operations.
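As a minimal flavor of the Bayesian reasoning behind such tools, the toy calculation below (all probabilities assumed for illustration, not values from the cited tool) updates the probability of a high DBP-risk state after an unfavorable monitoring result:

```python
# Prior probability that the system is in a "high DBP risk" state (assumed).
p_risk = 0.10
# Assumed likelihoods of a failed residual-chlorine test in each state.
p_fail_given_risk = 0.70
p_fail_given_ok = 0.15

# Bayes' rule: P(risk | fail) = P(fail | risk) P(risk) / P(fail).
p_fail = p_fail_given_risk * p_risk + p_fail_given_ok * (1 - p_risk)
posterior = p_fail_given_risk * p_risk / p_fail
print(f"P(high risk | failed test) = {posterior:.2f}")  # ~0.34
```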

Critical Research Needs: Concluding Remarks

Models presented in this article can be categorized based on the methodology for data generation, the type of dependent/independent variables, and the model's usefulness. According to this review, the main benefit of these models appears to be their usefulness in identifying factors contributing to DBP formation and predicting the fate of DBPs after disinfection. By performing sensitivity analyses on these models, the relative contributions of water quality and operational factors to the formation of DBPs can be determined. These models can be applied for predicting DBPs, but mainly subject to certain conditions (i.e., within the specific range of the independent variables) and for the specific case that served for model development (experimental water or a site-specific distribution network). DBP models can guide decision-making at different levels. However, there is still significant scope to improve the feasibility of using these models to predict DBPs for operational, epidemiological, and regulatory purposes. To achieve this, it is important that future work focus on multidisciplinary research spanning chemistry, engineering, toxicology, epidemiology, statistics, and governance/management. According to this review, the authors believe that in the coming years research efforts must focus on the following aspects:



Evaluation and Adaptation of Laboratory-Scale Models

Research efforts must focus on investigating the capacity of models derived from laboratory-scale data to estimate real seasonal and spatial variations of DBPs in distribution networks. An important part of this research should be the quantification of the distribution network contribution, that is, the "pipe effect" that increases or diminishes different DBP species. To achieve this goal, laboratory- and full-scale data should be developed at the same utilities (i.e., simultaneous generation of data), or laboratory-scale models may be developed based on various source waters so that they generalize to various real scenarios. Complementary research is required to develop strategies for the adaptation of laboratory- to field-scale models. A significant challenge will be the better estimation of water residence time in distribution networks (through hydraulic models, tracer studies, flow rate correlations, etc.). Model adaptation could include the identification of conditions (seasons, water quality, and operational ranges) under which laboratory-scale models have better predictive capabilities within a system, and the use of correction factors for other seasons. Such correction factors may vary according to the DBP species.
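One simple form the proposed correction factors could take is a season-specific ratio of observed full-scale levels to laboratory-model predictions, as in this sketch (all numbers hypothetical):

```python
import numpy as np

seasons = ["winter", "spring", "summer", "fall"]
# Hypothetical paired data: laboratory-model TTHM predictions vs. field
# observations (ug/L) for the same utility and sampling campaigns.
lab_pred = {"winter": np.array([20., 22.]), "spring": np.array([30., 28.]),
            "summer": np.array([40., 44.]), "fall": np.array([26., 30.])}
field_obs = {"winter": np.array([24., 26.]), "spring": np.array([33., 31.]),
             "summer": np.array([52., 56.]), "fall": np.array([30., 33.])}

# Seasonal correction factor = mean(field) / mean(lab prediction).
factors = {s: field_obs[s].mean() / lab_pred[s].mean() for s in seasons}
for s in seasons:
    print(f"{s}: correction factor = {factors[s]:.2f}")
# e.g., summer ~1.29: the "pipe effect" adds THMs beyond the laboratory estimate.
```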

Development of Methods to Estimate and Reduce the Uncertainties

DBP levels can be predicted with any empirical model if the values of the independent variables are known; however, more information is needed about the confidence and certainty of these data. The use of fuzzy-based techniques may be an interesting alternative to accomplish this. In addition, a two-step calibration of kinetic parameters can reduce the uncertainty of the data used for predicting DBPs: the decay parameters and the formation coefficients are modified separately. These types of models will favor the applicability of predictive modeling for operational, epidemiological, and regulatory purposes.
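A sketch of the two-step calibration idea on synthetic data: the chlorine-decay constant is fitted first, then held fixed while a TTHM formation (yield) coefficient is fitted against the chlorine consumed. All data and parameter values are assumptions for illustration, not calibrated values from any cited study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic bench-scale observations.
t = np.array([0., 2., 6., 12., 24., 48.])       # hours
cl = np.array([2.0, 1.8, 1.5, 1.2, 0.8, 0.4])   # mg/L chlorine residual
tthm = np.array([0., 7., 17., 27., 41., 54.])   # ug/L TTHM

# Step 1: fit first-order decay C(t) = C0 * exp(-k t) to the residual data.
(k,), _ = curve_fit(lambda t, k: cl[0] * np.exp(-k * t), t, cl, p0=[0.05])

# Step 2: with k fixed, fit TTHM = yield * chlorine consumed.
demand = cl[0] * (1 - np.exp(-k * t))
(yield_coef,), _ = curve_fit(lambda d, a: a * d, demand, tthm, p0=[30.0])
print(f"k = {k:.3f} 1/h, yield = {yield_coef:.1f} ug TTHM per mg/L Cl2 consumed")
```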

Integration of Various Modeling Techniques

The majority of DBP models have been based on MLR. Future research must explore alternative modeling techniques for DBP prediction, such as ANNs and fuzzy rule-based modeling, which could improve DBP predictions. Robust databases on DBPs have to be developed in order to adequately compare different techniques (with separate calibration and validation data sets). The use of hybrid modeling approaches may also be investigated, for example, using different techniques to establish DBP kinetic coefficients, relate them to water quality and operational parameters, and then reduce the uncertainty of their predictions.

Consideration of Both Disinfection Efficiency and Residual Disinfectant Maintenance

Models have been reported in the scientific literature for each of these issues, but no evident effort has been made to consider them simultaneously. As mentioned earlier, the factors affecting these issues are common (water temperature, organic content, chlorine dose, reaction time, etc.). The feasibility of integrating these issues in a multipurpose model has to be evaluated in the near future. The integrated models can be useful for (1) treating water within the reservoirs of treatment plants or for booster disinfection within the distribution system; (2) simulating the interactions of disinfectants, organic compounds, biomass, and DBPs; and (3) balancing the risks associated with microbiological contamination and DBP contamination, as sketched below. To achieve this, robust data must be developed describing seasonal variations in water quality and operational changes.
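A toy version of point (3): for an assumed first-order chlorine decay and a TTHM yield proportional to the chlorine consumed, the sketch below screens candidate doses against a residual constraint while reporting the predicted TTHM level. The decay constant, yield, and thresholds are assumptions, not calibrated values:

```python
import numpy as np

k = 0.04            # 1/h, assumed first-order chlorine decay constant
yield_coef = 28.0   # ug TTHM per mg/L chlorine consumed, assumed
t = 24.0            # hours of residence time in the network

for dose in np.arange(0.5, 3.01, 0.25):              # candidate doses, mg/L
    residual = dose * np.exp(-k * t)                 # chlorine left after t hours
    tthm = yield_coef * (dose - residual)            # TTHM from chlorine consumed
    flag = "ok" if residual >= 0.2 else "residual too low"
    print(f"dose {dose:.2f} mg/L -> residual {residual:.2f} mg/L, "
          f"TTHM {tthm:.1f} ug/L ({flag})")
# The smallest dose meeting the residual constraint minimizes predicted TTHM.
```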

Development of Models for Other DBPs Progress in toxicological research allows the identification of specific DBPs with potential implications for human health, but limited information is available about their presence and fate in drinking water. Most reported models are for THMs and HAAs, which emphasizes the need to generate laboratory- and field-scale data for other E-DBPs. Using such data, the relationships between different DBP species, particularly emerging ones, need to be investigated, and predictive models must be developed. Examples of species to favor in modeling are highlighted in section "Un-Regulated (Emerging) DBP". The E-DBPs are considered more genotoxic than regulated DBPs. In some cases, good correlations exist between chlorinated E-DBPs and chlorinated THMs and HAAs. Hence, regulated DBPs can be used as indicators to detect the presence of E-DBPs.
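
The indicator idea can be illustrated with a short sketch (the paired measurements below are hypothetical): the correlation between TTHM and an emerging species is quantified, and the fitted line is then used to estimate the E-DBP level from a routine TTHM measurement:

import numpy as np

# Hypothetical paired measurements from the same samples (ug/L).
tthm = np.array([32.0, 45.0, 51.0, 60.0, 72.0, 80.0, 95.0, 110.0])
edbp = np.array([0.8, 1.1, 1.2, 1.6, 1.9, 2.0, 2.4, 2.9])

r = np.corrcoef(tthm, edbp)[0, 1]             # strength of the association
slope, intercept = np.polyfit(tthm, edbp, 1)  # indicator relationship

print(f"Pearson r = {r:.2f}")
print(f"estimated E-DBP at TTHM = 70 ug/L: {slope * 70 + intercept:.2f} ug/L")

Such a relationship is only usable as an indicator where the correlation has been verified for the source water and disinfection conditions at hand.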

Development of Criteria for the Assessment of Predictive Capability It is important to develop criteria that establish a uniform methodology for evaluating the predictive capability of any DBP model. Such criteria may include requirements for a minimum amount of external data for validation and the specific water quality and operational conditions within which models can be applied. The context in which a DBP model can be applied (e.g., geographical features, type of water source, water utility types), the boundary conditions for its application, as well as its specific potential usage (e.g., operational, epidemiological, regulatory) should also be included in the criteria.
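
A sketch of the kind of uniform validation metrics such criteria might prescribe, computed on external data withheld from calibration (the function and data below are illustrative, not a standardized protocol):

import numpy as np

def validation_metrics(observed, predicted):
    # R2, RMSE, and mean bias computed on an external validation data set.
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    ss_res = np.sum((obs - pred)**2)
    ss_tot = np.sum((obs - obs.mean())**2)
    return {"R2": 1.0 - ss_res / ss_tot,
            "RMSE": float(np.sqrt(np.mean((obs - pred)**2))),
            "bias": float(np.mean(pred - obs))}

# Hypothetical external observations vs. model predictions (ug/L TTHM).
print(validation_metrics([40, 55, 62, 75, 90], [44, 50, 65, 70, 96]))

Reporting the same metrics, on the same kind of held-out data, for every published model would make predictive capabilities directly comparable across studies.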


Acknowledgments The authors extend their gratitude to the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Drinking Water Research Chair of Université Laval (Quebec City, Canada) for their financial support of this research.


Entomological Risks of Genetically Engineered Crops
K Walker, University of Arizona, Tucson, AZ, United States
R Frederick, US Environmental Protection Agency, Washington, DC, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
Bt Bacillus thuringiensis
EPA Environmental Protection Agency
IPM integrated pest management
IRM insecticide resistance management

Insects in Agroecosystems Insects are dominant life forms in virtually all terrestrial ecosystems. Although small in size, they are so numerous that they play major roles in nutrient cycling, in determining plant community structure, and in affecting populations of most animal species, including humans. The total mass of ants alone is estimated to exceed the total mass of humans. But ants are just a small part of the Class Insecta. There are well over one million known species of insects, making them by far the most diverse group of organisms on Earth. Insects occupy a vast array of ecological niches, feeding on everything from nectar to cow feces to human blood. Even in ecosystems heavily manipulated by humans, insects remain key players. In agroecosystems, certain insects and other arthropods are serious agricultural pests, causing significant crop losses if not controlled. At the same time, insects facilitate agricultural productivity by enriching soil, pollinating crops, and regulating pest populations. Finally, the agroecosystem supports biodiversity by providing habitats for many insect species that have little impact on agricultural productivity either way. The need to control pests can sometimes lead to management practices that disrupt the more beneficial activities of insects. Insects and related organisms such as mites can damage crops in a variety of ways, but pests can be broadly categorized as direct, indirect, or stored product pests. Direct pests feed on the harvested portion of the crop, and their population density usually correlates fairly closely with crop loss. Although the actual pest species vary by crop and cropping region, direct pests are commonly butterfly or moth larvae (Lepidoptera), certain beetles (Coleoptera), and certain fly larvae (Diptera). Indirect pests feed on nonharvested portions of the plant or suck plant juices. Such pests can reduce plant vigor when present in high numbers, but pose a bigger threat to crops through the transmission of plant diseases. Indirect pests that do not vector diseases usually have to reach high levels to cause economically important damage. Those pests that do transmit serious plant diseases, however, can cause catastrophic damage even at relatively low population densities. Common indirect pests include leaf-feeding Lepidoptera and Coleoptera, sucking insects in the order Hemiptera (e.g., aphids, whiteflies, and leafhoppers), and phytophagous mites. Finally, stored product pests cause damage after harvest. Postharvest losses can be enormous, particularly in developing countries with low-tech storage practices. In contrast to these pests, a wide range of insects are considered beneficial in agroecosystems. Virtually all fruit crops, many vegetable crops, and seed crops such as seed alfalfa require insect pollination. Although the domesticated honeybee is the best-known pollinator, other bee species such as bumble bees and leafcutter bees are also managed in particular cropping situations. The contribution of wild pollinators is harder to assess but may also be significant for agricultural production. Predatory and parasitic insects also serve important functions in agroecosystems, feeding on herbivorous insects that can damage crops. Predators include a wide variety of insects, such as certain beetles (Coleoptera), true bugs (Hemiptera), lacewings (Neuroptera), and wasps and ants (Hymenoptera), as well as many other insect groups. Most parasitoid insects are wasps or flies.
Although it is difficult to calculate the value of these natural enemies of pest insects, their importance can be demonstrated dramatically when their beneficial activities are disrupted by broad-spectrum insecticide applications. In 1992, Pimentel and others estimated that the loss of natural enemies costs farmers more than $500 million annually. Finally, all crop production is dependent on healthy soil, which is created and maintained by soil-dwelling insects and other animals, as well as a host of fungi and microbes.


Change History: December 2018. R. Frederick has updated the text throughout the article. This is an update of K. Walker, R. Frederick, Entomological Risks of Genetically Engineered Crops, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 306–314.


Traditional Pest Management Humans have been managing agricultural pests for thousands of years. Pests can include weeds, plant pathogens (certain fungi, bacteria, and viruses), rodents, and nematodes in addition to the plant-feeding insects and mites described in the preceding text, and are estimated to destroy as much as one-third of all agricultural yield. A wide array of pest control tools exists, including cultural practices such as weeding, pesticide applications, and selective plant breeding for pest resistance. Chemical pesticides have played a major role in agriculture and, since World War II, have brought significant changes to agroecosystems, transforming them from generally small-scale, fairly diverse ecological communities into large monocultures. A pesticide is a substance, either natural or synthetic, used to destroy or deter pests. Early pesticides included plant-derived products as well as inorganic compounds such as elemental sulfur and various arsenicals. New developments in synthetic organic chemistry opened the way to a new era of pest control starting in the early 1940s with the use of DDT to control insect-borne diseases. Since that time, agricultural pesticide use has risen dramatically in most parts of the world, and this increase in pesticide applications has been accompanied by significant increases in yield. These pesticides have not, however, reduced overall pest damage. For example, agricultural losses due to insect pests in the United States were estimated at approximately 7% in 1945 but approximately 13% in 1989, in spite of the application of almost 10 times as much pesticide. This increase in estimated insect pest damage was not caused by the failure of pesticides, as crop losses would have been much higher without their use, but by changes in agricultural practices and consumer standards. Agricultural intensification, monocropping, irrigation, and the reduced use of crop rotation have made agroecosystems more hospitable to the herbivorous insects that are crop pests and less suitable for many beneficial insects. In the early years of adoption of synthetic organic pesticides, limited government regulation, lack of environmental awareness, and the relatively low cost of insecticides such as DDT encouraged agricultural producers to apply pesticides in large quantities, with little or no effort to protect humans or nontarget organisms from exposure. In many crops, pesticides were applied on a calendar basis, even when the targeted pests were not present. In the 1960s, the publication of Silent Spring drew attention to growing evidence of widespread pesticide contamination. Public concern began to mount over possible risks to human health and the environment associated with pesticides. Newer organophosphate and carbamate insecticides that were much more acutely toxic than older synthetic pesticides caused poisoning accidents. The need to protect human health and the environment led to the adoption of stricter regulations on pesticide manufacture, sale, and use in many countries, although enforcement of such protections remains weak in some parts of the world. In addition to posing risks to humans and to nontarget organisms outside of agroecosystems, heavy use of insecticides and other pesticides incurs significant direct and indirect costs to agricultural producers. Frequent applications of an insecticide can limit its effective lifespan as the target pest population develops resistance to that chemical.
At the same time, the broad spectrum of activity of many pesticides can create new pests by killing beneficial insects and other organisms that would normally regulate pest populations. Pesticide exposure can also be lethal to honeybees and other pollinators that are necessary for the production of all fruit and most vegetable crops. Finally, as most conventional pesticides must be applied externally to the plants (although some are systemic), they may not be effective under certain circumstances (e.g., when weather conditions prevent applications or when pests feed inside plant tissue). To balance the need for pest control with the desire to avoid, or at least reduce, the negative impacts of heavy pesticide use, many agricultural producers have adopted integrated pest management (IPM). IPM is a sustainable approach to managing pests that combines different tools and strategies, including pesticides, natural enemies, cultural practices, and host–plant resistance, in a way that minimizes both economic and environmental risks. The approach is generally knowledge-intensive rather than product-intensive and strives to preserve and enhance natural controls of pests. Although most IPM programs in specific crops involve pesticide applications, the pesticides chosen tend to be fairly specific in their activity and pose less risk to humans and other nontarget organisms than other pesticides. IPM approaches also often include modifications to pesticide application practices to reduce human and environmental exposure.

Genetic Engineering of Crops In the first wave of plant biotechnology development, genetic modification of crop plants has provided new pest management tools to agricultural producers. The genetic manipulation of crop plants is an extension of 50 years of development in molecular biology that has provided scientists with a basic understanding of cellular genetics and the tools to manipulate heritable qualities of organisms for particular purposes. These new tools have been combined with hundreds of years of experience in crop breeding and development to create biotechnology varieties. The following examples illustrate how genetic manipulation is carried out and the types of crops that have been successfully commercialized over the last decade. The introduction of new traits into plants is accomplished by the transfer of genes. These unique nucleic acid sequences reside within a cell's genetic material, deoxyribonucleic acid (DNA) or ribonucleic acid (RNA). By using enzymes known as restriction endonucleases, scientists are able to cut DNA molecules at precise locations, isolate particular fragments containing a gene of interest, reproduce them, and introduce the multiplied sequences into recipient cells. Once in the recipient cell, the gene may be heritably incorporated into the genome via a natural cell process known as recombination. When successfully incorporated, the "new" gene will function much as it did in the organism from which it came. For example, genes that produce insecticidal toxins in bacteria have been introduced, individually and in a variety of combinations, into corn (Zea mays), cotton (Gossypium spp.), tomato (Lycopersicon esculentum), and soybean (Glycine max).

Table 1 Genetically modified crop plants reviewed by US regulatory agencies

Food crop | Traits | Gene(s) transferred
Corn (Zea mays) | Insect resistance, herbicide tolerance, male sterility | Cry1Ab, Cry3Bb1, Cry9c, Cry1F; phosphinothricin acetyltransferase (PAT); DNA methylase; 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS); barnase
Tomato (Lycopersicon esculentum) | Delayed ripening, insect resistance | S-adenosylmethionine hydrolase; antisense polygalacturonase (PG); Cry1Ac
Canola (Brassica rapa) | Altered fatty acids, herbicide tolerance, phytate degradation | 12:0 acyl carrier protein thioesterase; 1-aminocyclopropane-1-carboxylic acid deaminase; EPSPS; PAT; glyphosate oxidoreductase (GOX); phytase; nitrilase; glyphosate; barnase
Papaya (Carica papaya) | Virus resistance | Papaya ringspot virus (PRSV) coat protein
Soybean (Glycine max) | Herbicide tolerance | PAT; GmFad2-1; EPSPS
Potato (Solanum tuberosum) | Insect resistance, virus resistance | Cry3A; potato leaf roll virus resistance gene
Cantaloupe (Cucumis melo) | Delayed fruit ripening | S-adenosylmethionine hydrolase
Squash (Cucurbita pepo) | Virus resistance | Coat protein genes from cucumber mosaic virus, zucchini yellow mosaic virus, and watermelon mosaic virus-2
Sugar beet (Beta vulgaris) | Herbicide tolerant | EPSPS; PAT
Rice (Oryza sativa) | Herbicide tolerant | PAT
Wheat (Triticum aestivum L.) | Herbicide tolerant | EPSPS
Alfalfa (Medicago sativa) | Herbicide tolerant | EPSPS

Table 1 lists a variety of genetically modified food crops and their introduced traits, compiled from a United States database. Because the new gene is part of the plant genome, it is present in all of the progeny produced by the transformed plant and may be present in any or all of the tissues (leaves, stems, roots, flowers, and pollen) of those progeny. When functional, the gene is transcribed and translated by the cell's natural machinery to produce the insecticidal protein or toxin. These toxins have been shown to be quite effective in controlling insects that normally feed on crop plants. When susceptible larvae eat the leaves or roots of the transgenic plants, they also consume the toxins and will die or become sick and stop chewing. The toxins that have been used in commercial products so far have overwhelmingly come from the bacterium Bacillus thuringiensis (Bt). These bacteria are known to produce more than 50 variants of insecticidal toxins with different toxicities to particular insects. Certain toxins, such as Cry1Aa, are toxic only to lepidopterans, while others, such as Cry3A, are toxic to coleopterans. This toxin specificity is an important consideration in the risk assessment of new genetically modified products and is discussed in the following text. In very much the same fashion, a gene conferring tolerance to a particular herbicide may be introduced into crop plants. The most common trait introduced to date is the 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene, which produces a form of the herbicide's target enzyme that is insensitive to glyphosate. The presence of the gene and the subsequent production of the enzyme protect the plants from the toxic effects of the herbicide. This means that producers can spray their fields shortly after emergence of the tolerant crop plants and control accompanying weeds, thereby reducing the need for tilling and the loss of crop due to overspray of the herbicide. In crops without herbicide resistance genes, herbicide use is limited to avoid possible injury to the crop, and producers must either use more expensive and difficult weed control or simply allow weeds to compete with the crop plants.

Changes in Agricultural Practice With the Introduction of Transgenic Crops Since first becoming available commercially, the adoption of transgenic crops has been profound. Over the last decade the total area planted has increased by a double-digit percentage each year (Fig. 1), to an estimated 282 million acres in 2007.

Fig. 1 GMO adoption worldwide (ISAAA): area planted (millions of acres) by year, 1996–2008.

Initially, adoption was nearly entirely in the United States, Canada, and Argentina, but more recently developing countries have adopted transgenic crops at faster rates. In 2007, more than 12 million farmers in 23 countries were growing transgenic crops. It is clear that genetically engineered crops have had a significant and lasting impact on agricultural practices for insecticide and herbicide use. Adoption has led to modified tilling practices, which in turn have resulted in lower energy consumption and less soil runoff. Repeated evaluations of the international agricultural landscape have indicated that pesticide use has been reduced significantly since the introduction of Bt crop plants. This is especially evident in cotton, which would otherwise in many cases receive two to three times as many pesticide applications per growing season. Additionally, there has been a shift away from broad-spectrum pesticides toward more environmentally friendly, narrow-spectrum pesticides. In the case of herbicide-tolerant plants, the picture is different. Overall, there have been increases in the amounts of herbicide active ingredients being used, but this is primarily due to the use of a particular herbicide, glyphosate. Glyphosate tolerance was introduced into crops such as soybean and subsequently found rapid and widespread acceptance among soybean growers. Although increased glyphosate use may have occurred, it must also be acknowledged that this herbicide is environmentally safer (it degrades faster in the environment) than many of the alternative herbicides that would be used in its stead. Because it can be used early in the growing season (shortly after emergence), farmers can use no-till practices on their fields. No-till results in more stable soil systems with less soil runoff and lower energy input requirements because tractors do not have to cross the fields as often.

Implications of Widespread Adoption on Insect Populations Before the introduction of genetically engineered crops, many regulatory agencies were actively reviewing and anticipating their role in assessing the risks that might be associated with the large-scale release of these plants. The burden on risk assessors has been to use existing knowledge coupled with information from experimental evaluation and testing to determine the potential risks associated with large-scale introduction of transgenic crop plants. In contrast, new crop varieties developed through other methods such as chemical- or radiation-induced mutagenesis have not received such regulatory scrutiny. Regulatory agencies generally require the following information for evaluation before approval to release genetically engineered crop plants:

• product characterization:
– the biochemical nature of the product, its mode of action, and the time and tissues in which the product is expressed,
– the source of the genes and protein, and
– the gene construct used and the amino acid sequence for proteins;
• mammalian toxicity;
• potential for allergenicity;
• pollen movement as an index of potential gene flow;
• impacts on nontarget organisms, particularly insects, other invertebrates, fish, and birds;
• fate, including accumulation of the pesticidal protein(s) in soil; and
• an insecticide resistance management (IRM) plan for Bt crops.

Nontarget impacts, environmental fate, and insect resistance have been the critical information elements considered in risk assessments before regulatory approvals for large-scale testing or commercialization. Despite a growing base of scientific information and regulatory experience, environmental concerns related to ecological risks and unintended consequences of the large-scale adoption of genetically engineered crops persist. This may be due in part to new products currently in development or under consideration, which present new uses or require information not yet available from scientific research. For ecological systems, assessment endpoints include potential effects on nontarget insects and other invertebrates, particularly those closely related to the target species; whether sexually compatible relatives (either other crop species or resident plants) may be present in the locale where the crops will be grown and, if so, what the potential for cross-pollination might be; and how long the pesticide will remain viable in the soil, even after plants are removed. A growing number of field studies have been performed by academic and industry researchers examining the ecological effects of transgenic crops. Examples of these studies are summarized in Table 2, which lists the ecological impact studied and describes the methods used by the researchers along with their results. Most of the studies addressed the impact on nontarget organisms, the potential for gene flow from the transgenic plants, and the development of resistance to transgenic crops. In addition, researchers have also examined the potential impacts of new agricultural practices for transgenic crops on wildlife in fields, and whether a transgene can provide plants with increased fitness in natural ecosystems.

Effects of Genetically Engineered Crops on Insects As genetically engineered crops are developed, the potential adverse effects on primary and secondary consumers of the plants, including potential feral transgenic plants and possible hybrids, also need to be considered. This is straightforward for plants with insecticidal properties and may include laboratory toxicity testing for a standard battery of test species such as fish, birds, and honeybees, and species closely related to the target pest.

Table 2 Selected greenhouse and field studies to address biotechnology risk issues

Concern | Plant(s) | Scale | Conditions | Conclusions
Nontarget effects | Bt corn | Small, large | Used milkweed naturally dusted with Bt pollen to examine toxicity to Monarch butterfly larvae; then determined exposure | Field levels of Event 176 pollen were quite toxic to larvae, but the exposure analysis suggested …
Indirect effects on biodiversity | GMHT beet, corn, and oilseed rape | Large | 60 UK GMHT fields compared with WT crops; counted numbers of individuals and species in the different fields over two seasons | Agricultural techniques significantly affected biodiversity; in general there was less in the GMHT beet and rape fields because of fewer weeds, but more in the corn fields; no examination of the direct impact of the transgene, though
Resistance development | Bt cotton | Large | Collected insects from numerous Bt fields over multiple seasons in the United States | In general, the frequency of resistance was …

a Large, > 1 ha.

Nontarget species may include above- and below-ground beneficial insects such as predators and parasitoids as well as invertebrate and vertebrate herbivores that are not considered economic pests. Nontarget effects need to be considered not only at the level of the specific crop plant but also at the plant community, ecosystem, and perhaps regional levels, given the easy and likely dissemination of transgenic pollen and seed in both terrestrial and aquatic environments, and the long-term survival potential of many types of seed in soil. A number of laboratory and field studies have examined nontarget toxicity of Bt corn or cotton, usually by measuring abundance and life history traits of a wide range of invertebrate organisms in Bt and non-Bt fields. Although some studies are flawed by limited replication or duration, a meta-analysis of 42 of these studies indicates that nontarget insects and other invertebrates were generally more abundant and diverse in Bt cotton and corn fields than in non-Bt fields, due primarily to the reduction in insecticide use in Bt crops. In comparison with non-Bt fields that were insecticide-free, however, certain nontarget organisms (particularly Lepidoptera and in some cases Hymenoptera) were less abundant in Bt fields than in the untreated controls. Another meta-analysis examined the effects of Bt corn, cotton, and potato on specific ecological guilds of arthropods, such as predators, parasitoids, and detritivores. Again, while certain guilds (predators) were less abundant in Bt crops than in unsprayed non-Bt control fields, due in part to reduction in prey density, the main factor influencing arthropod abundance was insecticide application. As expected, numbers of predators and herbivores were generally higher in Bt fields than in non-Bt fields with conventional insecticide applications. Interestingly, while research results and monitoring efforts indicate no significant adverse effects on nontarget insects, several studies have demonstrated large-scale population decreases in two of the pests targeted by Bt cotton: pink bollworm (Pectinophora gossypiella (Saunders)), now the target of area-wide eradication in the south-western United States, and old world bollworm (Helicoverpa armigera). Reductions in populations of a key corn pest, the European corn borer (Ostrinia nubilalis), have also been observed in Bt corn. Because of the special importance of honeybees in agroecosystems and the high likelihood of exposure to the Cry protein toxins in pollen collected from Bt plants, a number of laboratory studies have focused specifically on possible impacts of Bt crops on honeybees. An analysis of 25 studies indicated no adverse effect of Bt Cry proteins on honeybee survival. Given the increasing environmental stresses on honeybees, however, more field-based studies are needed. The long persistence of Bt toxins associated with transgenic crop residues, as well as the potential for Bt crops to exude the Cry proteins into the rhizosphere, raises concerns about the effects of Bt crops on soil-dwelling arthropods. A number of studies on the fate of Bt toxins in soils indicate that these compounds remain biologically active in soil for extended periods of time, although the duration of activity is highly site specific.
Studies of soil communities, however, have shown few or no toxic effects of Cry proteins on earthworms, collembolans, or other soil arthropods, or on the activity of common soil enzymes. Herbicide-resistant transgenic plants may also indirectly affect soil arthropod communities. For example, increases in the numbers of invertebrate detritivores (Collembola) have been noted in the United Kingdom in association with herbicide-tolerant maize, beets, and spring oilseed rape. Indirect and ecosystem-level effects associated with both Bt and herbicide-resistant transgenic crops are discussed in greater detail in the section on secondary effects.

Crop Gene Flow, Introgression, and the Potential for Unintended Effects A unique environmental issue related to genetically engineered crop plants is the potential for season-long, continual production of the pesticide and for pollen movement that can spread the transgenes to sexually compatible plants. The potential short- or long-term ecological consequences of vertical or horizontal gene transfer between engineered crops and nonengineered crops, weeds, and native species must be considered for these products. Gene flow from agricultural crops to native species, weeds, or other crops is generally mediated by pollen transfer from one plant or crop to another crop, weed, or native species. Following a hybridization event, the significant question is whether the transferred gene(s) has affected the fitness of recipient plant populations. Fitness changes could occur through increases or decreases in the biomass of vegetative plant parts above or below ground, or in the reproductive biomass (seeds). If such changes occurred, ecosystem-level concerns could include the possibility of increased invasiveness or "weediness" of the plants containing the transferred gene as well as negative impacts on insects and other herbivores that consume such plants. In the continental United States, the first generation of commercial transgenic crops (corn, soybean, and cotton) has no weedy or native relatives that are sexually compatible and flower at the appropriate time for cross-fertilization. However, in other locales (e.g., Mexico and Hawaii), scientific, regulatory, and public concerns regarding the potential movement of transgenes to compatible native or crop species have been raised. Since many researchers have concluded that hybridization between transgenic crops and their wild relatives is unavoidable, they are studying whether a transgene could provide hybrids with a selective advantage in the wild. While one group of researchers found that hybrid sunflowers containing a Bt gene had increased insect resistance and produced more seed than wild-type sunflowers, another saw no difference between hybrids with a virus-resistance transgene and wild-type plants. As these are recent, small-scale studies, it is unclear whether these results are applicable to other transgenic crop plants, or even to other related transgenes expressed in sunflower. Additionally, no incidences of transgene introgression into wild-type plants have yet been observed, which makes it difficult to evaluate the importance of including this type of analysis in risk assessments.


Insect Resistance Development

Widespread adoption of corn and cotton containing B. thuringiensis (Bt) genes for pesticidal proteins has rekindled concerns regarding the development of resistance in target and nontarget insect populations, to the active pesticidal ingredients produced by the source microbes or to engineered variants that are produced in plants. For a number of years now, regulatory authorities, academic scientists, and the agrobiotechnology industry have worked collaboratively to develop insect resistance management (IRM) strategies to manage this expected development of insect resistance. Historically, recommendations in IRM strategies have included (1) growing refugia of nonengineered crop or other species, (2) rotating crops, (3) rotating genes, (4) using multiple rather than individual gene sources for resistance, (5) temporally or spatially limiting the expression of the engineered gene rather than using constitutive expression, and (6) controlling the level of expression of the pesticidal genes to be either very high or very low. Studies examining resistance development to Bt crops have also been very thorough, as numerous fields worldwide have been sampled over multiple seasons to determine whether the number of resistant insects was increasing. Sampling was performed by collecting insects from the fields, crossing them with a laboratory strain, and then analyzing the F2 population for resistance. Surprisingly, the frequency of resistance did not increase as expected after exposure for multiple years, suggesting that the Environmental Protection Agency’s (EPA’s) refuge requirements could be effective at preventing resistance development. Molecular markers for Bt resistance have recently been identified, allowing the development of a more rapid screen for resistant individuals and identification of heterozygotes. Experimental surveys for resistant insects and long-term monitoring programs provided no evidence of resistance to Bt toxin in the field for the first 10 years of use. Although conclusions from some more recent monitoring results have been controversial, there is evidence that field resistance has occurred but is being managed. Many, if not most, of the new transgenic crops are now being developed with multiple toxin genes. This reduces the probability of resistance development because multiple mutational events in the target insects would be necessary for total resistance to become established.
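Because resistance alleles are typically rare and recessive, screens of this kind are usually analyzed by estimating an allele frequency from the proportion of family lines that test positive. The sketch below illustrates the arithmetic for one common variant of such screens, the isofemale-line F2 screen; the estimator, line counts, and function name are illustrative assumptions, not details taken from the studies summarized above.

```python
# Minimal sketch: estimating the frequency of a rare, recessive
# Bt-resistance allele from an F2 screen (illustrative, not the
# protocol of any specific study cited in this article).

def estimate_allele_frequency(lines_screened: int, lines_positive: int) -> float:
    """Each isofemale line samples four field haplotypes (two from the
    collected female, two from her mate), so a line carries at least
    one resistance allele with probability 1 - (1 - q)**4.
    Inverting that relationship gives an estimate of q."""
    p_positive = lines_positive / lines_screened
    return 1.0 - (1.0 - p_positive) ** 0.25

# Example: 3 of 1000 screened lines produce resistant F2 offspring.
q_hat = estimate_allele_frequency(1000, 3)
print(f"Estimated resistance allele frequency: {q_hat:.5f}")  # ~0.00075
```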

Secondary or Indirect Effects

In addition to the three concerns that provided the focus of the studies summarized earlier, transgenic crops can have other significant impacts on the environment. In the United Kingdom, farm-scale evaluation studies were designed to analyze the impact of the agricultural practices associated with the cultivation of transgenic crops compared with traditional ones. The levels of weeds and animals living in these fields were compared and found to be lower in most of the fields where transgenic, herbicide-tolerant crops were grown. This was due in general to the increased effectiveness of weed control with the transgenic plants, which in turn decreased the food supply of the animals living in the fields. In the case of corn, though, weed suppression with glufosinate-ammonium, the herbicide to which the corn is tolerant, was less effective than with the conventional herbicide atrazine, and greater biodiversity was found in the transgenic corn fields. These results demonstrate the importance of field trials under natural agronomic conditions, as the results could not have been obtained under more limited experimental conditions.

Conclusion: Insects and the Sustainability of Genetically Engineered Crops

The sustainability of genetically engineered crops is dependent on a number of factors. In the broadest respect, it will depend on the acceptance of the technology by farmers and society as well as the development, regulatory, and marketing costs to the industry. At another level, sustainability will be determined by the type of environmental impacts that occur as a consequence of the adoption of transgenic crops. Monitoring for direct or indirect ecological impacts has been generally accepted as a sound principle because of the uncertainty surrounding risk assessments and our ability to anticipate effects before large-scale adoption of particular transgenic crop varieties. The main challenge for an effective monitoring program is to determine whether there is a cause-and-effect relationship between the genetically modified crop and some measurable ecological impact. With respect to insects, careful consideration is given to the potential direct and indirect effects on their populations, especially beneficial insects, and to resistance development to particular transgenic traits. Scientists are exploring several molecular techniques for monitoring potential population- or ecosystem-level changes in pest and nontarget organisms. Extensive field monitoring has also shown the presence of transgenes in wild populations of corn. Introgression of transgenes into resident plant populations could indirectly affect expression of other genes. Microsatellite markers can be used as neutral genetic markers to study the population genetics of the western corn rootworm. Genomics developments present a revolutionary technology that permits the simultaneous screening of the expression of thousands of genes. Monitoring for differential expression of specific genes in nontarget species could help determine whether they are being exposed to an insecticidal gene product expressed by transgenic crops, and responding biologically to that exposure. Rapidly evolving technologies make it possible to monitor an increasing number of traits and responses. Given the large number of possible monitoring targets, it becomes necessary to strategically select those that are most informative.


See also: Entomological Risks of Genetically Engineered Crops; Environmental Epidemiology and Human Health: Biomarkers of Disease and Genetic Susceptibility; Epigenetic Changes Induced by Environment and Diet in Cancer; Food Safety and Risk Analysis; Genetically Modified Plants: Risks to Environment; Labeling of Genetically Modified Foods.

Further Reading

Andow, D.A., Zwahlen, C., 2006. Assessing environmental risks of transgenic plants. Ecology Letters 9, 196–214.
Brookes, G., Barfoot, P., 2008. Global impact of biotech crops: Socio-economic and environmental effects, 1996–2006. AgBioForum 11, 21–38.
Chrispeels, M.J., Sadava, D.E., 2003. Plants, Genes, and Crop Biotechnology, 2nd edn. Jones and Bartlett Publishers, Sudbury, MA.
Conner, A.J., Glare, T.R., Nap, J., 2003. The release of genetically modified crops into the environment. Part II. Overview of ecological risk assessment. The Plant Journal 33, 19–46.
Craig, W., Tepfer, M., Degrassi, G., Ripandelli, D., 2008. An overview of general features of risk assessments of genetically modified crops. Euphytica 164, 853–880.
Duan, J.J., Marvier, M., Huesing, J., Dively, G., Huang, Z.Y., 2008. A meta-analysis of effects of Bt crops on honey bees (Hymenoptera: Apidae). PLoS One 3 (1), e1415.
Hilbeck, A., Andow, D.A., Arpaia, S., et al., 2006. Methodology to support non-target and biodiversity risk assessment. In: Hilbeck, A., et al. (Eds.), Environmental Risk Assessment of Genetically Modified Organisms, Methodologies for Assessing Bt Cotton in Brazil, vol. 2. CAB International, Wallingford, pp. 108–132.
Icoz, I., Stotzky, G., 2008. Fate and effects of insect-resistant Bt crops in soil ecosystems. Soil Biology and Biochemistry 40, 559–586.
Kleter, G.A., Bhula, R., Bodnaruk, K., et al., 2007. Altered pesticide use on transgenic crops and the associated general impact from an environmental perspective. Pest Management Science 63, 1107–1115.
Kleter, G.A., Harris, C., Stephenson, G., Unsworth, J., 2008. Comparison of herbicide regimes and the associated potential environmental effects of glyphosate-resistant crops versus what they replace in Europe. Pest Management Science 64, 479–488.
Marvier, M., McCreedy, C., Regetz, J., Kareiva, P., 2007. A meta-analysis of effects of Bt cotton and maize on non-target invertebrates. Science 316, 1475–1477.
Raybould, A., 2007. Ecological versus ecotoxicological methods for assessing the environmental risks of transgenic crops. Plant Science 173, 589–602.
Romeis, J., Bartsch, D., Bigler, F., et al., 2008. Assessment of risk of insect-resistant transgenic crops to nontarget arthropods. Nature Biotechnology 26, 203–208.
Sanvido, O., Romeis, J., Bigler, F., 2007. Ecological impacts of genetically modified crops: Ten years of field research and commercial cultivation. Advances in Biochemical Engineering/Biotechnology 107, 235–278.
Stewart, C.N., 2004. Genetically Modified Planet: Environmental Impacts of Genetically Engineered Plants. Oxford University Press, Oxford.
Wolfenbarger, L., Naranjo, S.E., Lundgren, J.G., Bitzer, R.J., Watrud, L.S., 2008. Bt crop effects on functional guilds of non-target arthropods: A meta-analysis. PLoS One 3, e2118.
Züghart, W., Benzler, A., Berhorn, F., et al., 2008. Determining indicators, methods and sites for monitoring potential adverse effects of genetically modified plants to the environment: The legal and conceptional framework for implementation. Euphytica 164, 845–852.

Environmental Agents and Childhood Cancer
Friederike Erdmann, International Agency for Research on Cancer (IARC), Section of Environment and Radiation, Lyon, France; and Childhood Cancer Research Group, Danish Cancer Society Research Center, Copenhagen, Denmark
Akram Ghantous, International Agency for Research on Cancer (IARC), Epigenetics Group, Lyon, France
Joachim Schüz, International Agency for Research on Cancer (IARC), Section of Environment and Radiation, Lyon, France
© 2019 Elsevier B.V. All rights reserved.

Background on Cancer in Children

The term childhood cancer is most commonly used to describe any type of cancer that occurs in patients younger than 15 years of age, although this is an arbitrary cut-off. An alternative definition of childhood cancer, frequently used in the United States for instance, is cancers that occur in patients younger than 20 years. Childhood cancer is a very heterogeneous group of malignancies. In children in high-income countries (HIC), leukemias are the most frequent cancer type (about 33%), with acute lymphoblastic leukemia (ALL) accounting for up to 25% of all childhood cancers, followed by tumors of the central nervous system (CNS) (20%–25%), lymphomas (about 10%) and solid tumors other than in the CNS. Childhood cancers are distinct from adult cancers not only in their young age of occurrence; many cancer types seen in children occur rarely or not at all in adults. This applies also to cancers such as leukemia and CNS tumors, which do occur at all ages, but the morphological subtypes or genetic features seen in children are different from those in adults. Considering childhood cancer separately from adult cancers therefore stems from differences in occurrence site, histological appearance and clinical behavior.

The above-mentioned heterogeneity of childhood cancer, together with low incidence rates, poses challenges for childhood cancer epidemiology, and the evidence regarding causal factors is accumulating at a slow pace. A detailed overview of the evidence on environmental exposures in relation to the risk of childhood cancer is given below in the respective sections. In brief, many epidemiological studies have targeted lifestyle factors or environmental pollutants as possible risk factors, but with inconsistent results. To date, a few genetic conditions, exposure to high-dose ionizing radiation, prior chemotherapy, and birth weight have been convincingly confirmed as risk factors, but these explain only a small percentage (< 10%) of all cases. Birth weight was consistently found to be associated with several types of childhood cancers, albeit with differing patterns. Risk of ALL, neuroblastoma and Wilms tumor was observed to increase monotonically with increasing birth weight. For acute myeloid leukemia (AML) and CNS tumors, the risk appears to be elevated at both higher and lower birth weights. The underlying mechanisms behind those associations with childhood cancers are not fully understood but might include prenatal growth hormone exposure (insulin-like growth factor-1), the underlying genetics of birth weight or simply the higher number of cells at risk of carcinogenic transformation. Breastfeeding was found to modestly reduce the risk of childhood leukemia in several studies, and while it is believed this is related to the strengthening of the child’s immune system, the mechanism has yet to be fully unraveled. For ALL specifically, in relation to the child’s immune system, there is a suggestion that subtypes of ALL may be the result of a rare abnormal reaction to common infections, possibly in connection with a lack of immunological training during infancy, but the evidence remains inconsistent.

Over the past decades, advances in treatment combinations and techniques, pharmacology, as well as better tailoring of treatment by risk grouping have led to substantial improvements in survival from childhood cancers.
Five-year survival rates from childhood cancer have increased from 30% in the 1960s to 80% nowadays in HIC, although survival varies between cancer types, some of which still demonstrate considerably lower survival. As a result of the improvement in survival, mortality is declining and the number of childhood cancer survivors increases continuously. Survivors, however, may experience a wide spectrum of treatment-induced adverse late effects during the life-course, including an increased risk of developing a second malignancy. Therefore, a better understanding of the etiology of childhood cancer to implement primary prevention remains the ultimate goal. There is a lack of representative data from low-income countries, but hospital-based statistics indicate much lower survival. To date, most children in resource-poor settings do not benefit from the advances in treatment seen in HIC, possibly due to a combination of limited access to health care services and unavailability of treatment regimens.

Epidemiology of Childhood Cancer

Population-based cancer registries around the world report annual overall childhood cancer incidence rates ranging from 70 to more than 200 cases per million children under the age of 15 years, as estimated by the International Agency for Research on Cancer (IARC). Incidence patterns are relatively consistent and well described for economically developed countries. Incidence rates of 168, 176, and 155 per million children have for instance been reported for Germany, US Non-Hispanic Whites, and Australia, respectively.

Change History: October 2018. S.A. Savage and J. Schüz have updated the text throughout the article. This is an update of S.A. Savage, J. Schüz, Environmental Chemicals and Childhood Cancer. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 336–346.


Incidence rates vary by cancer type, race/ethnicity, sex, and age at diagnosis, which may give some indication of etiology. The highest rates are seen in infants (< 1 year), with slightly lower rates at age 1–4 years; incidence rates at ages 5–14 years are markedly lower compared to those for the first 5 years of life. Boys have an approximately 20% higher risk of childhood cancer than girls. Incidence rate differences by ethnicity/race are well recognized for the United States, with the highest incidence being reported for White Hispanic children and the lowest for Black children. The differences are particularly pronounced for ALL (2.5-fold) but smaller for AML (1.2-fold). Different racial/ethnic groups may vary in terms of their environmental exposures, and there might be important interactions between selected exposures and underlying genetic susceptibility, which might explain the differences in cancer risk.

The distribution of childhood cancer types differs across populations. In Sub-Saharan Africa, Non-Hodgkin lymphoma (NHL) (Burkitt’s lymphoma) and Kaposi sarcoma are more frequent due to the specific exposure to infectious agents in that region (namely Epstein-Barr virus, malaria, HIV and human herpes virus). Hodgkin lymphoma is more commonly recorded in economically developed countries and, within those populations, it is more common in individuals with higher socioeconomic position. On the other hand, in some low-middle-income countries (LMICs), particularly in Sub-Saharan Africa or some parts of Asia where registry data are particularly limited, remarkably low leukemia and overall childhood cancer rates have been reported. For instance, childhood cancer rates of 46, 55, and 97 per million have been reported for South Africa, Botswana, and India, respectively. The reasons for the variability in childhood cancer incidence rates across the world are largely unknown, but geographical differences may suggest unique genetic susceptibility or environmental exposures that affect the risk of childhood cancer or some of its subtypes. These observed geographical differences have been used to support several hypotheses of associations between exposures related to modern lifestyle and the risk of childhood cancer, particularly for ALL. However, estimating childhood cancer incidence globally is hampered by differences in diagnostic and reporting standards across countries. Several recent reports indicate that under-ascertainment of cancer cases, at least of leukemia and CNS tumors, may be sufficiently large to account for the majority of the observed differences between some LMICs compared with Europe and North America. The lower incidence rates observed in some LMICs might therefore be at least to some extent attributable to under-ascertainment of cases, with the degree of incomplete ascertainment most likely varying by region, cancer type, race/ethnicity, sex and age. At the same time, similarities in ALL incidence rates across most HICs are equally striking. The most recent data suggest that the ALL incidence rate is around 43 per million children per year in countries/regions as different as Australia, Canada, Germany, the Nordic countries, Switzerland and US Non-Hispanic Whites, while it is, for example, 37 per million per year in England and France. None of the known lifestyle- and environment-related cancers in adults shows such similarities.
Hence, hypotheses on putative risk factors for ALL have to take into account not only the geographical differences, in particular as they may be substantially affected by ascertainment artifacts, but also the observed geographical similarities. Time trend analyses based on data from population-based cancer registries in HICs have shown a modest increase of about 1%–2% per year in childhood cancer incidence rates over the last three decades of the 20th century, followed by a leveling off in the early 2000s in some countries. This increasing trend in rates is also seen for the most common childhood cancer, ALL, but it is unclear whether this trend is mainly the result of improved diagnosis and more complete reporting or due to a true rise in the incidence.
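To put the reported trend in perspective, an annual increase of 1%–2% compounds to a substantial cumulative change over three decades. The back-of-envelope arithmetic below is illustrative only and uses no data beyond the rates quoted above.

```python
# Back-of-envelope compounding of a 1%-2% annual increase in incidence
# over 30 years (the time span discussed above); illustrative only.
for annual_increase in (0.01, 0.02):
    cumulative = (1 + annual_increase) ** 30 - 1
    print(f"{annual_increase:.0%}/year over 30 years -> ~{cumulative:.0%} cumulative increase")
# -> 1%/year gives ~35%; 2%/year gives ~81%
```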

Challenges and Limitations of Epidemiological Studies of Environmental Influences on Childhood Cancer Risk

Children as a Susceptible Population

As mentioned before, the spectrum of cancers in children compared to adults is very different. In HICs, hematological malignancies represent 40%–60% of all cancers in the first 15 years of life, while they make up < 10% in adults. ALL, however, is most common in children under the age of 5 years and rare in adults, while AML, albeit showing a small peak in infants, increases steeply from age 50 onwards and is highest in the oldest age groups. Chronic leukemias, both lymphatic and myeloid, are uncommon in children but often seen in the elderly. Hodgkin lymphoma is rarely diagnosed in children under the age of 10 years, but the incidence rates remain stable from ages 10 to over 80 years. NHL is seen in children but much more often in adults. CNS tumors are the most common solid tumors in children, but their two most common subtypes in children, namely pilocytic astrocytoma and medulloblastoma, are hardly seen in adults, whereas the commonest adult types, glioblastoma and meningioma, are hardly ever seen in children. Several histological types of other solid tumors occur almost exclusively in childhood, especially Wilms tumor, neuroblastoma or rhabdomyosarcoma.

Many inherited cancer predisposition syndromes are identified during childhood. Children with Down syndrome (trisomy 21) are at a 10–20-fold higher risk of developing leukemia. In nearly all patients with retinoblastoma (RB), germline mutations in the RB gene are seen. These patients are also at increased risk of other cancers. There are several other syndromes which are related to an extremely high cancer risk, such as Li-Fraumeni syndrome, Gorlin syndrome and others. However, although these syndromes have been very informative in understanding cancer biology, they are rare and explain only a small minority of all childhood cancers. Although studies have attempted to explain the nongenetic causes of childhood cancer, the identification of causal agents for childhood cancer remains elusive. Several studies suggest increased risk with certain exposures, but the interaction of genetic and environmental exposures may play a stronger role in children, in whom the latency period is short (mostly well below 15 years given the age distribution of childhood cancer), compared to adults, who have had many decades to accumulate environmental exposures.


The early age at diagnosis suggests that some childhood cancers might originate in utero, and that factors prior to birth, including preconceptional or fetal environmental exposures, as well as those in early childhood, may be important risk factors. In utero exposures are of concern to the developing fetus. Since oocytes divide only during fetal life, it is even conceivable that a child’s grandmother could have had a toxic in utero exposure that led to abnormalities in the dividing oocytes of the mother, who in turn passed those on to her child. It is likewise conceivable that the father could be exposed to carcinogens causing abnormalities in spermatogenesis, subsequently passed on to his child. Thus, putative environmental exposures of the child, the parents, and even the grandparents could possibly contribute to the risk of childhood cancers. With respect to exposures during early childhood, children are potentially exposed to environmental contaminants at higher levels than adults, which might contribute to the short latency period. Young children spend more time on the floor or ground and are more likely to put various things in their mouths. Children also have a higher intake of food, water, and air per unit body weight. They have a higher surface-to-volume ratio than adults, so a larger proportion of their skin surface can be exposed to a contaminant, leading to increased absorption.

Methodological Challenges and Limitations

There are numerous challenges to keep in mind when studying environmental exposures in relation to the risk of childhood cancers. The design and interpretation of studies that attempt to evaluate environmental exposures are in general challenging even in the more common adult cancers. The great heterogeneity between childhood cancer types, in which various environmental exposures and genetic risk factors may play different roles, coupled with low incidence rates, adds to these challenges. Careful definition of the specific disease type is an important start in the design and interpretation of studies of childhood cancer risk factors. There are significant biological differences in the cell of origin, age at diagnosis and clinical outcomes between cancer types and their subtypes. Even within ALL there are subtypes with different chromosomal translocations, age of onset and clinical outcome that could possibly have different etiologic factors. CNS tumors are a very heterogeneous group that also needs to be carefully biologically defined. The numerous other types of childhood solid cancers are even more heterogeneous. However, many childhood cancer studies conducted so far investigated mainly possible risks of the major types or diagnostic groups, as sample sizes were often too small for looking at specific subtypes. This reduces the ability to detect small to moderate risks in specific etiological subgroups. Each study therefore poses the challenge of defining the disease target, finding the balance between being too specific, leading to too few cases to be able to detect moderate associations with risk factors, and being too broad, leading to lumping etiologically different subtypes together, which dilutes potential associations.

Longitudinal cohort studies, which follow healthy subjects for years or even decades and study disease outcomes, are being conducted in common adult cancers. However, this is particularly challenging in rare diseases, including childhood cancer. For example, even in a cohort of 1 million children followed from birth, with a cumulative risk of ALL of approximately 1 in 2000, only about 500 cases would be expected to develop ALL before the age of 15 years. This approach requires long follow-up, and biological heterogeneity would still limit statistical power. In some countries, cohorts of mothers enrolled before or during pregnancy are being conducted, and consortia have been established to pool data for larger sample sizes to investigate rare outcomes such as childhood cancer; if, however, rare exposures are studied, statistical power again becomes low. An alternative is retrospective register-based cohort studies, which make use of longstanding register data, as is possible for example in the Nordic countries. The high-quality health data and population register infrastructure in the Nordic countries constitute an ideal and unique basis for systematic, large-scale epidemiological studies of rare diseases such as childhood cancers. The Nordic countries have a civil registration system with national population-based administrative registries such as the Cancer Registries, Central Population Registers and Medical Birth Registers, unique personal identification numbers, and legislation that permits and supports registry-based research. The personal identification number is used in all national registries, enabling accurate linkage of information between registries.
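To make the cohort-size arithmetic above concrete, the short sketch below reproduces the expected case count for the hypothetical 1-million-child cohort and adds the binomial uncertainty around it. The inputs come from the example in the text; the calculation itself is a standard illustration, not part of the original article.

```python
# Expected ALL case count in the hypothetical birth cohort described
# above (1 million children, cumulative risk ~1 in 2000 by age 15).
from math import sqrt

cohort_size = 1_000_000
cumulative_risk = 1 / 2000

expected_cases = cohort_size * cumulative_risk
# Binomial standard deviation around that expectation:
sd = sqrt(cohort_size * cumulative_risk * (1 - cumulative_risk))

print(f"Expected cases: {expected_cases:.0f} (SD ~{sd:.0f})")
# -> Expected cases: 500 (SD ~22), too few to subdivide by biological subtype
```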
Data linkage between those registries provides the basis for nationwide, population-based cohort studies, built on high-quality data with virtually no loss to follow-up, participation, nonresponse or recall bias. However, those purely register-based studies are obviously limited to information systematically recorded in registers (hence the lack of information on many lifestyle and environmental factors) and are limited to the few countries which have a population register infrastructure and permit data utilization for research purposes. Consequently, the case-control design is the one mostly used for studying environmental exposures in relation to childhood cancer, and exposure information has mostly been assessed through questionnaires. Cases are selected based on their cancer type, and the preferable choice for controls is a random sample of children not suffering from cancer, representing the source population from which the cases arose. Recall bias is a crucial concern in this study design because the parents of an ill child may remember certain past exposures better than those of controls, in an effort to understand why the disease has happened to their child. The parents of a healthy child may not be as rigorous in their efforts to recall certain exposures. The use of friends and neighbors (and even more so siblings) as controls may result in overmatching on exposure status. The use of population-based controls avoids such limitations, but in studies requiring personal contact with participants, participation rates among controls are often low, and selection bias is a concern. If the likelihood of participation is associated with the exposure of interest, the resulting risk estimates may be biased. Environmental factors, such as pesticide and air pollution exposure, maternal medication, caffeine, parental tobacco smoke or alcohol intake, are in general difficult to measure accurately, particularly in a retrospective design. Use of registry data, birth and medical records, and other data sources reduces some sources of bias; however, accurate exposure assessment remains a major
barrier in determining the causal impact of environmental factors on childhood cancer risk. Methods of exposure assessment and data interpretation are especially challenging in studying rare disorders. Crude exposure assessments are often the only available data (e.g. exposure to any versus specific types of pesticides), and where dose-response measures are possible, they may give more information regarding causality. Many chemicals are rapidly metabolized and excreted, which makes it infeasible to study their levels in biological specimens. Even when an association is found between exposure and disease, it is very difficult to assess the role of other factors which might be responsible for an increased risk and to control for the effects of potential confounding factors. The timing of the exposure is often difficult to assess. There may be a long latency period between exposure and development of cancer. This is especially challenging when studying children, as preconception or in utero exposures could contribute to disease.

Recent advances in high-throughput technologies (e.g. omics) have provided novel approaches enabling better assessment of exposure/lifestyle factors. Epigenomics has been particularly promising here; epigenetics refers to mitotically heritable changes in gene expression that are not due to alteration of the genetic code itself. DNA methylation, a process by which methyl groups are added “on top of” (ancient Greek “epi”) the DNA molecule, is one of the most established epigenetic mechanisms. The utility of the DNA methylome in exposure assessment has been proven through several recent examples, one of which is the methylation level of the Aryl-Hydrocarbon Receptor Repressor (AHRR) gene, which is involved in signaling pathways that respond to environmental toxins. AHRR methylation status has recently become one of the best blood biomarkers not only for predicting tobacco smoking status but potentially also the duration of smoking, as AHRR methylation persists several decades after quitting this habit. This biomarker will likely supersede, for example, the well-established biomarker cotinine, which, in contrast, has a half-life of only a few hours in blood. Epigenetics is now being utilized to also predict sex, age and ethnicity, with the list of predictions rapidly evolving. In parallel, exome and whole genome sequencing have recently enabled the characterization of mutational signatures, with the potential to predict certain exposures such as ultraviolet (UV) radiation, tobacco carcinogens (e.g. benzo[a]pyrene), aristolochic acid (a mutagenic and nephrotoxic phytochemical commonly found in the flowering plant family Aristolochiaceae), aflatoxin (a family of toxins produced by certain fungi found on agricultural crops), and others that are being cataloged in public databases. When applied to human biospecimens, these omics approaches can capture a molecular snapshot of the (epi)genome, based on which past (e.g. UV) and current (e.g. age) exposure/lifestyle factors can be predicted. This becomes particularly important in retrospective studies that can retrieve biospecimens archived before the manifestation of the outcome of interest, because the biospecimens collected retrospectively then yield a prospective molecular image of lifestyle factors that occurred before the outcome.
For example, many case-control studies can retrieve neonatal blood spots (which are routinely archived by many hospitals and biobanks), and these biospecimens can offer a prospective molecular profile of early-life factors that occurred perinatally, including the in utero period. Other types of omics technologies, such as metabolomics and transcriptomics, are also being implemented in exposure assessment. Though they do offer some advantages, they may not be ideally applicable to biospecimens that have been archived for long periods or without freezing, as these methods rely on biomolecules (such as RNA and proteins) which are relatively unstable. In contrast, DNA methylation and mutational signatures are stable over years, as DNA is one of the most stable biomolecules (including at room temperature). With the rapidly evolving field of molecular epidemiology, many current challenges regarding exposure timing, duration, bias and accuracy will likely find some solutions.
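Since most of the evidence discussed in this section comes from case-control studies, the quantity actually estimated is an odds ratio computed from a 2×2 exposure table. The sketch below shows that calculation with a Woolf-type confidence interval; all counts are hypothetical and do not come from any study cited here.

```python
# Odds ratio from a 2x2 case-control table (hypothetical counts).
from math import exp, log, sqrt

exposed_cases, unexposed_cases = 120, 380
exposed_controls, unexposed_controls = 90, 410

# Cross-product odds ratio
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Approximate 95% confidence interval on the log scale (Woolf method)
se = sqrt(1/exposed_cases + 1/unexposed_cases + 1/exposed_controls + 1/unexposed_controls)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# -> OR = 1.44 (95% CI 1.06-1.96)
```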

Radiation

Ionizing Radiation

Ionizing radiation (IR) is high-energy radiation that is powerful enough to cause displacement of electrons from atoms and breaks in chemical bonds. It is capable of introducing DNA strand breaks and mutations and causing cell death. IR is primarily genotoxic and is a well-described carcinogen. Although some uncertainty remains at low dose levels, there is accumulating evidence from studies in adults that the relationship between IR and cancer is best described by a linear nonthreshold model, that is, a monotonic increase in risk with increasing exposure starting from the lowest doses. Major types of IR exposure are natural, namely terrestrial and cosmic radiation and naturally occurring radioactive nuclides in the environment. Geographic variability can result in different levels of background IR, but some background IR is ubiquitous and zero exposure does not exist. Other external and internal IR exposures can occur as a result of man-made sources and pollutants. Medical exposures occur through X-rays, CT scans and cancer radiotherapy.

Exposure to higher doses of IR is an established cause of childhood cancer. Increased risks of second hematological malignancies, but also of some solid cancers, are observed in children who have received radiation treatment for a primary cancer. Increased rates of thyroid cancer have been identified in patients with prior head and neck radiotherapy. After the Hiroshima and Nagasaki atomic bomb detonations, increased rates of childhood leukemia were identified 5–6 years later. Recent studies suggest that there may be increases in solid cancer rates in children exposed either in utero or at < 6 years of age at the time of detonation. IR (here radio-iodine specifically) released in the 1986 Chernobyl nuclear power plant accident resulted in increased rates of childhood thyroid cancer.

Findings from studies of low doses of IR are more controversial. Studies before the 1980s on childhood cancer after prenatal X-ray exposures yielded associations with some pediatric cancers, especially leukemia. Recent studies did not confirm this, and radiation doses from X-ray examinations have become considerably lower over time. Computed tomography (CT) examinations, however, are increasingly used in most HICs, and studies from the United Kingdom and Australia suggest increased risks of leukemia and CNS tumors with increasing dose from CT exposures; although reverse causation is a concern, it appears that after taking this into account
some attenuated risk increase remains. A multicenter study in nine European countries is currently under way. While there is no doubt that CT is an important diagnostic instrument, the results may urge better dose optimization when examinations of children are planned. No clear overall picture emerges from studies on radon and childhood leukemia. A German study showed a significant excess of leukemias in young children living close to nuclear power plants, reviving the debate about the hazards of nuclear power plants, but the radiation doses measured in the vicinity of the nuclear installations are thought to be too low to increase the risk of any cancer. Notably, thyroid cancer remains the only cancer for which an excess has been established following the Chernobyl nuclear accident. For the recent Fukushima Daiichi nuclear accident, thyroid doses are manifold lower, and an increase in thyroid cancer has so far been attributed to over-diagnosis from organized thyroid ultrasound examinations carried out in large childhood populations living in the Fukushima prefecture. A recent pooled study of nine cohorts examined the risk of childhood leukemia with IR from various sources, and detected a threefold risk for acute myeloid leukemia and an almost sixfold risk for ALL with each increase of 100 mSv; an increase was seen already at exposures < 50 mSv. This is so far the most convincing evidence of a low-dose IR effect, at least for leukemias.
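The linear nonthreshold form described at the start of this section can be written as RR(D) = 1 + βD. The sketch below uses the pooled-study figures just quoted (roughly threefold risk per 100 mSv for AML, almost sixfold for ALL) as rough anchors for the slope; the function and the exact slopes are illustrative assumptions, not the study's fitted model.

```python
# Minimal sketch of a linear nonthreshold (LNT) excess-relative-risk
# dose-response, with slopes loosely anchored to the pooled-study
# figures quoted above (illustrative, not fitted parameters).

def relative_risk(dose_mSv: float, err_per_100mSv: float) -> float:
    """LNT form: RR(D) = 1 + beta * D, with no dose threshold."""
    return 1.0 + err_per_100mSv * (dose_mSv / 100.0)

for dose in (10, 50, 100):  # mSv
    aml = relative_risk(dose, 2.0)   # chosen so RR(100 mSv) ~ 3
    all_ = relative_risk(dose, 5.0)  # chosen so RR(100 mSv) ~ 6
    print(f"{dose:>3} mSv: AML RR ~ {aml:.1f}, ALL RR ~ {all_:.1f}")
```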

Nonionizing Radiation

Nonionizing radiation (NIR) is the part of the electromagnetic field spectrum covering static electric and magnetic fields, low-frequency electric and magnetic fields, radiofrequency electromagnetic fields (EMF) and microwaves, including optical radiation such as infrared and visible light. Its name derives from the fact that the energy quantum of NIR is too small to cause ionization in matter. Therefore, there is no straightforward mechanism as to why NIR should be associated with an increased cancer risk, although a role in tumor promotion or epigenetic mechanisms cannot be excluded.

Extremely low-frequency (ELF) electric and magnetic fields are related to the distribution and use of power. Higher exposures occur when using electrical appliances, but the exposure time of children is short and intermittent, and no consistent picture emerged from studies on the child’s use of appliances. Long-term exposures above background levels occur in children living close to electrical installations, that is, high-voltage power lines, substations, transformers or electric railroads, but unbalanced currents or grounding currents from indoor wiring are another potential exposure source. The majority of houses have a background magnetic field below 0.1 µT; in the United States, 5%–10% of houses had magnetic fields above 0.2 µT in various measurement surveys, while the proportion was much smaller in most Western European countries. Epidemiological studies on magnetic fields and childhood leukemia have consistently shown an increased risk of childhood leukemia with exposures above 0.3/0.4 µT compared with exposures below 0.1 µT. Based on these studies, ELF magnetic fields were classified as possibly carcinogenic to humans. The major shortcoming is that little experimental data is currently available that would support the empirical association observed in the epidemiological studies. Moreover, selection bias is a major concern in the case-control studies, as a deficit of control children with lower social status was identified in the studies performing magnetic field measurements. As lower socioeconomic position is associated with a higher likelihood of being exposed to higher magnetic fields, this may have led to an overestimation of the association; it is not clear, however, whether this bias explains the observed association in its entirety. Assuming that the observed association is causal, data on the prevalence of exposure and incidence rates as described above suggest that the fraction of childhood leukemias attributable to magnetic field exposure is small. Estimates range from about 2%–4% in North American countries to 1%–2% or less in West European countries.

Radiofrequency (RF) fields are generated as part of the global telecommunications networks or as part of industrial processes utilizing RF energy for heating. There are a number of studies of childhood cancer incidence and mortality in children living in the vicinity of radio and television broadcast towers, some of which suggested a small increase in leukemia risk. Recent case-control studies in South Korea and Germany used RF fields calculated from distance to the tower and antenna characteristics, and both large-scale studies did not observe an increased leukemia risk. Another large-scale UK study modeling exposure from cellular phone base station antennas did not find any increased risk of cancer in children.
Altogether, the studies provide some evidence against an association between environmental RF-EMF and childhood cancer risk. Use of cellular phones has increased rapidly among children over the past two decades. Because of the developing brain in children and their smaller skulls, it was recommended that the association between cellular phone use and brain tumor risk be studied specifically in children rather than extrapolated from studies in adults, the majority of which show no association except perhaps in extremely heavy users. Until now, only one such study has been published, showing no increase in any type of brain tumor among children and adolescents. Ultraviolet (UV) radiation during childhood is a known risk factor for the development of malignant melanoma during adolescence and adulthood.
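The attributable-fraction estimates quoted above for ELF magnetic fields are instances of the standard population attributable fraction. A minimal sketch follows, with illustrative round numbers rather than the published estimates' actual inputs.

```python
# Levin's population attributable fraction (PAF), the quantity behind
# the 1%-4% figures quoted above for ELF magnetic fields.
# Inputs are illustrative round numbers, not published estimates.

def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# e.g. ~2% of children exposed above 0.3/0.4 uT and a doubling of risk:
paf = population_attributable_fraction(prevalence=0.02, relative_risk=2.0)
print(f"PAF ~ {paf:.1%}")  # -> PAF ~ 2.0%
```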

Air Pollution

Air pollution is a variable mixture of particles and gases that varies with location, time of day and season, depending on geographical location. It contains a large number of chemicals, including some with known or suspected carcinogenic effects such as benzene, polycyclic aromatic hydrocarbons, 1,3-butadiene, diesel exhaust and gasoline. Particulate pollutants are generated from fuel burning and through chemical reactions of gaseous pollutants in the atmosphere. A growing body of literature on ambient air pollution, particularly that arising from traffic exhaust, and the risk of childhood cancer, particularly leukemia, has emerged over the past decades, suggesting that exposure to ambient air pollution may be associated with an increased risk of childhood leukemia. However, based on the current evidence, no firm conclusions can be drawn, given the
differences in exposure assessment, pollutants studied and periods of exposure investigated across studies. Measures of ambient air pollution used in epidemiological studies have included distance from potential environmental hazard sites, distance of homes from gasoline stations, traffic density or counts in the municipality of residence, measured pollutants in the air, and others. Few studies have examined specific components, but recent studies have started evaluating more specific exposures as well as specific exposure time windows. Recent advances in exposure assessment include measurements from ground-based monitoring stations and highly spatially resolved air pollution modeling approaches. A recent meta-analysis on the association between traffic-related air pollution and risk of childhood leukemia, encompassing over 11,000 cases and 98,000 controls, found major heterogeneity across studies. Overall, the findings of the review and the few studies conducted afterwards suggest that traffic air pollutants are associated with a somewhat increased risk of leukemia, both for all leukemias combined and for the major subtypes. Exposure to benzene appears to be the traffic contaminant most strongly associated with leukemia risk, particularly for AML. In contrast, for traffic density and NO2, the evidence is stronger for an increased risk of ALL but not AML. The postnatal exposure window might be more important than the prenatal one in relation to an increased childhood leukemia risk. Given the inconsistency across studies and the methodological limitations, the evidence for air pollution is regarded as inconclusive. Additional work in much larger studies with improved exposure assessment, assessment of different time windows of exposure, characterization of cancer subtypes and minimization of bias is warranted, given worldwide increasing levels of air pollution.

Environmental Tobacco Smoke

Children may be exposed to environmental tobacco smoke (ETS) through a variety of mechanisms. Prenatal exposure occurs through maternal smoking during pregnancy and maternal exposure to other smokers. Direct exposure of children primarily derives from smoking in their home and other indoor environments. Passive smoking by the pregnant or lactating mother of a child, or by the child itself, can lead to nicotine levels similar to those of low-level active smoking. Increased risks of sudden infant death syndrome, respiratory tract infections, allergies and asthma are well established in children exposed to ETS. ETS is a proven human carcinogen, based on numerous large epidemiological studies of ETS and lung and other cancers. In fact, tobacco smoke contains over 40 known carcinogens. It has been hypothesized that ETS may affect both somatic and germ cells during critical periods of a child’s development. However, it is difficult to distinguish between pre- and postnatal exposure, since individuals who smoke during pregnancy usually smoked before the pregnancy and often continue to smoke after the child’s birth.

Many studies have attempted to evaluate the risk of childhood cancer, particularly childhood leukemia, in relation to ETS exposure. Nevertheless, these studies are usually based on self-reports, which are prone to recall and reporting bias, particularly concerning questions about health behavior. The epidemiological evidence on maternal smoking during pregnancy and the risk of childhood ALL and AML is inconsistent, but even the studies showing an association do not indicate a strong one. Findings for paternal smoking point more consistently in one direction and suggest a modest increase in risk for both childhood ALL and AML in relation to paternal smoking at different exposure time windows (preconception, during pregnancy or after birth). Evidence is inconclusive for parental smoking and other childhood cancer types. A meta-analysis on childhood lymphoma risk found a modestly increased risk of non-Hodgkin lymphoma among children of mothers smoking during pregnancy, whereas there was no evidence of an increased risk for Hodgkin lymphoma. A meta-analysis on brain tumors indicates that parental smoking may not be associated with an increase in risk. Maternal smoking during pregnancy is regarded as a cause of hepatoblastoma in the offspring, although a recent register-based study from the Nordic countries observed no association, unlike previous interview-based case-control studies.

Pesticides

Pesticides are a common environmental exposure and have been extensively investigated with respect to their potential health consequences. The term “pesticides” covers a large, heterogeneous group of chemicals used to control insects, weeds, fungi and other pests. In addition to the thousands of naturally occurring pesticides in a wide variety of plants, there are at least 500 different synthetic pesticides and an estimated 5000 or more formulations. The active ingredients of each chemical may have different mutagenic, carcinogenic or immunotoxic properties. More than 20 individual pesticides have been classified as at least probable or possible human carcinogens by the International Agency for Research on Cancer’s Monographs program on the evaluation of carcinogenic risks to humans. With respect to the biological plausibility of an association between pesticide exposure and childhood cancer risk, exposure of the father prior to conception might lead to germ cell damage, while maternal exposure during pregnancy could result in fetal exposure, as demonstrated by pesticide residues found in umbilical cord blood and meconium.

Exposures to pesticides can come from a variety of sources, including farming, manufacturing, and home and garden uses. In farming communities, in addition to the exposed farmer and his/her family, the croplands, crops, dust, ground and surface water, and rainfall can be sources of pesticide exposure. Leaks, spills or accidents during manufacture, distribution and application can also contribute to exposures. A large number of pesticide exposures in the general population come from common home, lawn and garden use. This includes commercially available agents for outdoor use (e.g. lawn products) and indoor use (e.g. insecticides, including pet products and insecticidal shampoos). The vast heterogeneity in the types of pesticides and the wide variety of potential types of exposure make detailed exposure assessment a challenge. Most studies have used the very general term “pesticide” as the exposure
category. Questionnaires are usually used, and degrees of exposure are categorized based on self-reported job titles and the extent of indoor or outdoor use. The majority of studies of pesticide exposure and childhood cancer risk have focused on leukemia or brain tumors, based on a case-control design. Findings for childhood leukemia and childhood brain tumors are mostly consistent with moderately increased risks with pesticide exposure, on the order of 1.3–2-fold. Meta-analyses overall found exposure to residential pesticide use during pregnancy and during childhood to be related to an increased risk of childhood leukemia, with the associations being strongest for residential insecticide exposure during pregnancy. A pooled analysis recently conducted within the Childhood Leukemia International Consortium (CLIC), based on > 8000 leukemia cases and almost 15,000 controls, found that any pesticide exposure before conception, during pregnancy or after birth was associated with an increased risk of ALL, with little variation in risk by type of pesticide exposure. Similarly, the risk of AML was associated with any pesticide exposure before conception and during pregnancy, but not after birth. Investigating parental occupational pesticide exposure in the prenatal period showed an association between maternal occupational exposure during pregnancy and an almost doubled risk of AML, as well as between paternal occupational exposure before conception and a slightly increased risk of ALL (20% increased risk).

Studies of brain tumor risk and pesticide exposures are difficult to interpret due to brain tumor heterogeneity and inconsistencies in pathology reporting. However, meta-analyses and also most individual studies observed an increased risk of childhood brain tumors in association with prenatal residential maternal or paternal exposure to pesticides. There is also evidence that parental occupational exposure to pesticides during the prenatal period is associated with an increased brain tumor risk in the offspring (40%–50% increased risk). The evidence of an increased risk after pesticide exposure in childhood has been less consistent. While the evidence on pesticide exposure and childhood lymphoma risk is very sparse and studies are limited by sample size, the findings also suggest an increased lymphoma risk. A meta-analysis on neuroblastoma concluded that the available evidence does not suggest an association of paternal occupational pesticide exposure with increased risk of neuroblastoma in the offspring, while results from individual studies suggest possible associations between the household use of pesticides and neuroblastoma. Many individual studies of other tumor types, including Ewing’s sarcoma, osteosarcoma, soft-tissue sarcoma, germ cell tumors, and retinoblastoma, have pointed toward possible associations with increased risk, but again, the majority were very small studies with limited statistical power and no conclusive picture emerged. Although meta-analyses and large comprehensive studies point toward an association between pesticides and risks of specific childhood cancers, there is some heterogeneity across studies and the specific underlying mechanisms have not yet been identified; current studies are unable to identify with any certainty the critical time periods of exposure or to disentangle the potential effects of specific types of pesticides. Moreover, methodological concerns remain.
As the evidence arises mainly from case-control studies, recall bias when reporting pesticide exposure or occupational histories in interviews might occur, with some evidence that this leads to at least some overestimation of the magnitude of the associations. Selection bias is another concern if jobs with common pesticide exposure, such as farming, are either over-represented in the case or under-represented in the control study population. Crude exposure estimates could in fact also lead to a dilution of any true association. In studies of farmers and pesticide applicators, exposure assessment is currently being improved by designing exposure models for specific active ingredients, which hopefully can be expanded to childhood cancer studies in the future. A major shortcoming of all childhood cancer studies is the lack of specificity regarding pesticide ingredients.

Hydrocarbons and Solvents

Hydrocarbons are organic compounds that primarily consist of carbon and hydrogen atoms. They include substances such as gasoline, paint thinner, solvents, trichloroethylene, and others. Benzene is a commonly recognized hydrocarbon that is used as an additive in motor fuels and hobby glues and in the manufacture of plastics, and is also formed by the incomplete combustion of fossil fuels. Benzene has been well studied and is a known human carcinogen. The exposure (dose)–response relationship between benzene exposure and adult leukemia (especially AML) risk has been well established in subjects with occupational exposures. Studies in children have focused on air pollution (see above). Studies have also attempted to evaluate the risk that exposures to paints and plastics could potentially confer for childhood cancer. This includes studies of parental occupations, parental hobbies and home projects. Findings from pooled analyses conducted within the CLIC consortium suggested a weak to modest association between home paint exposure shortly before conception, during pregnancy or after birth and an increased risk of childhood ALL. Although one would assume that the frequency and level of exposure to paint chemicals would be much higher with occupational paint exposure, the results did not show an association with parental occupational paint exposure. However, studies are in general limited by lack of information about the degree of exposure incurred by the child and by doubts about the accuracy of the self-reported information. As with many other chemical exposures, there is much speculation about the role that parental exposure (preconception, during pregnancy, or in infancy) could play in the development of childhood cancer.

Heavy Metals

There are various definitions used for heavy metals. The term heavy metal is often used as a group name for metals and semimetals that have a relatively high density and have been associated with environmental contamination and potential toxicity.
Although epidemiological data on exposure to heavy metals and childhood cancer risk are relatively limited, most heavy metals are in fact cancer-inducing agents. Arsenic, cadmium, chromium and nickel are classified by IARC as human carcinogens, primarily of the lung. Studies have found that exposure to these compounds leads to disruptions in tumor suppressor gene expression, damage repair processes, and enzymatic activities involved in metabolism, via oxidative damage. Some studies suggest that the risk of heavy metal exposure is related to the contamination source. For example, recent studies found an increased risk of occupational disease and cancer in workers in heavy metal-using industrial areas. Despite such serious toxicity, heavy metals are utilized widely in several industrial, agricultural, domestic and technological applications. Few studies have investigated a potential association between exposure to specific metals and childhood leukemia risk, using different exposure assessments including paternal occupational exposure to metals, dust metal loadings in children’s homes and estimated metal levels in drinking water. Overall, there is little and inconsistent support for an association between prenatal or early-life exposure to specific metals and risk of leukemia. Most support for a potential association derives from studies investigating parental occupational exposure to metals. However, those findings were equivocal, since metal exposures were not directly assessed and results were based on small numbers of parents of children with leukemia who were exposed to metals.

Nitrates, N-Nitrosamines

N-nitrosamines and N-nitrosamides are the two major chemical groups that make up the N-nitroso compounds (NOC). The NOCs consist of a nitroso group attached to a nitrogen atom and are formed by the reaction of a nitrite compound with amines or amides. There are numerous sources of exposure to NOCs. Diet can lead to exposure to both preformed NOC and precursors such as sodium nitrite, amines, and amides. Nondietary sources of exogenous exposure to NOC include tobacco products, cosmetics, medications, agricultural chemicals, and certain occupations, such as in the rubber, leather, and metal machining industries. N-nitrosamides are direct alkylating compounds and can lead to DNA adducts at the site of their occurrence. N-alkylnitrosoureas, a type of N-nitrosamide, have induced brain tumors in the offspring of pregnant rodents and monkeys. N-nitrosamines have been shown to induce tumors in various animal species.

Several studies have evaluated the role of NOC exposure in childhood brain tumor risk. Many of these studies attempted to evaluate maternal intake of cured meat, vegetables, and fruits. Most evidence comes from North American studies that included cases diagnosed during the 1970s and 1980s; the hypothesis has hardly been followed up in more recent years. Findings on maternal cured meat intake during pregnancy point to some, albeit inconclusive, evidence of a positive association with childhood brain tumor risk in the offspring. Nevertheless, cured meat intake does not only reflect NOC exposure but is also often related to high fat intake or other dietary habits; thus, it is not clear whether the observed association can be fully attributed to NOC. Moreover, crude measures of nitrate exposure from food sources have been estimated using questionnaires, but these measures suffer from all the usual difficulties associated with nutritional epidemiology, compounded by limited data on nitrate concentrations in foods. There is less evidence regarding an association with the child's own cured meat consumption and no evidence of a positive association with nitrate from vegetables in the maternal diet.

Hair coloring products (hair dyes) that contain NOC-related aromatic amines have been investigated as possible carcinogens. A large population-based case-control study on the West Coast of the United States, as well as the few other available studies, found little evidence of an association between risk of childhood brain tumors and maternal hair dye use. Parental occupational exposures to NOCs have also been evaluated as childhood cancer risk factors. Small studies suggest a possible increase in relative risk in children of parents with moderate to heavy exposures, but other studies have not consistently replicated this finding. A few studies have also evaluated nitrates in drinking water, suggesting some variability in childhood brain tumor risk between well water and public water supplies, but again, results are not consistent between studies and their limitations preclude firm conclusions.

Medications, Illicit Drug Use, Vitamins

As there is growing evidence that some childhood cancers are initiated in utero, maternal exposures during and before pregnancy and their possible role in the etiology of childhood cancer are important targets for research. Maternal medication use before and during pregnancy is linked to such exposures because drugs may cross the placenta and, potentially, damage the fetus. Congenital malformations are one adverse pregnancy outcome demonstrated to be associated with maternal medication use, the classic example being maternal thalidomide intake and limb defects in the children. A number of epidemiological studies have investigated associations between maternal medication intake before and during pregnancy and the risk of childhood cancer. Among the associations found more consistently are a risk reduction for leukemia, neuroblastoma, and CNS tumors following maternal folate supplementation and other vitamin intake; elevated risks for specific childhood cancer types following fertility treatment; and an association between maternal use of diuretics and an increased neuroblastoma risk. A possible mechanism for the association between diuretics and neuroblastoma might be the presence of toxic N-nitroso precursors, which are known to be contained in some diuretic medications. Alternatively, fetal catecholamine production might itself cause maternal hypertension, which is then treated with the corresponding medication (reverse causation).


Health authorities in many countries recommend that women planning a pregnancy take folic acid before and during pregnancy. A well-known benefit of folate supplementation is the prevention of congenital malformations, particularly neural tube defects. The proposed role of folate and other vitamins in cancer risk relates to impaired DNA synthesis and repair resulting from folate deficiency and to the prevention of oxidative stress. Recent meta-analyses and findings from large international collaborative efforts clearly indicate a moderate reduction in the risk of ALL following maternal folate supplementation and vitamin intake before or during pregnancy. For AML, a reduction in risk with folate supplementation was found, but the evidence for vitamin intake remains inconclusive. A reduced risk for CNS tumors and neuroblastoma following maternal folate supplementation is also reported in the literature. Notably, in contrast to these comprehensive meta-analyses, a nationwide cohort study from Norway including all live births from 1999 to 2010 found no association between maternal supplemental folic acid and multivitamin intake before or during pregnancy and the risk of different childhood cancers. As data on folic acid or multivitamin intake generally come from maternal self-reports, the accuracy of reporting is uncertain. It should also be noted that no lower leukemia incidence rates have been observed in countries in which folate intake before and during pregnancy is strongly promoted, suggesting that the effect, if it exists, is presumably weak.

It has been hypothesized that fertility treatment may play a role in the etiology of childhood cancer. One potential underlying mechanism whereby fertility treatment might contribute to the development of childhood cancer is epigenetic change induced by repeated hormonal exposure during the use of fertility drugs. Overall, results for the association with fertility treatment are inconsistent. While several recent studies found no elevated overall childhood cancer risk in children conceived by fertility treatment, most studies indicated elevated risks for specific cancer types, including hematological cancers, CNS tumors, neuroblastoma, and retinoblastoma. However, on the basis of the current literature, it is not possible to disentangle whether elevated risks stem from the fertility treatment or from the underlying parental infertility per se.

Another closely related putative risk factor is maternal use of contraception. Most studies that have addressed maternal use of hormonal contraception in the months leading up to conception, which is also a scenario for preconceptive exposure to sex hormones, point to a possible positive association with leukemia. A recent register-based nationwide cohort study from Denmark observed that maternal use of hormonal contraception up to or during pregnancy increased the risk of leukemia in the offspring, particularly the risk of nonlymphoid leukemia.

Scattered positive results were seen for a variety of medications and illicit drug consumption in single studies, but the overall picture rather suggests no strong associations between most common maternal medications during pregnancy and childhood cancer risk. In particular, this applies to cold or "flu" medications and to sleep and pain medications. Better assessment of medication use is needed to shed further light on possible associations.
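The pooled risk estimates referred to above come from meta-analyses that combine study-specific relative risks. As a generic, minimal illustration of inverse-variance fixed-effect pooling (this is not the actual method or data of any study cited here, and the study estimates below are made up):

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling of log relative risks.

    estimates: list of (rr, ci_low, ci_high) tuples from individual studies.
    Returns the pooled RR and its 95% confidence interval.
    """
    z = 1.96  # 95% confidence multiplier
    weights, log_rrs = [], []
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE recovered from CI width
        weights.append(1 / se**2)                      # inverse-variance weight
        log_rrs.append(math.log(rr))
    pooled = sum(w * b for w, b in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * pooled_se),
            math.exp(pooled + z * pooled_se))

# Illustrative numbers only, not results from the studies discussed here:
print(pool_fixed_effect([(0.8, 0.6, 1.0), (0.75, 0.55, 1.05), (0.9, 0.7, 1.15)]))
```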

Alcohol Consumption

Alcohol is rated as an established human carcinogen by the International Agency for Research on Cancer and has been implicated as a risk factor for various cancer types. Maternal alcohol consumption during pregnancy has been suggested to affect epigenetics by interfering with pathways that convert dietary components into methionine for DNA methylation. The potential mechanisms by which parental preconceptional alcohol consumption might influence childhood cancer development in the offspring remain rather speculative. Given the overall conflicting evidence, with both deleterious and protective effects reported, and the methodological limitations of the conducted studies, the role of both maternal and paternal alcohol consumption in relation to childhood cancer is unclear. The current knowledge derives predominantly from studies on maternal alcohol intake during pregnancy. The tumors most often observed to be positively associated with maternal alcohol consumption during pregnancy were AML, brain tumors, and neuroblastoma. Recall bias might have influenced the results of many studies. Given the public concern about alcohol consumption during pregnancy, a general underestimation of alcohol use is likely; feelings of guilt about having possibly harmed their child by drinking alcoholic beverages during pregnancy may have led case mothers to underreport their alcohol intake, particularly at the highest consumption levels. This type of response bias would lead to an underestimation of the association with alcohol consumption.

Gene–Environment Interactions

Epidemiological research primarily shows empirical associations between environmental factors and childhood cancer risk. However, these studies provide little information on the mechanisms by which exposures lead to cancer, unless they are very specific with regard to exposure time window, dose–response, and exposure specificity. The interaction between the (epi)genetic makeup of an individual and his or her exposure record (often called the "exposome") may uncover insights into mechanisms as well as causes contributing to malignancy. Here, the focus is on genomics and epigenomics (particularly DNA methylation) rather than on other biological readouts such as metabolomics and transcriptomics, as the former two are known to be mitotically heritable; hence, alterations in their biological code can be stably transmitted over time through cell divisions. Increasing evidence suggests that epigenetic changes are also transmittable across human generations, as is the case for genetic events. From a genomics perspective, common germline or somatic genetic variants may influence cancer risk even though they do not generally confer a significant phenotype. In fact, some genetic variants may cause subtle changes in gene expression or regulation that can alter cancer risk or metabolism. For example, genetic variants in an enzyme responsible for metabolizing carcinogens could alter the activity of the enzyme, and thus the effective carcinogenic exposure, resulting in altered cancer risk.


The cytochrome P450 (CYP) family comprises phase 1 enzymes, which carry out the initial modification of a substrate and are involved in the metabolism of medications and potentially toxic compounds. Studies of genes in the CYP2 family suggest associations with childhood leukemia. Phase 2 enzymes such as glutathione S-transferases (GSTs) are responsible for further detoxification, and their genetic variants have been associated with childhood leukemia in some studies, though these findings have not been consistently replicated. Maternal and paternal genotypes are also likely to play key roles in carcinogen metabolism. Variation in parental carcinogen metabolism, and hence potentially in the levels of toxic metabolites, could have preconception and/or prenatal effects.

In other instances, genetic alterations may play more direct roles in relation to cancer risk. For example, mutations in the retinoblastoma (RB) tumor suppressor gene were first identified in children with the inherited form of retinoblastoma. Individuals with the Li-Fraumeni cancer predisposition syndrome, a complex syndrome of predisposition to leukemias, sarcomas, and breast cancer, often have mutations in Tumor Protein p53 (TP53), a gene that functions as the guardian of the genome and is critical in the pathogenesis of a multitude of tumor types. Recent large-scale genetic sequencing of childhood tumors, however, identified surprisingly few or no mutations, highlighting the contribution of nongenetic (possibly epigenetic) factors to childhood cancer development. Unlike adult tumors, childhood cancers have a shorter development time-course, limiting their opportunity to accumulate mutations. This supports the notion mentioned earlier that the initiating events of these cancers may have occurred in utero.

Epigenetics is likely implicated in these initiating events, especially as it has a key role in driving embryogenesis, including the differentiation of genetically identical embryonic layers into different tissue types. Moreover, epigenetic disruption is a near-universal feature of human malignancy, and more than 50% of human cancers harbor mutations in epidrivers (genes involved in epigenetic regulation exhibiting recurrent disruption in cancer through mutational or nonmutational mechanisms). In addition to its driver roles in embryogenesis and cancer, epigenetics is increasingly being described as a molecular sensor of the environment with a key role in mediating gene–environment interactions. With the advent of state-of-the-art technologies and computational approaches, epigenetics is now being utilized to predict tobacco smoking status (as well as duration), sex, age, gestational age, cell type composition, and ethnicity. Numerous international efforts have recently been made to enable the investigation of life-course exposures on the epigenome in large-scale epidemiological studies, and the catalog of epigenetic signatures of various lifestyle and exposure factors is rapidly growing.

Not only the type of environmental exposure but also its timing plays an important role in influencing disease risk. Fetal life represents a sensitive period in the human life cycle because of the capacity for changes in cell fate during embryonic development, with potentially lifelong health outcomes. Another important developmental stage is puberty. In males, spermatogenesis starts at puberty and continues throughout life; in females, oogenesis begins before birth and is arrested in the prophase of meiosis until puberty.
Therefore, oocytes remain until puberty in a haploid de-methylated state, which is more susceptible to environmental stressors than the diploid methylated state of the male germline. Later, during adulthood, women may exhibit other susceptible windows during the menstrual cycle, pregnancy, or menopause. In synchrony with these developmental stages, many genes are expressed only during specific windows and then turned off. Growth factors, in particular, are activated in childhood but are toned down thereafter. Hence, different sets of expressed genes would interact with environmental exposures occurring at different life stages. A design that does not incorporate exposure timing cannot properly assess gene–environment interactions. Gene–environment interaction studies are becoming increasingly crucial in epidemiological research aiming to investigate associations between risk factors and disease. (Epi)genomics has helped to better assess exposure and lifestyle factor measurements, providing a molecular history of past and current exposure events that can potentially be used for molecular assessment of archived biospecimens collected in retrospective study designs (see section Methodological Challenges and Limitations). (Epi)genomics has also provided crucial insights into mechanisms linking risk factors to cancer. Moreover, genetic proxies of exposure are increasingly being used to strengthen causal inference in observational studies through the application of Mendelian randomization (a method that uses genetic variants as proxies of exposure in order to examine the causal effect of an exposure on disease, exploiting the fact that genetic variants are assigned randomly when passed from parents to offspring and should, hence, be unrelated to the confounders that typically plague observational epidemiology studies). Integrating epigenomics, genomics, and exposure timing in epidemiological research will be key to unraveling the causal factors driving childhood cancer, with important implications for biomarker-based diagnosis, targeted therapy, and prevention. In summary, there are many promising opportunities, and respective work has started utilizing samples collected in large cohort studies, but at present its impact on the identification of environmental causes of childhood cancer is limited.
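As a minimal illustration of the Mendelian randomization logic described above, the simplest single-instrument estimator is the Wald ratio. The sketch below uses hypothetical effect sizes; it is not drawn from any study discussed here:

```python
def wald_ratio(beta_gene_exposure, beta_gene_outcome):
    """Single-instrument Mendelian randomization (Wald ratio).

    beta_gene_exposure: effect of a genetic variant on the exposure
    beta_gene_outcome:  effect of the same variant on the disease outcome
    Returns the implied causal effect of the exposure on the outcome.
    """
    return beta_gene_outcome / beta_gene_exposure

# Hypothetical illustration: a variant raises the exposure by 0.5 units and
# the log odds of disease by 0.1, implying a causal effect of 0.2 per unit
# of exposure (assuming the variant affects disease only via the exposure).
print(wald_ratio(0.5, 0.1))
```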

Conclusion and Directions for Future Studies

The contribution of environmental exposures to the incidence of childhood cancer is still largely unknown (Table 1). The interaction of genetic, epigenetic, and environmental exposures may play a more important role than currently established. Multicausal pathways rather than a single dominant cause have been proposed. The heterogeneity and relative rarity of childhood cancer add to the complexities of study design and interpretation of findings. For example, despite the large number of studies and several associations found between pesticide exposure and various childhood cancers, no clear picture has emerged with respect to the critical time periods of exposure or the effect of specific types of pesticides or of specific active ingredients in pesticides. This is mainly because interview-based studies showing those associations are prone to recall and reporting bias and subsequent false-positive results, while register-based studies using crude exposure information may give false-negative results, as exposure information is

Table 1  Level of evidence of an association between different environmental exposures and the risk of childhood cancer (a)

| Exposure | Exposure time windows | Cancer type | Summary of evidence | Consistency | Strength (b) |
| Ionizing radiation (man-made) | In utero | ALL, AML, CNS, other solid | Dose-dependent effect from studies of diagnostic radiation and A-bomb survivors; less clear from studies of nuclear accidents. | ++ to +/− | +++ to + |
| Ionizing radiation (man-made) | Postnatal | ALL, AML, CNS, other solid | Clear evidence for therapeutic doses; dose-dependent effect from studies of diagnostic radiation and A-bomb survivors; less clear from studies of nuclear accidents. | ++ to +/− | +++ to + |
| Ionizing radiation (man-made) | In utero/postnatal | Thyroid | Evidence from studies of A-bomb survivors, nuclear accidents and medical use of radioiodine. | ++ | + |
| Ionizing radiation (natural) | Postnatal | ALL, AML | Inconsistent findings from studies of domestic radon exposure; little evidence from studies of natural background gamma radiation. | +/− | ++ |
| Nonionizing radiation (low frequencies) | Postnatal | ALL, AML | Several studies report an association between extremely low-frequency magnetic fields and leukemia risk, but it is not clear whether the association is causal or entirely due to bias and confounding. | + | + |
| Nonionizing radiation (low frequencies) | Preconceptional, in utero | ALL, AML | Parental exposure to extremely low-frequency magnetic fields shows an association in few studies, but publication bias may play a role. | +/− | + |
| Nonionizing radiation (high frequencies) | Postnatal | CNS | No associations seen in any studies on environmental exposure, but the question of a possible association between mobile phone use and risk of CNS tumors in children and adolescents remains open. | +/− | + |
| Air pollution | All | ALL, AML, CNS | Current evidence suggests that exposure to ambient air pollution may be associated with an increased risk of leukemia and CNS tumors, although the evidence is regarded as not entirely convincing. Exposure to benzene appears to be the traffic-related contaminant most strongly associated with leukemia risk, particularly for AML. In contrast, for traffic density and NO2, the evidence is stronger for an increased risk of ALL but not AML. | +/− | ++ to + |
| Environmental tobacco smoke (ETS) | All (preconception, in utero, postnatal) | ALL, AML, NHL | Evidence for maternal smoking does not indicate a strong association with ALL or AML, but a modest increased risk of non-Hodgkin lymphoma among children of mothers smoking during pregnancy. Findings for paternal smoking suggest a small increase in risk for ALL and AML. A meta-analysis on brain tumors indicates that parental smoking may not be associated with an increase in risk. | +/− | + |
| Pesticides | Preconception, in utero, postnatal | ALL, AML, CNS, lymphoma, neuroblastoma | The majority of studies have focused on leukemia or brain tumors, reporting mostly consistent, moderately increased risks with pesticide exposures, in the order of 1.3–2-fold. | + | ++ to + |
| Hydrocarbons and solvents | All | ALL, AML | The literature suggests an association between childhood leukemia and occupational paternal exposure to solvents, paints, plastic material and inhaled particulate hydrocarbons. Observations for maternal occupational exposure have been less conclusive. A weak to modest association between home paint exposure and an increased risk of childhood ALL is more consistently found. | +/− | ++ to + |
| Nitrates, N-nitrosamines | All | CNS | Findings on maternal cured meat intake during pregnancy point to some, albeit inconclusive, evidence regarding a positive association with childhood brain tumor risk in the offspring. | +/− | ++ to + |
| Folic acid | In utero | ALL, AML, CNS, neuroblastoma | Recent meta-analyses and findings from large international collaborative efforts indicate a moderate reduction in risk following maternal folate supplementation before or during pregnancy. | + | ++ to + |
| Fertility drugs | Preconception, in utero | ALL, AML, CNS tumors, neuroblastoma, retinoblastoma | While several recent studies found no elevated risk for any type of cancer in children conceived by fertility treatment, most studies indicated elevated risks for specific cancer types, including ALL, AML, CNS tumors, neuroblastoma and retinoblastoma. | +/− | ++ |
| Alcohol | Preconception, in utero | AML, brain tumors, neuroblastoma | Given the overall conflicting evidence with both deleterious and protective effects and methodological challenges and limitations of conducted studies, the role of both maternal and paternal alcohol consumption in relation to childhood cancer is unclear. | +/− | ++ to + |

Consistency: ++ consistent; + broadly consistent; +/− inconsistent. Strength: +++ strong; ++ modest; + weak.
(a) Mainly focusing on leukemia and CNS tumors.
(b) Strength of association distinguishes between strong (>2-fold), modest (1.3–2-fold) and weak (<1.3-fold) associations, measured as the relative risk in exposed compared with nonexposed.

often not very detailed. Overall, pesticides remain a possible candidate for further investigations. For other chemical exposures, there are fewer studies, showing mixed results. Ionizing radiation is a known carcinogen, but more work needs to be done at low levels of exposure. Air pollution is a ubiquitous exposure and a known human carcinogen, but present evidence suggests that it more likely plays a minor role in the development of childhood cancers. Electromagnetic fields are also ubiquitous but not known to cause any cancer; should the observed empirical association between extremely low-frequency magnetic fields and childhood leukemia be causal, it would explain only around 1%–2% of cases.

Large, longitudinal cohort studies that carefully document environmental exposures and clinical phenotypes are ongoing and will hopefully improve understanding of the effect that certain exposures confer on cancer risk in childhood. While these studies provide new opportunities, especially analyses of biospecimens collected before the occurrence of the childhood cancer, the investigation of combinations of rare exposures with these rare outcomes is a major challenge. From a public health point of view, none of the chemical or physical agents investigated to date suggests a large potential for primary prevention. Some are already targeted by prevention measures because of other established adverse health effects, including parental tobacco smoke, maternal alcohol consumption, maternal folate supplementation and other vitamin intake, as well as some environmental pollutants. From a scientific point of view, however, the identification of any risk factor and its related pathway, irrespective of its public health importance, would provide the opportunity to better understand the etiology of the disease. Moreover, integrating genomics, epigenomics, and exposure timing into study designs can reveal crucial insights into mechanisms, causal pathways, and etiologically relevant biomarkers. More insight is also expected from new-generation case-control studies applying advanced exposure assessment techniques through mHealth. Clearly, most evidence so far comes from North American and European countries, with some but too few exceptions from elsewhere, and more global initiatives are needed. This applies to a more in-depth evaluation of geographical variation, both in terms of how much of the observed differences are due to actual differences in cancer risk rather than to differences in the ascertainment and reporting of cases, and in terms of understanding why incidence rates are sometimes more similar across countries than would be expected from the observed differences in the prevalence of suspected risk factors.
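The 1%–2% figure quoted above is a population attributable fraction. A minimal sketch of Levin's formula, using illustrative prevalence and relative-risk values that are assumptions for demonstration rather than figures taken from this article:

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction: p(RR-1) / (p(RR-1) + 1)."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# Illustrative values only: if roughly 1%-2% of children had elevated ELF
# magnetic field exposure and the causal relative risk were around 2, the
# exposure would account for roughly 1%-2% of childhood leukemia cases.
for p in (0.01, 0.02):
    print(f"prevalence {p:.0%}: PAF = {attributable_fraction(p, 2.0):.1%}")
```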

See also: Cancer Risk Assessment and Communication; Critical Windows of Children’s Development and Susceptibility to Environmental Toxins; Epigenetic Changes Induced by Environment and Diet in Cancer; Gene–Environment Interactions and Childhood Cancer; Risk of Radiation Exposure to Children and Their Mothers.

Further Reading

Bailey, H.D., Infante-Rivard, C., Metayer, C., et al., 2015. Home pesticide exposures and risk of childhood leukemia: Findings from the Childhood Leukemia International Consortium. International Journal of Cancer 137, 2644–2663.
Filippini, T., Heck, J.E., Malagoli, C., Del Giovane, C., Vinceti, M., 2015. A review and meta-analysis of outdoor air pollution and risk of childhood leukemia. Journal of Environmental Science and Health, Part C: Environmental Carcinogenesis & Ecotoxicology Reviews 33, 36–66.
Ghantous, A., Hernandez-Vargas, H., Byrnes, G., Dwyer, T., Herceg, Z., 2015. Characterizing the epigenome as a key component of the fetal exposome in evaluating in utero exposures and childhood cancer risk. Mutagenesis 30 (6), 733–742.
Hargreave, M., Morch, L.S., Andersen, K.K., Winther, J.F., Schmiegelow, K., Kjaer, S.K., 2018. Maternal use of hormonal contraception and risk of childhood leukemia: A nationwide, population-based cohort study. The Lancet Oncology 19 (10), 1307–1314.
Little, M.P., Wakeford, R., Borrego, D., et al., 2018. Leukemia and myeloid malignancy among people exposed to low doses (<100 mSv) of ionizing radiation during childhood: A pooled analysis of nine historical cohort studies. The Lancet Haematology 5 (8), e346–e358.
Liu, R., Zhang, L., McHale, C.M., Hammond, S.K., 2011. Paternal smoking and risk of childhood acute lymphoblastic leukemia: Systematic review and meta-analysis. Journal of Oncology 2011, 854584.
Metayer, C., Milne, E., Dockerty, J.D., et al., 2014. Maternal supplementation with folic acid and other vitamins and risk of leukemia in offspring: A Childhood Leukemia International Consortium study. Epidemiology 25, 811–822.
Schüz, J., Ahlbom, A., 2008. Exposure to electromagnetic fields and the risk of childhood leukaemia: A review. Radiation Protection Dosimetry 132, 202–211.
Spector, L.G., Pankratz, N., Marcotte, E.L., 2015. Genetic and nongenetic risk factors for childhood cancer. Pediatric Clinics of North America 62 (1), 11–25.
Steliarova-Foucher, E., Frazier, A.L., 2014. Childhood cancer. In: Stewart, B., Wild, C.P. (Eds.), World Cancer Report 2014. International Agency for Research on Cancer, Lyon, pp. 69–76.
Wakeford, R., 2013. The risk of childhood leukaemia following exposure to ionising radiation: A review. Journal of Radiological Protection 33 (1), 1–25.

Environmental and Health Consequences of Nuclear, Radiological and Depleted Uranium Weapons
PR Danesi, University Institute for Advanced Studies, Pavia, Italy
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
AED Aerodynamic equivalent diameter
DU Depleted uranium
EMP Electromagnetic pulse
HOB Height of blast
HPRT Hypoxanthine–guanine–phosphoribosyltransferase
IAEA International Atomic Energy Agency
LD Lethal dose
RDD Radiation dispersal device
TNT Trinitrotoluene
UNEP United Nations Environmental Programme
WHO World Health Organization

Description of Nuclear, Radiological, and Depleted Uranium Weapons

Before describing the environmental and health effects of nuclear and radiological weapons, a short and simplified description of their functioning principles is provided. Although depleted uranium (DU) weapons cannot be considered either nuclear or radiological, they are also discussed here, as there have been claims that DU residues in the environment have been the cause of radiological health problems.

Nuclear Weapons

A nuclear weapon is an explosive device that derives its destructive force from the nuclear reaction of fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is far more powerful than the largest conventional weapon. The "yield" of a nuclear weapon is a measure of the amount of explosive energy it can produce. It is usual practice to report the yield of nuclear weapons in terms of the quantity of TNT that would generate the same amount of energy on explosion. Thus, a 1-kiloton (abbreviated 1 kt) nuclear weapon produces the same amount of energy in an explosion as does 1000 t of TNT. Similarly, a 1-megaton (abbreviated 1 Mt) weapon would have the energy equivalent of 1 million tons of TNT.

In the history of warfare, only two nuclear weapons have been detonated offensively, both by the United States during the last days of World War II. The first was exploded on August 6, 1945, when a uranium-235 gun-type device code-named "Little Boy" was dropped on the Japanese city of Hiroshima. This bomb had a yield of approximately 15 kt and killed approximately 120,000 people. The second was detonated 3 days later, when a plutonium-239 implosion-type device code-named "Fat Man" with a yield of approximately 21 kt was dropped on the city of Nagasaki. This bomb killed approximately 60,000 people. The bombs resulted in the immediate deaths of people mostly from injuries caused by the explosion and from acute radiation sickness; more deaths occurred later due to the long-term effects of radiation. Since then, more than 2000 nuclear weapons have been detonated for testing and demonstration purposes. The countries known to have detonated such weapons are the United States, the former Soviet Union, the United Kingdom, France, the People's Republic of China, India, Pakistan, and North Korea. Some other countries may have nuclear weapons but have never publicly admitted their possession.

There are two basic types of nuclear weapons. The first are weapons that produce their explosive energy through nuclear fission reactions; these are known as atomic bombs or fission bombs. In fission weapons, a mass of fissile material (uranium highly enriched in U-235, or plutonium-239) is assembled into a supercritical mass either by shooting one piece of subcritical material into another (the "gun" method) or by compressing a subcritical sphere of material to a much higher density (the "implosion" method) using chemical explosives.

Change History: November 2018. The section editor updated the title. This is an update of P.R. Danesi, Radiological and Depleted Uranium Weapons: Environmental and Health Consequences, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 728–744.

Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11747-6

Fig. 1 The Badger Detonation: a 23 kt nuclear explosion fired on April 18, 1953 at the Nevada Test Site, part of Operation Upshot-Knothole.

The latter approach is more sophisticated than the former and is the only one that can be used if plutonium is the fissile material. The amount of energy released by fission bombs can range from less than 1 t to approximately 500 kt of TNT equivalent. The second type of nuclear weapon, known as hydrogen bombs or fusion bombs, produces a large amount of its energy through nuclear fusion reactions. They can be over 1000 times more powerful than fission bombs, as fusion reactions release much more energy per unit of mass than fission reactions. Hydrogen bombs work by using the energy of a fission bomb to compress and heat fusion fuel (e.g., solid lithium deuteride). When the fission bomb is detonated, gamma and X-rays emitted at the speed of light first compress the fusion fuel and then heat it to thermonuclear temperatures. The ensuing fusion reaction creates enormous numbers of high-speed neutrons, which induce fission in materials that normally are not prone to it, such as natural uranium. Thermonuclear weapons can be made to an almost arbitrary yield. The largest ever detonated (the Tsar Bomba of the former USSR) released an energy equivalent to over 50 Mt. In practice, most hydrogen bombs are considerably smaller than this, due to constraints in fitting them into the space and weight requirements of missile warheads (Fig. 1). There are also many other types of nuclear weapons. For example, a boosted fission weapon is a fission bomb that increases its explosive yield through a small number of fusion reactions; here the neutrons produced by fusion serve primarily to increase the efficiency of the fission bomb. Some weapons are designed for special purposes, such as neutron bombs, which yield a relatively small explosion but a relatively large amount of radiation. They could therefore be used to cause massive casualties while leaving infrastructure mostly intact and creating a minimal amount of fallout. Most of the variety in nuclear weapon design lies in different yields and in design modifications aimed at making the weapons as small as possible.
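The TNT-equivalence convention described above is a straightforward unit conversion. A minimal sketch using the standard definition that 1 kt of TNT releases 4.184 × 10^12 J:

```python
KILOTON_TNT_JOULES = 4.184e12  # standard definition: 1 kt of TNT = 4.184e12 J

def yield_to_joules(kilotons):
    """Convert a weapon yield in kilotons of TNT equivalent to joules."""
    return kilotons * KILOTON_TNT_JOULES

# The Hiroshima ("Little Boy") and Nagasaki ("Fat Man") yields quoted above:
print(f"15 kt = {yield_to_joules(15):.2e} J")    # ~6.3e13 J
print(f"21 kt = {yield_to_joules(21):.2e} J")    # ~8.8e13 J
print(f"1 Mt  = {yield_to_joules(1000):.2e} J")  # ~4.2e15 J
```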

Radiological Weapons

Radiological dispersion devices (RDDs), also known as "dirty bombs", are weapons designed to spread radioactive material with the intent to kill and cause disruption. An RDD uses conventional explosives to spread radioactive material (e.g., spent fuel from nuclear power plants, radioactive sources for medical or industrial applications, or radioactive waste). It must be emphasized that "dirty bombs" are not nuclear weapons: they involve neither nuclear chain reactions nor the need to create a critical mass, and they do not produce radioactive fission products. Therefore, they are expected to cause only the immediate casualties resulting from the conventional explosion. RDDs have in the past been considered for military use. However, this possibility was soon abandoned because the target area would have become contaminated and hence of limited use to an advancing army.


There is currently an ongoing debate about the damage that terrorists using such a weapon might inflict. Many experts believe that a dirty bomb terrorists might reasonably be able to construct would be unlikely to harm more than a few people through radiation, and would therefore be no deadlier than a conventional bomb. The dominant effects would be mainly psychological, due to fear and panic, and economic, due to the cost of decontaminating the dispersed radioactive material.

Depleted Uranium Weapons

DU is uranium primarily composed of the isotope uranium-238 (U-238). Natural uranium is approximately 99.27% U-238, 0.72% U-235, and 0.0055% U-234. Because higher concentrations of U-235 are necessary for fission in nuclear reactors and weapons, natural uranium is enriched in U-235. The by-product of enrichment, called depleted uranium or DU, contains less than one-third as much U-235 and U-234 as natural uranium, making it less radioactive owing to the longer half-life of U-238 with respect to U-235 and U-234. Another, less common source of DU is the by-product of enrichment of uranium obtained from reprocessed spent nuclear reactor fuel. This can be distinguished from DU produced as a by-product of nonirradiated uranium enrichment by the presence of the artificial isotope U-236. The radiation dose from DU is approximately 60% of that from the same mass of natural uranium. Although uranium, like all heavy metals, can be toxic, its chemical toxicity is less than that of other heavy metals such as arsenic and mercury. Most military use of DU has been in ordnance, as kinetic energy penetrator (armor-piercing) rounds. DU is used for its very high density (19.1 g cm⁻³), low cost, and high pyrophoricity. Kinetic energy penetrator rounds consist of long, relatively thin DU cylinders with a pencil-shaped nose, surrounded by a discarding aluminum sabot. Such penetrators are self-sharpening and pyrophoric: on impact with a hard target, such as an armored vehicle, the nose fractures in such a way that it remains sharp (Fig. 2). The impact and subsequent release of heat energy cause the penetrator to disintegrate, fully or in part, to dust, which burns on contact with air because of uranium metal's pyrophoric properties. The DU content in various ammunitions is 180 g in 20 mm projectiles, 200 g in 25 mm, 280 g in 30 mm, 3.5 kg in 105 mm, and 4.5 kg in 120 mm projectiles. It is thought that between 17 and 20 states have weapons incorporating DU in their arsenals. Ammunition containing DU was used in several recent conflicts: 1991, Iraq and Kuwait (Gulf War I); 1995, Bosnia-Herzegovina; 1999, Kosovo; 2002, Afghanistan; and 2003, Iraq (Gulf War II).
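As a rough cross-check of the "approximately 60%" figure above, specific activities can be computed from the standard half-lives. The DU composition used below (about 0.2% U-235, 0.001% U-234) is a typical assumed value, not a figure taken from this article:

```python
import math

AVOGADRO = 6.022e23
YEAR_S = 3.156e7  # seconds per year

# Standard half-lives (years) and atomic masses (g/mol):
ISOTOPES = {"U-238": (4.468e9, 238), "U-235": (7.04e8, 235), "U-234": (2.455e5, 234)}

def specific_activity(isotope):
    """Specific activity in Bq per gram: A = ln(2) * N_A / (T_half * M)."""
    t_half_years, molar_mass = ISOTOPES[isotope]
    return math.log(2) * AVOGADRO / (t_half_years * YEAR_S * molar_mass)

def activity_per_gram(composition):
    """Total activity (Bq/g) of a uranium mix given isotopic mass fractions."""
    return sum(f * specific_activity(iso) for iso, f in composition.items())

natural = {"U-238": 0.99275, "U-235": 0.0072, "U-234": 0.000055}
# Assumed typical DU composition (illustrative, not from this article):
depleted = {"U-238": 0.998, "U-235": 0.002, "U-234": 0.00001}

# Prints roughly 0.6, consistent with the ~60% relative dose quoted above.
print(activity_per_gram(depleted) / activity_per_gram(natural))
```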

Fig. 2 105 mm depleted uranium armor-piercing munition with fin-stabilized discarding sabot (tactical M980). Labeled components: depleted uranium penetrator, aluminum sabot, M43 (LOVA) propellant, aluminum fin, M13 tracer, M128 primer, and M148A1R1 steel case. From http://en.wikipedia.org/wiki/Depleted_uranium.


Environmental and Health Effects of Nuclear Weapons

Environmental Effects

General considerations

The dominant destructive effects of a nuclear weapon (blast and thermal radiation) are basically the same as those of conventional explosives. Nevertheless, the energy produced is millions of times greater and the temperatures reached are tens of millions of degrees. In addition, a nuclear explosion is accompanied by the emission of radiation, in the form of both electromagnetic radiation and high-energy particles. Nuclear explosions are characterized by an immediate, rapid and brief release of nuclear radiation, followed by a rapidly developing fireball that emits intense thermal radiation (heat and light). This is rapidly followed by a powerful pressure pulse, called the blast or shock wave, which travels out from the point of burst. The hot gaseous residues, contained within a relatively thin, dense shell known as the hydrodynamic front, move outward radially from the center of the explosion with very high velocities. The effects of nuclear weapons depend on their type and size and, when more than one is used, on the number detonated and how they are distributed in time and space. The effects also depend on whether the burst occurs at or near ground level, below ground, underwater, in the denser part of the atmosphere (troposphere, up to approximately 20 km), or in the very rarefied atmosphere (stratosphere), as well as on the nature of the terrain and on weather conditions (Fig. 3).

• Blast energy
• Thermal radiation energy
• Immediate ionizing radiation energy
• Residual radiation energy

When the blast occurs in the high troposphere, the fireball does not reach the ground, and 50% of the energy released is in the form of blast or shock wave, 35% thermal radiation, 5% immediate radiation energy, and 10% residual radiation energy (Fig. 4). In general, the blast fraction is higher for low-yield bombs, decreasing at high altitudes as there is less air mass to absorb radiation energy and convert it into blast. Depending on the design of the weapon, the environment in which it is detonated, and the height of the explosion, the fraction of energy distributed in each of the four basic categories can be greatly increased or decreased and, in extreme cases, even reduced to practically zero. Denser media around the bomb, such as water, absorb more energy and create more powerful shock waves, while at the same time limiting the area affected. A blast in the stratosphere produces more thermal energy, whereas one at the surface or subsurface increases radioactive fallout. The explosion of a nuclear bomb over a target, such as a large city, would lead to enormous damage (Figs. 5 and 6). The extent of damage depends on the distance from the center of the bomb blast, called the hypocenter. In general, the damage will be more severe the closer an object or organism is to the hypocenter, as summarized below (Tables 1 and 2).

• At the hypocenter, everything will be instantaneously vaporized by the high temperature (up to 300 million degrees Celsius).
• Moving away from the hypocenter, most casualties will be caused by burns from the heat, injuries from the flying debris of buildings collapsed by the shock wave, and acute exposure to the high radiation.
• Beyond the immediate blast area, casualties will be caused by the heat, radiation, and fires produced from the heat wave.
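A minimal sketch applying the approximate energy partition quoted above (blast 50%, thermal 35%, immediate radiation 5%, residual radiation 10%) for a high-tropospheric burst; as the text notes, the partition for any real detonation varies with weapon design and burst environment:

```python
KILOTON_TNT_JOULES = 4.184e12  # 1 kt of TNT equivalent in joules

# Approximate partition for a high-tropospheric burst, as quoted above:
PARTITION = {"blast": 0.50, "thermal": 0.35,
             "immediate radiation": 0.05, "residual radiation": 0.10}

def energy_budget(yield_kt):
    """Split a weapon's total energy release into the four basic categories."""
    total = yield_kt * KILOTON_TNT_JOULES
    return {category: fraction * total for category, fraction in PARTITION.items()}

# A 20 kt airburst (roughly the Nagasaki yield quoted earlier):
for category, joules in energy_budget(20).items():
    print(f"{category:>20s}: {joules:.2e} J")
```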

Fallout begins soon after the elements coalesce from the heated cloud, but it can continue for many days and weeks at long distances. Global fallout continues for years and will occur over a wider area, having extension, shape, and orientation depending

Fig. 3 Time histories of weapon outputs 1 km from a 27 kt burst (height of burst 180 m); the curves show the relative intensity of the EMP, blast, thermal, gamma, and neutron outputs as a function of time after burst. From McNaught, L.W. (1984). Nuclear Weapons and Their Effects. London: Brassey's Defense Publishers.


Fig. 4 Example of approximate energy distribution in a nuclear explosion: blast energy 50%, thermal energy 35%, residual radiation energy (minutes to years) 10%, and immediate ionizing radiation energy (first minute) 5%. Adapted from McNaught, L.W. (1984). Nuclear Weapons and Their Effects. London: Brassey's Defense Publishers.

Fig. 5 Schematic picture of the damaging effects of nuclear bombs. Hypocenter (fireball): everything is vaporized; complete destruction (Hiroshima ~1.6 km). Blast area (heat wave and shock wave): severe destruction; most lethal casualties occur from heat burns, fallen debris, and radiation (Hiroshima ~2 km). Outside the blast area: casualties caused by fires and radiation (Hiroshima ~3.2 km). Light damage (Hiroshima ~5.6 km). Outside the area damaged by the explosion, casualties are due to long-term health effects.

Fig. 6 Effect of a 1 Mt nuclear air burst at an altitude of approximately 2000 m (HOB, height of blast, 1.4 mi). Overpressure contours extend from ground zero outward at 30, 20, 10, 5, 2, and 1 psi, at distances of approximately 0.5, 1.1, 2.8, 4.1, 7.3, and 11.2 miles; the damage zones range from destruction of all except hardened facilities near ground zero, through heavy and moderate damage to commercial-type buildings and equipment (with many fires initiated and possible fire spread), to light damage at the outer contour. From FEMA (1990) Nuclear Attack Environment Handbook. Washington, DC: Federal Emergency Management Agency.

Table 1  Examples of failures of overpressure-sensitive structural elements

| Structural element | Failure | Approximate side-on peak overpressure (kPa) | Approximate slant range (km) for 1 kt | Approximate slant range (km) for 20 kt |
| Glass windows, large and small | Shattering usually, occasional frame failure | 3.45–6.9 | 6–10 | 20–30 |
| Corrugated asbestos siding | Shattering | 6.9–13.8 | 3–6 | 12–22 |
| Corrugated steel or aluminum paneling | Connection failure followed by buckling | 6.9–13.8 | 3–6 | 12–22 |
| Brick wall panels, 20 cm thick (not reinforced) | Shearing and flexure failures | 20.7–69.0 | 1–3 | 4–10 |
| Wood siding panels, standard house construction | Usual failure at the main connections, allowing a whole panel to be blown in | 6.9–13.8 | 3–6 | 12–22 |
| Concrete or cinder block wall panels, 30 cm thick (not reinforced) | Shattering of the wall | 10.35–38.0 | 1.5–4 | 6.5–15 |

Source: Defense Department Nuclear Doctrine and Policy, NATO Handbook on the Medical Aspects of NBC Defensive Operations, AMedP-6(B), Army Field Manual 8-9, Navy Medical Publication 5059, Air Force Joint Manual 44-151; Chapter 3: Effects of Nuclear Explosions; Section I: General.

Table 2  Most important effects of nuclear explosions for different yields and heights of explosion

| Effect (explosive yield/height of explosion) | 1 kt/200 m | 20 kt/540 m | 1 Mt/2.0 km | 20 Mt/5.4 km |
Blast: effective ground range (km)
| Urban areas almost completely leveled (20 psi) | 0.2 | 0.6 | 2.4 | 6.4 |
| Destruction of most civilian buildings (5 psi) | 0.6 | 1.7 | 6.2 | 17 |
| Moderate damage to civilian buildings (1 psi) | 1.7 | 4.7 | 17 | 47 |
Thermal radiation: effective ground range (km)
| Conflagration (uncontrolled burning of environment) | 0.5 | 2.0 | 10 | 30 |
| Third-degree burns (humans) | 0.6 | 2.5 | 12 | 38 |
| Second-degree burns (humans) | 0.8 | 3.2 | 15 | 44 |
| First-degree burns (humans) | 1.1 | 4.2 | 19 | 53 |
Instant nuclear radiation: effective slant range (km)
| Lethal (10 Gy) total dose (neutrons and gamma rays) | 0.8 | 1.4 | 2.3 | 4.7 |
| Total dose for acute radiation syndrome (2 Gy) | 1.2 | 1.8 | 2.9 | 5.4 |

Source: http://en.wikipedia.org/wiki/Effects_of_nuclear_explosions (Effects of nuclear explosions).

on the prevailing winds. Eventually the radioactive fallout particles can enter the water supply and be inhaled and ingested by people even at considerable distances from the blast.

Environmental effects of the blast energy

Energy from a nuclear explosion is initially released in several forms of penetrating radiation. When there is a surrounding material such as air, rock, or water, this radiation interacts with it, causing rapid heating, vaporization, and expansion. The kinetic energy created by this expansion contributes to the formation of a shock wave, while the intense thermal radiation at the hypocenter forms a fireball. At first, the shock wave is inside the surface of the developing fireball, but within a fraction of a second the dense shock front obscures the fireball, causing the characteristic double pulse of light seen from a nuclear detonation. Acting similarly to a piston that pushes and compresses the surrounding medium, the front of the pressure wave transfers energy to the atmosphere by impulse and generates a steep-fronted, spherically expanding blast (or shock) wave. Contrary to what one might expect from geometry, the blast range is not maximal for surface or low-altitude explosions but increases with altitude up to an "optimum burst altitude" and then decreases rapidly for higher altitudes. If the blast wave reaches the ground, it is reflected; the reflected and direct waves then merge and form a reinforced horizontal wave, producing the so-called Mach effect (Fig. 7). Two distinct, simultaneous phenomena are associated with the blast wave in air: (1) static overpressure, that is, the sharp increase in pressure exerted by the shock wave (the overpressure at any given point is directly proportional to the density of the air in the wave), and (2) dynamic pressure, that is, drag exerted by the blast winds required to form the blast wave. These winds push, tumble, and tear objects. Most of the material damage caused by a nuclear airburst is due to a combination of the high static overpressures and the blast winds. The long compression of the blast wave weakens structures, which are then torn apart by the blast winds. The compression, vacuum, and drag phases together may last several seconds or longer and exert forces many times greater than the strongest hurricane (Table 3). The velocity of the accompanying blast wind may exceed several hundreds of


Fig. 7 Variations of blast effects associated with positive (compression) and negative (suction) phase pressures with time. From NATO Handbook on the Medical Aspects of NBC Defensive Operations, AMedP-6(B), Army Field Manual 8-9, Navy Medical Publication 5059, Air Force Joint Manual 44-151, Chapter 3: Effects of Nuclear Explosions, Section I General; http://www.fas.org/nuke/guide/usa/doctrine/dod/fm8-9/1ch3.htm.

kilometers per hour. Most buildings, with the exception of strongly reinforced or blast-resistant structures, will suffer severe destruction when subjected to overpressures of only 35.5 kPa (1 Pa = 9.869 × 10⁻⁶ atm = 145.04 × 10⁻⁶ psi). The fireball itself rises rapidly and cools, forming the familiar spreading mushroom-shaped cloud (Fig. 8). The blast and shock effects are the primary damage-producing mechanisms for soft targets such as cities, and often the only effective mechanisms for destroying underground structures such as missile silos. In a typical airburst, destructive overpressures and wind velocities prevail out to a range of approximately 0.7 km for a 1 kt yield, 3.2 km for a 100 kt yield, and 15 km for a 10 Mt yield. When a nuclear weapon is detonated on or near the earth's surface, the blast creates a large crater. Some of the material in the crater is deposited on the rim; the rest is carried up into the air and returns to earth as radioactive fallout. An explosion that is farther above the earth's surface than the radius of the fireball does not dig a crater and produces negligible immediate fallout.
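The ranges quoted above are consistent with the classical cube-root (Hopkinson) scaling of blast effects, a standard result not stated explicitly in this article: the distance at which a given overpressure occurs grows roughly with the cube root of the yield. A minimal sketch:

```python
def scaled_range(reference_range_km, reference_yield_kt, yield_kt):
    """Cube-root (Hopkinson) blast scaling: the distance at which a given
    overpressure occurs grows with the cube root of the yield."""
    return reference_range_km * (yield_kt / reference_yield_kt) ** (1 / 3)

# Taking the 1 kt figure quoted above (0.7 km) as the reference point:
for y in (1, 100, 10_000):  # 10,000 kt = 10 Mt
    print(f"{y:>6} kt: ~{scaled_range(0.7, 1, y):.1f} km")
# Prints ~0.7, ~3.2, and ~15.1 km, matching the ranges quoted in the text.
```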

Environmental effects of the thermal radiation energy

A primary form of energy from a nuclear explosion is thermal radiation. In the initial microseconds of the chain reaction most of this energy goes into heating the bomb materials and the air in the vicinity of the blast. Temperatures reach those in the interior of the sun, approximately 100 million degrees Celsius, and produce a brilliant fireball. Therefore, the observed phenomena associated with a nuclear explosion and the effects on people and materials are also determined by the thermal radiation and its interaction with the surroundings. Thermal energy is emitted from the fireball in two pulses (Fig. 3). The first is quite short and carries only approximately 1% of the energy. The second pulse is more significant and is of longer duration (up to 20 s). The energy from the thermal pulse can initiate fires in dry, flammable materials. The incendiary effect of the thermal pulse is also considerably affected by the later arrival of the blast wave. Collapsed structures are much more vulnerable to fire than the intact ones. The reason is that the blast reduces many structures to piles of kindling, the many gaps opened in roofs and walls act as chimneys, gas lines are broken open, and storage tanks for flammable materials are ruptured. The primary ignition sources appear to be flames and pilot lights in heating appliances. Smoldering material from the thermal pulse can be very effective at igniting leaking gas. Thermal radiation damage depends very strongly on weather conditions. Cloud cover, fog, haze, smoke, or other obscuring materials in the air can considerably reduce effective damage ranges versus clear air conditions. Because thermal radiation travels more or less in a straight line from the fireball (unless scattered) any opaque object will produce a protective shadow. When thermal radiation strikes an object, part will be reflected, part transmitted, and the rest absorbed. The fraction that is absorbed depends on the nature and color of the material. The absorbed thermal radiation raises the temperature of the surface and results in scorching, charring, and burning of wood, vegetation, paper, fabrics, and so on. If the material is a poor thermal conductor, the heat is confined to the surface of the material. Actual ignition of materials depends on the thickness and moisture content of the target and how long the thermal pulse lasts. A given amount of radiant exposure is more effective in igniting a potential fuel if delivered in a brief span of time. Incendiary effects are reinforced by secondary fires started by the blast-wave effects. Therefore, under appropriate site and weather conditions there will be large wildfires, some of which can develop into mass fire or firestorms. In Hiroshima, a tremendous firestorm developed within 20 min after detonation and destroyed many more buildings and homes.
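For an order-of-magnitude feel, the radiant exposure at a given distance can be estimated by spreading the thermal output over a sphere. The sketch below ignores atmospheric attenuation and the two-pulse time structure described above, so it is an upper-bound illustration under stated assumptions, not the article's method:

```python
import math

KILOTON_TNT_JOULES = 4.184e12

def radiant_exposure(yield_kt, distance_m, thermal_fraction=0.35):
    """Radiant exposure (J/m^2) at a given distance, spreading the thermal
    output (here the ~35% fraction quoted earlier) over a sphere. Ignores
    atmospheric attenuation, which haze, fog, and smoke can make large."""
    thermal_energy = thermal_fraction * yield_kt * KILOTON_TNT_JOULES
    return thermal_energy / (4 * math.pi * distance_m**2)

# A 20 kt airburst at 2 km gives roughly 580 kJ/m^2, comfortably above the
# 150-360 kJ/m^2 ignition thresholds for dry plant material in Table 3.
print(f"{radiant_exposure(20, 2000) / 1e3:.0f} kJ/m^2")
```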

Table 3  Damage to forests, plants, and wildlife by nuclear detonations

Blast damage to forest (bombs exploded in the troposphere):
| Level of damage | Damaged area by a 20 kt bomb | Damaged area by a 10 Mt bomb |
| 30% blowdown (diameter; area) | 4.0 km; 1270 ha | 46.9 km; 173,000 ha |
| 90% blowdown (diameter; area) | 2.7 km; 565 ha | 32.3 km; 82,000 ha |

Blast damage to wildlife (bombs exploded in the troposphere):
| Damage | Damaged area by a 20 kt bomb | Damaged area by a 10 Mt bomb |
| Lung damage (diameter; area) | 1.4 km; 148 ha | 10.9 km; 9330 ha |
| Lethal to 50% (diameter; area) | 0.7 km; 43 ha | 5.9 km; 2740 ha |

Pulse of radiant exposure required for igniting plant material (kJ m⁻²):
| Plant material | 20 kt bomb | 10 Mt bomb |
| Rotted conifer wood | 150 | 330 |
| Dicotyledonous leaves | 150 | 330 |
| Grass | 180 | 420 |
| Sedge | 220 | 460 |
| Brown conifer needles | 360 | 880 |

Damage to biota from the blast wave, thermal, and nuclear radiation (bombs exploded in the troposphere):
| Type of damage | Area damaged by a 20 kt bomb | Area damaged by a 10 Mt bomb |
| Trees blown down by the blast wave | 565 ha | 82,000 ha |
| Trees killed by nuclear radiation | 129 ha | 1250 ha |
| All vegetation killed by nuclear radiation | 18 ha | 759 ha |
| Dry vegetation ignited by thermal radiation | 1170 ha | 183,000 ha |
| Vertebrates killed by the blast wave | 43 ha | 2740 ha |
| Vertebrates killed by nuclear radiation | 318 ha | 1840 ha |
| Vertebrates killed by thermal radiation | 1570 ha | 235,000 ha |

Damage to biota from the blast wave, thermal, and nuclear radiation (bombs exploded at the surface):
| Type of damage | Area damaged by a 20 kt bomb | Area damaged by a 10 Mt bomb |
| Trees blown down by the blast wave | 362 ha | 52,500 ha |
| Trees killed by nuclear radiation | 148 ha | 63,800 ha |
| All vegetation killed by nuclear radiation | 43 ha | 12,100 ha |
| Dry vegetation ignited by thermal radiation | 749 ha | 117,000 ha |
| Vertebrates killed by the blast wave | 24 ha | 1540 ha |
| Vertebrates killed by nuclear radiation | 674 ha | 177,000 ha |
| Vertebrates killed by thermal radiation | 1000 ha | 150,000 ha |

Source: Westing, A.H. (ed.) (1977). Weapons of Mass Destruction and the Environment. London: Taylor and Francis Ltd.

Environmental effects of the immediate ionizing radiation energy

Approximately 5% of the energy immediately released in a nuclear airburst is in the form of ionizing radiation. Initial nuclear radiation is defined as the radiation that occurs immediately after an explosion. It results almost entirely from the nuclear processes that take place at the time of detonation. It consists of neutrons, gamma rays, alpha particles, and electrons moving at very high speeds. The neutrons result almost exclusively from the fission and fusion reactions, whereas the gamma radiation includes that arising from these reactions as well as that resulting from the decay of short-lived fission products. Gamma radiation decreases rapidly with distance from the point of burst, because the radiation spreads over a larger area as it travels away from the explosion and is also reduced by atmospheric absorption and scattering. Near the point of the explosion, the neutron intensity is greater than the gamma intensity, but with increasing distance the neutron-gamma ratio decreases; eventually, the neutron component of initial radiation becomes negligible in comparison with the gamma radiation. The range of significant levels of initial radiation does not increase very much with increasing weapon yield. With large nuclear weapons (>50 kt), blast and thermal effects so strongly predominate that prompt radiation effects can be neglected: although people close to ground zero may receive lethal doses of radiation, they would concurrently be killed by the blast wave and thermal pulse. Therefore, for typical nuclear weapons, only a relatively small proportion of deaths and injuries can be attributed to initial radiation.
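The combined geometric spreading and atmospheric absorption described above can be illustrated with the standard uncollided point-source flux model, flux proportional to exp(-mu*r)/(4*pi*r^2). The mean free path used below is an assumed, order-of-magnitude value for MeV gamma rays in air, not a figure from this article:

```python
import math

def gamma_flux(source_per_s, distance_m, attenuation_per_m=1 / 300.0):
    """Uncollided gamma flux from a point source: inverse-square geometric
    spreading multiplied by exponential air attenuation. The ~300 m mean
    free path is an assumed, order-of-magnitude value for MeV gammas."""
    geometric = source_per_s / (4 * math.pi * distance_m**2)
    return geometric * math.exp(-attenuation_per_m * distance_m)

# Doubling the distance from 500 m to 1000 m cuts the flux by much more
# than the factor of 4 from geometry alone, because of air attenuation:
print(gamma_flux(1e20, 500) / gamma_flux(1e20, 1000))  # ~21x reduction
```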


[Fig. 8 Surface detonation cloud height versus yield. Dependence of the mushroom cloud height on yield for ground bursts (x-axis: yield, 1 kt to 30 Mt; y-axes: altitude in thousands of feet and of meters; cloud top and bottom shown relative to the troposphere-stratosphere boundary). (A) Approximate altitude of commercial aircraft; (B) Fat Man, 21 kt; (C) Castle Bravo, 15 Mt. Redrawn from a work of the United States Federal Government.]

Environmental effects of the residual radiation energy

The residual radiation from a nuclear explosion comes from the weapon debris, the fission products, and, in the case of a ground or near-ground burst, the chemical elements in the soil made radioactive by neutron irradiation. In this way, a large amount of radioactive material, called radioactive fallout, is released into the environment, posing the primary risk of exposure to ionizing radiation, especially in the case of large nuclear weapons. More than 300 different fission and activation products may result from nuclear explosions. Many of these are radioactive, with widely differing half-lives: some are very short (fractions of a second), whereas others are long enough for the material to remain a hazard for months or years. Their principal mode of decay is by the emission of beta particles and gamma rays. As this emission takes place over a period of time, at a rate depending on the amount and the nature of the radioactive material, residual nuclear radiation, mainly due to the fission products, is most intense immediately after the explosion but diminishes in the course of time.
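The article does not give a decay law, but fallout dose rates from mixed fission products are commonly approximated by the Way-Wigner t^-1.2 rule (see Glasstone and Dolan in Further Reading). A sketch of the resulting "seven-ten" behavior, in which every sevenfold increase in time cuts the dose rate roughly tenfold:

```python
def fallout_dose_rate(rate_at_1h: float, t_hours: float) -> float:
    """Way-Wigner approximation R(t) = R1 * t^-1.2, with t in hours after
    burst; valid very roughly from ~30 min out to several months."""
    return rate_at_1h * t_hours ** -1.2

for t in (1, 7, 49, 343):
    print(f"H+{t:>3} h: {fallout_dose_rate(100.0, t):8.3f}")
# 100.000 -> 9.679 -> 0.937 -> 0.091: each factor of 7 in time
# reduces the dose rate by close to a factor of 10.
```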

Environmental effects of the electromagnetic pulse (EMP)

Gamma rays from a nuclear explosion produce high-energy electrons through Compton scattering. These electrons are captured in the earth's magnetic field, at altitudes between 20 and 40 km, where they resonate. The oscillating electric current produces a coherent electromagnetic pulse (EMP) that lasts for approximately 1 ms; secondary effects may last for more than a second. Although there is no evidence that EMP represents a physical threat to humans or any other biota, it can damage electrical and communication equipment. The pulse is powerful enough to cause long metal objects, such as cables, to act as antennas and generate high voltages; these voltages, and the associated high currents, can destroy unshielded electronics and wires. Electronics can be shielded by wrapping them completely in a conductive mesh or any other form of Faraday cage; however, once shielded, radios can no longer operate, as radio waves do not reach them. EMP could therefore cause massive disruption for an indeterminate period and huge economic damage.

Other environmental effects

Ozone depletion
When a nuclear weapon explodes in the air, the surrounding air is subjected to great heat, followed by relatively rapid cooling. These conditions produce very large amounts of nitric oxides, which are carried into the upper atmosphere, where they reduce the concentration of protective ozone, the layer that blocks harmful ultraviolet radiation from reaching the earth's surface.

Earthquake
The pressure wave from an underground explosion propagates through the ground and causes an earthquake. Theory also suggests that a nuclear explosion could trigger fault rupture and cause a major quake within a few tens of kilometers of the blast point.

Nuclear winter
Nuclear explosions can set off firestorms over many cities and forests. Should this occur, great plumes of smoke, soot, and dust would be released into the atmosphere, lifted by their own heating to high altitudes, where they could drift for weeks before returning or being washed out onto the ground. Several hundred million tons of this smoke and soot could be moved by strong west-to-east winds until they formed a uniform belt of particles encircling the northern hemisphere, where most of the likely targets of nuclear weapons are located. These thick black clouds could block out all but a fraction of the sun's light for a period as long as several weeks. The lowering of the earth's average temperature could range from 0.2 to 0.5°C. The conditions of semidarkness, killing frost, and subfreezing temperatures would create the so-called "nuclear winter."

Effects on Humans and Other Biota

Effects of blast energy on humans and other biota

Humans are quite resistant to the direct effect of the overpressure caused by the blast; pressures > 270 kPa are required before immediate lethal effects are seen. For the most part, a nuclear blast kills people by indirect rather than direct means. The area of blast fatality is enlarged by reflected overpressures, by flying objects (secondary blast effects), and by bodies being thrown against objects (tertiary blast effects); these effects depend on the nature of the terrain and can be very large. Danger from overpressure also comes from the collapse of buildings. Moreover, urban areas contain many objects that can become airborne, and the destruction of buildings generates many more. The collapse of built structures can crush or suffocate those caught inside. The blast also magnifies thermal radiation burn injuries (see later) by tearing away severely burned skin, creating raw open wounds that readily become infected.

Nevertheless, significant direct damage to the lungs of humans and large mammals cannot be neglected. Acting on the human body, the shock waves cause pressure waves through the tissues. These waves especially damage junctions between tissues of different densities (bone and muscle) or the interface between tissue and air; the lungs and the abdominal cavity, which contain air, are particularly injured. The damage can cause severe hemorrhaging or air embolisms, either of which can be fatal. The overpressure estimated to damage lungs is approximately 70 kPa. Some eardrums would probably rupture at approximately 22 kPa, and half would rupture between 90 and 130 kPa.

With respect to damage to forests and trees, the destructive force is related to the peak transient wind. The greater the distance from the explosion, the longer the duration of this phase: for example, 0.5–1 s for 18 kt and 4–8 s for 1 Mt.
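The distances at which the overpressure thresholds quoted above occur grow in a regular way with yield. A standard rule of thumb (cube-root scaling, cf. Glasstone and Dolan in Further Reading; not stated in this article) is sketched below:

```python
def scaled_range_m(range_1kt_m: float, yield_kt: float) -> float:
    """Distance at which a reference overpressure occurs, scaled from its
    1 kt value by the cube root of the yield."""
    return range_1kt_m * yield_kt ** (1.0 / 3.0)

# A thousandfold increase in yield (1 kt -> 1 Mt) pushes a given
# overpressure contour out by only a factor of 10:
print(scaled_range_m(1000.0, 1000.0))  # 10,000 m
```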

Effects of thermal radiation energy on humans and other biota

Flash burns are one of the serious consequences of a nuclear explosion. They result from high intensities of light, and therefore occur closer to the point of explosion, through the absorption of radiant energy by the skin of exposed individuals. A distinctive feature of flash burns is that they are limited to exposed areas of the skin facing the explosion. A 1 Mt explosion can cause first-degree burns (corresponding to a bad sunburn) at a distance of approximately 12 km, second-degree burns (producing blisters and permanent scars) at approximately 10 km, and third-degree burns (destroying skin tissue) at approximately 8 km. Third-degree burns over 24% of the body, or second-degree burns over 30%, will result in serious shock and will probably prove fatal unless prompt, specialized medical care is provided. It has been estimated that burns caused approximately 50% of the deaths at Hiroshima and Nagasaki.

Thermal radiation from a nuclear detonation can also injure the eyes, in two ways: "flash blindness" and "retinal burn." Flash blindness is caused by the brilliant initial flash of light produced by the detonation and does not cause irreversible injury. The retina is particularly susceptible to visible and short-wavelength infrared light, since this part of the electromagnetic spectrum is focused by the lens onto the retina; the result is bleaching of the visual pigments and temporary blindness. During daylight hours, flash blindness does not persist for more than 2 min and generally lasts only a few seconds; at night, when the pupil is dilated, it lasts longer. A 1 Mt explosion can cause flash blindness at distances as great as 22 km on a clear day and 90 km on a clear night. If the light intensity is great enough, a retinal burn, resulting in permanent damage from scarring, can occur. It is caused by the concentration of direct thermal energy on the retina and occurs only when the fireball is in the individual's field of vision; retinal burns can therefore occur at considerable distances from the explosion. The size of the fireball determines the degree and extent of the retinal scarring.

General description of radiation effects on humans and biota

This section provides an abbreviated and simplified description of radiation effects on humans, as background for understanding the effects of immediate and residual ionizing radiation energy on human beings and other biota. The most important factor in determining health effects is the absorbed dose, namely, the amount of energy actually deposited in the body of an exposed person: the more energy absorbed by cells, the greater the biological damage to those cells. The amount of damage to specific tissues, to whole organs, and to whole organisms varies not only with the absorbed dose but also with a number of factors related to the biological function of the tissues and other health conditions at the time of exposure. The absorbed dose is measured in a unit called the gray (Gy), corresponding to 1 joule of energy absorbed per kilogram of matter. This is the most fundamental unit of radiation dose because it is based on purely physical phenomena and can be directly measured.

Other metrics of radiation dose are sometimes used, depending on the purpose at hand. These include "equivalent dose" and "effective dose," whose definitions have varied slightly over time but have been standardized by international radiation protection organizations. Both carry the special unit name sievert (Sv): equivalent dose takes into account the different biological effects of different types of radiation, and effective dose additionally accounts for the relative radiosensitivity of different organs. To convert the absorbed dose in Gy to the equivalent dose in Sv, the number of Gy is multiplied by a radiation weighting factor that reflects the damage potential of the type of radiation: for beta, gamma, and X-ray radiation this factor is generally 1, whereas for some neutrons, protons, or alpha particles it is 20. To convert the equivalent dose in Sv to the effective dose in Sv, the dose to each organ is multiplied by its own tissue weighting factor, and all the weighted values are summed.

The response to radiation varies widely among people, and the longer the time period over which a specific dose is accumulated, the better the body can respond to, and recover from, the radiation damage. For example, a cumulative dose of 5.2 Sv, normally fatal to 50% of a group exposed to it within a week, will produce few detectable ill health effects if received over 1 year at a rate of approximately 0.10 Sv per week.

The effects of ionizing radiation on humans can be classified according to different criteria, generally the probability with which damage may be induced in the exposed individual; the effects are then classified as either stochastic (random) or deterministic (nonrandom, or acute). Stochastic effects occur when low radiation doses (less than approximately 100–200 mSv) are received, or when higher cumulative doses are received diluted over very long time periods. Presently, based on the linear no-threshold (LNT) hypothesis, for low doses and low dose rates the risk of induction of fatal cancers is assumed to increase by approximately 5% for each Sv (effective dose) received. Stochastic effects may take place when an irradiated cell is modified rather than killed; induction of cancers of different types, including leukemia, and genetic effects fall into this category. A typical feature of stochastic effects is that it is the probability of the effect, not its severity, that increases with the radiation dose. Randomness, the basic feature of stochastic effects, is manifested by the fact that, in a group of individuals exposed to radiation, a detrimental effect will eventually develop only in some individuals.

Radiation induction of cancer is a complex, multistage process protracted over a long period of time. Full development of cancer may take 10–40 years, and of leukemia 5–20 years; this period is called the latency period. The risk of radiation-induced cancer varies with age at exposure, younger persons being more susceptible. Radiation-induced cancer can be detected only in epidemiologic studies in which cancer incidence in a large group of individuals exposed to a known radiation dose is compared with that in a control group. So far, such studies have not been able to show a significant increase in the incidence of cancer or leukemia attributable to radiation doses < 0.1–0.2 Sv. The lack of significant findings below those doses, however, implies little about the actual risk at those dose levels: at very low doses, the rate of cancer induction due to exposure would be very small and extremely difficult to distinguish from the rate of normal occurrence of cancers, which can be quite high.

Deterministic effects are those that are evident soon after irradiation and whose severity increases with the radiation dose. These effects result from irradiation with a high radiation dose delivered at a high dose rate (short-term or acute irradiation).
Under such conditions, a substantial degree of cell killing occurs, which can induce clinically observable effects in the exposed individual within a relatively short time after irradiation; extensive damage to a vital tissue may also result in death. This situation is typical for exposure, for example, to high levels of prompt gamma rays from a nuclear detonation. Such irradiations are considered to have a threshold value, below which no effects are observed. Contrary to stochastic effects resulting from low radiation doses, damage is caused to a large number of cells within a short time interval, before the cellular protective and repair mechanisms have had time to become effective. Some of the most important deterministic effects are described in the following text.

Acute radiation sickness can develop after whole-body irradiation with a high radiation dose; the threshold dose is approximately 1–2 Sv. During the first days after irradiation, the afflicted person suffers from nausea, loss of appetite, gloominess, headache, vomiting, weakness, and, depending on the dose absorbed, more or less serious changes in blood count. Then follows the latency period, during which the initial symptoms retreat. Eventually, in the final stage of the sickness, the initial symptoms reappear in a more pronounced form, accompanied by loss of hair, fever, bleeding from the gums, and high sensitivity toward infections.

The hematological form is caused by doses of up to 6 Sv. It is the consequence of the damage caused to bone marrow and is evidenced by a reduced count of lymphocytes, red blood cells, and platelets. At doses between 6 and 10 Sv, the hematological form is accompanied by the gastrointestinal form, which results from the damage and killing of the cells of the intestinal epithelium, leading to intestinal malfunction and perforation. The onset of severe symptoms, including hemorrhagic diarrhea, occurs within 4–6 days after irradiation, leading to death within 20–30 days unless intensive care is available. The mortality rate is approximately 80% at doses of 6 Sv and approaches 100% at 10 Sv. At doses < 6 Sv some patients may gradually recover; however, those who recover often suffer from lasting consequences, such as impaired hematopoiesis, malfunction of genital organs, infertility, enhanced sensitivity toward infections, permanent weakness and fatigue, and cancer. Body doses higher than 50 Sv cause the neural form of the disease, whose symptoms are mental disorientation, confusion, ataxia, spasms, and deep unconsciousness; death follows within several hours to days after irradiation.

Exposure to intense irradiation also causes radiation dermatitis, which, depending on the severity of irradiation, may take the form of erythema and swollen skin with blisters, radiation ulcers, and skin necrosis. The threshold dose for radiation dermatitis is approximately 3 Sv; more serious skin lesions are known as radiation burns (Table 4; Fig. 9).

Generally, damage to the fetus depends on the age of the fetus at the time of irradiation. The highest sensitivity toward ionizing radiation is shown by the fetus between the third and the eighth week after conception. The damage depends on the absorbed dose and on the developmental stage, the threshold dose being approximately 0.1 Gy. Children born to women who received doses exceeding 0.1 Gy during the sensitive weeks may suffer from malformations, mental retardation, cataracts, or retarded growth.
Doses of approximately 6 Gy cause permanent sterility. The effect of gamma radiation on microorganisms and animals varies widely; for example, the following doses are needed for inactivation: enzymes > 20,000 Gy; viruses 300–5000 Gy; bacteria 20–1000 Gy; human cells approximately 1 Gy.
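The two-step Gy-to-Sv conversion described above can be written out explicitly. A minimal sketch; the weighting factors are illustrative ICRP-style values consistent with the numbers quoted in the text, not a complete normative set:

```python
# Step 1: absorbed dose (Gy) -> equivalent dose (Sv) via a radiation
# weighting factor. Step 2: effective dose (Sv) as the tissue-weighted
# sum of organ equivalent doses.

RADIATION_WEIGHT = {"gamma": 1.0, "beta": 1.0, "xray": 1.0, "alpha": 20.0}
TISSUE_WEIGHT = {"lung": 0.12, "red_marrow": 0.12, "thyroid": 0.04}  # subset

def equivalent_dose_sv(absorbed_gy: float, radiation: str) -> float:
    return absorbed_gy * RADIATION_WEIGHT[radiation]

def effective_dose_sv(organ_doses_sv: dict) -> float:
    """Tissue-weighted sum over the organs actually irradiated."""
    return sum(TISSUE_WEIGHT[o] * h for o, h in organ_doses_sv.items())

h_lung = equivalent_dose_sv(0.01, "alpha")   # 0.01 Gy of alpha -> 0.2 Sv
print(effective_dose_sv({"lung": h_lung}))   # -> 0.024 Sv effective dose
```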

Table 4    Radiation exposure levels and associated effects and symptoms

Dose (Sv)   Symptoms

0.05–0.2    No symptoms. Potential for increase in cancer risk and mutation of genetic material, according to the LNT model. 0.05 Sv is the yearly protection limit for radiation workers in the United States.

0.2–0.5     No noticeable symptoms. Red blood cell count decreases temporarily.

0.5–1       Mild radiation sickness with headache and increased risk of infection due to disruption of immunity cells. Temporary male sterility is possible.

1–2         Mild radiation poisoning syndrome, 10% fatality after 30 days (LD 10/30). Typical symptoms include mild-to-moderate nausea (50% probability at 2 Sv), with occasional vomiting, beginning 3–6 h after irradiation and lasting for up to 1 day. This is followed by a 10- to 14-day latent phase, after which light symptoms like general illness and fatigue appear (50% probability at 2 Sv). The immune system is depressed, convalescence is extended, and the risk of infection increased. Temporary male sterility is common. Spontaneous abortion or stillbirth will occur in pregnant women.

2–3         Moderate radiation poisoning syndrome, 35% fatality after 30 days (LD 35/30). Nausea is common (100% at 3 Sv), with 50% risk of vomiting at 2.8 Sv. Symptoms onset at 1–6 h after irradiation and last for 1–2 days. After that, there is a 7- to 14-day latent phase, after which the following symptoms appear: loss of hair all over the body (50% probability at 3 Sv), fatigue, and general illness. There is a massive loss of leukocytes (white blood cells), greatly increasing the risk of infection. Permanent female sterility is possible. Convalescence takes one to several months.

3–4         Severe radiation poisoning syndrome, 50% fatality after 30 days (LD 50/30). Other symptoms are similar to the 2–3 Sv dose, with uncontrollable bleeding in the mouth, under the skin, and in the kidneys (50% probability at 4 Sv) after the latent phase.

4–6         Acute radiation poisoning syndrome, 60% fatality after 30 days (LD 60/30). Fatality increases from 60% at 4.5 Sv to 90% at 6 Sv (unless there is intense medical care). Symptoms start 30 min to 2 h after irradiation and last for up to 2 days. After that, there is a 7- to 14-day latent phase, after which generally the same symptoms appear as with 3–4 Sv irradiation, with increased intensity. Female sterility is common at this point. Convalescence takes several months to a year. The primary causes of death (in general 2–12 weeks after irradiation) are infections and internal bleeding.

6–10        Acute radiation poisoning syndrome, near 100% fatality after 14 days (LD 100/14). Survival depends on intense medical care. Bone marrow is nearly or completely destroyed, so a bone marrow transplant is required. Gastric and intestinal tissues are severely damaged. Symptoms start 15–30 min after irradiation and last for up to 2 days. Subsequently, there is a 5- to 10-day latent phase, after which the person dies of infection or internal bleeding. Recovery would take several years and probably would never be complete.

10–50       Acute radiation poisoning syndrome, 100% fatality after 7 days (LD 100/7). An exposure this high leads to spontaneous symptoms after 5–30 min. After powerful fatigue and immediate nausea caused by direct activation of chemical receptors in the brain by the irradiation, there is a period of several days of comparative well-being, called the latent phase. After that, cell death in the gastric and intestinal tissue, causing massive diarrhea, intestinal bleeding, and loss of water, leads to water–electrolyte imbalance. Death sets in with delirium and coma due to breakdown of circulation. Death is currently inevitable; the only treatment that can be offered is pain therapy.

Source: Radiation poisoning, http://en.wikipedia.org/wiki/Radiation_poisoning.

[Fig. 9 Typical lethality as a function of dose (x-axis: dose, 2–7 Gy; y-axis: lethality, 10%–100%). Adapted from Nuclear Weapon Radiation Effects, Federation of American Scientists. www.fas.org/nuke/intro/nuke/radiation.htm.]


A comparison of the doses that are lethal to 50% of exposed animals within 30 days also shows very different radiation resistance: bacteria, algae, and fungi approximately 6000 Gy; amoeba 1000 Gy; shellfish 200 Gy; goldfish 20 Gy; amphibians and reptiles 15 Gy; song sparrow 8 Gy; rabbit 8 Gy; bees 5 Gy; monkeys and other mammals 3–8 Gy; man approximately 4 Gy; dog 3.5 Gy. Insects are over a hundred times more resistant to ionizing radiation than vertebrates, with lethal doses ranging from 10³ to 4 × 10³ Gy.

The response of plants toward ionizing radiation is diverse. For instance, although coniferous trees show considerable growth retardation at dose rates as low as 0.02–0.1 Gy per day, in some plant species the growth rate is not affected even at tens of Gy per day. Observable effects, such as growth retardation, alterations in the appearance of stalks, leaves, and flowers, or an enhanced incidence of tumors, depend on factors such as dose rate, dose fractionation, and plant growth rate. When exposed to a high, acute radiation dose, plants with a slower growth rate show higher resistance, whereas under chronic irradiation higher resistance is shown by faster-growing species.

Effects of immediate ionizing radiation energy on humans and other biota

Although people close to ground zero may receive lethal doses of radiation, they are simultaneously killed by the blast wave and thermal pulse. Therefore, only a fraction of deaths and injuries can be attributed to radiation exposure from typical nuclear weapons (Fig. 9).

Effects of residual ionizing radiation energy on humans and other biota

A wide range of biological changes may follow the irradiation of humans and animals, ranging from rapid death following high doses to no apparent changes for a variable period of time, until delayed radiation effects start to develop in a fraction of the exposed population. To compare the degree of lethality of high radiation exposures, the dose that is lethal to 50% of a given population is often used, although it gives only approximate indications and can vary with subsequent medical care. This dose is usually applied to acute lethality and refers to the time period within which the effects take place; the common time periods used are 30 days or less for most small laboratory animals and up to 60 days for large animals and humans. A second number in the subscript indicates the specified period of time: for example, LD50/30 indicates 50% mortality within 30 days (Table 4).

As discussed earlier, the residual radiation from a nuclear explosion comes mostly from the radioactive fallout, which can contain approximately 60% of the total radioactivity. Fallout particles vary in size from 1 μm (1 × 10⁻⁶ m) to several millimeters. Most of these particles fall directly back down, close to ground zero, within several minutes after the explosion, but some travel high into the atmosphere. Smaller particles require many hours or days to return to earth and may be carried hundreds to many thousands of miles; a surface burst can therefore produce serious contamination even very far from the point of detonation. The remaining material is dispersed over the earth during the following hours, days, and months. If fallout enters the stable region of the stratosphere, radioactive particles can remain airborne for 1–3 years before returning to the surface.

The radiation hazard is mostly due to radioactive fission fragments with half-lives of seconds to a few months, and to soil and other materials in the vicinity of the burst made radioactive by the intense neutron flux. However, beyond the blast radius of an exploding nuclear weapon there can also be areas that survivors should not enter because of radioactive contamination from long-lived radioactive isotopes; this radiation hazard could represent a serious threat for as long as 1–5 years after the nuclear blast. Predictions of the amount and levels of radioactive fallout are difficult because they depend on several factors, such as the yield and design of the weapon, the height of the explosion, the nature of the surface beneath the point of burst, and the meteorological conditions. Many fallout particles are particularly hazardous as they contain radioactive elements such as:

• Cs-137 (cesium-137), a gamma emitter with a half-life of 30.2 years, which distributes fairly uniformly throughout the body as it behaves similarly to potassium. It can contribute to gonadal irradiation and genetic damage.
• Sr-90 (strontium-90), a beta emitter with a half-life of 28.6 years, which accumulates in growing bones as it is chemically similar to calcium. It can cause tumors, leukemia, and other blood abnormalities.
• I-131 (iodine-131), a beta emitter with a half-life of 8.0 days, which concentrates in the thyroid gland. It can destroy all or part of the thyroid or can induce chromosomal damage leading to carcinogenesis.
• H-3 (tritium), a beta emitter with a half-life of 12.3 years. It is dangerous as it can replace hydrogen in water molecules. It can cause lung cancer.
• Pu-239 (plutonium-239), an alpha emitter with a half-life of 24,400 years. Its ingestion can cause the formation of bone and lung tumors.
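The practical meaning of these half-lives follows from A(t) = A0 · 2^(-t/T_half). A minimal sketch using the values listed above:

```python
HALF_LIFE_YEARS = {"Cs-137": 30.2, "Sr-90": 28.6,
                   "I-131": 8.0 / 365.25, "H-3": 12.3}

def fraction_remaining(isotope: str, years: float) -> float:
    """Fraction of the initial activity left: A/A0 = 2^(-t / T_half)."""
    return 2.0 ** (-years / HALF_LIFE_YEARS[isotope])

# I-131 is essentially gone within months; Cs-137 lingers for decades:
print(f"{fraction_remaining('I-131', 90 / 365.25):.1e}")  # ~4e-4 after 90 days
print(f"{fraction_remaining('Cs-137', 30):.2f}")          # ~0.50 after 30 years
```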

Other effects on humans and biota

Ozone depletion
It has been estimated that as much as 5000 t of nitric oxides could be produced for each megaton of nuclear explosive power, reducing ozone levels by as much as 30%–70%. Such depletion might permit more ultraviolet radiation from the sun to cross the atmosphere and reach the surface of the earth. Ultraviolet radiation can damage macromolecules, such as DNA and proteins, present in cells; the consequences would be dangerous burns and a variety of potentially dangerous ecological effects (Fig. 10).


[Fig. 10 Detection time of various types of cancer in the survivors of the Hiroshima atomic bomb, 1945–1985 (10–40 years after detonation). For each cancer type (leukemia, thyroid, breast, lung, gastric, and colon cancer, and multiple myeloma) the chart shows the latency period and the time at which an increase was observed or suspected. Redrawn from Shigematsu, I., Ito, C., Kamada, N., Akiyama, N. and Sasaki, H. (1995). Effects of A-Bomb Radiation on the Human Body. Tokyo: Bunkadao Co., Ltd.]

Nuclear winter
The conditions of darkness and very low, subfreezing temperatures, combined with high doses of radiation from nuclear fallout, would interrupt plant photosynthesis and could destroy much of the earth's vegetation and animal life. These effects, together with the widespread destruction of industrial, medical, and transportation infrastructures, food supplies, and crops, could trigger a massive death toll from starvation, radiation exposure, and disease. Should a nuclear war ever produce a nuclear winter effect, the possibility of the extinction of Homo sapiens cannot be excluded.

Environmental and Health Effects of Radiological Dispersion Devices

As mentioned earlier, these weapons are not nuclear weapons and so far have never been used either in warfare or in terrorist attacks. They use conventional explosives to spread the radioactive material packed in them into the environment; for this reason they are called RDDs. RDDs do not produce the shock and thermal forces or radiation levels of a nuclear weapon and are expected to cause casualties mostly as a result of the conventional explosion. Nevertheless, especially if used by terrorists, they could cause considerable fear and panic.

The damage caused by the conventional explosives used in an RDD to both the environment and biota is not discussed here, as it is well documented in innumerable books and publications. To give a sense of the casualties the detonation of the conventional explosive alone could produce, it is recalled here that the number of deaths caused by some terrorist car or truck bombs has reached several hundred. For example, in the 1983 car bombings in Beirut (Lebanon) against the US and French barracks, approximately 300 people were killed. The Oklahoma City (United States) terrorist truck bomb, detonated in April 1995, with an estimated yield of approximately 2 t of TNT equivalent, killed approximately 170 people. The truck bomb terrorist attack of August 2007 on the Yazidi compounds in Kahataniya (Iraq) claimed an even larger number of victims: approximately 800 people were reportedly killed.

Concerning the possible health effects of the radiation dispersed by an RDD, it must be considered that most of the radioactive materials that could become easily available, and then potentially be used by terrorists, pose a limited threat, as they have relatively low radioactivity levels. For example, the easily accessible low-level nuclear waste generated by hospitals has too low a radioactivity to create a real radiological health risk. The same holds for some fissile material: enriched, natural, and depleted uranium do not pose a significant radiological risk because of their low radioactivity. Use of plutonium in an RDD would be of great public concern but would be unlikely to expose large numbers of people to significant risk. Moreover, the possibility that such material could become available to terrorists is remote, as it is tightly guarded.


High-level nuclear waste, such as spent fuel, is also unlikely to be used by terrorists. First of all, such material is difficult to obtain: security around spent fuel is high, as it contains fissile material that could be useful for nuclear weapons. In addition, the high radioactivity of such waste makes it not only very hazardous to handle but also hard to hide. A more likely radioactive material could be obtained from the highly radioactive sources used in medicine and industry. In this case, on explosion, the radioactive material would disperse, mostly in the form of very small particles, over an area whose size would depend on the blast location and the power of the explosive; this dilutes the radiological health effects. It must also be considered that the larger, more radioactive particles will not travel far from the blast point and that, in general, for a given radioactive source, the larger the quantity of conventional explosive detonated, the larger the area over which the radioactivity is dispersed and, consequently, the lower the radiological hazard.

An approximate estimate of the radiological risk due to the explosion of an RDD can be made using some examples. The first is an RDD containing a typical large radioactive source used in medical applications, namely 1750 Ci (Ci = curie; 1 Ci = 3.7 × 10¹⁰ Bq) of Cs-137, exploded by 4–10 kg of TNT in a populated metropolitan area. In this case, the lethal blast effect from the explosion should not extend more than a few meters (5–10 m). To make sure that the radiation dose does not exceed the 50 mSv limit (the maximum annual exposure for some radiation workers) during a short stay in the most contaminated zone, an access control area should be created with a radius of approximately 150–200 m. The area requiring remediation should not exceed a radius of approximately 300–400 m. From such an RDD, nobody should suffer from acute radiation syndrome. As a second example, consider an RDD containing an even larger radiation source, namely a 2700 Ci Ir-192 source used for industrial applications, exploded by approximately 20 kg of TNT in a truck bomb. In this case, the access control area would be considerably larger (perhaps 400 m), and the area requiring remediation could extend to approximately 600–800 m.

In both cases the radiation exposures from the RDDs are unlikely to result in large radiological impacts, because the size of the exposed population would probably be small. The most exposed group, in the proximity of the explosion, could, if surviving the blast, receive a radiation dose of 50 mSv; this would imply an excess risk of dying of cancer comparable to that run by a smoker. The majority of the people would be exposed to levels not exceeding a few mSv, for which the excess risk is approximately 1 in 10,000. Nevertheless, in addition to the expected negative effects caused by the explosion, there could be serious psychological and social consequences due to fear and panic. The additional economic damage would consist of the cost of decontaminating the areas over which the radioactive material was spread.
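As a back-of-envelope check of the figures just quoted, the approximately 5% per Sv fatal-cancer coefficient cited earlier in this article (a linear no-threshold assumption, illustrative only) reproduces the "approximately 1 in 10,000" excess risk for a few mSv:

```python
RISK_PER_SV = 0.05  # ~5% excess fatal-cancer risk per Sv (LNT, from the text)

def excess_fatal_cancer_risk(dose_sv: float) -> float:
    return dose_sv * RISK_PER_SV

print(excess_fatal_cancer_risk(0.002))  # a few mSv -> 1e-4, i.e. ~1 in 10,000
print(excess_fatal_cancer_risk(0.050))  # 50 mSv    -> 2.5e-3 (highest-exposed group)
```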

Environmental and Health Effects of Depleted Uranium Weapons

In a combat situation, the main radiological hazard associated with DU munitions is the inhalation of the aerosols (or dust) created when DU munitions hit an armored target. Most of the DU aerosols created by the impact of penetrators against an armored target settle within a short time and in close proximity to the site of the hit, although smaller particles may be carried several hundred meters by the wind. This aerosol is deposited on the ground and other surfaces, mainly as uranium oxide. A large proportion of DU munitions fired from an aircraft miss their intended target, and penetrators impacting on soft targets (e.g., sand or clay) can penetrate the ground without generating significant dust. The physical state of these munitions can vary from whole intact penetrators, sometimes still encased in their aluminum jacket, to nonoxidized, partially oxidized, or fully oxidized DU fragments of different sizes. Penetrators that do not hit the target corrode with time, forming DU fragments and particles that may range from several millimeters to less than a micrometer in size.

DU Exposure Pathways

Exposure to DU can be external or internal. Individuals who find and handle DU penetrators or fragments can be exposed to external irradiation, mainly from the beta particles emitted by the DU. Potential health effects from external exposure are limited to skin contact. The dose rate to skin in direct contact with a piece of pure DU is approximately 2 mSv h⁻¹, and much less in the case of dust; the doses received can therefore be significant only if a person remains in contact with DU over a considerable period of time. Nevertheless, direct skin contact with DU should be prevented.

Health effects related to internal exposure may result from either chemical or radiological toxicity. Ingestion of DU is not considered the major internal exposure pathway. However, farmers working in a field in which DU munitions were fired could inadvertently ingest small quantities of soil, and children sometimes deliberately eat soil, so direct ingestion of contaminated soil must be taken into consideration. DU can enter the body in the form of uranium or as uranium oxide dust. In the body fluids, uranium is dissolved as uranyl ions; it is absorbed into the blood, transported, and retained in body tissues and organs. Only approximately 0.2% of ingested, insoluble DU is absorbed into the blood. The fraction absorbed into the blood is rapidly cleared, with approximately 90% leaving the body in urine within the first week after intake; the rest is distributed to tissues and organs. In particular, approximately 10% is deposited in the kidneys, most of which is eliminated within a few weeks, and approximately another 15% is deposited in bone. Uranium remains much longer in the bone compartment, to the extent that after 25 years 1% can still be present.

Inhalation of dust is considered the major internal pathway for DU exposure in both combat and noncombat situations. The small DU particles in the soil, formed by the corrosion of penetrators that did not hit hard targets, can also be inhaled when resuspended by the wind or by human activities such as plowing. For persons entering an armored vehicle hit by DU ammunition, the aerosols generated at impact and the uranium oxide dust formed as a result of DU corrosion may lead to considerably higher inhalation exposures. Approximately 95% of inhaled particles > 10 μm aerodynamic equivalent diameter (AED) are deposited in the upper respiratory tract and cleared to the pharynx and the gastrointestinal tract. Particles < 10 μm AED can reach the deeper pulmonary regions (bronchioles and alveoli) and be retained for a considerable time. The amount of DU that is absorbed into the blood and deposited in tissues and organs depends mainly on particle size and the solubility of the uranium-containing particle: soluble chemical forms are absorbed within days, whereas insoluble forms generally take months to years. Toxic chemical effects are more likely to be associated with the more soluble forms of uranium, whereas radiation effects are more likely to be associated with the insoluble forms, such as particles that are deposited in the lung and local lymph nodes and retained for extended periods of time. The kidneys are the critical organs for uranium chemotoxicity.

Wound contamination by embedded DU fragments can also occur during combat activities or through accidental abrasion of the skin on contaminated surfaces. In the latter case, after wound cleaning, the resulting exposure to DU can be expected to be negligible. However, embedded fragments that cannot be removed surgically result in chronic internal exposure.

Cancer and Other Health Risks From DU

Calculations using DU concentrations measured in topsoil at the most severely hit locations where DU munitions were used in conflicts suggest that the annual dose received by adults and children from the inhalation of DU particles amounted to approximately 10 μSv, assuming an overall residence time of 1 year. This dose is much lower than the annual dose from natural irradiation (2.4 mSv) and than the additional annual dose limit for the public; therefore, the additional risk of developing a lethal cancer or any other health effect is practically zero. However, individuals spending approximately 10 h inside a DU-contaminated military vehicle, such as a tank, could receive doses amounting to a few mSv. Such doses imply an increase in the probability of developing a lethal cancer similar to that from smoking 200 cigarettes in a lifetime.

There are reports of biomarkers as indicators of DU exposure: an association of hypoxanthine-guanine phosphoribosyltransferase (HPRT) mutations with high uranium levels in US Gulf War veterans who were victims of "friendly fire," and an increase in unstable chromosome aberrations in a group of United Kingdom Gulf War and Balkans War veterans, have been reported. However, no direct experimental or epidemiological evidence of an increased cancer risk due to the radiation has been observed.

Field and environmental assessments, involving in situ measurements and laboratory analysis of many environmental samples, were conducted jointly by IAEA, UNEP, WHO, national institutions, and international experts in many locations where reliable information existed that DU ammunition had been used in wars. The general conclusion of these studies was that no widespread contamination by DU was present. Detectable contamination by DU was mainly measured on objects directly hit by DU munitions (mainly military vehicles) and on the ground only as far as a few meters from where DU penetrators were found. DU fragments and DU oxide particles of respirable size were found dispersed in the ground around and beneath penetrators lying on the surface. However, the general conclusion of these studies, based on the corresponding external and internal radiation doses potentially received by the populations, was that no significant risk of radiological health effects should be expected in the short and medium terms. Analyses of urine samples provided by potentially exposed people were consistent with this conclusion, as all results indicated no substantial DU exposure.
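To put these numbers in perspective, a one-line comparison (values taken from the text above):

```python
du_inhalation_sv_per_year = 10e-6        # ~10 uSv/yr in the worst-hit locations
natural_background_sv_per_year = 2.4e-3  # ~2.4 mSv/yr average natural dose

ratio = du_inhalation_sv_per_year / natural_background_sv_per_year
print(f"DU inhalation dose is {ratio:.1%} of natural background")  # -> 0.4%
```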

Further Reading

Allison, G., 2004. Nuclear Terrorism: The Ultimate Preventable Catastrophe. Henry Holt and Company, New York.
Argonne National Laboratory, 2005. Radiological Dispersal Device (RDD), Human Health Fact Sheet. www.ead.anl.gov/pub/doc/rdd.pdf (accessed July 2010).
Assimakopoulos, P.A. (Ed.), 2003. Special issue: Depleted uranium. Journal of Environmental Radioactivity 64 (2–3).
Barnaby, F., 2005. Dirty Bombs and Primitive Nuclear Weapons. Available at: http://www.oxfordresearchgroup.org.uk/publications/briefing_papers/pdf/dirtybombs.pdf.
Beebe, G.W., Kato, H., Land, C.E., 1977. Mortality Experience of Atom Bomb Survivors 1950–1974. Radiation Effects Research Foundation Technical Report RERF TR 1-77.
Bleise, A., Danesi, P.R., Burkart, W., 2003. Properties, uses and health effects of depleted uranium (DU): A general overview. Journal of Environmental Radioactivity 64, 93–112.
British Medical Association, Board of Science and Education, 1983. The Medical Effects of Nuclear War. Wiley, Chichester.
Cohen, B.L., 1987. Alternatives to the BEIR relative risk model for explaining A-bomb survivor cancer mortality. Health Physics 52, 55.
Cirincione, J., Wolfsthal, J.B., Rajkumar, M., 2005. Deadly Arsenals: Nuclear, Biological and Chemical Threats. Carnegie Endowment for International Peace, Washington, DC.
Glasstone, S., Dolan, P.J., 1977. The Effects of Nuclear Weapons, 3rd ed. U.S. Department of Defense and ERDA, Washington, DC.
Hála, J., Navratil, J.D., 2003. Biological effects of ionizing radiation (Chapter 5.7). In: Radioactivity, Ionizing Radiation and Nuclear Energy. Konvoj spol. s r.o., Brno, Czech Republic.
Hiroshima-Nagasaki Committee for the Compilation of Materials on Damage Caused by the Atomic Bombs, 1985. The Impact of the A-Bomb: Hiroshima and Nagasaki, 1945–85. Iwanami Shoten Publishers, Tokyo.
McNaught, L.W., 1984. Nuclear Weapons and Their Effects. Brassey's Defence Publishers, London.
Military Medical Operations, 2003. Handbook of Medical Management of Radiological Casualties, AFRRI Special Publication 03-1, Bethesda, MD. http://www.afrri.usuhs.mil/outreach/pdf/2edmmrchandbook.pdf (accessed November 2009).
Office of Technology Assessment (OTA), Congress of the United States, 1980. The Effects of Nuclear War. Croom Helm, London.
Ohkita, T., 1975. Acute effects: Review of thirty years' study of Hiroshima and Nagasaki atomic bomb survivors. Journal of Radiation Research 16 (Suppl.), 49–66.
Oughterson, A.W., Warren, S. (Eds.), 1956. Medical Effects of the Atomic Bomb in Japan. McGraw-Hill, New York.
Ring, J.P., 2004. Radiation risks and dirty bombs. Health Physics 86 (Suppl.), S42–S47.
Toon, O.B., Robock, A., Turco, R.P., 2008. Environmental consequences of nuclear war. Physics Today 61 (12), 37–42.
The Royal Society, 2001. The Health Hazards of Depleted Uranium Munitions, Part I. The Royal Society, London.
The Royal Society, 2002. The Health Hazards of Depleted Uranium Munitions, Part II. The Royal Society, London.
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), 1988. Sources, Effects and Risks of Ionizing Radiation. United Nations, New York.
Warner, F., Kirchman, R.J.C. (Eds.), 2000. Nuclear Test Explosions: Environmental and Human Impacts. SCOPE 59. Wiley, Chichester.
Westing, A.H. (Ed.), 1977. Weapons of Mass Destruction and the Environment. Taylor and Francis Ltd., London.
WHO, 2001. Depleted Uranium: Sources, Exposure and Health Effects. WHO, Geneva.

Relevant Websites

http://www.icrp.org/about.asp - ICRP.
http://www.cancer.gov/i131 - Iodine-131 in fallout.
http://www.nci.org/nci-nt.htm - Nuclear terrorism.
http://www.johnstonsarchive.net/nuclear/index.html - Nuclear weapons effects: an overview; Nuclear weapons effects: some data.
http://www.epa.gov/radiation/understand/health_effects.html - Radiation Protection: Health Effects.
http://www.atomicarchive.com/Effects/effects1.shtml - The Effects of Nuclear Weapons.

Environmental Cancers: Environmental Lung Cancer Epidemiology
Hisamitsu Omori, Ayumi Onoue, and Takahiko Katoh, Kumamoto University, Kumamoto, Japan
© 2019 Elsevier B.V. All rights reserved.

Introduction

Cancer is a major cause of morbidity and mortality: up to half of all men and a third of all women in developed countries can expect to develop an invasive cancer during their lifetime. More than 10 million people worldwide are diagnosed with cancer each year; the number of new cases was expected to reach 15 million per year by 2020, and the global cancer burden is estimated to have risen to 18.1 million new cases and 9.6 million deaths in 2018. Lung cancer is a leading cause of death in both men and women and is the leading cause of cancer death in women in 28 countries. Cancer mortality in developed countries is twice that of developing countries, a difference attributed to the impact of tobacco, diet, environment, and lifestyle.

Lifestyle-related factors, screening, and aging cannot fully account for the present overall growing incidence of cancer in high-income countries; numerous cancers may in fact be caused by the recent modification of our environment. Many carcinogens are present in the air we breathe, the food we eat, and the water we drink, and exposure to these ubiquitous environmental carcinogens is unavoidable. Many cancer types are induced by multiple and diverse exogenous environmental carcinogens, and environmental pollution has been linked to various cancers. During the early 1970s, a dramatic increase in lung cancer was evident in the urban areas of Japan, similar to that seen in the United States during the late 1940s.

Most of the > 1.4 million lung cancer deaths that occur annually worldwide are caused by tobacco smoking. In the United States, factors other than tobacco smoking are estimated to account for 10%–15% of all lung cancer deaths. Radon and its decay products are thought to be a significant risk factor for lung cancer after tobacco smoking, and occupational exposure to many agents is known to cause lung cancer, accounting for about 5%–20% of all lung cancers. Thus, environmental factors have an important role in lung cancer. This updated article outlines the epidemiological research on environmental risk factors and cancer, with a particular focus on lung cancer.

Environmental Factors and Risk of Lung Cancer

Environmental factors associated with risk of lung cancer are summarized in Table 1. Environmental pollution has been linked to lung cancer, which is affected by outdoor air pollution, indoor air pollution, and occupational exposures. The main outdoor air pollutants include suspended particulates (particulate matter < 10 μm in aerodynamic diameter (PM10) and < 2.5 μm (PM2.5)), carbon particulates associated with polycyclic aromatic hydrocarbons (PAHs), nitrogen dioxide (NO2) derived from transport and industrial sources, and the radioactive gas radon and its decay products. The main indoor air pollutants include environmental tobacco smoke (ETS), radon and its decay products, formaldehyde, cooking oil vapors, coal burning, and fungal spores. The workplace and housing environments are closely associated with lung cancer, and lifestyle factors such as diet contribute to varying extents to the risk of lung cancer.

Table 1    Environmental factors associated with risk of lung cancer

Outdoor air pollution and risk of lung cancer
  Ambient particulate matter (PM)
  Polycyclic aromatic hydrocarbons (PAHs)
  Nitrogen dioxide (NO2)
  Radioactive gas, radon and its decay products

Indoor air pollution and risk of lung cancer
  Environmental tobacco smoke (ETS)
  Radioactive gas, radon and its decay products
  Formaldehyde
  Cooking emissions
  Coal burning

Change History: April 2019. Hisamitsu Omori updated the text and references. This is an update of H. Omori, T. Katoh, Environmental Lung Cancer Epidemiology. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health. Elsevier, 2011, pp. 471–475.


Outdoor Air Pollution and Risk of Lung Cancer

Ambient Particulate Matter

Recent worldwide efforts in epidemiological and biological research on the health effects of particulates have revealed a variety of influences on respiratory diseases such as asthma and chronic obstructive pulmonary disease (COPD), as well as on lung cancer and cardiovascular disease. Ultrafine particles or nanoparticles (particles with one dimension < 100 nm) are one component of respirable particulate air pollution. PM10 and PM2.5 denote particles measuring < 10 μm and < 2.5 μm in aerodynamic diameter, respectively. Particulates can carry carcinogenic materials into the lung, and polluted air contains carcinogenic particulate matter in the PM2.5 fraction. PM2.5 has gained particular attention in recent years as a causative factor in the increased incidence of respiratory diseases, including lung cancer.

Epidemiologists have attempted to monitor the relationship between air pollution and lung cancer, and several US cohort studies have observed increased mortality rates associated with PM. The Adventist Health Study of Smog, using data from a 15-year follow-up of 6338 nonsmoking Californians, observed that mean PM10 concentrations were associated with increased lung cancer mortality in men and women. Large cohorts and extended follow-up provide an unprecedented opportunity to evaluate associations between long-term particulate matter (PM) exposure and lung cancer risk. The American Cancer Society Cancer Prevention Study II concluded that long-term exposure to combustion-related fine particulate air pollution is an important environmental risk factor for cardiopulmonary and lung cancer mortality. The associations between fine particulate air pollution and lung cancer mortality, as well as cardiopulmonary mortality, were observed even after controlling for cigarette smoking, BMI, diet, occupational exposure, other individual risk factors, and regional and other spatial differences. In this study, each 10 μg/m³ elevation in long-term average PM2.5 ambient concentration was associated with approximately a 4%, 6%, and 8% increased risk of all-cause, cardiopulmonary, and lung cancer mortality, respectively.

Extensive epidemiological research on the health effects of air pollution has found an association of increased fine particulate air pollution (PM2.5) with acute and chronic mortality. In the Harvard Six Cities adult cohort study, monitored for 14–16 years during the 1970s and 1980s with an extended mortality follow-up of 8 years in a period of reduced air pollution concentrations, lung cancer mortality was positively associated with ambient PM2.5 concentrations. The study additionally demonstrated that reduced PM2.5 concentrations were associated with reduced mortality risk.

According to a comprehensive review of all the studies providing data on ambient PM and cancer risk in Europe, European epidemiological studies on particulate matter (PM) and lung cancer do not show a clear association, but uncertainties remain in the measurement of exposure and latency; most European studies provided data on PM10 only. According to the European Study of Cohorts for Air Pollution Effects (ESCAPE), 17 cohort studies based in nine European countries showed an association between exposure to PM and incidence of lung cancer, in particular adenocarcinoma, in Europe.
The meta-analyses showed a statistically significant association between PM10 and PM2.5 and adenocarcinoma of the lung (hazard ratio [HR] 1.51 (1.10–2.08) and 1.55 (1.05–2.29), respectively). A recent systematic review and meta-analysis examined the relationship of exposure to PM2.5 and PM10 with lung cancer incidence and mortality; meta-estimates for adenocarcinoma associated with PM2.5 and PM10 were 1.40 (95% CI: 1.07, 1.83) and 1.29 (95% CI: 1.02, 1.63), respectively. The World Health Organization International Agency for Research on Cancer (IARC) concluded that exposure to outdoor air pollution and to particulate matter (PM) in outdoor air is carcinogenic to humans (Group 1) and causes lung cancer. A large, health-conscious cohort, consisting mainly of never smokers who lived in areas with relatively low concentrations of ambient PM2.5, showed increased estimates of incident lung cancer associated with each 10 μg/m³ increment in ambient PM2.5; increased risk of lung adenocarcinoma was also observed for each 10 μg/m³ increment in ambient PM2.5 concentrations. IARC has categorized diesel exhaust as a Group 1 carcinogen. Automobile exhausts contribute more than a thousand times the concentration of environmental carcinogens contributed by tobacco smoke. With regard to COPD, chronic exposure to high levels of ambient particulate matter (mean levels of PM2.5) is associated with small airway remodeling of the human lung.
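Hazard ratios reported per 10 μg/m³, like those above, are conventionally rescaled to other exposure increments under a log-linear exposure-response assumption. An illustrative sketch using the approximately 8% lung cancer mortality figure quoted earlier (a convention, not data from any single study):

```python
def rescale_hr(hr_per_10: float, increment_ug_m3: float) -> float:
    """HR for an arbitrary PM increment, given the HR per 10 ug/m3,
    under a log-linear exposure-response model."""
    return hr_per_10 ** (increment_ug_m3 / 10.0)

print(rescale_hr(1.08, 5.0))   # +5 ug/m3  -> ~1.04
print(rescale_hr(1.08, 20.0))  # +20 ug/m3 -> ~1.17
```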

Polycyclic Aromatic Hydrocarbons

Numerous outdoor air pollutants such as polycyclic aromatic hydrocarbons (PAHs) increase the risk of cancers, especially lung cancer. PAHs result from the combustion of organic substances; they are produced by incomplete combustion of fuel and are found in tobacco smoke as well as in factory smoke, waste incinerator emissions, and vehicle exhaust. PAHs can adhere to fine carbon particulates (PM2.5) suspended in the atmosphere. Particulates accumulate near the ground, at the level we breathe, and penetrate our bodies primarily through breathing. Long-term exposure to PAH-containing air in polluted cities was found to increase the risk of death due to lung cancer by 8%, after controlling for tobacco smoking. PAHs of high molecular weight (5–7 rings) form DNA adducts and so are mutagenic, whereas PAHs of low molecular weight (3–4 rings) are nongenotoxic promoters.


Aside from PAHs and other fine carbon particles, another environmental pollutant, nitric oxide, was found to increase the risk of lung cancer in a European population of nonsmokers. Other studies have shown that nitric oxide can induce lung cancer and promote metastasis.

Nitrogen Dioxide (NO2)

Nitrogen dioxide (NO2) serves as a marker of the mixture of particles and gases related to traffic (vehicle exhaust), power plants, and/or waste incinerator emissions. Traffic-related NO2 at the residential address was associated with an increased risk of lung cancer. A recent systematic review and meta-analysis found evidence of a relationship between NO2, as a proxy for traffic-sourced air pollution exposure, and lung cancer.

Radioactive Gas, Radon and Its Decay Products

In most countries, radon is the dominant contributor among natural radiation sources to the exposure of the general population. Radon and its decay products are detected in ambient air, particularly in urban areas. Although radon is chemically inert and electrically uncharged, radon atoms in air spontaneously decay into other atoms. The resulting atoms, the radon progeny, are electrically charged and can attach themselves to tiny dust particles in indoor air. These dust particles can easily be inhaled and deposited on the lining of the lung, where the deposited atoms decay by emitting α-particles that damage the DNA of lung cells. This DNA damage can increase the risk of lung cancer, and the radon progeny are now a well-recognized cause of lung cancer. Many studies of underground miners exposed to radioactive radon and its decay products have consistently demonstrated that exposure increases the risk of lung cancer. The IARC monograph updated in 2004 included radon and its decay products in Group 1. A recent joint cohort analysis of Czech, French, and Canadian uranium miners provided strong evidence for an increased risk of lung cancer mortality from low occupational radon exposures.

Indoor Air Pollution and Risk of Lung Cancer

Environmental Tobacco Smoke

Involuntary (passive) tobacco smoking is exposure to second-hand smoke or environmental tobacco smoke (ETS), a mixture of exhaled mainstream smoke and sidestream smoke released from the smoldering cigarette or other smoking device (cigar, pipe, bidi, etc.) and diluted with ambient air. ETS contains numerous inhalable carcinogens and toxic substances. At least 17 carcinogenic chemicals contained in tobacco smoke, such as benzene, 1,3-butadiene, benzo(a)pyrene, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone, and many others, are emitted at higher levels in sidestream smoke than in mainstream smoke. One of the metabolites of tobacco smoke, benzo(a)pyrene diol epoxide, which shows a direct etiological association with lung cancer, is found in both mainstream and sidestream smoke.

Since the 1980s, epidemiological studies have attempted to establish a link between ETS exposure and lung cancer development, and ETS, besides active smoking, has been linked to an excess risk of lung cancer. A review of 37 studies yielded a pooled relative risk of lung cancer of 1.24 (95% CI: 1.13–1.36). According to a meta-analysis of ETS and lung cancer based on 55 studies (7 cohort studies, 25 population-based case-control studies, and 23 non-population-based case-control studies), the pooled relative risk (RR) for never-smoking women exposed to ETS from spouses is 1.27 (95% CI: 1.17–1.37); the RR is 1.15 (95% CI: 1.03–1.28) for North America, 1.31 (95% CI: 1.16–1.48) for Asia, and 1.31 (95% CI: 1.24–1.52) for Europe. A population-based prospective study conducted in Japan demonstrated that passive smoking from husbands is a risk factor for lung cancer, especially adenocarcinoma, among Japanese women. Compared with women married to never smokers, the hazard ratio (HR) for all lung cancer incidence in women who lived with a smoking husband was 1.34 (95% CI: 0.81–2.21). In that study, the association was clearly identified for adenocarcinoma (HR 2.03, 95% CI: 1.07–3.86), for which dose–response relationships were seen for both the intensity (p for trend = 0.02) and the amount (p for trend = 0.03) of the husband's smoking. A meta-analysis of data from epidemiological studies in Japanese populations showed that the pooled relative risk of lung cancer associated with ETS was 1.28 (95% CI: 1.10–1.48). ETS in the home during adulthood results in a statistically significant increase in the risk of lung cancer. ETS at the workplace also increased the risk of lung cancer (HR 1.32, 95% CI: 0.85–2.04), and published meta-analyses of lung cancer in never smokers exposed to ETS at the workplace have found a statistically significant increase in risk of 12%–19%. Moreover, a higher risk of adenocarcinoma was seen for combined husband and workplace exposure (HR 1.93, 95% CI: 0.88–4.23). The risk was higher among former smokers (those who had stopped for at least 10 years) than among never smokers, which could indicate a greater susceptibility of former smokers due to already existing mutations. All individuals exposed to ETS have a higher risk of lung cancer. An epidemiological study suggested that subjects first exposed before age 25 have a higher lung cancer risk than those first exposed after age 25 years. Children exposed to ETS also carry an increased risk of lung cancer in adulthood.
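Pooled relative risks such as those quoted above are typically obtained by inverse-variance weighting of the individual study estimates on the log scale (fixed- or random-effects). The following sketch shows the standard fixed-effect calculation; the two input studies and their values are hypothetical, chosen only to illustrate the mechanics.

import math

def pool_fixed_effect(estimates):
    """Fixed-effect inverse-variance pooling of relative risks.
    `estimates` is a list of (rr, ci_low, ci_high) tuples; the standard
    error of log(RR) is recovered from the 95% CI width."""
    num, den = 0.0, 0.0
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2                      # inverse-variance weight
        num += w * math.log(rr)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Hypothetical study results (RR, 95% CI), for illustration only
studies = [(1.19, 1.04, 1.36), (1.31, 1.07, 1.60)]
print(pool_fixed_effect(studies))  # pooled RR with its 95% CI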


According to the results of a European study, the proportion of lung cancers attributable to traffic-related air pollution and to environmental tobacco smoke (ETS) in never- and former smokers was estimated to be 5%–7% and 16%–24%, respectively. The evidence, based on summaries of data from more than 50 investigations of involuntary tobacco smoking published during the last 25 years, is sufficient to conclude that involuntary tobacco smoking is a cause of lung cancer in never smokers. The World Health Organization International Agency for Research on Cancer (IARC) monograph updated in 2004 included involuntary smoking (exposure to secondhand or "environmental" tobacco smoke) in Group 1 (carcinogenic to humans).
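Attributable proportions of this kind are conventionally derived from the relative risk and the exposure prevalence via the standard population attributable fraction (Levin's formula); it is reproduced below only to make the derivation explicit, with illustrative numbers.

\[
\mathrm{PAF} \;=\; \frac{p\,(\mathrm{RR}-1)}{1 + p\,(\mathrm{RR}-1)},
\]

where \(p\) is the prevalence of exposure in the population and \(\mathrm{RR}\) is the relative risk in the exposed. For example, a relative risk of 1.3 with 25% of the population exposed gives \(\mathrm{PAF} = (0.25 \times 0.3)/(1 + 0.25 \times 0.3) \approx 7\%\), of the same order as the traffic-related estimate quoted above.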

Radioactive Gas, Radon and Its Decay Products

The naturally occurring radioactive gas radon, together with its decay products, is the second most significant risk factor for lung cancer after tobacco smoking. High levels of radon were first identified in uranium mines, but it has more recently been established that significant levels occur in the built environment. Radon enters homes mainly from the soil, through cracks in the foundation and other openings to the geologic deposits beneath these structures; indoor radon also comes from building materials and from groundwater drawn from drilled wells. Once inside the home, the gas decays (half-life 3.8 days), and the ionized decay products adsorb to dust particles and are inhaled. These particles lodge in the lung and can cause lung cancer. The possibility that a demonstrated lung carcinogen may be present in large numbers of homes raises a serious public health concern. However, because of differences between working in underground mines and living in houses, risk estimates are subject to major uncertainties, and the potential hazard from indoor radon has been addressed only indirectly, through miner studies and experimental animal studies. Some studies report positive or weakly positive findings, while others report no risk, and most residential case-control studies did not show a significant risk. A meta-analysis of eight epidemiological studies from five countries (each enrolling at least 200 case subjects and measuring radon concentrations in houses) found an estimated RR of 1.14 (95% CI: 1.0–1.3) at 150 Bq/m³. This study suggested that the risk from indoor radon is not likely to be markedly greater than that predicted from miner studies, and indicated that the negative exposure-response reported in some ecologic studies is likely due to model misspecification or uncontrolled confounding and can be rejected. The results of a combined analysis of North American case-control studies of residential radon and lung cancer provided direct evidence of an association between residential radon and lung cancer risk, a finding predicted by extrapolation of results from occupational studies of radon-exposed underground miners.

Smoking is a strong cause of lung cancer because it plays a causal role in a large proportion of cases. In contrast, exposure to ambient radon gas is a weaker cause because it has a causal role in a much smaller proportion of lung cancer cases. If society eventually succeeds in eliminating tobacco smoking, with a consequent reduction in smoking-related cases of lung cancer, a much larger proportion of the lung cancer cases that continue to occur will be caused by exposure to radon gas; it would then appear that eliminating smoking had strengthened the causal effect of radon gas on lung cancer. While the risk from smoking is considerably larger, radon exposure may be a significant risk factor for the nonsmoker. A recent simulation study of radon and thoron suggested that in certain circumstances the radon-related lung cancer risk obtained in past epidemiological studies was underestimated, at as little as one-tenth of the true risk.
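Residential radon risks such as the RR of 1.14 at 150 Bq/m³ quoted above are usually summarized with a linear excess-relative-risk model, RR(C) = 1 + βC. A minimal sketch of that interpolation follows; the slope is back-calculated from the quoted meta-analytic estimate and is purely illustrative, not a recommended dosimetric value.

def radon_rr(conc_bq_m3, rr_ref=1.14, conc_ref=150.0):
    """Linear excess-relative-risk model RR(C) = 1 + beta*C, with the
    slope beta back-calculated from a reference point, here the
    meta-analytic RR of 1.14 at 150 Bq/m3 quoted in the text."""
    beta = (rr_ref - 1.0) / conc_ref   # ~0.00093 per Bq/m3
    return 1.0 + beta * conc_bq_m3

print(radon_rr(100))  # ~1.09 at 100 Bq/m3
print(radon_rr(400))  # ~1.37 at 400 Bq/m3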

Formaldehyde

Formaldehyde (FA) is a high-production-volume chemical used for disinfection purposes and as a preservative. FA is used in the production of plastics, coatings and paints, flooring materials, and wood products, in textile finishing, and in the synthesis of chemicals, and it is a component of combustion products. FA is also a major compound derived from ozone-initiated reactions with alkenes. Because of its ubiquitous use, FA is a common indoor air pollutant. FA is genotoxic, causing DNA adduct formation, and has a clastogenic effect. An earlier IARC evaluation classified formaldehyde as "carcinogenic to humans" (Group 1), concluding that there is sufficient evidence that formaldehyde causes nasopharyngeal cancer, but limited evidence for sinonasal cancer and insufficient evidence for leukemia. A comprehensive qualitative and quantitative review of cancer risk in six cohorts of industry workers and professionals exposed to formaldehyde showed no appreciable excess risk for cancers of the oral cavity and pharynx, sinus and nasal cavity, or lung. In 2010, the World Health Organization (WHO) established an indoor air quality guideline for short- and long-term exposures to formaldehyde; this guideline was supported by studies from 2010 to 2013. A more recent update by IARC (2012) again classified FA as "carcinogenic to humans" (Group 1), on the basis that FA may cause cancer of the nasopharynx and leukemia, whereas the evidence for an association with sinonasal cancer remained limited. A consistent finding is the occurrence of nasal cancer in rats and mice at high FA exposures. A recent re-evaluation of the WHO (2010) formaldehyde indoor air quality guideline for cancer risk assessment demonstrated that the credibility of the WHO (2010) guideline has not been challenged by new studies since 2013. Overall, there is no consistent association between FA exposure and lung cancer.

Cooking Emissions

An increased risk of lung cancer in nonsmoking women has been related to exposure to smoke from wood and straw. Smoke from wood combustion includes various chemical compounds, such as polycyclic aromatic hydrocarbons (PAHs) and acidic or polar substances, that are carcinogenic. Products of incomplete combustion contain respirable particulates and many volatile and nonvolatile organic compounds, including carcinogens such as benzo[a]pyrene, formaldehyde, and benzene. The International Agency for Research on Cancer (IARC) has reported that indoor emissions from household combustion of biomass fuel (mainly wood) are probably carcinogenic to humans (Group 2A). Cooking oil vapors associated with high-temperature wok cooking, and indoor coal burning for heating and cooking in unvented homes, are further sources of exposure, particularly in rural areas. Large epidemiological studies from mainland China and Taiwan have shown that exposure to cooking oil fumes from high-temperature (wok) cooking without adequate fume extraction is a significant risk factor for lung cancer in nonsmoking Chinese housewives. Extracts of fumes from safflower oil, vegetable oil, and corn oil were found to contain benzo(a)pyrene, benzo(a)anthracene, and dibenz(a,h)anthracene. Exposure to cooking emissions, in particular those generated by Chinese-style frying, has drawn increasing concern as a potential risk factor for lung cancer over recent decades, and a significant increase in the risk of lung cancer has been associated with moderate or high frequency of frying. IARC recently classified emissions from high-temperature frying as "probably carcinogenic to humans" (Group 2A). Around 3 billion people cook using polluting open fires or simple stoves fueled by kerosene, biomass (wood, animal dung, and crop waste), or coal. Approximately 17% of premature lung cancer deaths in adults are attributable to exposure to carcinogens from household air pollution caused by cooking with kerosene or solid fuels such as wood, charcoal, or coal. The risk for women is higher, owing to their role in food preparation.

Coal Burning

Indoor burning of smoky coal for heating and cooking in unvented homes is common in some rural areas of China, and more than 70% of Chinese households use solid fuels, such as wood, crop residues, and coal, for heating and cooking. Incomplete combustion of coal results in the emission of Group 1 carcinogenic polycyclic aromatic hydrocarbons (PAHs) such as benzo(a)pyrene, benzo(a)anthracene, and benzofluoranthene. A significant correlation has been observed between indoor air benzo(a)pyrene concentrations and high lung cancer mortality rates. Tobacco smoking and indoor air pollution from solid-fuel use are the most important global risk factors for COPD and lung cancer and account for a significant proportion of deaths from these diseases in developing countries.

Fungus Spores

The fungus Microsporum canis may be involved in the high incidence of lung cancer among northern Thai women.

Diet and Risk of Lung Cancer

Epidemiological studies have suggested that a higher intake of fruits and vegetables is protective against lung cancer, while dietary fat, particularly saturated fat, is associated with an increased risk of lung cancer. Cruciferous vegetables, including broccoli, cabbage, and radish, are a rich source of isothiocyanates, which are known for their protective effect against cancer. A systematic review suggested that a higher intake of cruciferous vegetables was modestly inversely associated with lung cancer risk. A large prospective cohort study and updated meta-analysis showed an inverse association between cruciferous vegetable consumption and the risk of female lung cancer, with a stronger association among never smokers. A large-scale population-based prospective study in Japan demonstrated that cruciferous vegetable intake may be associated with a reduction in lung cancer risk among men who are currently nonsmokers.

See also: Environmental Epidemiology; Short-Term Effects of Air Pollution on Health; Short-Term Effects of Particulate Air Pollution on Human Health.

Further Reading

Anand, P., Kunnumakara, A.B., Sundaram, C., et al., 2008. Cancer is a preventable disease that requires major lifestyle changes. Pharmaceutical Research 25, 2097–2116.
Belpomme, D., Irigaray, P., Hardell, L., et al., 2007. The multitude and diversity of environmental carcinogens. Environmental Research 105, 414–429.
Bosetti, C., McLaughlin, J.K., Tarone, R.E., et al., 2008. Formaldehyde and cancer risk: A quantitative review of cohort studies through 2007. Annals of Oncology 19, 29–43.
Darby, S., Hill, D., Auvinen, A., et al., 2005. Radon in homes and risk of lung cancer: Collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 330, 223–227.
Doi, K., Tokonami, S., Yonehara, H., et al., 2009. A simulation study of the radon and thoron discrimination problem in case-control studies. Journal of Radiation Research 50, 495–506.
Enomoto, M., Tierney, W.J., Nozaki, K., 2008. Risk of human health by particulate matter as a source of air pollution – Comparison with tobacco smoking. The Journal of Toxicological Sciences 33, 251–267.
Gallus, S., Negri, E., Boffetta, P., et al., 2008. European studies on long-term exposure to ambient particulate matter and lung cancer. European Journal of Cancer Prevention 17 (3), 191–194.
Gharibvand, L., Shavlik, D., Ghamsary, M., et al., 2017a. The association between ambient fine particulate air pollution and lung cancer incidence: Results from the AHSMOG-2 study. Environmental Health Perspectives 125, 378–384.
Gharibvand, L., Beeson, L., Shavlik, D., et al., 2017b. The association between ambient fine particulate matter and incident adenocarcinoma subtype of lung cancer. Environmental Health 16, 71–79.
Hamra, G.B., Guha, N., Cohen, A.J., et al., 2014. Outdoor particulate matter exposure and lung cancer: A systematic review and meta-analysis. Environmental Health Perspectives 122, 906–911.
Hamra, G.B., Laden, F., Cohen, A.J., et al., 2015. Lung cancer and exposure to nitrogen dioxide and traffic: A systematic review and meta-analysis. Environmental Health Perspectives 123, 1107–1112.
Hori, M., Tanaka, H., Wakai, K., et al., 2016. Secondhand smoke exposure and risk of lung cancer in Japan: A systematic review and meta-analysis of epidemiologic studies. Japanese Journal of Clinical Oncology 46 (10), 942–951.
Krewski, D., Lubin, J.H., Zielinski, J.M., et al., 2005. Residential radon and risk of lung cancer: A combined analysis of 7 North American case-control studies. Epidemiology 16, 137–145.
Krewski, D., Lubin, J.H., Zielinski, J.M., et al., 2006. A combined analysis of North American case-control studies of residential radon and lung cancer. Journal of Toxicology & Environmental Health Part A: Current Issues 69, 533–597.
Laden, F., Schwartz, J., Speizer, F.E., Dockery, D.W., 2006. Reduction in fine particulate air pollution and mortality: Extended follow-up of the Harvard six cities study. American Journal of Respiratory and Critical Care Medicine 173, 667–672.
Lam, T.K., Gallicchio, L., Lindsley, K., et al., 2009. Cruciferous vegetable consumption and lung cancer risk: A systematic review. Cancer Epidemiology, Biomarkers & Prevention 18 (1), 184–195.
Lam, W.K., 2005. Lung cancer in Asian women – The environment and genes. Respirology 10, 408–417.
Lam, W.K., White, N.W., Chan-Yeung, M.M., 2004. Lung cancer epidemiology and risk factors in Asia and Africa. The International Journal of Tuberculosis and Lung Disease 8 (9), 1045–1057.
Lane, R.S., Tomasek, L., Zablotska, L.B., et al., 2019. Low radon exposures and lung cancer risk: Joint analysis of the Czech, French, and Beaverlodge cohorts of uranium miners. International Archives of Occupational and Environmental Health. https://doi.org/10.1007/s00420-019-01411-w.
Lin, H.-H., Murray, M., Cohen, T., et al., 2008. Effects of smoking and solid-fuel use on COPD, lung cancer, and tuberculosis in China: A time-based, multiple risk factor, modeling study. Lancet 372, 1473–1483.
Lubin, J.H., Boice, J.D., 1997. Lung cancer risk from residential radon: Meta-analysis of eight epidemiologic studies. Journal of the National Cancer Institute 89, 49–57.
Mori, N., Shimazu, T., Sasazuki, S., et al., 2017. Cruciferous vegetable intake is inversely associated with lung cancer risk among current nonsmoking men in the Japan Public Health Center (JPHC) study. The Journal of Nutrition 147, 841–849.
Nielsen, G.D., Larsen, S.T., Wolkoff, P., 2017. Re-evaluation of the WHO (2010) formaldehyde indoor air quality guideline for cancer risk assessment. Archives of Toxicology 91, 35–61.
Pope III, C.A., Burnett, R.T., Thun, M.J., et al., 2002. Lung cancer, cardiopulmonary mortality, and long-term exposure to fine particulate air pollution. Journal of the American Medical Association 287, 1132–1141.
Raaschou-Nielsen, O., Andersen, Z.J., Beelen, R., et al., 2013. Air pollution and lung cancer incidence in 17 European cohorts: Prospective analyses from the European Study of Cohorts for Air Pollution Effects (ESCAPE). The Lancet Oncology 14, 813–822.
Rajendra, K.C., Shukla, S.D., Gautam, S.S., et al., 2018. The role of environmental exposure to non-cigarette smoke in lung disease. Clinical and Translational Medicine 7, 39–50.
Rothman, K.J., 2002. Epidemiology: An introduction. Oxford University Press, New York.
Thun, M.J., Hannan, L.M., Adams-Campbell, L.L., et al., 2008. Lung cancer occurrence in never-smokers: An analysis of 13 cohorts and 22 cancer registry studies. PLoS Medicine 5 (9), e185.
United Nations Scientific Committee on the Effects of Atomic Radiation, 2000. Sources and effects of ionizing radiation. UNSCEAR 2000 report to the General Assembly, with annexes. United Nations, New York.
Vineis, P., Airoldi, L., Veglia, F., et al., 2005. Environmental tobacco smoke and risk of respiratory cancer and chronic obstructive pulmonary disease in former smokers and never smokers in the EPIC prospective study. British Medical Journal 330, 277–281.
Vineis, P., Hoek, G., Krzyzanowski, M., et al., 2006. Air pollution and risk of lung cancer in a prospective study in Europe. International Journal of Cancer 119, 169–174.
World Health Organization, 2018. Household air pollution and health. WHO, Geneva. http://www.who.int/news-room/fact-sheets/detail/household-air-pollution-and-health (accessed 10 March 2019).
World Health Organization International Agency for Research on Cancer, 2004. Tobacco smoke and involuntary smoking. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans. World Health Organization, International Agency for Research on Cancer, Geneva.
World Health Organization International Agency for Research on Cancer, 2018. Latest global cancer data: Cancer burden rises to 18.1 million new cases and 9.6 million cancer deaths in 2018. World Health Organization, International Agency for Research on Cancer, Geneva.
Wu, Q.J., Xie, L., Zheng, W., et al., 2013. Cruciferous vegetable consumption and the risk of female lung cancer: A prospective study and a meta-analysis. Annals of Oncology 24, 1918–1924.
Yarnell, J., 2007. Epidemiology and prevention: A system-based approach. Oxford University Press, New York.

Environmental Carcinogens and Regulation

RJ Preston, US Environmental Protection Agency, Research Triangle Park, NC, United States

© 2011 Elsevier B.V. All rights reserved.

Abbreviations

BBDR biologically-based dose-response (model)
CPSC Consumer Product Safety Commission
EPA US Environmental Protection Agency
FDA Food and Drug Administration
FIFRA Federal Insecticide, Fungicide and Rodenticide Act
HRF human relevance framework
IARC International Agency for Research on Cancer
ILSI International Life Sciences Institute
IPCS International Program for Chemical Safety
LOAEL lowest observable adverse effect level
MOA mode of action
MOE margin of exposure
OECD Organization for Economic Co-operation and Development
OMB Office of Management and Budget
OSHA Occupational Safety and Health Administration
PBPK physiologically-based pharmacokinetic (model)
POD point of departure
RfC reference concentration
RfD reference dose
REACH Registration, Evaluation, Authorization and Restriction of Chemical Substances
TSCA Toxic Substances Control Act
USDA US Department of Agriculture

Introduction

The history of chemical carcinogenesis, involving both environmental and occupational exposures, covers 1000 or more years. In contrast, the regulation of such exposures has a much shorter history, covering approximately 100 years in one form or another, with the more rigorous process currently employed having been initiated approximately 50 years ago. It is perhaps surprising that even though there were very clear and severe adverse health outcomes associated with exposure to specific chemicals or mixtures, the approach used to reduce the level of such outcomes was avoidance of exposure rather than reduction in exposure levels themselves. The path to a formal regulatory framework has been a difficult one, inevitably involving the debate on just what is 'a safe level of exposure' and what is 'an acceptable risk.' This article will provide some discussion of how regulatory standards that are protective against risk are currently set, and how they might be established in the future. Such a discussion is built on the premise that, at least for cancer, regulatory decisions (within the US Environmental Protection Agency (EPA) and other international organizations) are based largely on quantitative risk assessment.

The recent US EPA Guidelines for Carcinogen Risk Assessment (2005) present a significant departure from previous cancer risk assessment guidelines by placing a much greater reliance on the use of mechanistic data, especially in support of dose–response characterization. This decision is especially timely given our ability to address the underlying mechanisms of disease processes, together with the opportunity to better address the impact of environmental chemicals on normal cellular processes. This can be accomplished now by using the ever-increasing repertoire of molecular techniques, including whole-genome analysis at the DNA, RNA, and protein levels. The issue of the regulation of environmental chemicals is beginning to take on a quite different texture. This article will concentrate on the regulation of environmental carcinogens and the attendant risk assessment framework analysis.

Regulation of Carcinogens

The procedures for regulating chemical carcinogens are relatively complex, since they involve a number of different regulatory bodies within any one country or political entity (e.g., the European Union), depending on the particular use of the chemical. A brief summary of the regulatory process in the United States, the European Union, and Japan is provided here to allow for comparisons among these processes. It is important to note that regulations for chemical carcinogens, as with any regulations, are subject to change, and so any discussion can only capture the situation at a moment in time. That said, it is probably reasonable to state that regulations governing chemical carcinogen exposures are less subject to change than those for noncarcinogenic chemicals, to a large extent because of the long history of regulating carcinogens.

In the United States, the two Federal agencies with the greatest involvement in the protection of human health from carcinogen exposures are the EPA and the Occupational Safety and Health Administration (OSHA). The EPA has primary responsibility for protection measures associated with environmental exposures, and OSHA for occupational exposures. The regulatory functions are carried out by 10 Regional Offices in both EPA and OSHA. Other Federal agencies are involved in the regulation of chemicals but are unlikely to be involved in the regulation of carcinogens because of the nature of the products that they regulate. For example, the Food and Drug Administration (FDA) has control over food additives, drugs, and cosmetics; the Consumer Product Safety Commission (CPSC) regulates the safety of all consumer products; and the US Department of Agriculture (USDA) maintains control over chemicals in food.

A summary of the process for the development of regulations might be helpful at this point. The process is initiated when the US Congress passes a bill describing what Congress wants regulated, the general method to be used, and the ultimate results expected. When the President signs the bill, it becomes a law or act, and it is then the responsibility of a designated agency to prepare and enact regulations that meet the requirements of the act. Draft regulations are first prepared and subjected to internal review. They are then issued in the Federal Register, where they are available for comment, and the draft regulations are revised accordingly. The Office of Management and Budget (OMB) reviews the draft for financial impact on the affected industries. A final version, developed taking into account comments received from all involved parties, is issued in the Federal Register. The Unified Agenda (or Semiannual Regulatory Agenda) is published twice a year in the Federal Register and summarizes the rules and proposed rules that each Federal agency expects to issue during the next year; it is a useful source of information for tracking regulations that involve environmental carcinogens.

A number of acts passed by the US Congress cover the control of chemical exposures to humans and the environment; for the most part, these have focused on the control of single chemicals. The acts that are perhaps the most significant for environmental carcinogens are the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA), the Toxic Substances Control Act (TSCA), the Clean Air Act, the Clean Water Act, the Safe Drinking Water Act, and the Food Quality Protection Act. Each sets regulations for either specific classes of chemicals or specific media, and as such each will include the control of environmental carcinogens. Details of each act can be found readily on the US EPA website.
Very similar processes for the regulation of environmental carcinogens operate in Japan and Canada. For example, in Japan, chemical substances are regulated by the Agricultural Chemicals Control Law (pesticides), the Food Safety Law (food additives), the Fertilizers Control Law, and the Law concerning the Examination and Regulation of Manufacture, etc., of Chemical Substances (industrial chemicals). In fact, Canada has established a partnership with the United States for the assessment of chemicals that have to go through FIFRA and TSCA. The current system for regulating the production and use of chemicals in Europe is quite complex and consists of the following four key components:

1. classification and labeling;
2. restrictions on marketing (including bans);
3. data requirements for new chemicals;
4. a process to examine the safety of other chemicals.

The Existing Chemical Regulation was adopted with the intention of assessing the environmental and health risks of existing substances by setting priority lists for assessment using available data, including those submitted by industry. Chemicals on priority lists are assigned to Member States, which then produce a risk assessment and, where appropriate, risk management proposals. These, in turn, can lead to restrictions on marketing and use regulations. All chemicals introduced to the market since September 1981 have had to enter the 'new chemicals' notification procedure, which includes specific for-market testing and assessment, with testing requirements increasing with the amount of chemical manufactured. This system was deemed to be fraught with problems, and it was proposed to develop a more comprehensive and effective process. The new approach resulted in a new law, which entered into force on 1 June 2007 and deals with the Registration, Evaluation, Authorization and Restriction of Chemical Substances (REACH). The stated overall aim of REACH is 'to improve the protection of human health and the environment through the better and earlier identification of the intrinsic properties of chemical substances.' A significant departure from other regulations in Europe and other countries is that REACH gives greater responsibility to industry to manage risks from chemicals. In this regard, manufacturers and importers will be required to collect information on the overall properties of their chemical substances. All data will be housed centrally in a database run by the European Chemicals Agency (Helsinki), and all evaluations will be conducted using data housed in this database. The REACH regulation also builds in the progressive substitution of the most dangerous chemicals where suitable alternatives have been identified.

What has been presented in this section is not intended to serve as a definitive description of international regulatory processes for environmental carcinogens and other environmental chemicals, but rather to highlight two or three examples that help define the general approach used. There are not only a number of similarities with regard to the risk assessment component but also some clear differences in risk management activities. The risk assessment approaches are presented in the following section.

Risk Assessment for Environmental Carcinogens

There are two major outcomes of the cancer risk assessment process, irrespective of the specific regulatory requirements: a classification of the carcinogenic potential of a chemical to humans, and a quantitative assessment of cancer risk to humans in order to set maximal limits on environmental exposures.

Classification and Evaluation of Carcinogenicity in Humans

The general approach for evaluating whether or not a chemical has the potential to be carcinogenic in humans is based on an assessment of epidemiological data together with cancer and toxicological data from experimental animals and toxicological data from in vitro cellular assays. The overall evaluation of carcinogenicity conducted by the International Agency for Research on Cancer (IARC) is based on five categories (Groups 1, 2A, 2B, 3, and 4), as summarized in Table 1 and briefly discussed in the following text. The full IARC evaluation scheme is very informative with regard to the types of data used to assign a category and the respective weights that they carry, and the reader is encouraged to consult it on the IARC website.

Overall evaluation

The body of evidence is considered as a whole in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture, or circumstance of exposure.

Group 1. The agent (mixture) is carcinogenic to humans; the exposure circumstance entails exposures that are carcinogenic to humans. This category is used when there is sufficient evidence of carcinogenicity in humans or, exceptionally, when evidence of carcinogenicity in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.

Group 2. Agents, mixtures, and exposure circumstances are assigned to either Group 2A (probably carcinogenic to humans) or Group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and other relevant data.

Group 2A. The agent (mixture) is probably carcinogenic to humans; the exposure circumstance entails exposures that are probably carcinogenic to humans. This category is used when there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals, or when there is inadequate evidence of carcinogenicity in humans but sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent may be assigned to this category solely on the basis of limited evidence of carcinogenicity in humans.

Group 2B. The agent (mixture) is possibly carcinogenic to humans; the exposure circumstance entails exposures that are possibly carcinogenic to humans. This category is used for agents, mixtures, and exposure circumstances for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. It may also be used when there is inadequate evidence of carcinogenicity in humans but sufficient evidence of carcinogenicity in experimental animals.

Group 3. The agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans.

Table 1 IARC classification of the evaluation of carcinogenicity for human beings

Group 1. Agent is carcinogenic to humans: human data strong; animal data strong
Group 2A. Agent is probably carcinogenic to humans: human epidemiology data suggestive; animal data positive
Group 2B. Agent is possibly carcinogenic to humans: human epidemiology data weak; animal data positive
Group 3. Agent is not classifiable as to its carcinogenicity to humans: human and animal data inadequate
Group 4. Agent is probably not carcinogenic to humans: human and animal data negative

Source: Summarized from the Preamble in IARC (2004) Monographs on the Evaluation of Carcinogenic Risks to Humans, Vol. 91, Combined Estrogen–Progestogen Contraceptives and Combined Estrogen–Progestogen Menopausal Therapy. Lyon: IARC.


This category is used most commonly for agents, mixtures, and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals.

Group 4. The agent (mixture) is probably not carcinogenic to humans. This category is used for agents or mixtures for which there is evidence suggesting lack of carcinogenicity in humans and experimental animals, or for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of other relevant data.

The US EPA uses a similar evaluation process as part of its Guidelines for Carcinogen Risk Assessment, presented in summary form in Table 2. Currently, over 100 chemicals, chemical mixtures, or exposure circumstances have been classified as Group 1 carcinogens, based on there being sufficient evidence of carcinogenicity in humans. This hazard assessment is an important first step in the risk assessment process, because decisions to proceed with a quantitative risk assessment depend on whether or not a chemical is classified as a carcinogen or as likely to be carcinogenic in humans. The human relevance framework (HRF), discussed in the following text, is particularly helpful in deciding whether a chemical for which there is no epidemiological evidence, but for which there are experimental animal carcinogenicity data, is likely to be carcinogenic in humans, based on the use of mechanistic data. The evaluation of known or suspected human carcinogens is continued in selected cases to a qualitative or quantitative risk assessment. This selection is based, in part, on an exposure assessment and likely human exposure, the potential magnitude of exposure, and the number of persons likely to be impacted.

Cancer Risk Assessment Framework

Mode of action and human relevance

Cancer risk assessment practice has been moving toward a more harmonized approach with the development of a framework based on mode of action (MOA) and a human relevance framework (HRF). The framework described in Table 3 was proposed by the US EPA in its Guidelines for Carcinogen Risk Assessment and was extended to include the HRF by the International Program for Chemical Safety (IPCS) and the International Life Sciences Institute (ILSI). The framework describes the MOA for tumor formation in rodents (human tumor data are rarely available) in terms of a set of measurable key events that are required to convert a normal cell into a metastatic tumor.

It is useful here to provide definitions of the essential components of this framework; the following definitions are taken from the US EPA Guidelines for Carcinogen Risk Assessment. MOA is defined as 'a sequence of key events and processes, starting with interaction of an agent with a cell, proceeding through operational and anatomical changes, and resulting in cancer formation.' MOA is contrasted with mechanism of action, which implies a more detailed understanding and description of key events, often at the molecular level. Examples of MOAs are DNA reactivity, mitogenicity, inhibition of cell death, cytotoxicity with regenerative cell proliferation, immune suppression, and epigenetic effects such as changes in gene expression and DNA methylation patterns. A key event is an 'empirically observable precursor step that is itself a necessary element of the mode of action or is a biologically based marker for such an element.' In this regard, and for this article, a biomarker is considered to be a surrogate marker of exposure or an early biological marker of effect (e.g., mutations in reporter genes and total chromosome alterations). In contrast, a biological response that is itself a key event along the pathway from a normal cell to a transformed one is described as a bioindicator (e.g., mutation in a critical gene for cancer, or a cancer-specific chromosome translocation). In addition, this distinction is useful for

Table 2 US EPA cancer guidelines descriptors

Carcinogenic to humans
1. Strong evidence of human carcinogenicity, including convincing epidemiological evidence of a causal association between human exposure and cancer
2. The mode(s) of carcinogenic action and associated key precursor events have been identified in animals, and there is strong evidence that the key precursor events in animals are anticipated to occur in humans

Likely to be carcinogenic to humans
1. Weight of the evidence is adequate to demonstrate carcinogenic potential of an agent in animal experiments in more than one species, gender, strain, site, or exposure route, with or without evidence of carcinogenicity in humans

Suggestive evidence of carcinogenic potential
1. The weight of evidence is suggestive of carcinogenicity; a concern for potential carcinogenic effects in humans is raised, but the data are judged not sufficient for a stronger conclusion

Inadequate information to assess carcinogenic potential
1. Available data are judged inadequate for applying one of the other descriptors

Not likely to be carcinogenic to humans
1. This descriptor is appropriate when the available data are considered robust and there is no basis for human hazard concern, with evidence in both humans and animals that the agent is not carcinogenic

Source: Based on US EPA Guidelines for Carcinogen Risk Assessment as summarized in Klaunig JE and Kamendulis LM (2008) Chemical carcinogenesis. In: Klaassen CD (ed.) Casarett and Doull's Toxicology, pp. 329–379. New York: McGraw-Hill.

Table 3 US EPA mode of action framework

Mode of action criteria:
Summary description of the hypothesized mode of action
Identification of key events
Strength, consistency, specificity of association
Dose–response concordance
Temporal relationship
Biological plausibility and coherence
Consideration of the possibility of other MOAs
Is the mode of action sufficiently supported in the test animals?
Is the mode of action relevant to humans?
Which populations or life stages can be particularly susceptible to the MOA?

Source: Reproduced from US EPA Guidelines for Carcinogen Risk Assessment.

considering those cellular events that can be used only in a qualitative way for predicting tumor responses (biomarkers) and those that can serve as both qualitative and quantitative endpoints in a tumor dose–response assessment (bioindicators). A point of departure (POD) is a point on a dose–response curve at which the range of data is extended from the observable range to lower dose ranges by extrapolation. Such extrapolation can be linear by default, linear by prediction, or nonlinear by prediction (the last also including a threshold). An example of how the MOA and HRF can be used has been provided by Preston and Williams for DNA-reactive carcinogens, using aflatoxin B1 and methylene dichloride as specific examples. The key events for the production of cancer by DNA-reactive chemicals are presented in Table 4.

The aim of the process of defining an MOA for a chemical is to utilize the information for a qualitative description of the dose–response for tumors as a component of risk assessment. For the present discussion, it is necessary to consider chemicals in two separate health-related outcome categories: carcinogens and noncarcinogens. Historically, these two types of health outcomes have differed substantially with regard to assumptions about dose–response at low doses: default linear for carcinogens and nonlinear for noncarcinogens. In addition, carcinogens need to be considered in two clearly distinguished groups based on their MOA: DNA-reactive (mutagenic) and non-DNA-reactive (e.g., receptor-mediated, mitogenic, cytotoxic, and oxidative stress). The reason for this classification is that a clear distinction is made in the US EPA cancer risk assessment framework between these two broad groups: a mutagenic MOA results in a default linear extrapolation from the lowest observable adverse effect level (LOAEL) for human or rodent tumors, or other toxicological endpoints, whereas a nonmutagenic MOA can lead to a nonlinear or threshold tumor dose–response based on the available supporting information. Thus, knowledge of the MOA is very important in the overall process of regulation of carcinogens.

The ability to develop a set of key events for defining an MOA has been significantly enhanced by a greatly increased knowledge of disease processes, particularly cancer, and by some enhancement of our knowledge of how environmental chemicals can impact normal homeostatic processes to produce abnormal phenotypes. Much of this enhancement has resulted from the integration of molecular analysis with more traditional physiological and toxicological studies. In the regulatory arena, it is important to establish when and how the mechanistic information that can and will be developed can best be incorporated into risk assessments. The question frequently asked is 'how much data is sufficient to define an MOA?' The answer remains somewhat equivocal, but as the specific characteristics required for the induction of tumors by chemicals are identified, this problem should be overcome or at least more clearly articulated.

Once an MOA has been established in a rodent model (usually required in the absence of appropriate human data, including tumor data), it is necessary to determine whether this MOA is likely to occur, or in other words is plausible, in humans. A decision-tree approach has been presented by ILSI and IPCS, as noted earlier; this is shown in Figure 1. The aim is to establish if the

Table 4 Key events for tumor development: DNA-reactive MOAs

1. Exposure of target cells (e.g., stem cells) to the ultimate DNA-reactive and mutagenic species – in some cases, this requires metabolism
2. Reaction with DNA in target cells to produce DNA damage
3. Misreplication on the damaged DNA template or misrepair of DNA damage
4. Mutations in critical genes in the replicating target cell
5. These mutations result in initiation of new DNA/cell replication
6. New cell replication leads to clonal expansion of mutant cells
7. DNA replication can lead to further mutations in critical genes
8. Imbalanced and uncontrolled clonal growth of mutant cells may lead to preneoplastic lesions
9. Progression of preneoplastic cells results in emergence of overt neoplasms, solid tumors (which require neoangiogenesis), or leukemia
10. Additional mutations in critical genes as a result of uncontrolled cell division result in malignant behavior

Note: Key events along the pathway to tumor development for DNA-reactive carcinogens can be assessed both qualitatively and quantitatively by experimental and human studies.


Figure 1 General schematic illustrating how the human relevance framework can be used to assess whether or not an animal MOA has a human counterpart, thereby indicating if a quantitative risk assessment is required. Reproduced with permission from Meek ME, Bucher JR, Cohen SM, et al. (2003) A framework for human relevance analysis of information on carcinogenic modes of action. Critical Reviews in Toxicology 33: 591–653. © 2003 Informa Healthcare.

weight of evidence for describing an MOA in an experimental animal model is sufficient and, if so, whether the key events for this MOA are likely to occur in humans. If the key events are not likely to occur in humans, then a species-specific protein or other species-specific component is acting to establish the laboratory animal MOA, and no risk assessment is required. If the key events for the laboratory animal MOA are likely to occur in humans, then it is necessary to consider whether the kinetic and dynamic factors needed to drive the MOA in the laboratory animal model also pertain in humans. For example, can the level of a reactive metabolite of a particular parent chemical, required to produce a specific key event, be attained in humans? If the necessary kinetic and dynamic factors are likely to be met in humans, then the risk assessment process continues to a dose–response assessment, human exposure analysis, and risk characterization. There is no need to proceed with the risk assessment if the laboratory animal kinetic and dynamic factors required for an MOA cannot be achieved in humans. This HRF has been applied to a number of chemical carcinogens, including aflatoxin B1, methylene dichloride, 4-aminobiphenyl, and 1,3-butadiene, and has proved effective for these chemicals.

The additional components of the US EPA MOA framework (Table 3) are incorporated to provide support for the selected MOA. There is, for example, a requirement that the dose–response relationships for key events be consistent with the proposed MOA. This means that if the formation of DNA adducts is a necessary precursor event for the formation of critical mutations, then there should not be a higher frequency of mutations than of DNA adducts. Similarly, there is a need for temporal consistency, such that a key event that is a precursor for another key event further along the toxicity pathway is formed before that second (dependent) key event. There is also a requirement to consider the possibility of MOAs in addition to the selected one. In fact, it is quite possible that more than one MOA is operational for a particular chemical, but that one is dominant; the dominant MOA is used as the driver for the risk assessment, but not without consideration of the other MOAs. The establishment of an MOA and its relevance to humans initiates the process of dose–response assessment.
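The HRF logic just described is essentially a small decision procedure, and it can be made concrete as pseudocode. The sketch below is an illustrative rendering of the published decision tree (see Figure 1), not an official implementation; the function and argument names are invented for this example.

def hrf_decision(moa_established_in_animals,
                 key_events_plausible_in_humans,
                 kinetics_dynamics_attainable_in_humans):
    """Illustrative rendering of the ILSI/IPCS human relevance
    framework decision tree described in the text."""
    if not moa_established_in_animals:
        return "Data insufficient to characterize the animal MOA"
    if not key_events_plausible_in_humans:
        # e.g., a species-specific protein or hormone mechanism
        return "Animal MOA not relevant to humans; no risk assessment needed"
    if not kinetics_dynamics_attainable_in_humans:
        return "Required kinetic/dynamic factors not attainable in humans; stop"
    return ("Continue risk assessment: dose-response assessment, "
            "human exposure analysis, and risk characterization")

print(hrf_decision(True, True, True))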

Dose–response assessment

It has proven difficult both to devise methods and to conduct the appropriate analyses and extrapolations for tumor dose–responses. What is needed are dose calculations and response measures at relatively high (environmentally speaking) dose levels, together with response predictions at low dose levels. It is the extrapolation from responses in the range of observation to those outside this range that is especially intractable. In this section, only broad principles are presented, to identify data needs and the approaches taken in the absence of the necessary data. The use of toxicokinetic modeling (e.g., physiologically based pharmacokinetic (PBPK) modeling) is the preferred approach for calculating tissue concentrations of a chemical and estimating tissue dose. Extrapolation across species (e.g., rodent to human) can be achieved by modeling when appropriate human exposure data are available, or by the use of defaults in their absence. Such defaults are applied differently as a feature of the route of exposure (e.g., oral or inhalation) and of whether the route differs between the laboratory animal model and humans.

Response data can be used in dose–response modeling in different ways, depending on the nature of the available data. If mechanistic data are available for defining an MOA for a particular agent, then some form of toxicodynamic modeling can be conducted using biological parameters and rodent (or occasionally human) tumor data. In some cases, a standard model already exists for a particular MOA, in which case agent-specific data can be used to develop the parameters for incorporation into the available model. Data on key events that define an MOA can be used to inform the shape of the dose–response at low exposure levels (i.e., at levels outside the range of observation for tumors). Such data might also be used, in the case of informative bioindicators, to provide quantitative estimates of tumor frequency at low exposure levels. Thus, an ideal situation for estimating cancer risk would be one in which human tumor data are available with reliable exposure measurements, rodent tumor data are available to enhance the human tumor data, and data are available for an informative bioindicator of tumors in humans or laboratory animals. It is probably reasonable to note that this ideal does not prevail, although data sets appropriate for biologically based dose–response (BBDR) modeling have been tested (e.g., formaldehyde, chloroform, and inorganic arsenic). As is generally the case when insufficient mechanistic data are available to support a BBDR approach, empirical modeling (curve fitting) is recommended. Such modeling uses data in the range of observation for tumors and key events; the shape of the curve beyond the range of observation for tumors can be estimated using dose–response information on key events. The specific curve-fitting models that can be used for tumor and other biological data are quite numerous and require careful selection for a specific scenario, and equally the outcomes have to be interpreted with appropriate care and caution.

A critical component of any dose–response modeling exercise is the establishment of the so-called POD, the point at which observable data (on either tumors or key events) end and extrapolation to lower doses begins. The POD can be selected from tumor data (most frequently in rodents) or from an informative bioindicator of the tumor response. In general terms, the lowest POD that should be used is that which is supported by the strength of the data and by the endpoint's being a key event in the tumor pathway for a particular MOA. The methods for extrapolation, and the selection of an appropriate one, are rather complex processes and outside the scope of this review; the reader is encouraged to seek additional information in the US EPA's Guidelines for Carcinogen Risk Assessment.

A few additional points concerning specific procedures are pertinent here for completeness. For chemicals that act via a DNA-reactive (mutagenic) MOA, the US EPA uses a default linear dose–response curve for extrapolation. In the context of a POD, for linear extrapolation, the POD is used to calculate a slope factor that is an upper-bound estimate of risk per increment of dose.
This slope factor can be used to estimate risk probabilities for different exposure levels; for example, the slope factor is equal to 0.01/LED01, where LED01 is the lower confidence limit on the dose expected to produce a disease incidence of 1% in the animals tested. For a nonlinear extrapolation, the POD is used in the calculation of an oral reference dose (RfD) or an inhalation reference concentration (RfC). These reference-value approaches are now used by the US EPA for cancer risk assessments in addition to their previous use for noncancer effects. It should be noted that an appropriate database is required before concluding that a nonlinear extrapolation is plausible; such a conclusion is reached through application of the MOA framework. A brief summary of the process for risk estimation, once the extrapolation models have been selected, brings the risk assessment process to completion. At this point, the risk management procedure takes over to enact the recommendations of the risk assessment process.
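To make the linear default concrete: the slope factor converts a POD into an upper-bound incremental risk per unit dose, and the risk at an environmental exposure is then slope × dose. The sketch below follows the 0.01/LED01 relation given in the text; the LED01 value used is hypothetical and purely illustrative.

def slope_factor(led01_mg_per_kg_day):
    """Slope factor from the POD, per the text: 0.01 / LED01, where
    LED01 is the lower 95% confidence limit on the dose giving a 1%
    tumor incidence. Units: (mg/kg-day)^-1."""
    return 0.01 / led01_mg_per_kg_day

def linear_extra_risk(slope, dose_mg_per_kg_day):
    """Upper-bound extra lifetime risk under the default linear
    low-dose extrapolation: risk = slope * dose."""
    return slope * dose_mg_per_kg_day

sf = slope_factor(0.5)                 # hypothetical LED01 of 0.5 mg/kg-day
print(linear_extra_risk(sf, 0.001))    # ~2e-5 at a 0.001 mg/kg-day exposure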

Default options

Risk estimates are presented with their attendant uncertainties. Included in these uncertainties are those resulting from the use of default options when data are missing or uncertain. The key default options are covered by the following queries:

1. Is the presence or absence of effects observed in a human population predictive of effects in another exposed human population?
2. Is the presence or absence of effects observed in an animal population predictive of effects in exposed humans?
3. How do metabolic pathways relate across species, among different age groups, and between sexes in humans?
4. How do toxicokinetic processes relate across species, among different age groups, and between sexes in humans?
5. What is the relationship between the observed dose–response relationship and the relationship at lower doses?

The default value that is applied is a factor of 10 or 3, depending on the scope of the available data. If sufficient data are available to provide a suitable response to one of the questions above, the default value can be reduced to 1, being replaced by the data themselves. The risk characterization also includes a statement of the extent of the extrapolation of the estimate from data in the observable range to the exposure levels of relevance for estimating anticipated human risk. These extrapolations are also considered in the context of the certainty or uncertainty they introduce when quantifying risk. The extent of the extrapolation can be expressed as a margin of exposure (MOE), defined as the ratio of the POD to the anticipated or measured exposure estimate. The MOE can be used, in part, for risk management decisions: a low MOE (say 10–20) indicates that the POD is relatively close to the human exposure level and triggers an alert, whereas a high MOE (250 and above) indicates that the POD is quite far removed from the human exposure estimate and does not typically trigger an alert. An alert could indicate a need to reduce exposure; it might also suggest that additional research is warranted to support a quantitative risk assessment.
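A minimal sketch of how these pieces might fit together computationally is shown below; the per-question factors (10, 3, or 1), the MOE definition, and the alert bands follow the text above, while the chemical-specific POD and exposure values are hypothetical.

# Hypothetical sketch: composite default factor and margin of exposure (MOE).
# Values are invented for illustration; the 10/3/1 factors and the MOE alert
# bands (low ~10-20, high >=250) follow the discussion in the text.

def composite_factor(factors):
    """Multiply the per-question default factors (each 10, 3, or 1)."""
    product = 1
    for f in factors:
        product *= f
    return product

POD = 5.0        # mg/kg-day, e.g., a LED01 or a NOAEL for a key event
exposure = 0.02  # mg/kg-day, anticipated human exposure estimate

# One factor per default question; adequate data may reduce 10 -> 3 -> 1.
uf = composite_factor([10, 3, 1, 10, 1])
reference_value = POD / uf

moe = POD / exposure
status = "alert: POD close to exposure" if moe <= 20 else \
         "no alert" if moe >= 250 else "intermediate: judgment needed"
print(f"composite factor = {uf}, reference value = {reference_value:g} mg/kg-day")
print(f"MOE = {moe:.0f} ({status})")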


This section on risk assessment practices for environmental carcinogens is based on the US EPA’s current guidelines, but the general principles are similar to those applied by other regulatory organizations nationally and internationally. The overall approach is described with broad brushstrokes, with the aim of providing an overview.

Risk Management

In essence, risk management practices are a separate component of the regulation of environmental carcinogens and are not considered in detail in this article. Risk assessment and risk management are separated in regulatory policy at the US EPA and most other regulatory organizations. However, the National Research Council, in its report Science and Judgment in Risk Assessment, provided a very clear discussion of the differences and links between risk management and risk assessment. Their view is presented as a direct quote from the NRC report. “Risk Assessment vs. Risk Management

The principle of separating risk assessment from risk management has led to systematic downplaying of the science-policy judgments embedded in risk assessment. Risk assessment accordingly is sometimes mistakenly perceived as a search for ‘truth’ independent of management concerns.

EPA should increase institutional and intellectual linkages between risk assessment and risk management so as to create better harmony between the science-policy components of risk assessment and the broader policy objectives of risk management. This must be done in a way that fully protects the accuracy, objectivity, and integrity of its risk assessments – but the committee does not see these two aims as incompatible. Interagency and public understanding would be served by the preparation and release of a report on the science-policy issues and decisions that affect EPA’s risk-assessment and risk-management practices.”

In the regulatory environment (beyond just the US EPA), it appears that this overall aim of linking risk assessment and risk management decisions is being viewed favorably and is being initiated.

Conclusions

The regulation of environmental carcinogens is implemented differently in different countries and political entities. However, the basis for the regulations is perhaps beginning to become more aligned. This alignment is driven to quite an extent by the development of a framework for cancer risk assessment that has its basis in the MOA whereby an environmental carcinogen can cause cancer. Thus, there is a much broader reliance on the incorporation of mechanistic data into the risk assessment process and much less reliance on default factors that have typically been used in the absence of appropriate data. The ability to develop the necessary mechanistic data is greatly enhanced by the use of the recently developed molecular technologies of genomics, proteomics, and metabolomics, along with sophisticated computational analysis and modeling. This MOA-based framework has been developed and assessed at an international level through the efforts of IPCS and ILSI, along with national organizations such as the US EPA and members of the Organization for Economic Co-operation and Development (OECD). The quantitative risk assessment process is driven to a great extent by rodent tumor data, largely because of a lack of comprehensive epidemiological studies providing quantitative human tumor data along with reliable exposure data. Thus, the second component of the MOA-based framework is to establish whether a rodent MOA is likely to occur in humans using an HRF. Knowledge of an MOA in rodents and its plausibility in humans leads to the conduct of a quantitative risk assessment. This remains a somewhat complex process of data and model choices, but it is improving at a steady pace. The aim of much research on the toxicity of environmental carcinogens is to address the uncertainty in cancer risk assessments through enhanced databases and enhanced computational approaches, with the overall aim of reducing controllable uncertainty. In a general way, a more reliable risk assessment process will lead to more defensible risk management decisions. The path is clear, and progress along it is steady.

See also: Biomarkers in Environmental Carcinogenesis; Cancer and the Environment: Mechanisms of Environmental Carcinogenesis; Carcinogenicity of Disinfection Byproducts in Humans: Epidemiological Studies; Valuation of Health Impacts Under the EU’s REACH Chemicals Regulation.


Further Reading

Boobis, A.R., Doe, J.E., Heinrich-Hirsch, B., et al., 2008. IPCS framework for analyzing the relevance of a noncancer mode of action for humans. Critical Reviews in Toxicology 38, 87–96.
International Agency for Research on Cancer (IARC), 2007. Monographs on the Evaluation of Carcinogenic Risks to Humans. Combined Estrogen–Progestogen Contraceptives and Combined Estrogen–Progestogen Menopausal Therapy (Preamble), Vol. 91. IARC, Lyon, France. http://monographs.iarc.fr.
Klaunig, J.E., Kamendulis, L.M., 2008. Chemical carcinogenesis. In: Klaassen, C.D. (Ed.), Casarett and Doull’s Toxicology. McGraw-Hill, New York, NY, pp. 329–379. Chapter 8.
Meek, M.E., Bucher, J.R., Cohen, S.M., et al., 2003. A framework for human relevance analysis of information on carcinogenic modes of action. Critical Reviews in Toxicology 33, 591–653.
Sato, S., 1996. Risk evaluation of environmental carcinogens. Journal of Occupational Health 38, 149–154.
US Environmental Protection Agency (EPA), 2002. A Review of the Reference Dose and Reference Concentration Processes. US Environmental Protection Agency, Washington, DC. http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=55365.
US Environmental Protection Agency (EPA), 2005. Guidelines for Carcinogen Risk Assessment. Risk Assessment Forum. US Environmental Protection Agency, Washington, DC. http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=116283.

Relevant Websites

http://ec.europa.eu. European Commission.
http://www.hc-sc.gc.ca. Health Canada.
http://www.epa.gov. United States Environmental Protection Agency.

Environmental Chemicals in Breast Milk
Cecilia Sara Alcala, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA, United States
Carlo Basilio, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA, United States; and Tulane University School of Medicine, New Orleans, LA, United States
Imani White, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA, United States (previous affiliation)
Satori A Marchitti, US Environmental Protection Agency, Athens, GA, United States (previous affiliation)
Erin P Hines, US Environmental Protection Agency, Research Triangle Park, NC, United States
Cheston M Berlin, Penn State College of Medicine, Hershey, PA, United States
Suzanne E Fenton, National Institute for Environmental Health Sciences, Research Triangle Park, NC, United States
© 2019 Elsevier B.V. All rights reserved.

Introduction

Humans are exposed to numerous environmental chemicals throughout their lifespan. Exposures to environmental chemicals stem from contact with environmental media, which include air, soil, dust, and water. Pharmaceuticals, food, and personal care products are other ways in which an individual can be exposed to environmental chemicals. Because of this continuous exposure, environmental chemicals have been measured in human tissues and fluids, including blood, breast milk, urine, hair, exhaled breath, nails, cord blood, and meconium. Diet is a major environmental chemical exposure pathway for infants via breast feeding and/or infant feeding. Breast milk is a multifaceted and continuously changing mixture of endogenous substances that includes fats, water, proteins, carbohydrates, vitamins, minerals, and antibodies (Lehmann et al., 2018). The first reports of environmental chemicals in breast milk appeared in the 1950s; since that time, the published literature on the detection of environmental chemicals in breast milk has expanded enormously, with information now available from numerous countries (Fig. 1). Conversely, the lack of systematic, nationally and regionally representative breast milk biomonitoring studies has limited our understanding of environmental chemical concentrations in breast milk. The World Health Organization (WHO), along with the United Nations Environment Programme (UNEP), has organized and completed six rounds of international human milk sampling and analysis for persistent organic chemicals. Numerous countries provided data as part of these studies, although each collected a restricted number of samples due to the resource-intensive nature of this type of research (Lehmann et al., 2018). Increasing the prevalence and duration of breastfeeding has been a multi-stakeholder effort that has been effective in the United States. The government has created and implemented numerous programs supporting and promoting breastfeeding, both locally and federally. The American Academy of Pediatrics (AAP) recommends exclusive breastfeeding for about 6 months, followed by continued breastfeeding as complementary foods are introduced, with continuation of breastfeeding for 1 year or longer as mutually desired by mother and infant (AAP, 2012). The organization continues to reaffirm this recommendation by educating physicians and the public to increase the incidence of breastfeeding. The AAP continues to urge the government and industry to provide sufficient postpartum leave and to assure that workplaces have a breastfeeding-friendly environment. Over the past decade, government agencies have made tremendous strides in policy, enacting, for example, the Friendly Airports for Mothers Act of 2017. These organizations support breastfeeding and emphasize the natural approach to providing infants with the proper nutrients they need for optimal growth and development. Breastfeeding provides the infant with decreased risks of infection, allergy, asthma, arthritis, diabetes, obesity, cardiovascular disease, and various cancers in both childhood and adulthood. It also delivers numerous advantages to the breastfeeding mother, including decreased risk of type 2 diabetes mellitus, rheumatoid arthritis, adult cardiovascular disease, hypertension, hyperlipidemia, and breast and ovarian cancers (Ip et al., 2007; AAP, 2012; Victora et al., 2016).
The Department of Health and Human Services launched Healthy People 2020 in 2010, which strives to address major public health issues. The objectives of the plan relating to breastfeeding include increasing the proportion of infants who are breastfed, increasing the proportion of employers that have worksite lactation support programs, reducing the proportion of breastfed newborns who receive formula supplementation within the first 2 days of life, and increasing the proportion of live births that occur in facilities that provide recommended care for lactating mothers and their babies (CDC, 2018). In 1970, 26% of newborn infants in the U.S. were breast-fed, with breast-feeding rates increasing over the years to 75.0% in 2007 (CDC, 2010). As of 2015, 83.2% of U.S. infants started to breastfeed, 57.6% were breastfeeding at 6 months, and 35.9% were breastfeeding at 12 months (CDC, 2018). Overall, most of the Healthy People 2020 objectives pertaining to breastfeeding have been met. The degree of breast-feeding influences infant exposure to environmental chemicals during lactation. The rising number of studies highlighting the presence of environmental chemicals in breast milk has raised concerns about whether breast milk is still the best food for infants. However, despite the presence of environmental chemicals in breast milk, the majority of evidence indicates that breastfeeding not only confers

Change History: April 2019. Cecilia Alcala updated the article. This is an update of J.S. LaKind, C.M. Berlin, S.E. Fenton, Environmental Chemicals in Breast Milk. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 347–356.


Fig. 1 Countries with published data on environmental chemicals in human milk (black circles). The data represent varying numbers of environmental chemicals and are not typically nationally representative. World map source: http://en.wikipedia.org/wiki/Image:World-map-2004-cia-factbooklarge-1.7m-whitespaceremoved.jpg.

numerous health benefits on the infant and mother but may also counter effects that have been associated with prenatal chemical exposures (Patandin et al., 1999; LaKind et al., 2008; Pan et al., 2010). Breast milk monitoring programs in various countries have offered valuable information on early life exposures to chemicals in breast milk. The trends found in chemical levels in humans have influenced restrictions on these chemicals, both voluntary and government-mandated. Information on concentrations of environmental chemicals in breast milk also provides data that can be used in exposure and risk analysis for breast-feeding infants. Breast milk is an attractive matrix for monitoring postnatal exposure to environmental chemicals for several reasons. First, the collection procedure is noninvasive and can be done directly by the mother, as opposed to blood collection by a trained individual. Second, breast milk, unlike urine, is lipid-rich, and many of the chemicals of interest are lipophilic and can be readily measured in milk. Finally, unlike data from other matrices, breast milk biomonitoring data provide information on the exposure levels of a particular chemical in the mother and provide insight into potential infant exposure to that chemical. Thus, the use of breast milk in biomonitoring studies provides unique information on adult and infant exposure to environmental chemicals. This article provides an overview of three essential topics related to environmental chemicals in breast milk: (1) identification and concentrations of chemicals in breast milk over time and place; (2) factors associated with observed levels of chemicals in breast milk; and (3) the association between environmental chemicals in breast milk and infant health.
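As an illustration of the exposure analysis mentioned above, the minimal Python sketch below estimates a breastfed infant's average daily intake of a chemical from its measured milk concentration; the milk intake and body weight are generic screening-level assumptions, and the concentration is hypothetical.

# Hypothetical sketch: estimating infant daily intake of a chemical from
# its measured breast milk concentration. Inputs are illustrative defaults,
# not values from any specific study.

c_milk_ng_per_ml = 0.5        # measured milk concentration (ng/mL), hypothetical
milk_intake_l_per_day = 0.78  # typical intake for an exclusively breastfed infant (L/day)
body_weight_kg = 6.0          # representative infant body weight (kg)

daily_intake_ng = c_milk_ng_per_ml * milk_intake_l_per_day * 1000  # ng/day
dose_ng_per_kg_day = daily_intake_ng / body_weight_kg

print(f"estimated intake: {daily_intake_ng:.0f} ng/day "
      f"({dose_ng_per_kg_day:.0f} ng/kg-day)")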

Environmental Chemicals in Breast Milk

The detection and measurement of environmental chemicals in breast milk are complicated because milk is a complex matrix composed of lipids (~4%), proteins (~1%), lipoproteins, immune factors, lactose, and water. An additional complicating factor is that milk fat and protein content fluctuate considerably depending on the stage of lactation (higher in early stages, diluted at peak lactation) and how the milk was collected; the first (fore) milk will be lower in fat content and the last (hind) milk from the breast will be higher (LaKind et al., 2004), so numerous samples or mixing are suggested. Depending on the lipophilicity of the specific chemical of interest, the sample collection and analytical techniques vary. The sensitivity and selectivity of these analytical procedures have improved over time, and the limits of detection for several classes of environmental contaminants in breast milk are now in the low ppb range. Breast milk can contain environmental chemicals that are lipophilic and non-lipophilic (or water-soluble). The chemicals that separate into the lipid fraction of milk have been widely studied due to their bioaccumulation, long half-life in humans, and detection in biological media over decades. Because of the long half-life, lifetime maternal exposure to these chemicals can contribute to the lactational exposure of infants. New developments in analytical methods and appropriate collection and storage protocols have allowed non-lipophilic chemicals to be readily measured in breast milk. Biomonitoring studies are a useful tool for informing regulators, physicians, new mothers, and children’s advocacy groups about the presence of environmental chemicals in breast milk. In this section we summarize trends in, and concentrations of, numerous persistent and non-persistent chemicals from studies around the world.
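Because lipophilic chemicals are usually reported on a lipid-adjusted basis (as in the WHO/UNEP figures below), whole-milk and lipid-adjusted concentrations are interconverted using the sample's fat content; a minimal sketch with invented values follows.

# Minimal sketch: converting a whole-milk concentration to a lipid-adjusted
# one. The concentration and fat fraction are illustrative; real samples
# require a measured lipid content, since fat varies with stage of lactation
# and between fore- and hind-milk.

c_whole_milk_pg_per_g = 0.6  # pg of analyte per g of whole milk (hypothetical)
lipid_fraction = 0.04        # ~4% fat, per the composition noted above

c_lipid_adjusted = c_whole_milk_pg_per_g / lipid_fraction
print(f"{c_lipid_adjusted:.0f} pg/g lipid")  # 0.6 / 0.04 = 15 pg/g lipid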

Lipophilic Chemicals

Persistent organic chemicals are the most extensively studied environmental chemicals identified in breast milk. These chemicals include dioxins and furans, polychlorinated biphenyls (PCBs), organochlorine pesticides, and the class of flame retardants referred to as polybrominated diphenyl ethers (PBDEs). For these compounds, breast milk concentrations are considerably higher than the levels in the donor’s serum because of the lipophilic nature of the compounds. Toxic equivalents (TEQs) are measured concentrations, such as concentrations of dioxins and certain PCBs, multiplied by a toxic equivalency factor (TEF). An international decline in levels of dioxin has been seen over the past 20 years (LaKind, 2007; LaKind et al., 2001). Numerous countries have implemented bans and strict limitations on the use and release of dioxins, furans, and PCBs, beginning in the 1970s (LaKind, 2007; LaKind et al., 2001, 2004) (Fig. 2). Germany (Furst, 2006; Wilhelm et al., 2007), Canada (Ryan and Rawn, 2014), and Sweden (Fang et al., 2013) have been monitoring breast milk over long periods in order to observe country-specific declines in levels of dioxins and furans. PCBs have also been measured globally in breast milk, and countries like Sweden (Noren et al., 1996) and Serbia (Vukavic et al., 2013) have reported decreases in breast milk concentrations (Fig. 3). Since the late 1980s, WHO has conducted international studies of persistent organic chemicals in breast milk, which have shown a declining trend in levels of dioxins and furans (UNEP, 2012; van den Berg et al., 2017). As the United States does not have a national breast milk biomonitoring program, it is unclear from available data whether levels of dioxin and PCBs in breast milk have decreased in U.S. women. However, blood levels of these chemicals in the U.S. population have declined significantly, suggesting that there may have been a similar decline in milk levels (CDC, 2018). Breast milk data are available for the following organochlorine pesticides: dichlorodiphenyltrichloroethane (DDT) and its metabolites, aldrin and its metabolite dieldrin, chlordane (oxychlordane, heptachlor, U-chlordane, trans-nonachlor), endosulfan, hexachlorocyclohexanes (HCHs), and mirex (Jensen and Slorach, 1991; LaKind et al., 2018). These pesticides have been discontinued and banned in many countries, and exposures have decreased as well (LaKind, 2007; LaKind et al., 2004). Previous assessments have reported that concentrations of these compounds have decreased in breast milk over time (LaKind, 2007; LaKind et al., 2004); however, the concentrations of certain pesticides in the milk of some women may still be high. DDT, a common pesticide, is still used for malaria control in a few countries; as a result, levels in breast milk may reflect current use (Rodas-Ortíz et al., 2008). Additionally, in Chelem, Yucatan, Mexico, 36% of study participants were found to have serum DDT levels exceeding the Joint Meeting on Pesticide Residues/Food and Agriculture Organization (JMPR-FAO/WHO) acceptable daily intakes (Rodas-Ortíz et al., 2008).

Fig. 2 PCDDs and PCDFs in TEQs (pg/g lipid) in pooled human milk samples from different countries, from the WHO/UNEP surveys (van den Berg et al., 2017).

Fig. 3 DL-PCBs in TEQs (pg/g lipid) in pooled human milk samples from different countries, from the WHO/UNEP surveys (van den Berg et al., 2017).
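The TEQ arithmetic described above (measured congener concentrations weighted by their TEFs and summed) can be sketched as follows; the concentrations are invented, and the TEFs shown are the commonly used WHO 2005 values for these congeners, which should be verified against the current list before any real use.

# Sketch of a TEQ calculation: sum of (congener concentration x TEF).
# Concentrations (pg/g lipid) are hypothetical; TEFs are believed to match
# the WHO 2005 values for these congeners (verify before real use).

tef = {
    "2,3,7,8-TCDD":    1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,7,8-TCDF":    0.1,
    "PCB-126":         0.1,
}

measured_pg_per_g_lipid = {   # hypothetical pooled-milk results
    "2,3,7,8-TCDD":    1.2,
    "1,2,3,7,8-PeCDD": 2.0,
    "2,3,7,8-TCDF":    4.5,
    "PCB-126":         8.0,
}

teq = sum(measured_pg_per_g_lipid[c] * tef[c] for c in tef)
print(f"total TEQ = {teq:.2f} pg TEQ/g lipid")  # 1.2 + 2.0 + 0.45 + 0.80 = 4.45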


Fig. 4 Levels of PBDEs (ng per g fat) in human milk worldwide. Based on median values reported in Inoue, K., Harada, K., Takenaka, K., Uehara, S., Kono, M., Shimizu, T., …, Koizumi, A. (2006). Levels and concentration ratios of polychlorinated biphenyls and polybrominated diphenyl ethers in serum and breast milk in Japanese. Environmental Health Perspectives 114(8), 1179–1185, and Siddique, S., Xian, Q., Abdelouahab, N., Takser, L., Phillips, S.P., Feng, Y.L., Wang, B. and Zhu, J. (2012). Levels of dechlorane plus and polybrominated diphenylethers in human milk in two Canadian cities. Environment International 39(1), 50–55.

PBDEs are a class of brominated flame retardants that were first measured in breast milk approximately 21 years ago (Meironyte et al., 1998). Between 1970 and the late 1990s, data from Sweden demonstrated that breast milk PBDE levels increased, as determined from pooled samples from the Mother’s Milk Center in Stockholm (Meironyte et al., 1999). The increase in PBDE levels brought attention to this group of compounds, and U.S. data on PBDEs in breast milk samples were subsequently published. In 2004, Europe banned PBDEs, and the U.S. voluntarily phased out manufacturing of the lower-brominated penta- and octa-congeners. These smaller congeners are believed to bioaccumulate to a greater extent than the more highly brominated congeners. Because the U.S. was the primary consumer of PBDE flame retardant products for quite some time, PBDE concentrations in breast milk samples from the United States (Fig. 4) have been reported to be significantly higher than those from women in Europe (Meironyte et al., 1999) or Asia (Inoue et al., 2006). The high use of PBDEs in products in the U.S., in compliance with California’s TB 117 flame retardant standards, could be the reason for the difference in exposure. Recently, the amended TB 117-2013 smolder standard was implemented in California (State of California, 2013); it requires furniture covering materials to resist smoldering ignition. With the amendment in effect, the use of PBDEs and other flame retardants could decline. To assess the impact of removing these chemicals from the market on levels in breast milk, a milk sampling and analysis program is needed.

Non-Lipophilic Chemicals

Most non-lipophilic chemicals do not accumulate in milk fat and are detected at concentrations at or below those measured in serum or urine of donors. Breast milk concentrations have been published for a limited number of non-lipophilic chemicals. The main reasons for the lack of data are: (a) the difficulty of detecting these chemicals in milk at the low levels found there (matrix and volume interactions), and (b) many, but not all, of the non-lipophilic compounds have short half-lives. Published data on non-lipophilic compounds measured in breast milk include bisphenol A (BPA), perchlorate, phthalate metabolites, parabens, perfluoroalkyl substances (PFAS), such as perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS), and a number of metals. Given the limited concentration data for non-lipophilic chemicals in breast milk, there is inadequate information to assess geographic or temporal trends. BPA is a phenolic compound that has been reported in breast milk and infant formula. Braun et al. (2012) described the change in BPA concentrations before and during pregnancy and found that the variability in BPA concentrations was higher during pregnancy. When assessing infant exposures through breastfeeding, Hines and colleagues found substantial variations in BPA concentrations in milk collected from mothers at two different time points (Hines et al., 2015). Research has found measurable concentrations in U.S. human milk samples of the ultraviolet-blocking compound benzophenone-3, the antibacterial triclosan, 2,4-dichlorophenol, ortho-phenylphenol, and 4-tert-octylphenol (Lehmann et al., 2018). Studies in the U.S. have also identified BPA in infant formula. After analyzing seven samples of infant formula, Liao and Kannan reported that the highest concentration was in a soy-based infant formula (20.8 ng/g) (Liao and Kannan, 2013).


Phthalates are plasticizers that are used in cosmetics, food containers, medicine coatings, ink, and tubing, among other commercial products, and are ever-present in the environment. In breast milk, esterases metabolize the parent diester phthalates to monoester phthalate metabolites. Thus, it is the phthalate monoester metabolites, rather than diester phthalates, that are measured in studies of breast milk. The monoester phthalate metabolites of these compounds are measurable in breast milk, albeit in less than 10% of measured U.S. samples (Calafat et al., 2004; Hines et al., 2009). Additionally, other countries, like Canada (Zhu et al., 2006), Sweden (Högberg et al., 2008), and Korea (Kim et al., 2015), have reported phthalate monoesters at measurable concentrations in breast milk. Perchlorate is a component of jet and rocket fuel that also occurs naturally in the environment, specifically in dry areas. The chemical is transported into milk via the sodium-iodide symporter (NIS) in the cell membrane of the mammary epithelium. The NIS has up to 30-fold greater affinity for perchlorate than for iodide (Kirk et al., 2005). Perchlorate has the potential to interfere with the normal transfer of iodide to the breast-feeding infant and has been detected in nearly all human milk samples that have been tested (Kirk et al., 2005). Parabens are found in a wide range of personal care products, in canned foods, beverages, and pharmaceuticals (Meyer et al., 2007; Fisher et al., 2017). These chemicals have been shown to have estrogenic activity. U.S. studies have measured parabens in breast milk (Hines et al., 2015; Ye et al., 2008). The two studies reported that butyl paraben (BuP) was below the detection limit in all samples. Hines and colleagues reported that ethyl (EtP), methyl (MeP), and propyl (PrP) parabens were above the LOD of 1.0 µg/L in their samples. A cohort of pregnant women from Ottawa, Canada, participated in the Plastics and Personal-Care Product Use in Pregnancy (P4) Study; Fisher et al. (2017) found that women who used lotions in the past 24 h had significantly higher paraben concentrations than women who reported no use in the past 24 h. PFAS are chemicals that are water soluble but often persistent in the body and environment. PFAS are found in blood bound to proteins and are typically not found in lipids. Conventionally used as surfactants for firefighting and for making commercial products water- or grease-proof, the most commonly detected PFAS in breast milk are PFOA and PFOS (von Ehrenstein et al., 2009; Kärrman et al., 2007). Roosens et al. (2010) used pooled samples of cord blood to assess the accumulation of persistent contaminants at numerous life stages, newborn exposure to BFRs and PFAS, and the extent of maternal transfer; they concluded that newborn exposure to BFRs and PFAS occurs mainly postnatally (Roosens et al., 2010). Various metals, such as lead, copper, zinc, cadmium, mercury, and arsenic, have been detected in breast milk samples in numerous countries (Al-Saleh et al., 2013; Liu et al., 2013; Garcia-Esquinas et al., 2011; Ursinyova and Masanova, 2005). The environmental sources of metals are typically associated with region-specific dietary preferences or exposure patterns (metals in water and air, food storage containers, cigarette smoking status, and drinking water pipe composition). Low levels of arsenic were found in breast milk from a very small U.S. sample (Carignan et al., 2015), similar to other international studies that have reported low levels of arsenic in breast milk even among women exposed to high levels of arsenic in drinking water. Mercury was found in breast milk in Saudi Arabia, with a mean level of 1.19 µg/L and a range of 0.01–6.44 µg/L (Al-Saleh et al., 2013), and a similar mean and range were found in a study in Austria (Gundacker et al., 2002). The United States provides data on metals in infant formula via the FDA’s Total Diet Study (FDA, 2014). The study monitors levels of approximately 800 chemicals in roughly 280 foods and beverages that are common in the U.S. diet. Both milk- and soy-based infant formulas have been part of the study and analyzed for arsenic, mercury, cadmium, lead, and nickel; however, the majority of the measurements were below the LOD (Lehmann et al., 2018).

Pharmaceuticals

The first published concern over the appearance of pharmaceuticals and chemicals in human milk appeared in 1908 (Reed, 1908). For the compounds discussed in this early work, the amount transferred into milk was quite small. Although analytic methods at the time were primitive in comparison to today’s capabilities, this observation has been repeatedly confirmed for most pharmaceuticals over the years. Almost all pharmaceuticals measured to date appear in milk to some extent, with the milk/plasma ratio almost always between 0.5 and 1.0 (Berlin, 2011). In an attempt to protect the breast-feeding infant while permitting medically necessary maternal therapy and continuation of lactation, the Committee on Drugs of the AAP has published statements on the transfer of drugs and chemicals in human milk. The most recent edition was published in 2014 (Committee on Drugs, American Academy of Pediatrics, 2014). This statement outlines the off-label use of drugs in children, specifically defining off-label use, the role of the U.S. Food and Drug Administration (FDA), therapeutic decision-making, and federal legislation to increase drug testing in children (Committee on Drugs, American Academy of Pediatrics, 2014). Following that statement, the US FDA finalized the “Pregnancy and Lactation Labeling Rule” (PLLR) in 2015. This rule applies to any product subject to physician labeling and includes medicines, vaccines, and therapies. The PLLR required revised content and format of information for prescription drug labeling and removed the pregnancy letter categories (A, B, C, D, and X). The PLLR also requires the label to be updated when information becomes outdated, so that clinicians and patients may better weigh the risk versus benefit of a given product (U.S. Food and Drug Administration, 2018). The use of opioids has steadily increased in the United States, specifically among women of childbearing age. Due to the increase in prescription opioid use among pregnant women and breastfeeding mothers, there has been an increase in the number of women needing treatment for abuse (Krans and Patrick, 2016). The American College of Obstetricians and Gynecologists recommends that women who are stable on opioid agonists, are not using illicit drugs, and have no other contraindications should be encouraged to breastfeed (The American College of Obstetricians and Gynecologists, 2017). Due to the side effects of common opioids and the potential harm to the breastfeeding infant, mothers need to be educated on how to keep their baby healthy if opioid use is required (Intermountain Healthcare, 2013). Codeine is a type of opioid used to treat mild and moderate pain. It is also used for pain associated with caesarean section or after episiotomy (Madadi et al., 2008a,b). Normal doses of codeine given to breastfeeding mothers could potentially cause critically high levels of the “active metabolite morphine in breast feeding infants” (Sachs and Committee On Drugs, 2013). The first reported neonatal fatality was a breastfed infant who died from opioid toxicity after exposure to maternal codeine (Madadi et al., 2008a,b). This infant’s postmortem morphine level (87 ng/mL) far exceeded the typical level in breastfeeding infants (2.2 ng/mL) and the therapeutic range for neonates (10–12 ng/mL) (Sachs and Committee On Drugs, 2013). Individuals who are ultrarapid metabolizers of codeine “carry more than two normal function copies of the CYP2D6 gene” and metabolize codeine to morphine more rapidly and completely (Dean, 2012). Even at normal doses of codeine, these individuals can experience symptoms of a morphine overdose. As a result, breastfeeding mothers are cautioned against using the medication, and close monitoring of the infant for adverse signs and symptoms is advised. The prevalence of the ultrarapid CYP2D6 phenotype varies widely. Genetic testing for CYP2D6 is available and is advised for women who may receive codeine for postpartum pain while breastfeeding (Dean, 2012). The U.S. National Library of Medicine Toxicology Data Network launched an internet database called LactMed (https://toxnet.nlm.nih.gov/newtoxnet/lactmed.htm). It provides information on over 1000 drugs, chemicals, and dietary supplements, such as levels in breast milk, infant blood levels, potential effects in breastfeeding infants, the AAP category indicating the level of compatibility of the drug with breastfeeding, and alternative drugs to consider. Additional drugs are continually added, and entries are updated with peer review and literature citations.

Factors Influencing Concentrations of Environmental Chemicals in Breast Milk

Chemical concentrations in a mother’s breast milk depend upon the chemical levels in her environment and the chemicals’ physicochemical properties. The duration of her exposure to environmental chemicals significantly affects a chemical’s potential to be transferred during lactation. Persistent bioaccumulative compounds reflect lifetime exposure, whereas short-lived chemicals, such as volatile organic compounds, are representative of more recent exposures. Thus, the levels of any given chemical in milk are influenced by a multitude of complex factors, complicating population-based predictions about exposure. In general, physicochemical properties that favor the transfer of chemicals to breast milk are (1) low molecular weight, (2) unionized state, (3) low binding to maternal plasma proteins, and (4) lipid solubility. Despite these complexities, factors thought to be important in determining breast milk concentrations have been investigated, and some generalizations can be made. These factors include geography, previous lactation and length of lactation, diet, age, weight loss, body mass index, and lifestyle. A critical step in collecting accurate data is to take special caution during the collection and analysis of media for ubiquitous environmental chemicals, including PCBs, PBDEs, PFAS, bisphenols, and phthalates. These chemicals can be accidentally introduced into biological media during collection and analysis, contributing to inaccurate concentration calculations. This section summarizes the results of several studies that have examined factors that influence the levels of environmental chemicals in breast milk. Because the majority of breast milk research has focused on persistent organic chemicals, the following sections address these chemicals more specifically.

Geography

Environmental chemicals, including persistent bioaccumulative compounds such as PCBs, dioxins, and PBDEs, and heavy metals such as lead, are globally distributed and have been found in virtually all breast milk samples tested from around the world. These levels differ by geographic region, as policies surrounding permissible chemical use vary by country. It is important to note that data are typically based on a limited number of samples from one or a few regions in a country and are not likely to be representative of the country as a whole. This makes assessments of geography’s influence on environmental chemical concentrations in breast milk more difficult. For example, DDT was banned in most Western countries in the 1970s, but only recently banned in other countries, and it is still used for malaria control in certain parts of the world. Thus, countries with more recent DDT use may still exhibit higher milk levels of DDT and its metabolites compared to countries with longer-term bans in place. Globally, the use and release of PCBs and dioxins were severely diminished in the 1970s and 1980s, and levels of these chemicals in breast milk have declined sharply. However, they can still be detected in breast milk samples, and the concentrations vary by country (country-specific data for dioxins and PCBs are shown in Figs. 2 and 3, respectively). Point sources of exposure, like the application of Agent Orange and its co-contaminant dioxin in Vietnam, can lead to regional spikes in environmental chemicals in the breast milk of subpopulations. Non-point source exposures can be attributed to atmospheric and oceanic transport of chemicals from their original site. Organochlorine pesticides (OCPs) bioaccumulate and may enter breast milk through the mother’s diet (e.g., fish, meats, or dairy products) or from external exposure (EFSA, 2007, 2012). For example, lactating mothers in Tanzania produced breast milk containing OCPs and PBDEs believed to result from the use of these chemicals for vector control, from occupation, or from proximity to dumping or storage sites (Muller et al., 2017). In a country comparison, levels of OCPs in breast milk from developed countries are lower than in samples collected in developing countries where mothers have a high risk of exposure (Muller et al., 2017).

Lactation History

Past studies have reported decreases in breast milk concentrations of persistent organic chemicals over the course of lactation (LaKind et al., 2001). One hypothesis is that as a mother breast-feeds, she transfers some of her lifetime stores of chemicals from her adipose tissue to milk, and then to the breast-fed infant. Recent studies, however, have not observed significant declines in these chemicals, including dioxins, in breast milk over lactation (LaKind et al., 2009), as shown in Fig. 5. In some cases, levels of these persistent organic chemicals appear to increase for some women (LaKind et al., 2009; Sasamoto et al., 2006). PBDEs and other persistent chemicals such as HCB and some PCBs appear to remain relatively constant over the course of lactation (Sjödin et al., 2005) (Fig. 6). This pattern may be associated with the balance between current low-level dietary exposures and elimination during breast feeding (Sjödin et al., 2005). Currently, the elimination of chemicals during lactation is still poorly characterized. The vast majority of research analyzes chemical concentrations in non-human milk, and the remaining studies have struggled to draw consistent conclusions about chemical transfer during lactation. Using the National Health and Nutrition Examination Survey (NHANES), Alcala and Phillips (2017) concluded that depuration of PCBs may occur through breastfeeding. In women of reproductive age, serum lipid PCB levels increased with age and were lower in women with a history of breastfeeding (Alcala and Phillips, 2017). Some studies have found an association between increasing parity and a decrease in levels of persistent organic compounds in milk (LaKind et al., 2004; Nakamura et al., 2008). However, a study of a lactating population in the Yucatan peninsula of Mexico found no significant difference in the levels of PCBs or OCPs as a function of age or number of births (Rodas-Ortíz et al., 2008). Research on exposure to environmental chemicals and its effect on the duration of lactation has been inconsistent (Konkel, 2017). Recent studies of PFAS, fluorinated chemicals that bind to blood proteins rather than lipids, have demonstrated a positive correlation between breastfeeding duration and childhood PFAS blood concentrations at 3 and 8 years of age in U.S. children. Serum PFAS concentrations were highest among breastfed children, and maternal and child serum PFAS levels were not correlated when breastfeeding and its duration were not considered (Kingsley et al., 2018). Another study, in a Faroese birth cohort that followed serum PFAS levels in children at 11, 18, and 60 months of age, found increased PFAS levels associated with breastfeeding using adjusted mixed models, increasing by up to 30% per month in some cases (Mogensen et al., 2015). In a cohort of Swedish infants aged 2–4 months, PFOA, PFNA, and PFHxS levels increased 8%–11% per week of exclusive breast-feeding (Gyllenhammar et al., 2018).
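To see what the reported per-period increases imply cumulatively, they can be compounded over a course of breastfeeding; the small sketch below uses the study-reported rates but an arbitrary starting level, and assumes (for illustration only) that the rate stays constant over the period.

# Sketch: compounding the reported per-period increases in infant serum PFAS
# over a course of breastfeeding. The starting level is arbitrary (relative
# units); the growth rates come from the studies cited above (up to ~30%/month;
# 8-11%/week for some PFAS in exclusively breastfed Swedish infants).

def compound(level0, rate_per_period, periods):
    """Relative level after `periods` at a constant per-period growth rate."""
    return level0 * (1 + rate_per_period) ** periods

baseline = 1.0
print(f"6 months at 30%/month: {compound(baseline, 0.30, 6):.1f}x baseline")
print(f"12 weeks at 8%/week:   {compound(baseline, 0.08, 12):.1f}x baseline")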

Fig. 5 Examples of changes in concentrations of dioxins (PCDDs) and furans (PCDFs) in human breast milk during lactation over 350 days postpartum for 3 participants. Data from LaKind, J., Berlin, C., Sjodin, A., Turner, W., Wang, R., Needham, L., …, Patterson, D. (2009). Do human milk concentrations of persistent organic chemicals really decline during lactation? Chemical concentrations during lactation and milk/serum partitioning. Environmental Health Perspectives 117, 1625–1631.

Fig. 6 Changes in concentrations of two PBDEs (BDE-47 and BDE-99), HCB, DDT, and two PCBs (CB-118 and CB-153) in human milk during lactation over 120 days postpartum for 3 participants (Sjödin et al., 2005).

Other Factors

Humans are exposed to persistent organic chemicals throughout their lives, and these compounds tend to accumulate in human lipids. Thus, it is believed that as age increases, so do the levels of persistent organic compounds. Several, but not all, studies have shown a relationship between increasing age of the mother and the level of some persistent organic chemicals in breast milk (LaKind et al., 2004). In general, it appears that there is a positive association between consumption of fatty foods and levels of persistent organic chemicals in breast milk (Dewailly et al., 1994). Studies exploring relationships between consumption of fish and shellfish and levels of chemicals in milk have yielded inconsistent results. However, people who consume large amounts of locally caught fish, including some indigenous populations, have been found to have higher levels of certain persistent organic chemicals in milk. Levels of persistent organic chemicals in breast milk samples can also be affected by a mother’s lactation history; it is therefore believed that first-time mothers will have higher concentrations than mothers who have previously lactated. A recent study collected 1017 human breast milk samples from Canadian women and showed that toxic equivalency concentrations (TEQ2005) of PCDD/Fs plus dioxin-like (DL) PCBs were significantly higher in milk from first-time mothers (Rawn et al., 2017). Neither body mass index nor weight loss appears to be correlated with levels of persistent organic chemicals in breast milk, and conflicting results have been published regarding associations between a mother’s cigarette smoking status or alcohol consumption and levels of persistent organic chemicals in her milk (LaKind et al., 2004; Wohlfahrt-Veje et al., 2014; Lehmann et al., 2018). Levels of UV filters such as 4-MBC (4-methylbenzylidene camphor), OCT (octocrylene), and benzophenone-3 in breast milk have been found to be directly correlated with consumer habits, in terms of the extent of use of certain cosmetics and personal care products, while no correlations were found with mother’s age, body weight, or nutrition (Schlumpf et al., 2010).

Health Effects

The association between environmental chemicals in breast milk and the potential for adverse health effects in the breast-feeding infant has been studied in epidemiologic research. While many research gaps exist, data on environmental chemicals in breast milk and potential adverse health effects in infants and children have been published (LaKind et al., 2008, 2018; Jorissen, 2007; Kacew, 1994; Landrigan et al., 2002; Massart et al., 2005; Pohl and Hibbs, 1996; Ribas-Fito et al., 2001; Schreiber, 2001). Studies evaluating the effects of environmental chemicals in breast milk on the health of the breast-feeding infant have been conducted for a limited number of chemicals, in a large number of cohorts. The number of high-quality studies, and the number of studies that have replicated findings from earlier studies, are limited. Published research indicates inconsistent associations between infant health outcomes and exposure to environmental chemicals in breast milk at “general population background levels” (LaKind et al., 2018). As with any epidemiological study, there are limitations that preclude making definitive, all-encompassing statements about the impact of environmental chemicals in breast milk on infant health. These limitations include measurements of only a limited number of chemicals, reported effects that are not statistically significant, cohort sizes that are too small to detect health effects, studies that follow infants for only a small portion of their life with later health outcomes undetermined, inability to separate effects that may arise from postnatal breast-feeding exposure from those of prenatal exposure, and lack of data on other sources of exposure to the infant, including transdermal, oral, and respiratory routes. Despite these limitations, studies to date examining numerous health end points have consistently concluded that breast-feeding is recommended despite the presence of environmental chemicals in breast milk (Lehmann et al., 2018; LaKind et al., 2008; AAP, 2012). This conclusion, however, is not applicable to certain rare events such as poisonings, certain occupational exposures, or instances involving drugs of abuse. Key infant organ systems and effects that have been the major focus of studies are briefly summarized below.

Growth

Numerous studies have assessed the growth of infants and children using various metrics and covering age ranges from newborn infants to children 18 years of age. These studies have reported inconsistent results. The chemicals assessed include polychlorinated dibenzodioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and/or PCBs, PFAS, PBDEs, and OCPs (LaKind et al., 2018; Criswell et al., 2017; Gladen et al., 2000; Grandjean et al., 2003; Ilsen et al., 1996; Jackson et al., 2010; Jacobson et al., 1990; Leijs et al., 2008; Pan et al., 2010; Patandin et al., 1998; Pluim et al., 1996; Rogan et al., 1987; Wohlfahrt-Veje et al., 2014; Du et al., 2016, 2017; Yalcin et al., 2015). Rogan et al. (1987) concluded that there were no effects of lactational exposure to PCB or DDE on infant weight up to 18 months of age. Additionally, Pan and colleagues found no association between PCB or DDE exposures and infant growth during the first year of life (Pan et al., 2010). Pluim et al. (1996) found no differences in body weight, body length, head circumference, or liver size between high and low exposure groups, defined by concentrations of PCDDs/PCDFs in milk fat, among children up to 6 months of age. Similarly, in a longitudinal study using the Copenhagen Mother Child Cohort of Growth and Reproduction, researchers concluded that there was no association between PFAS and weight gain, height gain, or body mass index (BMI) up to 18 months of age (Wohlfahrt-Veje et al., 2014). In contrast, five studies reported statistically significant results for certain characteristics of growth with exposure to chemicals in breast milk (see summary in LaKind et al., 2018). Yalcin et al. (2015) concluded that there was no association between levels of organochlorine pesticides in breast milk and anthropometric measures of 8-month-old infants; however, they found an inverse correlation between z-scores for head circumference and breast milk concentrations of β-HCH and DDT (Yalcin et al., 2015). Some studies reported no significant associations between exposures to HCB or PCB 74 and infant growth (LaKind et al., 2018; Criswell et al., 2017), although a relationship was reported between increased levels of β-HCH in breast milk and lower odds of rapid growth between the ages of 0 and 6 months when the model was adjusted for demographic and maternal characteristics.

Neurodevelopment

The association between neurodevelopment and environmental chemicals in breast milk has been widely researched. Previous studies have used parent questionnaires on child behavior, language, and developmental milestones to examine associations with PBDEs, PFAS, mercury, organochlorine pesticides, and PCDDs/PCDFs and/or PCBs in breast milk. Forns et al. (2016) used the Infant/Toddler Symptoms Checklist (ITSC) to examine behavioral problems in children at 12 and 24 months of age. No association was established between ITSC scores and breast milk concentrations of six PBDE congeners (PBDE 28, 47, 99, 100, 153, and 154) (LaKind et al., 2018; Forns et al., 2016). Children 8–12 months of age were tested with the BSID-III to assess social-emotional and adaptive behavior (LaKind et al., 2018; Chao et al., 2011). Researchers concluded that there were no associations between PBDEs in breast milk and these scales. Results from the limited data available from lactational studies of PBDE exposure are difficult to interpret and suggest the need for additional research (Chao et al., 2011; Hoffman et al., 2012). The association between mercury levels in breast milk and scores on the Parent’s Evaluation of Developmental Status (PEDS) among infants 3–12 months of age was positive (Al-Saleh et al., 2016a,b). However, the results were no longer significant once the model was adjusted for variables significantly associated with both the exposure and the outcome (LaKind et al., 2018; Al-Saleh et al., 2016a,b). Two studies with different cohorts examined neurodevelopment in children 12–24 months of age; however, these studies had issues with confounding, blinding, and incomplete or missing outcome data (Forns et al., 2016; Pan et al., 2009). The MacArthur-Bates Communicative Development Inventories (MacArthur CDI) were administered to 12-month-olds to assess the association between lactational exposures to DDT or DDE and behavior and development; Pan et al. (2009) found no association. Associations between breast milk concentrations of HCB, β-HCH, oxychlordane, DDE, and DDT at around 1 month of age and ITSC scores at 12 and 24 months of age were evaluated by Forns and colleagues (Forns et al., 2016; LaKind et al., 2018). They found an association between DDT levels in breast milk and higher ITSC scores at 12 months of age, higher scores indicating greater behavioral problems (Forns et al., 2016; LaKind et al., 2018). In other studies, standardized researcher-administered assessments were used to study the relationship between motor development and PCDDs, PCDFs, PCBs, DDE, and mercury among young children. There was no association between lactational exposures to PCBs or DDE and Bayley scores at 6 months of age in a U.S. cohort (Gladen et al., 1988; LaKind et al., 2018). Pluim et al. (1996) found no significant influence of milk-borne PCDDs/PCDFs on the neurological optimality score at 1 week and 6 months of age, based on the Prechtl examination, the mean number of abnormal reflexes, and tonus scores. However, statistically significant results were observed in a Netherlands cohort, in which higher levels of PCDDs/PCDFs/PCBs in breast milk were significantly associated with reduced neonatal neurological optimality using the Prechtl scoring for infants 10–21 days old. Higher breast milk levels of planar PCBs were also significantly associated with a higher incidence of hypotonia (Huisman et al., 1995a,b). Overall, it is difficult to draw conclusions from the evidence in these studies “between lactational exposures and early life motor development due to difference in study design and results” (LaKind et al., 2018).

Thyroid Function

Thyroid hormones are critical to the development of the human brain, both in utero and during the neonatal period. All newborns in the United States are screened for thyroid function so that, if necessary, prompt intervention can occur. In addition, maternal thyroid status is important for proper lactation. Some environmental chemicals, such as PFAS, may interfere with thyroid function or with thyroid hormone-binding proteins and thyroid hormone receptor binding (Zoeller, 2005). Pluim et al. (1994) evaluated the effect of chlorinated dioxins and furans on thyroid hormone levels in infants and concluded that there was a relationship between thyroid hormone levels and lactational exposure. Two studies found no associations between dioxin TEQs and thyroid hormone levels in older children (Ilsen et al., 1996; ten Tusscher et al., 2008). Matsuura et al. (2001) assessed the relationship between changes in thyroid hormone levels and lactational exposures to PCDDs/PCDFs/PCBs and failed to find an association. Overall, previous studies, which cover only a few chemical classes, have not shown consistent associations between environmental chemical exposure through breastfeeding and thyroid hormone levels in infants and children.

Environmental Chemicals and Lactation Duration

Both the AAP and the WHO recommend that infants be exclusively breast-fed for the first six months of life to achieve optimal growth, development, and health. In addition, breast-feeding should continue (along with nutritionally adequate and safe complementary foods) until up to two years of age or beyond. Whether the presence of environmental chemicals in breast milk has an effect on lactation duration is understudied. Researchers from the North Carolina Breast Milk and Formula Project aimed to examine the potential health effects on infants exposed to PCBs and DDT in breast milk (Rogan et al., 1987; Konkel, 2017). They concluded that children with higher levels of DDE were breastfed for shorter times, and they surmised that DDE may restrict the mother’s ability to lactate due to its estrogenic properties (Rogan et al., 1987). However, in a large study of an area of Mexico highly exposed to DDT and its metabolites (due to more recent use of DDT for malaria control), the results did not support the hypothesis that exposure to DDE shortens the length of lactation (Cupul-Uicab et al., 2008). Pan et al. (2009) also found no association with lactational exposures to DDT or DDE in 12-month-old infants. Weldon et al. (2010) examined selected OCPs and polychlorinated biphenyls to assess their association with shortened lactation duration and concluded that the estrogenic POPs are not associated with shortened lactation duration (Weldon et al., 2010). One study concluded that passive exposure to tobacco smoke during pregnancy was associated with shortened duration of breastfeeding (Rosen-Carole et al., 2017). However, Romano et al. (2016), Fei et al. (2010), and Timmermann et al. (2017) concluded that maternal PFOA and other PFAS exposures negatively affect breastfeeding duration. In multiparous women from the Danish National Birth Cohort (1996–2002), PFOA and PFOS have been shown to be associated with decreased duration of breastfeeding (Fei et al., 2010). Since then, two more studies have confirmed an association between PFAS and shorter breastfeeding duration in U.S. and Faroese cohorts of women (Timmermann et al., 2017; Romano et al., 2016). The HOME (Health Outcomes and Measures of the Environment) Study examined the effect of low-level prenatal exposures to common environmental toxicants on breastfeeding exclusivity and duration (Romano et al., 2016). The effects could not be explained by previous breastfeeding or other variables. A mechanistic underpinning for a potential relationship between PFAS and shortened duration of lactation is provided by rodent studies in which PFOA exposure caused elevated serum progesterone in peri-pubertal rodents (Zhao et al., 2010) and impaired mammary gland development in lactating rodents (White et al., 2007). However, given the limited data in humans, additional research is needed.

Breast Cancer

Several studies have explored whether being breast-fed as an infant is associated with altered risk of breast cancer later in life. These studies have consistently found that having been breast-fed does not result in increased risk of breast cancer (with a possible decreased risk for premenopausal women who had been breast-fed), nor do breast-fed infants appear to be at greater risk for childhood cancer as compared with formula-fed infants (LaKind et al., 2007). Studies have also examined whether future breast cancer risk is altered in women who have breastfed their children. Rogan and colleagues used data on environmental chemical concentrations in breast milk from women in the United States during the 1970s and 1980s and applied a risk assessment approach to estimate the cancer risk attributable to chemicals frequently found in human milk. They estimated that the increase in cancer risk associated with excess lifetime exposure to these chemicals in breast milk was 12–80 excess cancers per 100,000 infants. By comparison, the authors estimated that the risk of post-neonatal mortality associated with not breastfeeding was 256 per 100,000 infants (LaKind et al., 2018; Rogan and Gladen, 1991).
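To make this comparison concrete, the two per-100,000 estimates can be set side by side. The following minimal Python sketch uses only the figures quoted above from Rogan and Gladen (1991); it is purely illustrative and is not the authors' actual risk model.

```python
# Illustrative comparison of the per-100,000 estimates quoted above
# (Rogan and Gladen, 1991); this is not the authors' risk model.

excess_cancer_low, excess_cancer_high = 12, 80  # excess cancers per 100,000 breastfed infants
postneonatal_mortality = 256                    # deaths per 100,000 infants not breastfed

ratio_low = postneonatal_mortality / excess_cancer_high   # most conservative comparison
ratio_high = postneonatal_mortality / excess_cancer_low   # most favorable comparison

print(f"Estimated mortality risk of not breastfeeding is {ratio_low:.1f}x to "
      f"{ratio_high:.1f}x the estimated excess cancer risk of breastfeeding.")
# -> roughly 3x to 21x, which is why the benefits of breastfeeding were
#    judged to outweigh the estimated chemical cancer risk.
```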


Previous studies have indicated that breast-feeding protects against breast cancer for women who gave birth before age 25 (Enger et al., 1998). More recent studies, however, have shown that women who have breastfed are protected against breast cancer regardless of when they gave birth (Lord et al., 2008). A robust meta-analysis of more than 14,000 women published in 2015 demonstrated that breastfeeding was inversely associated with breast cancer risk (ever vs. never RR = 0.613 [95% confidence interval (CI), 0.442–0.850]), and that among women who nursed their infants, longer duration of breastfeeding conferred a lower relative risk of cancer (RR = 0.471; 95% CI, 0.368–0.602) (Zhou et al., 2015).
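For readers less familiar with how such interval estimates are reported, the sketch below shows the normal approximation on the log scale that meta-analyses of relative risks commonly rely on: it back-calculates an approximate standard error and z-score from the RR and 95% CI quoted above. This is an illustration of the standard formula, not a computation taken from Zhou et al. (2015).

```python
import math

def log_rr_stats(rr, ci_low, ci_high, z_crit=1.96):
    """Recover the approximate standard error and z-score of a relative
    risk from its reported 95% confidence interval (normal approximation
    on the log scale, as commonly used in meta-analysis)."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    z = math.log(rr) / se
    return se, z

# Ever- vs. never-breastfed estimate quoted above (Zhou et al., 2015).
se, z = log_rr_stats(0.613, 0.442, 0.850)
print(f"log-RR SE = {se:.3f}, z = {z:.2f}")  # z is well below -1.96, so the CI excludes 1
```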

Effects Associated With Pharmaceuticals

As with other environmental chemicals, it is difficult to definitively attribute an adverse reaction to a pharmaceutical detected in breast milk. In 2003, in a comprehensive review of the literature on adverse effects in the breast-feeding infant, Anderson and colleagues examined 100 case reports from 1966 to 2002 (Anderson et al., 2003). Of these, 53 adverse effects were “possibly” related to the pharmaceutical used by the mother, 47 were “probably” related, and none were “definite.” Sixty-three cases were reported in infants younger than 1 month of age, and only 4 in infants older than 6 months. In a study of amlodipine, a commonly used antihypertensive medication, concentrations of the drug in breast milk ranged from 6.5 to 19.7 ng/mL, but infant exposure was very small, with infant plasma concentrations below the limit of quantification (0.4 ng/mL) (Aoki et al., 2018). Many commonly used pharmaceuticals taken by pregnant women should be continued during pregnancy and lactation, so ensuring the safety of these drugs with regard to possible infant exposure via breast milk is very important. Published reports on the excretion of pharmaceuticals in human milk have several problems that limit application to the larger breast-feeding infant population: (i) the series are usually small (often single patients); (ii) studies are frequently done only in early lactation and usually do not include mothers on long-term medication; (iii) studies are not repeated in the same patients at different times of lactation; and (iv) long-term (throughout childhood) studies are lacking. The AAP’s Committee on Drugs provides guidance for the use of maternal medications during breastfeeding. It offers a framework that first asks whether the drug therapy is necessary. Next, it suggests using the safest drug possible if multiple options are available. Then, if there is a possibility that a drug may present a risk to the infant, “consideration should be given to measurement of blood concentrations in the nursing infant.” Finally, pharmacokinetic principles dictate that the mother can minimize drug exposure to the breast-feeding infant by taking medications immediately after she has breastfed or just before the infant is expected to have an extended period of sleep (AAP, 2001). The general consensus is that a large proportion of drugs given to the mother are safe enough to be given directly to the breast-feeding child for therapeutic reasons, in doses that far exceed the amount that appears in milk after maternal administration. It is of course essential that the breastfeeding mother consult her health care practitioner regarding the use of pharmaceuticals during lactation.
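One common way to put milk concentrations such as those in the amlodipine report into perspective is the relative infant dose (RID): the weight-adjusted infant dose received via milk expressed as a percentage of the weight-adjusted maternal dose. The sketch below uses the highest milk concentration cited above (19.7 ng/mL); the maternal dose, maternal body weight, and milk-intake values are standard illustrative assumptions, not data from Aoki et al. (2018).

```python
def relative_infant_dose(milk_conc_ng_ml, maternal_dose_mg_day,
                         maternal_wt_kg=60.0, milk_intake_ml_kg_day=150.0):
    """Relative infant dose (RID, %): weight-adjusted infant dose via milk
    divided by the weight-adjusted maternal dose. A milk intake of
    ~150 mL/kg/day is a standard assumption for exclusively breastfed infants;
    the maternal weight default is likewise an assumption."""
    infant_dose_ug_kg_day = milk_conc_ng_ml * milk_intake_ml_kg_day / 1000.0
    maternal_dose_ug_kg_day = maternal_dose_mg_day * 1000.0 / maternal_wt_kg
    return 100.0 * infant_dose_ug_kg_day / maternal_dose_ug_kg_day

# Highest amlodipine milk level reported above (19.7 ng/mL); a 5 mg/day
# maternal dose and 60 kg body weight are illustrative assumptions.
rid = relative_infant_dose(19.7, 5.0)
print(f"RID = {rid:.1f}%")  # ~3.5%, below the ~10% level often used as a screening cutoff
```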

Conclusion

The amount of data on levels of environmental chemicals in breast milk has expanded substantially since the 1950s in terms of the types of chemicals measured and the number of countries for which data exist. Persistent lipophilic chemicals remain the most commonly measured chemicals in breast milk; as a result, less information is available on other chemicals in breast milk and infant formula. Future research is necessary to thoroughly understand the many factors that influence the levels of environmental chemicals in breast milk. Additional research is also needed on later life stages, to assess the potential impacts of developmental exposure. Research on the risks and benefits of breast milk versus infant formula with respect to environmental chemicals and infant health outcomes needs to be updated and disseminated. WHO has stated that “evidence for the health advantages of breastfeeding and scientific evidence to support breastfeeding has continued to increase ...” and that “breastfeeding reduces child mortality and has health benefits that extend into adulthood” (WHO, 2007; van den Berg et al., 2017).

See also: Children’s Environmental Health: General Overview; Critical Windows of Children’s Development and Susceptibility to Environmental Toxins; Maternal and Child Health Disparities: Environmental Contribution.

References and Further Reading

Adgent, M., Hoffman, K., Goldman, B., Sjodin, A., Daniels, J., 2014. Brominated flame retardants in breast milk and behavioural and cognitive development at 36 months. Paediatric and Perinatal Epidemiology 28 (1), 48–57. Alcala, C., Phillips, L., 2017. PCB concentrations in women based on breastfeeding history: NHANES 2001–2004. Environmental Research 154, 35–41. Al-Saleh, I., Abduljabbar, M., Al-Rouqi, R., Elkhatib, R., Alshabbaheen, A., Shinwari, N., 2013. Mercury (Hg) exposure in breast-fed infants and their mothers and the evidence of oxidative stress. Biological Trace Element Research 153, 145–154. Al-Saleh, I., Elkhatib, R., Al-Rouqi, R., Abduljabbar, M., Eltabache, C., Al-Rajudi, T., Nester, M., 2016a. Alterations in biochemical markers due to mercury (Hg) exposure and its influence on infant’s neurodevelopment. International Journal of Hygiene and Environmental Health 219, 898–914.


Al-Saleh, I., Nester, M., Abduljabbar, M., Al-Rouqi, R., Eltabache, C., Al-Rajudi, T., Elkhatib, R., 2016b. Mercury (Hg) exposure and its effects on Saudi breastfed infant’s neurodevelopment. International Journal of Hygiene and Environmental Health 219, 129–141. American Academy of Pediatrics, 2016. SIDS and other sleep-related infant deaths: Updated 2016 recommendations for a safe infant sleeping environment. Pediatrics 138 (5), e20162938. https://doi.org/10.1542/peds.2016-2938. American Academy of Pediatrics (AAP), 2012. Breastfeeding and the use of human milk. Pediatrics e827–e841. https://doi.org/10.1542/peds.2011-3552. American Academy of Pediatrics Committee on Drugs, 2001. Transfer of drugs and other chemicals into human milk. Pediatrics 108, 776–789. Anderson, P., Pochop, S., Manoguerra, A., 2003. Adverse drug reactions in breastfed infants: Less than imagined. Clinical Pediatrics 42, 325–340. Aoki, H., Ito, N., Kaniwa, N., Saito, Y., Wada, Y., Nakajima, K., Ito, S., 2018. Low levels of amlodipine in breast milk and plasma. Breastfeeding Medicine 13 (9), 622–626. Berlin, C.M., 2011. The excretion of drugs and chemicals into human milk. In: Yaffe, S.J., Aranda, J.V. (Eds.), Neonatal and Pediatric Pharmacology, 3rd ed. Lippincott Williams & Wilkins, Philadelphia, pp. 210–220. Blanco, J., Mulero, M., Heredia, L., Pujol, A., Domingo, J., Sanchez, D., 2013. Perinatal exposure to BDE-99 causes learning disorders and decreases serum thyroid hormone levels and BDNF gene expression in hippocampus in rat offspring. Toxicology 308, 122–128. Bodley, V., Powers, D., 1997. Long-term treatment of a breastfeeding mother with fluconazole-resolved nipple pain caused by yeast: A case study. Journal of Human Lactation 13, 307–311. Braun, J., Smith, K., Williams, P., Calafat, A., Berry, K., Ehrlich, S., Hauser, R., 2012. Variability of urinary phthalate metabolite and bisphenol A concentrations before and during pregnancy. Environmental Health Perspectives 120, 739–745. Calafat, A., Slakman, A., Silva, M., Herbert, A., Needham, L., 2004. Automated solid phase extraction and quantitative analysis of human milk for 13 phthalate metabolites. Journal of Chromatography B 805, 49–56. Carignan, C., Cottingham, K., Jackson, B., Farzan, S., Gandolfi, A., Punshon, T., Karagas, M., 2015. Estimated exposure to arsenic in breastfed and formula-fed infants in a United States cohort. Environmental Health Perspectives 123, 500–506. Centers for Disease Control and Prevention, 2010. Breastfeeding Report Card-United States. Retrieved from Centers for Disease Control and Prevention. www.cdc.gov/breastfeeding/ data/reportcard/reportcard2010.htm. Centers for Disease Control and Prevention (CDC), 2018. Fourth National Report on Human Exposure to Environmental Chemicals. Updated Tables, March 2018. Centers for Disease Control and Prevention (CDC), Atlanta. Centers for Disease Control and Prevention; National Center for Chronic Disease Prevention and Health Promotion, 2018. Breastfeeding report card: United States 2018. Centers for Disease Control and Prevention, Atlanta. Chao, H.-R., Tsou, T.-C., Huang, H.-L., Chang-Chien, G.-P., 2011. Levels of breast milk PBDEs from southern Taiwan and their potential impact on neurodevelopment. Pediatric Research 70, 596–600. Chevrier, J., Harley, K., Bradman, A., Sjodin, A., Eskenazi, B., 2011. Prenatal exposure to polybrominated diphenyl ether flame retardants and neonatal thyroid-stimulating hormone levels in the CHAMACOS study. American Journal of Epidemiology 174, 1166–1174. 
Committee on Drugs, American Academy of Pediatrics, 2014. Off-label use of drugs in children. Pediatrics 133, 563–567. Criswell, R., Lenters, V., Mandal, S., Stigum, H., Iszatt, N., Eggesbo, M., 2017. Persistent environmental toxicants in breast milk and rapid infant growth. Annals of Nutrition & Metabolism 70, 210–216. Cupul-Uicab, L., Gladen, B., Hernandez-Avila, M., Weber, J.-P., Longnecker, M., 2008. DDE, a degradation product of DDT, and duration of lactation in a highly exposed area of Mexico. Environmental Health Perspectives 116, 179–183. Dallaire, R., Dewailly, E., Ayotte, P., Muckle, G., Laliberte, C., Bruneau, S., 2008. Effects of prenatal exposure to organochlorines on thyroid hormone status in newborns from two remote coastal regions in Québec, Canada. Environmental Research 108, 387–392. Davidson, P., Myers, G., Cox, C., Axtell, C., Shamlaye, C., Sloane-Reeves, J., Clarkson, T., 1998. Effects of prenatal and postnatal methylmercury exposure from fish consumption on neurodevelopment. Journal of the American Medical Association 280, 701–707. Davis, J., Bhutani, V., 1985. Neonatal apnea and maternal codeine use. Pediatric Research 19, 170a. Dean, L., 2012. Codeine therapy and CYP2D6 genotype. In: Pratt, V., McLeod, H., Rubinstein, W., Dean, L., Kattman, B., Malheiro, A. (Eds.), Medical genetics summaries. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK100662/. Dewailly, E., Ryan, J., Laliberte, C., Bruneau, S., Weber, J., Gingras, S., Carrier, G., 1994. Exposure of remote maritime populations to coplanar PCBs. Environmental Health Perspectives 102, 205–209. Dewailly, E., Ayotte, P., Bruneau, S., Gingras, S., Belles-Isles, M., Roy, R., 2000. Susceptibility to infections and immune status in Inuit infants exposed to organochlorines. Environmental Health Perspectives 108 (3), 205–211. Du, J., Gridneva, Z., Gay, M., Lai, C., Trengove, R., Hartmann, P., Geddes, D., 2016. Longitudinal study of pesticide residue levels in human milk from Western Australia during 12 months of lactation: Exposure assessment for infants. Scientific Reports 6, 38355. Du, J., Gridneva, Z., Gay, M., Trengove, R., Hartmann, P., Geddes, D., 2017. Pesticides in human milk of Western Australian women and their influence on infant growth outcomes: A cross-sectional study. Chemosphere 167, 247–254. Eggesbo, M., Thomsen, C., Jorgensen, J., Becher, G., Odland, J., Longnecker, M., 2011. Associations between brominated flame retardants in human milk and thyroid-stimulating hormone (TSH) in neonates. Environmental Research 111, 737–743. Enger, S., Ross, R., Paganini-Hill, A., Bernstein, L., 1998. Breastfeeding experience and breast cancer risk among postmenopausal women. Cancer Epidemiology, Biomarkers & Prevention 7, 365–369. Eskenazi, B., Chevrier, J., Rauch, S., Kogut, K., Harley, K., Johnson, C., Bradman, A., 2013. In utero and childhood polybrominated diphenyl ether (PBDE) exposures and neurodevelopment in the CHAMACOS study. Environmental Health Perspectives 121, 257–262. European Food Safety Authority (EFSA), 2007. Opinion of the scientific panel on plant protection products and their residues on a request from the commission on the risks associated with an increase of the MRL for dieldrin on courgettes. European Food Safety Authority (EFSA), pp. 1–48. European Food Safety Authority (EFSA), 2012. Update of the monitoring of levels of dioxins and PCBs in food and feed. European Food Safety Authority (EFSA), pp. 2832–2882. Fang, J., Nyberg, E., Bignert, A., Bergman, A., 2013.
Temporal trends of polychlorinated dibenzo-p-dioxins and dibenzofurans and dioxin-like polychlorinated biphenyls in mothers’ milk from Sweden, 1972–2011. Environment International 60, 224–231. FDA (U.S. Food and Drug Administration), 2014. Total diet study: Elements results summary statistics: Market baskets 2006 through 2013. FDA (U.S. Food and Drug Administration), College Park. Fei, C., McLaughlin, J., Lipworth, L., Olsen, J., 2010. Maternal concentrations of perfluorooctanesulfonate (PFOS) and perfluorooctanoate (PFOA) and duration of breastfeeding. Scandinavian Journal of Work, Environment and Health 36, 413–421. Fisher, M., MacPherson, S., Braun, J., Hauser, R., Walker, M., Feeley, M., Arbuckle, T., 2017. Paraben concentrations in maternal urine and breast milk and its association with personal care product use. Environmental Science and Technology 51, 4009–4017. Forns, J., Mandal, S., Iszatt, N., Polder, A., Thomsen, C., Lyche, J., Eggesbo, M., 2016. Novel application of statistical methods for analysis of multiple toxicants identifies DDT as a risk factor for early child behavioral problems. Environmental Research 151, 91–100. Furst, P., 2006. Dioxins, polychlorinated biphenyls and other organohalogen compounds in human milk. Levels, correlations, trends and exposure through breastfeeding. Molecular Nutrition & Food Research 50 (10), 922–933. Garcia-Esquinas, E., Perez-Gomez, B., Fernandez, M., Perez-Meixeira, A., Gil, E., de Paz, C., Go, 2011. Mercury, lead and cadmium in human milk in relation to diet, lifestyle habits and sociodemographic variables in Madrid (Spain). Chemosphere 85, 268–276.


Gascon, M., Guxens, M., Vrijheid, M., Torrent, M., Ibarluzea, J., Fano, E., Sunyer, J., 2017. The INMA (INfancia y Medio Ambiente; environment and childhood) project: More than 10 years contributing to environmental and neuropsychological research. International Journal of Hygiene and Environmental Health 220, 647–658. Gladen, B., Rogan, W., 1995. DDE and shortened duration of lactation in a northern Mexican town. American Journal of Public Health 85, 504–508. Gladen, B., Rogan, W., Hardy, P., Thullen, J., Tingelstad, J., Tully, M., 1988. Development after exposure to polychlorinated biphenyls and dichlorodiphenyl dichloroethene transplacentally and through human milk. Journal of Pediatrics 113, 991–995. Gladen, B., Ragan, N., Rogan, W., 2000. Pubertal growth and development and prenatal and lactational exposure to polychlorinated biphenyls and dichlorodiphenyl dichloroethene. The Journal of Pediatrics 136, 490–496. Glynn, A., Thuvander, A., Aune, M., Johannisson, A., Darnerud, P., Ronquist, G., Cnattingius, S., 2008. Immune cell counts and risks of respiratory infections among infants exposed pre- and postnatally to organochlorine compounds: A prospective study. Environmental Health 7, 62. Grandjean, P., Weihe, P., White, R., 1995. Milestone development in infants exposed to methylmercury from human milk. Neurotoxicology 16, 27–33. Grandjean, P., Budtz-Jorgensen, E., Steuerwald, U., Heinzow, B., Needham, L., Jorgensen, P., Weihe, P., 2003. Attenuated growth of breast-fed children exposed to increased concentrations of methylmercury and polychlorinated biphenyls. The FASEB Journal 17, 699–701. Gundacker, C., Pietschnig, B., Wittmann, K., Lischka, A., Salzer, H., Hohenauer, L., Schuster, E., 2002. Lead and mercury in breast milk. Pediatrics 110, 873–878. Guzman, M., Clementini, C., Perez-Carceles, M., Rejon, S., Cascone, A., Martellini, T., Cincinelli, A., 2016. Perfluorinated carboxylic acids in human breast milk from Spain and estimation of infant’s daily intake. Science of the Total Environment 544, 595–600. Gyllenhammar, I., Benskin, J., Sandblom, O., Berger, U., Ahrens, L., Lignell, S., Glynn, A., 2018. Perfluoroalkyl acids (PFAAs) in serum from 2–4-month-old infants: Influence of maternal serum concentration, gestational age, breast-feeding, and contaminated drinking water. Environmental Science & Technology 52, 7101–7110. Hendrickson, R., McKeown, N., 2012. Is maternal opioid use hazardous to breast-fed infants? Clinical Toxicology 50, 1–14. Hines, E., Calafat, A., Silva, M., Mendola, P., Fenton, S., 2009. Concentrations of phthalate metabolites in milk, urine, saliva, and serum of lactating North Carolina women. Environmental Health Perspectives 117, 86–92. Hines, E., Mendola, P., von Ehrenstein, O., Ye, X., Calafat, A., Fenton, S., 2015. Concentrations of environmental phenols and parabens in milk, urine and serum of lactating North Carolina women. Reproductive Toxicology 54, 120–128. Hoffman, K., Adgent, M., Goldman, B., Sjodin, A., Daniels, J., 2012. Lactational exposure to polybrominated diphenyl ethers and its relation to social and emotional development among toddlers. Environmental Health Perspectives 120 (10), 1438–1442. Högberg, J., Hanberg, A., Berglund, M., Skerfving, S., Remberger, M., Calafat, A., Håkansson, H., 2008. Phthalate diesters and their metabolites in human breast milk, blood or serum, and urine as biomarkers of exposure in vulnerable populations. Environmental Health Perspectives 116, 334–339.
Huisman, M., Koopman-Esseboom, C., Fidler, V., Hadders-Algra, M., van der Paauw, C., Tuinstra, L., Boersma, E., 1995a. Perinatal exposure to polychlorinated biphenyls and dioxins and its effect on neonatal neurological development. Early Human Development 41, 111–127. Huisman, M., Koopman-Esseboom, C., Lanting, C., van der Paauw, C., Tuinstra, L., Fidler, V., Touwen, B., 1995b. Neurological condition in 18-month-old children perinatally exposed to polychlorinated biphenyls and dioxins. Early Human Development 43, 165–176. Ilsen, A., Briet, J., Koppe, J., Pluim, H., Oosting, J., 1996. Signs of enhanced neuromotor maturation in children due to perinatal load with background levels of dioxins: Follow-up until age 2 years and 7 months. Chemosphere 33, 1317–1326. Inoue, K., Harada, K., Takenaka, K., Uehara, S., Kono, M., Shimizu, T., Koizumi, A., 2006. Levels and concentration ratios of polychlorinated biphenyls and polybrominated diphenyl ethers in serum and breast milk in Japanese. Environmental Health Perspectives 114 (8), 1179–1185. Intermountain Healthcare, 2013. A Physician’s guide to opioid use in the lactating mother. Intermountain Healthcare. Ip, S., Chung, M., Raman, G., Chew, P., Magula, N., DeVine, D., Lau, J., 2007. Breastfeeding and maternal and infant health outcomes in developed countries. AHRQ, Rockville. Ito, S., 2000. Drug therapy for Breast-feeding women. The New England Journal of Medicine 343, 118–126. Ito, S., Koren, G., Einarson, T., 1993. Maternal noncompliance with antibiotics during breastfeeding. Annals of Pharmacotherapy 27 (1), 40–42. Jackson, L., Lynch, C., Kostyniak, P., McGuinness, B., Buck Louis, G., 2010. Prenatal and postnatal exposure to polychlorinated biphenyls and child size at 24 months of age. Reproductive Toxicology 29, 25–31. Jacobson, S., Fein, G., Jacobson, J., Schwartz, P., Dowler, J., 1985. The effect of intrauterine PCB exposure on visual recognition memory. Child Development 56, 853–860. Jacobson, J., Jacobson, S., Humphrey, H., 1990. Effects of exposure to PCBs and related compounds on growth and activity in children. Neurotoxicology and Teratology 12, 319–326. Jensen, A., Slorach, S., 1991. Chemical contaminants in human milk. CRC Press, FL. Jorissen, J., 2007. Literature review. Outcomes associated with postnatal exposure to polychlorinated biphenyls (PCBs) via breast milk. Advances in Neonatal Care 7, 230–237. Kacew, S., 1994. Current issues in lactation: Advantages, environment, silicone. Biomedical and Environmental Sciences (BES) 7 (4), 307–319. Karman, A., Ericson, I., van Bavel, B., Darnerud, P., Aune, M., Glynn, A., Lindstrom, G., 2007. Exposure of perfluorinated chemicals through lactation: Levels of matched human milk and serum and a temporal trend, 1996–2004, in Sweden. Environmental Health Perspectives 115, 226–230. Karmaus, W., Asakevich, S., Indurkhya, A., Witten, J., Kruse, H., 2002. Childhood growth and exposure to dichlorodiphenyl dichloroethene and polychlorinated biphenyls. The Journal of Pediatrics 140, 33–39. Kim, S., Lee, J., Park, J., Kim, H.-J., Cho, G., Kim, G.-H., Choi, K., 2015. Concentrations of phthalate metabolites in breast milk in Korea: Estimating exposure to phthalates and potential risks among breast-fed infants. Science of The Total Environment 508, 13–19. Kingsley, S., Eliot, M., Kelsey, K., Calafat, A., Ehrlich, S., Lanphear, B., Braun, J., 2018. Variability and predictors of serum perfluoroalkyl substance concentrations during pregnancy and early childhood. Environmental Research 165, 247–257. 
Kirk, A., Martinelango, P., Tian, K., Dutta, A., Smith, E., Dasgupta, P., 2005. Perchlorate and iodide in dairy and breast milk. Environmental Science and Technology 39, 2011–2017. Konkel, L., 2017. Mother’s milk and the environment: Might chemical exposures impair lactation? Environmental Health Perspectives 125 (1), a17–a23. Koopman-Esseboom, C., Morse, D., Weisglas-Kuperus, N., Lutkeschipholt, I., Van Der Paauw, C., Tuinstra, L., Sauer, P., 1994. Effects of dioxins and polychlorinated biphenyls on thyroid hormone status of pregnant women and their infants. Pediatric Research 36, 468–473. Koren, G., Cairns, J., Chitayat, G., Leeder, S., 2006. Pharmacogenetics of morphine poisoning in a breastfed neonate of a codeine-prescribed mother. The Lancet 368 (9536), 704. Krans, E., Patrick, S., 2016. Opioid use disorder in pregnancy: Health policy and practice in the midst of an epidemic. Obstetrics and Gynecology 128, 4–10. LaKind, J., 2007. Recent global trends and physiologic origins of dioxins and furans in human milk. Journal of Exposure Science & Environmental Epidemiology 17, 510–524. LaKind, J., Berlin, C., Naiman, D., 2001. Infant exposure to chemicals in breast milk in the United States: What we need to learn from a breast milk monitoring program. Environmental Health Perspectives 109, 75–88. LaKind, J., Wilkins, A., Berlin, C., 2004. Environmental chemicals in human milk: A review of levels, infant exposures and health, and guidance for future research. Toxicology and Applied Pharmacology 198, 184–208. LaKind, J., Wilkins, A., Bates, M., 2007. Human breast biomonitoring and environmental chemicals: Use of breast tissues and fluids in breast cancer etiologic research. Journal of Exposure Science & Environmental Epidemiology 17, 525–540. LaKind, J., Berlin, C., Mattison, D., 2008. The heart of the matter on breastmilk and environmental chemicals: Essential points for healthcare providers and new parents. Breastfeeding Medicine 3 (4), 251–259. LaKind, J., Berlin, C., Sjodin, A., Turner, W., Wang, R., Needham, L., Patterson, D., 2009. Do human milk concentrations of persistent organic chemicals really decline during lactation? Chemical concentrations during lactation and milk/serum partitioning. Environmental Health Perspectives 117, 1625–1631. LaKind, J., Lehmann, G., Davis, M., Hines, E., Marchitti, S., Alcala, C., Lorber, M., 2018. Infant dietary exposures to environmental chemicals and infant/child health: A critical assessment of the literature. Environmental Health Perspectives 126 (9), 96002.


Landrigan, P., Sonawane, B., Mattison, D., McCally, M., Garg, A., 2002. Chemical contaminants in breast milk and their impacts on children’s health: An overview. Environmental Health Perspectives 110, A313–A315. Lehmann, G., LaKind, J., Davis, M., Hines, E., Marchitti, S., Alcala, C., Lorber, M., 2018. Environmental chemicals in breast milk and formula: Exposure and risk assessment implications. Environmental Health Perspectives 126, 96001. Leijs, M., Koppe, J., Olie, K., van Aalderen, W., de Voogt, P., ten Tusscher, G., 2008. Delayed initiation of breast development in girls with higher prenatal dioxin exposure; a longitudinal cohort study. Chemosphere 73, 999–1004. Leijs, M., Koppe, J., Olie, K., van Aalderen, W., de Voogt, P., ten Tusscher, G., 2009. Effects of dioxins, PCBs, and PBDEs on immunology and hematology in adolescents. Environmental Science and Technology 43, 7946–7951. Levine, B., Moore, K., Aronica-Pollak, P., Fowler, D., 2004. Oxycodone intoxication in an infant: Accidental or intentional exposure? Journal of Forensic Sciences 49, 1358–1360. Liao, C., Kannan, K., 2013. Concentrations and profiles of bisphenol A and other bisphenol analogues in foodstuffs from the United States and their implications for human exposure. Journal of Agricultural and Food Chemistry 61, 4655–4662. Liu, K.-S., Hao, J.-H., Xu, Y.-Q., Gu, X.-Q., Shi, J., Dai, C.-F., Shen, R., 2013. Breast milk lead and cadmium levels in suburban areas of Nanjing, China. Chinese Medical Sciences Journal 28, 7–15. Lord, S., Bernstein, L., Johnson, K., Malone, K., McDonald, J., Marchbanks, P., Ursin, G., 2008. Breast cancer risk and hormone receptor status in older women by parity, age of first birth, and breastfeeding: A case-control study. Cancer Epidemiology, Biomarkers and Prevention 17, 1723–1730. Madadi, P., Ross, C., Hayden, M., Carleton, B., Gaedigk, A., Leeder, J., Koren, G., 2008a. Pharmacogenetics of neonatal opioid toxicity following maternal use of codeine during breastfeeding: A case-control study. Clinical Pharmacology & Therapeutics 85, 31–35. Madadi, P., Shirazi, F., Walter, F., Koren, G., 2008b. Establishing causality of CNS depression in breastfed infants following maternal codeine use. Paediatric Drugs 10, 399–404. Massart, F., Harrell, J., Federico, G., Saggese, G., 2005. Human breast milk and xenoestrogen exposure: A possible impact on human health. Journal of Perinatology 25, 282–288. Matsuura, N., Uchiyama, T., Tada, H., Nakamura, Y., Kondo, N., Morita, M., Fukushi, M., 2001. Effects of dioxins and polychlorinated biphenyls (PCBs) on thyroid function in infants born in Japan: The second report from research on environmental health. Chemosphere 45, 1167–1171. Meironyte, D., Bergman, A., Noren, K., 1998. Analysis of polybrominated diphenyl ethers in human milk. Organohalogen Compounds 35, 387–390. Meironyte, D., Noren, K., Bergman, A., 1999. Analysis of polybrominated diphenyl ethers in Swedish human milk. A time related trend study 1972–1997. Journal of Toxicology and Environmental Health, Part A 58 (6), 329–341. Melzer, D., Rice, N., Depledge, M., Henley, W., Galloway, T., 2010. Association between serum perfluorooctanoic acid (PFOA) and thyroid disease in the U.S. National Health and Nutrition Examination Survey. Environmental Health Perspectives 118, 686–692. Meyer, D., Tobias, J., 2005. Adverse effects following the inadvertent administration of opioids to infants and children. Clinical Pediatrics 44, 499–503. Meyer, B., Ni, A., Hu, B., Shi, L., 2007.
Antimicrobial preservative use in parenteral products: Past and present. Journal of Pharmaceutical Sciences 96, 3155–3167. Mogensen, U., Grandjean, P., Nielsen, F., Weihe, P., Budtz-Jorgensen, E., 2015. Breastfeeding as an exposure pathway for perfluorinated alkylates. Environmental Science & Technology 49, 10466–10473. Muller, M., Polder, A., Brynildsrud, O., Karimi, M., Lie, E., Manyilizu, W., Lyche, J., 2017. Organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in human breast milk and associated health risks to nursing infants in northern Tanzania. Environmental Research 154, 425–434. Nagayama, J., Tsuji, H., Iida, T., Hirakawa, H., Matsueda, T., Okamura, K., Watanabe, T., 1998. Postnatal exposure to chlorinated dioxins and related chemicals on lymphocyte subsets in Japanese breast-fed infants. Chemosphere 37, 1781–1787. Nagayama, J., Tsuji, H., Iida, T., Nakagawa, R., Matsueda, T., Hirakawa, H., Watanabe, T., 2007. Immunologic effects of perinatal exposure to dioxins, PCBs and organochlorine pesticides in Japanese infants. Chemosphere 67, S393–S398. Nakamura, T., Nakai, K., Matsumura, T., Suzuki, S., Saito, Y., Satoh, H., 2008. Determination of dioxins and polychlorinated biphenyls in breast milk, maternal blood and cord blood from residents of Tohoku, Japan. Science of the Total Environment 394, 39–51. Naumburg, E., Meny, R., 1988. Breast milk opioids and neonatal apnea. American Journal of Diseases of Children 142 (1), 11–12. Noren, K., Lunden, A., Pettersson, E., Bergman, A., 1996. Methylsulfonyl metabolites of PCBs and DDE in human milk in Sweden, 1972–1992. Environmental Health Perspectives 104, 766–772. Pan, I., Daniels, J., Goldman, B., Herring, A., Siega-Riz, A., Rogan, W., 2009. Lactational exposure to polychlorinated biphenyls, dichlorodiphenyltrichloroethane, and dichlorodiphenyldichloroethylene and infant neurodevelopment: An analysis of the pregnancy, infection, and nutrition babies study. Environmental Health Perspectives 117 (3), 488–494. Pan, I., Daniels, J., Herring, A., Rogan, W., Siega-Riz, A., Goldman, B., Sjodin, A., 2010. Lactational exposure to polychlorinated biphenyls, dichlorodiphenyltrichloroethane, and dichlorodiphenyldichloroethylene and infant growth: An analysis of the pregnancy, infection, and nutrition babies study. Paediatric and Perinatal Epidemiology 24 (3), 262–271. Patandin, S., Koopman-Esseboom, C., de Ridder, M., Weisglas-Kuperus, N., Sauer, P., 1998. Effects of environmental exposure to polychlorinated biphenyls and dioxins on birth size and growth in Dutch children. Pediatric Research 44, 538–545. Patandin, S., Lanting, C., Mulder, P., Boersma, E., Sauer, P., Weisglas-Kuperus, N., 1999. Effects of environmental exposure to polychlorinated biphenyls and dioxins on cognitive abilities in Dutch children at 42 months of age. The Journal of Pediatrics 134, 33–41. Pluim, H., Koppe, J., Olie, K., van der Slikke, J., Slot, P., van Boxtel, C., 1994. Clinical laboratory manifestations of exposure to background levels of dioxins in the perinatal period. Acta Paediatrica 83 (6), 583–587. Pluim, H., van der Goot, M., Olie, K., van der Slikke, J., Koppe, J., 1996. Missing effects of background dioxin exposure on development of breast-fed infants during the first half year of life. Chemosphere 33, 1307–1315. Pohl, H., Hibbs, B., 1996. Breast-feeding exposure of infants to environmental contaminants: A public health risk assessment viewpoint: Chlorinated dibenzodioxins and chlorinated dibenzofurans. Toxicology and Industrial Health 12, 593–611.
Rawn, D., Sadler, A., Cassey, V., Breton, F., Sun, W., Arbuckle, T., Fraser, W., 2017. Dioxins/furans and PCBs in Canadian human milk: 2008–2011. Science of the Total Environment 595, 269–278. Reed, C., 1908. A study of the conditions that require the removal of the child from the breast. Surgery, Gynecology & Obstetrics 6, 514–526. Ribas-Fito, N., Sala, M., Kogevinas, M., Sunyer, J., 2001. Polychlorinated biphenyls (PCBs) and neurological development in children: A systematic review. Journal of Epidemiology & Community Health 55, 537–546. Ribas-Fito, N., Cardo, E., Sala, M., de Muga, M., Mazon, C., Verdu, A., Sunyer, J., 2003. Breastfeeding, exposure to organochlorine compounds, and neurodevelopment in infants. Pediatrics 111 (5 Pt 1), e580–e585. Rodas-Ortíz, J., Ceja-Moreno, V., González-Navarrete, R., Alvarado-Mejía, J., Rodríguez-Hernández, M., Gold-Bouchot, G., 2008. Organochlorine pesticides and polychlorinated biphenyls levels in human milk from Chelem, Yucatán, México. Bulletin of Environmental Contamination and Toxicology 80, 255–259. Rogan, W., Gladen, B., 1991. PCBs, DDE, and child development at 18 and 24 months. Annals of Epidemiology 1, 407–413. Rogan, W., Gladen, B., McKinney, J., Carreras, N., Hardy, P., Thullen, J., Tully, M., 1987. Polychlorinated biphenyls (PCBs) and dichlorodiphenyl dichloroethene (DDE) in human milk: Effects on growth, morbidity, and duration of lactation. American Journal of Public Health 77, 1294–1297. Romano, M., Xu, X., Calafat, A., Yolton, K., Chen, A., Webster, G., Braun, J., 2016. Maternal serum perfluoroalkyl substances during pregnancy and duration of breastfeeding. Environmental Research 149, 239–246. Roosens, L., D’Hollander, W., Bervoets, L., Reynders, H., Van Campenhout, K., Cornelis, C., Covaci, A., 2010. Brominated flame retardants and perfluorinated chemicals, two groups of persistent contaminants in Belgian human blood and milk. Environmental Pollution 158, 2546–2552. Rosen-Carole, C., Auinger, P., Howard, C., Brownell, E., Lanphear, B., 2017. Low-level prenatal toxin exposures and breastfeeding duration: A prospective cohort study. Maternal and Child Health Journal 21, 2245–2255.


Ryan, J., Rawn, D., 2014. Polychlorinated dioxins, furans (PCDD/Fs), and polychlorinated biphenyls (PCBs) and their trends in Canadian human milk from 1992 to 2005. Chemosphere 102, 78–86. Sachs, H., Committee On Drugs, 2013. The transfer of drugs and therapeutics into human breast milk: An update on selected topics. Pediatrics 132, e796–e809. Salmani, M., Rezaie, Z., Mozaffari-Khosravi, H., Ehrampoush, M., 2018. Arsenic exposure to breast-fed infants: Contaminated breastfeeding in the first month of birth. Environmental Science and Pollution Research 25 (7), 6680–6684. Sasamoto, T., Horii, S., Ibe, A., Takada, N., Shirota, K., 2006. Concentration changes of PCDDs, PCDFs, and dioxin-like PCBs in human breast milk samples as shown by a follow-up survey. Chemosphere 64, 642–649. Schlumpf, M., Kypke, K., Wittassek, M., Angerer, J., Mascher, H., Mascher, D., Lichtensteiger, W., 2010. Exposure patterns of UV filters, fragrances, parabens, phthalates, organochlor pesticides, PBDEs, and PCBs in human milk: Correlation of UV filters with use of cosmetics. Chemosphere 81, 1171–1183. Schreiber, J., 2001. Parents worried about breast milk contamination: What is best for baby? Pediatric Clinics of North America 48, 1113–1127. Shy, C.-G., Huang, H.-L., Chao, H.-R., Chang-Chien, G.-P., 2012. Cord blood levels of thyroid hormones and IGF-1 weakly correlate with breast milk levels of PBDEs in Taiwan. International Journal of Hygiene and Environmental Health 215, 345–351. Sjödin, A., LaKind, J., Patterson Jr., D., Needham, L., Wang, R., Paul, I., Berlin, C., 2005. Current concentrations and changes in concentrations of PBDEs, persistent pesticides, and PCBs in human milk. Organohalogen Compounds 73, 1745–1748. Smialek, J., Monforte, J., Aronow, R., Spitz, W., 1977. Methadone deaths in children. A continuing problem. JAMA 238, 2516–2517. Smith, J., 1982. Codeine-induced bradycardia in a breast-fed infant. Clinical Research 30, 259a. State of California, 2013. Technical bulletin 117–2013: Requirements, test procedure and apparatus for testing the smolder resistance of materials used in upholstered furniture. Department of Consumer Affairs, Sacramento, CA. ten Tusscher, G., Steerenberg, P., van Loveren, H., Vos, J., von dem Borne, A., Westra, M., Koppe, J., 2003. Persistent hematologic and immunologic disturbances in 8-year-old Dutch children associated with perinatal dioxin exposure. Environmental Health Perspectives 111, 1519–1523. ten Tusscher, G., Guchelaar, H., Koch, J., Ilsen, A., Vulsma, T., Westra, M., Koppe, J., 2008. Perinatal dioxin exposure, cytochrome P-450 activity, liver functions and thyroid hormones at follow-up after 7–12 years. Chemosphere 70, 1865–1872. The American College of Obstetricians and Gynecologists, 2017. Opioid use and opioid use disorder in pregnancy. The American College of Obstetricians and Gynecologists, Washington. Timmermann, C., Budtz-Jorgensen, E., Petersen, M., Weihe, P., Steuerwald, U., Nielsen, F., Grandjean, P., 2017. Shorter duration of breastfeeding at elevated exposures to perfluoroalkyl substances. Reproductive Toxicology 68, 164–170. U.S. Food & Drug Administration, 2018. U.S. Food & Drug Administration. Retrieved from Pregnancy and Lactation Labeling Final Rule: https://www.fda.gov/biologicsbloodvaccines/ guidancecomplianceregulatoryinformation/actsrulesregulations/ucm445102.htm. UNEP (United Nations Environment Programme), 2012. UNEP-coordinated survey of mothers’ milk for persistent organic pollutants. In: Guidelines for organization, sampling and analysis. 
UNEP (United Nations Environment Programme). Ursinyova, M., Masanova, V., 2005. Cadmium, lead and mercury in human milk from Slovakia. Food Additives and Contaminants 22, 579–589. van den Berg, M., Kypke, K., Kotz, A., Tritscher, A., Lee, S., Magulova, K., Malisch, R., 2017. WHO/UNEP global surveys of PCDDs, PCDFs, PCBs and DDTs in human milk and benefit–risk evaluation of breastfeeding. Archives of Toxicology 91, 83–96. Victora, C., Bahl, R., Barros, A., Franca, G., Horton, S., Krasevec, J., Rollins, N., 2016. Breastfeeding in the 21st century: Epidemiology, mechanisms, and lifelong effect. The Lancet 387, 475–490. von Ehrenstein, O., Fenton, S., Kato, K., Kuklenyik, Z., Calafat, A., Hines, E., 2009. Polyfluoroalkyl chemicals in the serum and milk of breastfeeding women. Reproductive Toxicology 27, 239–245. Vukavic, T., Miloradov, V., Mihajlovic, I., Ristivojevic, A., 2013. Human milk POPs and neonatal risk trend from 1982 to 2009 in the same geographic region in Serbia. Environment International 54, 45–49. Walkowiak, J., Weiner, J., Fastabend, A., Heinzow, B., Kramer, U., Schmidt, E., Winneke, G., 2001. Environmental exposure to polychlorinated biphenyls and quality of the home environment: Effects on psychodevelopment in early childhood. The Lancet 358, 1602–1607. Weisglas-Kuperus, N., Sas, T., Koopman-Esseboom, C., Van Der Zwan, C., De Ridder, M., Beishuizen, A., Sauer, P., 1995. Immunologic effects of background prenatal and postnatal exposure to dioxins and polychlorinated biphenyls in Dutch infants. Pediatric Research 38, 404–410. Weisglas-Kuperus, N., Patandin, S., Berbers, G., Sas, T., Mulder, P., Sauer, P., Hooijkaas, H., 2000. Immunologic effects of background exposure to polychlorinated biphenyls and dioxins in Dutch preschool children. Environmental Health Perspectives 108 (12), 1203–1207. Weldon, R., Webster, M., Harley, K., Bradman, A., Fenster, L., Davis, M., Eskenazi, B., 2010. Serum persistent organic pollutants and duration of lactation among Mexican-American women. Journal of Environmental and Public Health 2010, 861757. West, P., McKeown, N., Hendrickson, R., 2009. Methadone overdose in a breast-feeding toddler. Clinical Toxicology 47, 721. White, S., Calafat, A., Kuklenyik, Z., Villanueva, L., Zehr, R., Helfant, L., Fenton, S., 2007. Gestational PFOA exposure of mice is associated with altered mammary gland development in dams and female offspring. Toxicological Sciences 96, 133–144. Wilhelm, M., Ewers, U., Wittsiepe, J., Furst, P., Holzer, J., Eberwein, G., Ranft, U., 2007. Human biomonitoring studies in North Rhine-Westphalia, Germany. International Journal of Hygiene and Environmental Health 210, 307–318. Wilhelm, M., Wittsiepe, J., Lemm, F., Ranft, U., Kramer, U., Furst, P., Winneke, G., 2008. The Duisburg birth cohort study: Influence of the prenatal exposure to PCDD/Fs and dioxin-like PCBs on thyroid hormone status in newborns and neurodevelopment of infants until the age of 24 months. Mutation Research, Reviews in Mutation Research 659, 83–92. Wohlfahrt-Veje, C., Audouze, K., Brunak, S., Antignac, J., le Bizec, B., Juul, A., Main, K., 2014. Polychlorinated dibenzo-p-dioxins, furans, and biphenyls (PCDDs/PCDFs and PCBs) in breast milk and early childhood growth and IGF1. Reproduction 147, 391–399. World Health Organization (WHO), 2007. Fourth WHO-coordinated survey of human milk for persistent organic pollutants in cooperation with UNEP. In: Guidelines for developing a national protocol. WHO, Geneva. Yalcin, S., Orun, E., Yalcin, S., Aykut, O., 2015.
Organochlorine pesticide residues in breast milk and maternal psychopathologies and infant growth from suburban area of Ankara, Turkey. International Journal of Environmental Health Research 25, 364–372. Ye, X., Bishop, A., Needham, L., Calafat, A., 2008. Automated on-line column-switching HPLC-MS/MS method with peak focusing for measuring parabens, triclosan, and other environmental phenols in human milk. Analytica Chimica Acta 622, 150–156. Zhao, Y., Tan, Y., Haslam, S., Yang, C., 2010. Perfluorooctanoic acid effects on steroid hormone and growth factor levels mediate stimulation of peripubertal mammary gland development in C57Bl/6 mice. Toxicological Sciences 115, 214–224. Zhou, Y., Chen, J., Li, Q., Huang, W., Lan, H., Jiang, H., 2015. Association between breastfeeding and breast cancer risk: Evidence from a meta-analysis. Breastfeeding Medicine 10, 175–182. Zhu, J., Phillips, S., Feng, Y.-L., Yang, X., 2006. Phthalate esters in human milk: Concentration variations over a 6-month postpartum time. Environmental Science & Technology 40, 5276–5281. Zoeller, R., 2005. Environmental chemicals as thyroid hormone analogues: New studies indicate that thyroid hormone receptors are targets of industrial chemicals? Molecular and Cellular Endocrinology 242, 10–15.

Environmental Conditions in the Estuarine Coast of Montevideo (Uruguay): Historical Aspects and Present Status: An Update☆
Pablo Muniz and Natalia Venturini, Instituto de Ecología y Ciencias Ambientales, Montevideo, Uruguay
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
AMBI AZTI’s Marine Biotic Index
BOD Biochemical oxygen demand
Eh Redox potential
PAHs Polycyclic aromatic hydrocarbons
PEL Probable effect level
TEL Threshold effect level

Introduction

Globalization is the process, driven by economic, technological, sociocultural, and political forces, that tends to bring the societies of the world together into a single unit functioning as a whole. Developing countries in South America have suffered the effects of globalization processes since the discovery of ‘the new continent’ by Christopher Columbus in 1492. Pre-Columbian cultures had developed sustainable social and environmental models that maintained the main ecosystems without deep changes for several centuries. During the Portuguese and Spanish colonization period, however, wider commercial networks were established, promoting indiscriminate exploitation of mineral resources, deforestation, intensive pasture, and the introduction of exotic plant cultures and monocultures, besides influencing cultural aspects of native populations. At the beginning of the nineteenth century, most of the colonies declared their independence but maintained the old social structure and means of production by becoming suppliers of raw materials and food products, first for the European countries and thereafter for the United States. Throughout this time, the exploitation of natural resources increased without any concern for environmental risks, principally in areas with high population densities, and in the twentieth century the construction of railways and the expansion of the main harbors made this worse. Over the past 120 years, urban growth has been high worldwide as a consequence of people migrating from the countryside to the city for the job opportunities, entertainment, and better educational and health services available there. Cities are one of the utmost achievements of human civilization because they are social, cultural, communicational, and commercial centers, but at the same time they play an important role in environmental degradation, with effects that are perceptible beyond their physical limits. Among the main problems associated with urbanization are the energy demand for household and industrial activities and transport, the accumulation of waste, the disposal of untreated or partially treated sewage, and the production of several kinds of liquid, solid, and gaseous residues. Cities can therefore be seen as complex artificial organisms that continuously interact and exchange materials, energy, and information with their surrounding environment. In 1995, 70% of the Latin American population lived in urban regions, and this proportion continues to increase; moreover, it is expected that by the year 2025 the number of people living in cities worldwide will be double the number today. This, together with the fact that approximately 40% of the cities with > 500,000 inhabitants are situated in coastal areas subject to anthropogenic pressure, makes the evaluation and mitigation of coastal environmental impacts a major challenge. Human interaction with the ecosystem includes consumptive and nonconsumptive activities such as commercial fisheries, tourism, biodiversity conservation, research, and education. Industrial, agricultural, and urban wastes produce pollution and eutrophication problems that affect species structure and ecosystem functioning, as well as habitat alterations through land reclamation and dredging. One of the most severe impacts on coastal environments is the external supply of organic matter to the system.
Major effects of the input of organic matter and pollutants into coastal ecosystems are reflected not only in water and sediment quality but also in the equilibrium of biological communities. This, in turn, can generate public health problems through the direct use of the aquatic system for recreation or the extraction and consumption of its natural resources.

☆ Change History: October 2017. Authors Muniz and Venturini were involved in the update of the chapter. This is an update of P. Muniz, N. Venturini, Uruguay: Environmental Conditions in the Coast of Montevideo: Historical Aspects, Present Status and Perspectives of Habitat Degradation and Uses. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 590–601.


Ecosystem Health and Estuaries

The resilience of different coastal ecosystems to natural or human-induced perturbations depends on their structural complexity, the trophic relationships within the local food web, the connectivity between habitats and larger ecological units, and the ecological and biological strategies of the species they contain. The pathway by which an ecosystem attains structural maturity and climax is long and slow, whereas the reverse is faster. Natural perturbations play a role in maintaining spatiotemporal heterogeneity in ecosystems, preventing them from reaching a steady state or pushing them backward in the ecological succession. Estuaries are environments where water masses with contrasting physical and chemical properties meet, promoting the establishment of strong horizontal and vertical gradients. Hydrographic regimes vary greatly among estuaries, and also temporally within a particular one; populations and species associated with these relatively complex environments must therefore develop physiological adaptations to deal with this high natural variability. Moreover, estuaries are zones of transition between the marine and freshwater environments that perform essential ecological functions, including nutrient degradation and regeneration. They also control the fluxes of nutrients, water, particles, and organisms from and to the continental margins, rivers, and oceans. The ecological and economic importance of estuaries, and of coastal zones in general, is now recognized worldwide: they are highly productive environments that provide many benefits to society. Nevertheless, evaluating and managing changes in estuarine areas is a complex task that must take their natural variability into account. The environmental condition of an ecosystem must be evaluated using biological and ecological indicators, which requires good knowledge and an integrated vision of the mechanisms and processes responsible for its functioning, development, and deterioration. Ecosystem health is defined at the ecosystem level; accordingly, it encompasses the whole environment, both the living and nonliving components of a landscape, with humans as a key element in management. Ecosystem health is therefore an interdisciplinary theme that should include ecological, environmental management, public health, socioeconomic, ethical, and policy aspects. Awareness of the consequences of human-induced impacts on marine ecosystems has increased in recent years, and with it the need to apply effective measures to prevent or halt them. In this sense, research to identify and attenuate these impacts is fundamental, aiming to prevent socioeconomic damage and health risks.

The Montevideo Coastal Zone Within This Context

The Montevideo coastal zone, between 34°50′–34°56′S and 56°05′–56°25′W, is bathed by the waters of the Río de la Plata Estuary (Fig. 1). The Río de la Plata basin is the second largest in South America after the Amazon basin. It drains nearly 20% of the continent’s surface, with an area of approximately 3.1 × 10⁶ km² and an extension of 320 km, discharging approximately 23,000 m³ s⁻¹ of freshwater onto the western South Atlantic shelf. It is a coastal-plain tidal river with a semienclosed shelf sea at the mouth. It is subject to high micro- and mesoscale variability, depending on the freshwater input, tidal oscillations, wind patterns, and episodic events such as ‘El Niño’ and ‘La Niña.’ The Río de la Plata Estuary is a highly productive and variable environment characterized by a well-developed turbidity front and, most of the time, by strong vertical salinity stratification. Strong and unpredictable wind events can generate alternating pulses of stratified and partially mixed conditions within a few hours. This system is one of the most important estuarine environments on the continent and sustains valuable artisanal and coastal fisheries. Owing to their morphology and nutrient availability, the coastal areas of the inner and lower Río de la Plata constitute favorable breeding habitats for the croaker Micropogonias furnieri and other species of economic importance, such as king weakfish and menhaden.

Fig. 1 Geographic position of Uruguay in South America.


Therefore, this has become the second largest fishery resource of Uruguay, producing around 23,000 tons per year. The outer Río de la Plata and the neighboring coastal shelf are characterized by the presence of sandy sediments, whereas silty clay and silt cover the upper and middle Río de la Plata basin. Montevideo Bay, with an area of approximately 10 km² and a mean depth of 5 m, is part of this estuarine system. The predominant winds from the NE and W–SW drive a mainly clockwise water circulation within the bay, and tidal variations have an amplitude of 40 cm. Modern sediments are mostly composed of silt and clay fractions, the principal sediment sources being related to the proximity and eastward–westward fluctuations of the turbidity front, urban inputs, and other freshwater inputs. Montevideo Harbor, the capital and principal port of Uruguay, is located on this coastal area, which results in the input of a considerable load of urban and industrial wastes. Most tanneries and slaughterhouses, as well as chemical, dairy, wool-washing, and chlorine-alkali industries, are established in this area. Their inputs are carried mainly by three streams that flow into Montevideo Bay (Pantanoso, Miguelete, and Seco) and by the main sewage pipe of Uruguay, located eastward of Montevideo in the Punta Carretas region (Fig. 2). Moreover, this water body is intensively used as a transport route for the economic activities of the harbor. This has negative consequences for the ecosystem, such as chronic and accidental oil spills, sewage release, the input of toxic substances, dredging, and habitat destruction. Contamination has been recorded in the water and sediments of the Montevideo area, mostly related to enrichment in organic matter and trace metals, petroleum hydrocarbons, and sewage inputs. Pollutants may be transported through the food web and ultimately affect the health of human consumers. Although it is contrary to the consumer’s interests, evidence on metal bioaccumulation suggests that it is preferable to avoid the consumption of large (i.e., old) specimens.


Fig. 2 Location of (A) Montevideo Bay with the 10 stations (A to J) and (B) Montevideo Harbor with the 8 stations (marked A* to H*). Arrows indicate the principal sewage pipes and storm drains that discharge into the harbor basin. Photographs are general views of Montevideo Bay, the petroleum refinery near Pantanoso Stream, and the coastal zone (viewed from land) of the innermost region of the bay.


In Uruguay, these aspects have received little attention, and published data on these topics are scarce. However, recent works have reported high levels of lead in children’s blood from Montevideo, as well as an increase in the intensity and frequency of diverse environmental alerts (toxic algal blooms, oil spills, massive fish mortalities, etc.), which in turn have generated public interest.

Some Historical Aspects

The history of land occupation, socioeconomic development, and ecological modification in the Montevideo coastal zone began in 1724 with the foundation of Montevideo City by the Spanish colonizers. Modification of the original landscape started in Montevideo Bay, an area attractive first as a natural refuge for ships and later as a suitable place for the construction of a harbor of great regional importance. From 1851 to 1875, an intense demographic expansion occurred with little landscape transformation. Between 1876 and 1930, however, most environmental alterations, such as land reclamation in Montevideo Bay, took place, associated with urban and industrial development. In 1901, the construction of the modern Montevideo Harbor began, ending 8 years later. The main consequences of this construction were shifts in circulation patterns, changes in the hydrodynamic regime, and an increase in sedimentation rates within the bay. Between 1917 and 1930, industrial activity increased on the coast with the installation of meat-processing plants, textile industries, and tanneries, without any regulation. By 1930 huge transformations had taken place, and the subsequent economic stagnation progressively led to spatial segregation, with the establishment of irregular urban areas in the periphery of Montevideo City, the demolition of several cultural works, and landscape and environmental degradation. Heavy metal analyses have shown that metal concentrations have increased since the foundation of Montevideo in step with industrial development, and that metal concentrations and related indices (pollution index, metal enrichment, etc.) represent a reliable record of the different economic and environmental policies that have influenced industrial activity over time. More recent studies of the isotopic composition of sediments suggest a stronger influence of marine organic matter in historic sediments, whereas from approximately 1940 to the present the Montevideo coastal zone appears to have been more influenced by estuarine organic matter. This isotopic trend could be explained by a significant contribution of particulate organic matter transported by the effluents that the system receives. Montevideo City currently suffers from the same environmental problems as other capitals and big cities around the world, the present environmental status of the adjacent coastal zone being the result of at least 150 years of anthropogenic alteration.

Current Environmental Status of Montevideo Bay and the Adjacent Coastal Area: Present Knowledge About the State of This Aquatic System

Until now, knowledge about anthropogenic perturbation, local pollution levels, and their effects has been limited, especially in relation to the biota. The first studies concerning heavy metal and hydrocarbon pollution in the Montevideo coastal zone were performed in the early 1990s. They found that the sediments near the mouth of the Pantanoso Stream were severely contaminated by chromium derived from leather factories. Soon after, in the late 1990s, several studies were carried out in the Montevideo coastal zone with the main objective of evaluating the degree of contamination and its effects on this ecosystem (Fig. 2). These studies have sought to improve the understanding of natural and anthropogenic perturbations, and of the structure of and interactions between the abiotic and biotic components of this system, in order to create a scientific–technical basis for the correct implementation of an environmental monitoring plan for the Montevideo coastal zone.

Water Column

The shallow coastal waters of Montevideo are directly influenced by wind direction and speed, as well as by variations in the discharge of the Uruguay and Paraná rivers. It has been established that short-term variability in physical and chemical water properties can be recognized by evident changes in water color and turbidity, which take place within a few hours of changes in meteorological and hydrodynamic conditions, indicating water-type mixing. However, hydrochemical as well as biological variables reflect the very high level of perturbation of Montevideo Harbor and the inner portion of Montevideo Bay. Low oxygen saturation, or its absence, in summer bottom waters is conspicuous; several hypoxia events have been recorded throughout the year, but especially during summer, even in surface waters. Part of the observed oxygen deficit in the study area can be ascribed to the degradation of organic matter. Biochemical oxygen demand (BOD) levels are high in the inner portion of Montevideo Bay and in the harbor area, and generally exceed the maximum values permitted by environmental laws. In addition, a doubling of BOD levels has frequently been recorded after storms, suggesting that storm drains discharge organically enriched effluents into the harbor basin. Elevated levels of chlorophyll-a in water samples during summer are a direct consequence of the summer phytoplankton and cyanobacteria blooms plus the over-enrichment of the system by nutrients. It has been observed that reduced light intensity and water temperature become less favorable for algal growth during the cold season; as a consequence, phytoplankton biomass diminishes in the study area.


The zooplankton community in the Montevideo harbor area consists of only five species: the copepods Acartia tonsa, Eurytemora affinis, and Notodiaptomus incompositus; the cladoceran Moina micrura; and larvae of the euryhaline polychaete Heteromastus filiformis, representing freshwater and estuarine ecosystems. Maximum total abundance, observed at station D (harbor access), is merely 17 individuals per cubic meter. Both the diversity and the abundance of zooplankton are extremely low in the area, and larvae of the polychaete H. filiformis dominate the samples. The copepod N. incompositus (Calanoidae) and one as yet undetermined species of Harpacticoidae, both typical components of freshwater zooplankton, have been recorded in the area for the first time. Conversely, in the bay and its adjacent coastal area, the zooplankton community is richer than in the harbor, reaching 26 species, with A. tonsa the dominant species. Apart from the 10 copepod species, the zooplankton community is composed of one chaetognath, five species of cladocerans, one mysid, and one ctenophore. All of them are considered frequent inhabitants of this and other estuarine coastal areas. One important feature observed in the dominant species A. tonsa is the presence of ectobionts, which are more abundant on copepods recorded in the inner part of Montevideo Bay (Fig. 2). The identified ectobiont is a sessile peritrich ciliate, Zoothamnium elegans, which can affect the feeding, swimming, and reproductive behavior of the infected species; the highest percentage of ectobionts on A. tonsa individuals occurs in summer. Fig. 3 shows the zonation analysis based on the hydrology and planktonic communities of the Montevideo Bay area. Three zones of the bay can be differentiated according to water quality (oxygen demand, BOD, temperature, salinity, nutrients), and three according to zooplankton density. As mentioned earlier, several hypoxia events occur throughout the year in Montevideo Bay; however, these events are more frequent during the summer period, when the inner part of the bay reaches values close to anoxia, even at the surface of the water column. Some studies have suggested that the oxygen deficiency can result from the high nutrient and chlorophyll-a concentrations recorded during the austral summer, and that Montevideo Bay, including the harbor area, shows signs of eutrophication.

Bottom Sediments

A number of studies have shown little grain-size variation in the sediment composition of the harbor area. The predominant fraction is silt (up to 85%), and the only area where sand reaches 15% is near the fluvial dock (Station E* in Fig. 2). The organic matter content of the sediments is high, with a clear spatial gradient from the inner area (Station H* = 16.5%, in Fig. 2) to the outer one (Station G* = 9.6%, in Fig. 2). Consistently, redox potential (Eh) measurements have evidenced the lack of oxygen in surface sediments at most of the stations. In addition, reduced conditions have been detected at 1 cm depth within the sediment column, with values ranging between −90 and +100 mV. Moreover, high levels of heavy metals (zinc, chromium, lead, mercury, copper, nickel, cadmium, and silver) and petroleum hydrocarbons have been recorded in the sediments of Montevideo Harbor. These data, and those derived from the analysis of hydrocarbons, have been evaluated for potential adverse effects on biological organisms using available sediment quality values and sediment quality guidelines. Concerning heavy metals, and considering that toxicity rarely occurs below the threshold effect level (TEL) and frequently occurs above the probable effect level (PEL), it can be concluded that (1) the inner region of the harbor shows heavy metal concentrations that may cause major adverse biological effects, except for Ni and Hg; almost all the metals are above the PEL, the exceptions being Cd, Ni, and Ag, which lie between the TEL and PEL; (2) the entrance of the harbor presents Pb and Cr levels between the TEL and PEL; and (3) only the innermost harbor area shows Ag values that would cause major adverse biological effects on the benthic fauna. Regarding polycyclic aromatic hydrocarbons (PAHs), at all harbor locations at least one of the analyzed compounds exceeds the TEL, and most locations (ca. 80%) have at least one compound in excess of the PEL. A recent temporal comparison of metals and hydrocarbons with measurements taken 12 years earlier revealed clear changes: concentrations of metals such as Pb, Cr, and Zn decreased, while aliphatic hydrocarbons increased at almost all stations. Differences were observed between summer and winter, suggesting changes in sedimentation, inputs, or hydrodynamics in the area. A recent study assessing organic pollution in the study area by molecular markers indicated high chronic organic pollution at the Montevideo Bay stations. The main sources of aliphatic and polycyclic aromatic hydrocarbons were petroleum inputs and combustion, introduced by oil transport and refining, harbor activities, and vehicular emissions. The major sources of linear alkylbenzenes and steroids were industrial and domestic sewage. Coprostanol levels (frequently used as an urban sewage tracer) indicate sewage contamination over the whole harbor area, as high as that at other sites situated near densely populated areas. Related ratios also showed sediment contamination by raw sewage. Despite significant anthropogenic inputs, a natural footprint of terrestrial higher-plant contributions was recognized. The use of multi-molecular-marker and comprehensive assessments can support more precise regulatory actions to reduce pollution levels. Another study evaluated the benthic trophic status of the Montevideo coastal zone using the quantity and biochemical composition of sedimentary organic matter as synthetic descriptors.
The spatiotemporal patterns in the biochemical composition of sedimentary organic matter were related to the presence of natural and human pressures. Chlorophyll-a, phaeopigment, and biopolymeric carbon concentrations were similar to those reported in highly productive, eutrophic, and anthropized estuarine areas. Total proteins (PRT) and lipids (LIP) showed the highest concentrations in the inner portion of Montevideo Bay, decreasing toward the nearby coastal areas of Punta Carretas and Punta Yeguas. Total carbohydrates (CHO) presented the lowest values at the outer stations of Montevideo Bay, with similar and higher concentrations recorded at the inner stations of the bay and the adjacent coastal zones. PRT:CHO ratios > 1 were always observed at the inner stations of Montevideo Bay, suggesting intense detritus mineralization and an increase in protein content due to bacterial activity.
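Returning to the sediment quality screening described earlier: the TEL/PEL comparison reduces to checking a measured concentration against two guideline values. The following minimal sketch (in Python, with entirely hypothetical threshold and concentration values, not figures from these studies) illustrates the logic:

def sediment_quality_class(conc, tel, pel):
    # Screen a concentration against the threshold effect level (TEL) and
    # probable effect level (PEL): toxicity rarely occurs below the TEL and
    # frequently occurs above the PEL.
    if conc < tel:
        return "below TEL: adverse biological effects rarely expected"
    if conc <= pel:
        return "between TEL and PEL: adverse biological effects possible"
    return "above PEL: adverse biological effects frequently expected"

# Hypothetical chromium-like values in mg/kg dry sediment:
print(sediment_quality_class(conc=180.0, tel=50.0, pel=160.0))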


Fig. 3 Zonation pattern of Montevideo Bay according to physical and chemical variables from the water column (A) and zooplankton abundances (B).

The biopolymeric carbon showed the same spatial trend observed for PRT, LIP, and the PRT:CHO ratios, with higher concentrations in the inner bay than at Pta. Carretas and Pta. Yeguas. The elevated contributions of PRT and LIP in the inner Montevideo Bay may be associated with anthropogenic inputs of organic matter, such as sewage, food-industry effluents, and petroleum hydrocarbons. Conversely, the high CHO contributions in the nearby coastal areas of Pta. Carretas and Pta. Yeguas seemed to be related to autochthonous primary production, and the temporal variability of CHO to natural oscillations in the productivity of the system. The biochemical composition of organic matter indicates hypertrophic conditions and poor environmental quality in the sediments of the inner stations of Montevideo Bay, which experience the strongest human impact. However, the biochemical descriptors showed a relative improvement of the benthic trophic status in the external coastal areas, which are subjected to moderate levels of anthropogenic pressure.


Benthic Communities

The benthic biota of Montevideo Harbor is extremely poor. Previous works have reported that the macrobenthic fauna consists of only four small-sized species and is strongly dominated by the gastropod Heleobia australis, an opportunistic (or tolerant) species that feeds on the surface sediment. The other inhabitants of the benthic realm are the polychaetes Alitta succinea and Nephtys fluviatilis and the pelecypod Erodona mactroides. The innermost part of the harbor is azoic. H. australis is the only zoobenthic species found in the inner harbor area, where it represents > 94% of the total abundance. Very small organisms (< 2 mm total length) make up a significant portion (ca. 20%) of the total number of specimens. These studies have hypothesized that the small size of Heleobia individuals and the highly variable percentage of small specimens could be related to dredging and frequent sediment resuspension (disturbances), which in turn can affect recolonization, the settlement of juveniles, and their subsequent survival. This hypothesis is also supported by the fact that, although larvae of the polychaete Heteromastus filiformis, a small opportunistic species adapted to variable environmental conditions, have been recorded in plankton samples, no adults have been found colonizing the sediments. This general pattern of low diversity and high abundance of a single species is a common feature of estuaries worldwide, including those in the same geographic region. Diversity and species richness, however, are substantially lower in Montevideo Harbor than at locations in the immediate vicinity. In the contiguous Montevideo Bay, species richness is higher than in the harbor, and the phyla Arthropoda, Nematoda, Mollusca, and Annelida have been recorded. As in the harbor, the most frequent and dominant species is H. australis; however, N. fluviatilis, E. mactroides, H. filiformis, A. succinea, Goniadides sp., Glycera sp., and Sigambra grubii, as well as unidentified Nematoda and Ostracoda, are also present. Density is highly variable on a monthly scale, with high values occurring in May, June, and July (austral winter) and the lowest in March (late austral summer). In general, the inner portion of the bay presents a lower number of individuals than the remaining stations. Excluding unidentified nematodes, ostracods, and barnacles, a total of 10 species has been recorded; species richness is low throughout the year, with a maximum between April and August. It has also been verified that a higher number of individuals does not always correspond to a higher number of species. Shannon diversity is likewise low, reflecting the high dominance of H. australis, which frequently occurs in high abundances in several regions of the bay. As with abundance, biomass is variable at all stations during the year. The highest biomass values of the dominant species H. australis have occasionally been exceeded by those of the second most abundant species, the bivalve E. mactroides. The highest biomass has been recorded in July and the lowest in March. The decline in the number of H. australis during some months of the period studied suggests that it is a short-lived species.
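For reference, the Shannon diversity values discussed here derive from the standard index H' = −Σ p_i ln p_i, where p_i is the proportional abundance of species i. A minimal sketch (in Python, with hypothetical counts rather than data from these surveys) shows why strong dominance by one species depresses H':

import math

def shannon_diversity(counts):
    # H' = -sum(p_i * ln p_i) over species proportions p_i.
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Hypothetical community dominated by one species (an H. australis-like case):
print(shannon_diversity([940, 30, 20, 10]))    # ~0.29, very low diversity
# The same richness with even abundances gives the maximum, ln(4) ~ 1.39:
print(shannon_diversity([250, 250, 250, 250]))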
Relationships among rising organic matter concentrations, reductions in species number and diversity, and increases in the abundance of one or two small-sized species have been well documented in previous studies worldwide, and they are generally considered indicators of organically enriched sediments. In perturbed communities, the frequency of disturbance is usually higher than the recovery rate. Thus, opportunistic species of small size and short life span are favored and can colonize such habitats with little biological competition; such species become adapted to a high frequency of continuous disturbance. Even though Heleobia is the most abundant (80% of the total abundance) and dominant macrobenthic species in Montevideo Bay, many of the other species, especially the polychaetes N. fluviatilis, A. succinea, H. filiformis, and Goniadides sp., have also been reported in environments with a high organic load elsewhere. Therefore, the high frequency of occurrence of these species, in addition to the presence of large-bodied nematodes retained by the 0.4 mm sieve, appears related to the high organic content of the sediments. Cluster analysis of abundance and biomass data (annual arithmetic means of pooled data) shows two groups of stations at approximately 60% similarity. One group comprises the innermost stations, whereas the outer stations of the bay constitute the other. The cluster formed by the stations located in the inner part of Montevideo Bay shows lower abundance and biomass of benthic organisms and corresponds to unfavorable environmental conditions. Water circulation is limited in this region; there is a high percentage of organic matter, chromium, lead, and polycyclic aromatic hydrocarbons (PAHs) in the bottom substrate, and a trend toward reduced sediments. The other cluster corresponds to regions of the bay that are heterogeneous but in general have more favorable environmental conditions than the inner portion. The higher water circulation and oxygenation of the sediment column, together with the small percentage of organic matter, may be responsible for the greater abundance and biomass of benthic organisms recorded at these locations. On a more general scale, considering the entire coastal area and using multivariate statistical techniques, it has been shown that the region can be divided into three zones with different abiotic and macrofaunal patterns. The inner portion of Montevideo Bay is the most heterogeneous in sedimentological composition. It has a high organic load and is highly contaminated by chromium, lead, and oil-derived hydrocarbons. The outer portion of the bay and the nearby coastal area show a moderate contamination level. Despite the dominance of H. australis in the whole area, the different environmental quality of the three regions is reflected in their macrobenthic community structure (Fig. 4). In the inner portion of Montevideo Bay, benthic communities show a simple structure, dominated by nematodes (meiofaunal organisms generally associated with organically enriched environments) along with some individuals of Heleobia. In the other two zones (the outer portion of Montevideo Bay and the nearby coastal area), benthic communities have a more complex structure, with higher species numbers and diversity. Moreover, in the inner portion of the bay, the individuals of the dominant species
(H. australis) show epibiontic parasites (Ciliophora of the family Vorticellidae: Z. elegans) and are smaller, with thinner shells, than individuals from the outer portion and the nearby coastal area. The environmental variables best related to the macrofaunal distribution patterns are lead, PAH concentrations, and salinity, indicating the influence of both natural and anthropogenic factors in this area.

Environmental Conditions in the Estuarine Coast of Montevideo (Uruguay): Historical Aspects and Present Status

Fig. 4

415

Zonation pattern of the Montevideo coastal zone based on benthic communities characteristics.

As stated earlier, coastal regions and estuaries are dynamic environments characterized by great variation in their abiotic parameters and subjected to continuous natural disturbance. This natural variability can represent the main source of stress for organisms; however, inputs of nutrients, organic matter, and human-derived contaminants can alter environmental conditions in ways that differ from those expected from natural causes alone. There is also a consensus on the necessity of discerning between the effects of natural and anthropogenic perturbations on macrobenthic communities in order to correctly assess the environmental status of marine and transitional environments. To help ecologists do this, several statistical tools and diverse ecological approaches have been developed and tested in different geographical regions and ecosystems.

Different Statistical Approaches for the Assessment of the Degree of Perturbation

A method termed phylum-level meta-analysis, which uses abundance and biomass data at the phylum level, allows the degree of perturbation of a particular benthic community to be evaluated on a global scale of anthropogenic impact. Applying this method to data obtained in the Montevideo coastal zone, the inner portion of Montevideo Bay can be classified as highly contaminated and the outer part as contaminated within this global scale of anthropogenic impact. In addition, the adjacent coastal area can be considered moderately contaminated. Moreover, a conceptual framework has been established in the literature concerning the effects of organic enrichment on benthic communities. Within this context, several researchers have developed biotic indices to estimate the disturbance level of macrobenthic communities and establish the ecological status of soft-bottom benthos. All such studies have emphasized the importance of biological indicators for measuring the ecological quality of marine environments. Recent approaches have developed biocriteria based on a predefined reference condition, upon which several deviations (disturbance classes) were established. AZTI's Marine Biotic Index (AMBI), which uses macrobenthic organisms as bioindicators, has been adopted to explore the response of soft-bottom communities to natural and man-induced changes in water quality. This approach has integrated the long-term environmental conditions of several European estuarine and coastal environments. The index is based essentially on the distribution of five ecological groups of soft-bottom macrofauna, defined in relation to their sensitivity to an increasing stress gradient. More recent works have established that benthic samples subjected to different impact sources (e.g., organic enrichment, physical alteration of the habitat, and heavy metal inputs) along the European coast were correctly classified using this index. In this context, the AMBI has been applied to the dataset of the Montevideo coastal zone, including Montevideo Bay. As stated earlier, the inner part of Montevideo Bay (1) is associated with high concentrations of chromium, lead, and PAHs in the sediments; (2) presents anoxic conditions, with negative redox potential values; and (3) harbors benthic communities dominated by large-sized nematodes, which are not considered in the AMBI calculations. Although the second-order opportunistic species H. australis dominates over the whole area, at most of the outer stations of Montevideo Bay and in the adjacent coastal zone the benthic communities are richer and more diverse, and the bottom conditions less severe. This general trend is clearly reflected in high AMBI values at the innermost stations of Montevideo Bay, decreasing through the outermost part of the bay and the nearby coastal zone (Fig. 5).


Fig. 5 Benthic community health (BCH) and site disturbance classification (SDC) of the 24 stations of the Montevideo coastal zone according to AMBI values. Letters denote sampling stations (see Fig. 2).

Likewise, based on the AMBI, the innermost stations can be classified as moderately disturbed. In addition, the AMBI makes it possible to detect differences between the two coastal areas adjacent to Montevideo Bay, Punta Yeguas and Punta Carretas, located to the west and east, respectively. According to the AMBI, Punta Yeguas can be classified as an area with an unbalanced benthic community health (slightly disturbed), and Punta Carretas as moderately disturbed, with a transition toward polluted benthic community health. Summarizing, through the integration of abiotic and biotic data and the use of different statistical and ecological approaches, the coastal area of Montevideo, Uruguay, can be classified into four zones with different environmental quality and degrees of anthropogenic perturbation (Fig. 6). The first corresponds to the inner portion of Montevideo Bay and includes Montevideo Harbor. This zone is the most impacted by heavy metals and oil-derived hydrocarbons, with several compounds present in concentrations potentially harmful to benthic organisms; it has the highest organic load and sometimes shows evidence of oxygen depletion. As a result of these conditions, benthic diversity is poor and species richness low, with the dominance of only one opportunistic species and nematodes. The second zone corresponds to the outer portion of Montevideo Bay, which is more heterogeneous. There, environmental conditions seem more favorable for the establishment and development of benthic organisms, owing to higher hydrodynamic energy, better oxygenation of the sediments, and lower organic matter and contaminant concentrations than in the inner part. The greater abundance, biomass, and diversity of benthic organisms confirm this trend. On a global scale, the least impacted area is the adjacent coastal zone located westward of Montevideo (Punta Yeguas), which can be classified as slightly disturbed, whereas the adjacent coastal zone located eastward of Montevideo (Punta Carretas) can be classified as moderately disturbed, a consequence of the sewage input through the submarine pipe operating at Punta Carretas. It is worth noting that, despite the clear salinity gradient existing from the inner stations of Montevideo Bay to the outer coastal stations, the different approaches effectively established the occurrence of an environmental quality gradient in the same direction. Considering the results to date, the existence of an environmental quality gradient is clear, with the worst conditions at the inner stations of Montevideo Bay and an improvement toward the adjacent coastal zone. Higher levels of chromium, lead, phaeopigments, and organic biopolymers, together with impoverished benthic macrofauna and diatom communities, characterize the hypertrophic innermost portion of Montevideo Bay. More recent data also indicate a clear deterioration of the adjacent coastal zone compared with that observed 15 years earlier.
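The AMBI biotic coefficient itself is a weighted average of the proportions of the five ecological groups (GI, the most sensitive, through GV, first-order opportunists), following Borja et al. (2000) in the Further Reading. A minimal sketch (in Python, with hypothetical group percentages rather than data from these surveys):

def ambi(p_gi, p_gii, p_giii, p_giv, p_gv):
    # Biotic coefficient on a 0 (unpolluted, all sensitive species) to 6
    # (heavily polluted) scale; the group percentages should sum to 100.
    return (0.0 * p_gi + 1.5 * p_gii + 3.0 * p_giii
            + 4.5 * p_giv + 6.0 * p_gv) / 100.0

# Hypothetical inner-bay station dominated by second-order opportunists
# (group IV), such as an H. australis-dominated assemblage:
print(ambi(0, 5, 10, 80, 5))  # ~4.3, toward the disturbed end of the scale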

Environmental Conditions in the Estuarine Coast of Montevideo (Uruguay): Historical Aspects and Present Status

Fig. 6

417

Global environmental classification of the Montevideo coastal zone according to the biotic and abiotic variables studied.

Perspectives, Priorities, and Future Research

The results referred to above correspond to the first comprehensive study of the coastal area near Montevideo City. They are expected to serve as a baseline framework for future research on the environmental quality of this ecosystem, and for upcoming monitoring and management programs in light of the Montevideo City Hall's plans for this area, such as the expansion of Montevideo Harbor and the construction of a sewer network on the west side of the city, with another pipe similar to that located at Punta Carretas. One priority is to assess the degree of recovery, or other changes, that this environment has undergone over the past 10 years in order to evaluate its present status ahead of the urban works mentioned above. Further studies should be performed in this region to fill the information gaps that persist today, not only in relation to other kinds of pollutants not considered in these studies, but also in relation to their persistence and dynamics in this ecosystem. In addition, research should apply an ecosystem approach, taking into account both the structure and the functioning of the system. Indirect effects of contaminants can be negative or positive, can occur between species or within the same species, and can expand to the whole ecosystem through trophic cascades. As such effects are mainly associated with ecosystem function rather than structure, this kind of approach should be included in ecological risk assessment. A paleoecological study performed to reconstruct the history of eutrophication and anthropogenic impact in the Montevideo coastal zone through the analysis of sediment cores clearly showed the historic conditions and the relationships among stressors and ecosystem responses; such knowledge of long-term variability is essential for the implementation of appropriate management guidelines for estuarine and coastal water quality. In the Montevideo coastal zone, several environmental problems and their impacts on economic productivity and on human and ecosystem health are the outcome of political and socioeconomic factors. Therefore, governmental authorities need to follow the concept of sustainable use and development, to preserve and optimize the ecological quality of natural resources despite their exploitation for economic reasons. In this sense, there is a pressing need in Uruguay to improve current environmental legislation in order to regulate atmospheric emissions, effluent disposal, and land use, and to properly confer planning licenses. Sound urban planning, population access to basic health services, information, education, and employment, together with the participation and discussion of these problems at all levels of society, are fundamental to mitigate pollution effects, preserve ecosystems from degradation, and restore those already degraded. Fortunately, in recent years, interest in and efforts toward effective environmental policies have increased, bearing in mind that the current challenge is to ensure the availability of this coastal ecosystem for future generations.

See also: Bahama Archipelago: Environment and Health; Biodiversity and the Loss of Biodiversity Affecting Human Health; Bolivia: Mining, River Contamination, and Human Health; Mexican Epidemiological Paradox: A Developing Country with a Burden of "Richness" Diseases; Uruguay: Child Health.


Further Reading

Antón, D.J., 1999. Diversidad, globalización y la sabiduría de la naturaleza. Piriguazú Ediciones/CIID, Montevideo.
Borja, A., Franco, J., Pérez, V., 2000. A marine biotic index to establish the ecological quality of soft-bottom benthos within European estuarine and coastal environments. Marine Pollution Bulletin 40, 1100–1114.
Bueno, C., Brugnoli, E., Figueira, R.C.L., Muniz, P., Ferreira, P.A.L., García-Rodríguez, F., 2016. Historical economic and environmental policies influencing trace metal inputs in Montevideo Bay, Río de la Plata. Marine Pollution Bulletin 116, 141–146.
Danulat, E., Muniz, P., García-Alonso, J., Yannicelli, B., 2002. First assessment of the highly contaminated harbor of Montevideo, Uruguay. Marine Pollution Bulletin 44, 554–565.
García-Rodríguez, F., Brugnoli, E., Muniz, P., Venturini, N., Burone, L., Hutton, M., Rodríguez, M., Pita, A., Kandratavicius, N., Pérez, L., Verocai, J., 2014. Warm phase ENSO events modulate the continental freshwater input and the trophic state of sediments in a large South American estuary. Marine and Freshwater Research 65, 1–11.
Hutton, M., Venturini, N., García-Rodríguez, F., Brugnoli, E., Muniz, P., 2015. Assessing the ecological quality status of a temperate urban estuary by means of benthic biotic indices. Marine Pollution Bulletin 91, 441–453.
Marrero, A., Venturini, N., Burone, L., García-Rodríguez, F., Brugnoli, E., Rodríguez, M., Muniz, P., 2013. Testing taxonomic sufficiency in subtidal benthic communities of an anthropized coastal zone: Río de la Plata (Uruguay). International Journal of Environmental Science and Engineering Research (IJESER) 43, 29–45.
Moresco, I., Dol, H., 1996. Metales en sedimentos de la Bahía de Montevideo. Revista Asociación de Ciencias Naturales del Litoral 27, 1–5.
Moyano, M., Moresco, H., Blanco, J., Rosadilla, M., Caballero, A., 1993. Baseline studies of coastal pollution by heavy metals, oil and PAHs in Montevideo. Marine Pollution Bulletin 26, 461–464.
Muniz, P., Danulat, E., Yannicelli, B., García-Alonso, J., Medina, G., Bícego, M.C., 2004. Assessment of contamination by heavy metals and petroleum hydrocarbons in sediments of Montevideo Harbor (Uruguay). Environment International 29, 1019–1028.
Muniz, P., Gómez-Erache, M., Venturini, N., Rodríguez, M., Lacerot, G., 2000. Contaminación en zona costera del Depto. de Montevideo a través del estudio de las comunidades planctónicas y bentónicas. Final Report (in Spanish). Fac. Ciencias-IMM, 205 pp.
Muniz, P., Venturini, N., Gómez-Erache, M., 2004. Spatial distribution of chromium and lead in the benthic environment of coastal areas of the Río de la Plata estuary (Montevideo, Uruguay). Brazilian Journal of Biology 64, 103–116.
Muniz, P., Venturini, N., Martínez, A., 2002. Physicochemical characteristics and pollutants of the benthic environment of the Montevideo coastal zone, Uruguay. Marine Pollution Bulletin 44, 962–968.
Muniz, P., Venturini, N., Hutton, M., Kandratavicius, N., Pita, A., Brugnoli, E., Burone, L., García-Rodríguez, F., 2011. Ecosystem health of Montevideo coastal zone: A multi approach using some different benthic indicators to improve a ten-year ago assessment. Journal of Sea Research 65, 38–50.
Muniz, P., Venturini, N., Martins, C.C., Munschi, A., García-Rodríguez, F., Brugnoli, E., Lindroth, A.L., Bícego, M.C., García-Alonso, J., 2015. Integrated assessment of contaminants and monitoring of an urbanized temperate harbor (Montevideo, Uruguay): A twelve-year comparison. Brazilian Journal of Oceanography 63, 311–330.
Venturini, N., Muniz, P., Rodríguez, M., 2004. Macrobenthic subtidal communities in relation to sediment pollution: The phylum-level meta-analysis approach in a south-eastern coastal region of South America. Marine Biology 144, 119–126.
Venturini, N., Pita, A.L., Brugnoli, E., García-Rodríguez, F., Burone, L., Kandratavicius, N., Hutton, M., Muniz, P., 2012. Benthic trophic status of sediments in a metropolitan area (Río de la Plata estuary): Linkages with natural and human pressures. Estuarine, Coastal and Shelf Science 112, 139–152.
Venturini, N., Bícego, M.C., Taniguchi, S., Sasaki, S.T., García-Rodríguez, F., Brugnoli, E., Muniz, P., 2015. A multi-molecular marker assessment of organic pollution in shore sediments from the Río de la Plata estuary, SW Atlantic. Marine Pollution Bulletin 91, 461–475.

Environmental Epidemiology

Michael S Bloom, University at Albany, State University of New York, Rensselaer, NY, United States

© 2019 Elsevier B.V. All rights reserved.

Abbreviations

OR Odds ratio
PPR Prevalence proportion ratio
RD Risk difference
RR Relative risk

What is Environmental Epidemiology?

Epidemiology is defined as the study of the distribution and determinants of health-related events, or disease. Environmental epidemiology focuses on the distribution of physical, chemical, and biologic agents in the environment acting as determinants of disease. Exposure to environmental agents is often involuntary and may affect large segments of the population, for example, air pollution, municipal drinking water contamination, and vector-borne infectious diseases. The aims of environmental epidemiology are to infer causality and to identify environmental causes of disease, such as air and water pollutants, dietary contaminants, built environments, and others. Furthermore, the growing notion of an “epidemiology of consequence” stipulates that epidemiologic research should lead to the design and implementation of health interventions to mitigate such impacts. Although often considered the “gold standard” for scientific evidence, human experimentation is usually impractical and unethical for investigating effects of exposure to potentially toxic environmental agents. Animal and in vitro experimentation offer invaluable contributions for assessing environmental health risks, yet differences in toxicokinetics and toxicodynamics across species, the use of genetically homogeneous test populations, and nonrepresentative exposure scenarios present challenges for extrapolation to humans. Environmental epidemiology helps to address these challenges; it is a systematic approach for collecting data observed from events occurring among defined populations, so-called “natural experiments.” As an early example, Dr. John Snow identified consumption of contaminated drinking water as the cause of a virulent cholera epidemic in 1854 London. In conjunction with environmental toxicology, environmental epidemiology provides a critical contribution to human health risk assessment and to designing interventions to protect human health.

What is the Epidemiologic Approach?

Epidemiologists study populations, groups sharing common characteristics, such as similar age, race, sex, occupation, or geographic residence. As the complete enumeration of a target or “source” population, a “census,” is often impractical, epidemiologists collect representative subsets, or population “samples” (also called “study populations”). Results from the statistical analysis of study population data are then extrapolated to the wider source population, requiring a representative sample to ensure generalizability (i.e., “external validity”). Contacting and enrolling a suitable study population can be a daunting task. Systematic errors or ill-conceived strategies can bias study results, compromising the “internal validity” of a study and leading to inaccurate inferences. Random or “chance” errors in population sampling can lead to false positive (“type 1 error”) or false negative (“type 2 error”) results. Epidemiologic study designs provide a formalized framework for systematic data collection in the absence of controlled (i.e., experimental) conditions. Well-conducted epidemiologic studies maximize the validity and precision of results for drawing causal inferences.

How Do Epidemiologists Quantify Disease?

In collecting information about environmental exposures and disease, epidemiologists count denominators, the number at risk for a disease, and numerators, the number of disease cases. Disease may be defined as a clinical diagnosis or injury, self-reported symptoms, laboratory values, or any other formally specified health-related endpoint. Epidemiologists use “incidence” and “prevalence” to characterize the frequency of disease relative to the population size. Incidence describes “risk,” the probability for a population member to experience a disease during a specified time interval. Incidence is defined as the number of new disease cases divided by the size of the population at risk, often multiplied by a size constant for convenience (Table 1). For example, the incidence rate of lung/bronchus cancer was 58.3 cases per 100,000 U.S. residents in 2014; thus, there were 58.3 new lung/bronchus cancer cases diagnosed in 2014 for every 100,000 people living in the United States.


Table 1 Common epidemiologic measures

Incidence (I): I = (number of new disease cases / size of population at risk), over a time interval, × constant
Prevalence (P): P = (number of existing disease cases / population size), at a point or interval in time, × constant
Prevalence–incidence relation: P ≈ I × D, where P = prevalence, I = incidence, and D = average disease duration (closed population)
Relative risk (RR): RR = Incidence(exposed) / Incidence(unexposed)
Risk difference (RD): RD = Incidence(exposed) − Incidence(unexposed)
Odds ratio (OR): OR = Odds(cases) / Odds(controls), where Odds = Probability(exposure) / [1 − Probability(exposure)]
Prevalence proportion ratio (PPR): PPR = Prevalence(exposed) / Prevalence(unexposed)

Prevalence describes the proportion of disease cases in a population at a point in time (Table 1). For example, the prevalence of diabetes among U.S. adults was 12.2% in 2015; thus, 122 of every 1000 U.S. adults had diabetes in 2015. Incidence and prevalence are inextricably linked, in that prevalence is a function of disease incidence and disease duration in a closed population (Table 1). Highly infectious and virulent diseases of limited duration tend to have high incidence relative to prevalence. For example, the incidence of influenza infection tends to be high relative to its prevalence; patients recover or expire within a few days of infection. Chronic diseases, in contrast, tend to have a high prevalence relative to incidence. For example, primary infertility is diagnosed in approximately 10%–15% of the population. As spontaneous resolution is infrequent, cases accumulate over time, leading to higher prevalence relative to incidence. While prevalence is useful for hypothesis generation and provides important data for policy decisions and the allocation of resources, it does not confer risk information. Incidence data, which describe risk, predict the likelihood of future events and are required for causal inference. While subtle, the distinction between incidence and prevalence is very important.
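Because these measures are simple ratios, they are easy to verify numerically. The following minimal sketch (in Python, with hypothetical counts chosen only to reproduce the rates quoted above) illustrates the definitions in Table 1, including the closed-population approximation P ≈ I × D:

def incidence(new_cases, population_at_risk, constant=100_000):
    # New disease cases per unit of population at risk, over a time interval.
    return new_cases / population_at_risk * constant

def prevalence(existing_cases, population, constant=1_000):
    # Existing disease cases per unit of population, at a point in time.
    return existing_cases / population * constant

# Hypothetical counts chosen to reproduce the quoted rates:
print(incidence(58_300, 100_000_000))  # 58.3 per 100,000 (lung/bronchus cancer, 2014)
print(prevalence(122, 1_000))          # 122 per 1,000, i.e., 12.2% (diabetes, 2015)

# Closed-population approximation linking the two measures: P ~ I x D.
annual_incidence = 0.01  # hypothetical: 1% of the population per year
mean_duration = 12.5     # hypothetical: average disease duration in years
print(annual_incidence * mean_duration)  # prevalence ~ 0.125 (12.5%)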

How Do Epidemiologists Conduct Studies?

Descriptive Studies

“Descriptive” epidemiologic studies characterize health-related events or exposures to environmental agents in terms of person, place, and time; “who,” “where,” and “when”? This might entail a clinical case study or case series, in which one or more unusual cases of disease are described (Table 2); for example, a report describing an unexpectedly high number of microcephaly cases at a hospital in Brazil. Routine surveillance and biomonitoring studies characterize the distribution of health-related events or environmental agent exposures in specific populations, for example, the annual number of cases of tick-borne Lyme disease in New York State over time, or the distribution of blood mercury concentrations for a nationally representative sample of the United States population described by the U.S. Centers for Disease Control and Prevention’s National Report on Human Exposure to Environmental Chemicals. Descriptive studies are very useful to policy makers and are an important first step in formulating data-driven, testable questions about environmental causes of disease, or “hypotheses,” for further investigation. However, without an appropriate comparison or “referent” group, descriptive studies are not suited to testing hypotheses.

Inferential Studies

Unlike descriptive studies, inferential studies allow for formal hypothesis testing and characterize associations between exposure to environmental agents and disease. Inferential environmental epidemiology studies are generally conducted in three stages, although activities are not necessarily mutually exclusive. In the “design phase,” a study hypothesis is developed and refined, a source population is identified, and study population enrollment and data collection strategies are prepared. In the “implementation phase,” study participants are consented and enrolled into the study, and data are collected, including biologic specimens if appropriate. Finally, in the “analysis phase,” data are cleaned and analyzed, and study results are reported. While many variations exist, typical inferential study designs include ecologic, cohort, case–control, and cross-sectional. Each presents strengths and limitations, and may be better suited to investigating specific diseases and exposures to specific environmental agents.

Group level study designs

Ecologic, or “correlational,” studies compare the frequencies of disease among groups experiencing different summary (e.g., average) levels of exposure to environmental agents (Table 2). Groups may be defined by geography or by time periods, so-called “time series” studies. For example, incidence rates for bladder cancer diagnosis might be compared across different U.S. counties with various average chlorine disinfection by-product concentrations measured in public drinking water supplies. Importantly, these types of data are assessed at the “group” level; information is unavailable for individual members of each group.

Table 2 Strengths and limitations of epidemiologic study designs

Descriptive studies
Basic features: Characterize health-related events or exposure to environmental agents by person, place, and time; include case studies, case series, surveillance, and biomonitoring studies.
Strengths: Data sources often readily available; describe patterns of distribution.
Limitations: No comparison group; cannot test etiologic hypotheses.
Common uses: Hypothesis generation; resource allocation.

Ecological studies
Basic features: Compare disease frequencies across population groups based on differences in location or time.
Strengths: Data sources often readily available; provide valid group-level inferences; hypothesis generation for individual-level inferences; generate rate ratios.
Limitations: Group-level data collected; cannot link exposure and outcome in individuals (ecologic fallacy); cannot establish temporality in many circumstances; cannot adjust for individual-level confounders.
Common uses: Hypothesis generation; effect estimates for group-level inferences.

Cohort studies
Basic features: Participants enrolled based on exposure status; initially disease-free exposed and unexposed participants followed over time for incident disease; generate relative risk estimates.
Strengths: Individual-level data collected; incorporate temporality between exposure and disease; direct estimates of risk and prevalence; useful for rare/infrequent exposures; can evaluate multiple diseases in a single study; less vulnerable to information biases than case–control and cross-sectional studies.
Limitations: Often resource intensive; diseases with long latency periods require long periods of follow-up; can evaluate limited environmental agents in a single study; need very large sample sizes to capture rare diseases; vulnerable to selection bias from “loss to follow-up.”
Common uses: Rare exposures; effect estimates for individual-level inferences; exposure to environmental agents with short half-lives; contribute the strongest observational evidence for causality.

Case–control studies
Basic features: Participants enrolled based on disease status; historic exposure frequencies compared among participants with (cases) and without (controls) disease; generate odds ratios.
Strengths: Individual-level data collected; incorporate temporality between exposure and disease; useful for rare/infrequent diseases; can evaluate multiple environmental agents in a single study; often less resource intensive than cohort designs; no loss to follow-up.
Limitations: Cannot estimate risk or population prevalence; can evaluate limited diseases in a single study; need very large sample sizes to capture rare exposures; vulnerable to selection bias from control selection; retrospective exposure assessment vulnerable to recall bias; vulnerable to exposure misclassification for environmental exposures with short half-lives; odds ratio a biased estimator of relative risk (away from the null) when disease is common.
Common uses: Rare diseases; effect estimates for individual-level inferences.

Cross-sectional studies
Basic features: Participants enrolled irrespective of exposure or disease status (a “snapshot in time”); prevalence of disease compared among exposed and unexposed participants; generate prevalence proportion ratios or odds ratios.
Strengths: Individual-level data collected; usually less resource intensive than cohort or case–control designs; can evaluate multiple diseases and multiple environmental agents in a single study; can estimate population prevalence.
Limitations: Vulnerable to reverse causation; vulnerable to prevalence bias; need very large sample sizes to capture rare diseases or rare exposures.
Common uses: Preliminary studies; hypothesis generation.

This leads to the “ecological fallacy,” the notion that group-level inferences may disappear, or even change direction, at the individual level. Thus, the validity of results from ecologic studies is limited, and such studies are used primarily for hypothesis generation.
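The fallacy is easy to reproduce with toy numbers. In the following minimal sketch (in Python, with entirely hypothetical counts), exposure halves individual risk within each of two counties, yet the more exposed county has the higher overall disease rate, so a group-level comparison would reverse the sign of the association:

# (n, cases) for exposed and unexposed residents of two hypothetical counties.
counties = {
    "A": {"exposed": (100, 1), "unexposed": (900, 18)},  # risks: 1% vs. 2%
    "B": {"exposed": (900, 36), "unexposed": (100, 8)},  # risks: 4% vs. 8%
}

for name, groups in counties.items():
    n = groups["exposed"][0] + groups["unexposed"][0]
    cases = groups["exposed"][1] + groups["unexposed"][1]
    print(name, groups["exposed"][0] / n, cases / n)
# County B is more exposed (90% vs. 10%) AND has the higher rate (4.4% vs. 1.9%),
# even though within both counties the exposed have half the risk of the
# unexposed; the group-level correlation points the wrong way.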

Individual level study designs

Cohort designs

In contrast to ecological designs, cohort designs collect individual person-level data, thereby precluding the ecological fallacy (Table 2). A group of individuals without a disease of interest, or “cohort,” is followed through time for the occurrence of new, incident events; incidence is then compared between study participants with and without exposure to environmental agents of interest (Fig. 1). For example, in a hypothetical study, disease-free licensed anglers were followed over several years for the development of thyroid disease. Investigators compared the incidence among those with high serum levels of polychlorinated biphenyls (PCBs) at enrollment, the exposed participants, to that among those with low serum PCB levels, the unexposed participants: 6 cases per 100 participants vs. 2 cases per 100 participants, respectively. The ratio of incidences, or relative risk (RR), quantifies the magnitude of the association between an environmental agent and a disease (Table 1). RR > 1 indicates an increased risk and RR < 1 a decreased risk, whereas RR = 1 indicates no association between disease and exposure.

Fig. 1 Common environmental epidemiology study designs. (A) Cohort study: participants are enrolled disease free, some exposed to the environmental agent, and followed over time, with some experiencing disease during follow-up. (B) Case–control study: participants are enrolled with (cases) or without (controls) disease at enrollment, with some exposed to the environmental agent in the past. (C) Cross-sectional study: disease status and exposure to the environmental agent are both assessed at study enrollment. Green shading indicates disease of interest; red shading indicates exposure to environmental agent of interest.


In the aforementioned example, an RR = 3.0 indicates that participants with high serum PCBs were threefold more likely to develop thyroid disease than participants with low serum PCBs. The difference between two incidences, or “risk difference” (RD), characterizes the number of disease cases attributable to the environmental agent among those exposed (i.e., some of the exposed would have experienced the disease even in the absence of exposure). Referring again to the previous example, the RD = 4.0, meaning that four additional cases of thyroid disease occurred among every 100 participants due to high PCB exposure. A historical, or “retrospective cohort,” approach can also be used, most often in occupational settings where extensive health and exposure records are available. Generally, data from properly conducted cohort studies are afforded the highest level of credibility among observational study designs. The many strengths of cohort designs include direct estimation of disease risk and prevalence, “temporality,” in which exposure predates disease diagnosis, and comparatively straightforward selection of a referent group relative to other study designs. However, cohort studies can be exceptionally resource intensive, requiring long periods of follow-up for diseases with long latency periods, and necessitate very large study populations to capture rare diseases, such as adult-onset cancers. Furthermore, incorporating additional exposures of interest may be challenging after a cohort is established, although cohorts offer flexibility to study multiple diseases.

Case–control designs

In contrast to the cohort study design, in which participants are enrolled based on exposure to an environmental agent of interest and the incidence of disease compared, the case–control study design enrolls and compares participants based on disease status (Table 2). “Cases,” those with disease, and “controls,” those without disease, are enrolled in the study and retrospectively assessed for historic exposure to environmental agents of interest (Fig. 1). For example, a study of spontaneous abortion in association with consumption of arsenic-contaminated drinking water during pregnancy enrolled women treated at a hospital for a recent pregnancy loss as cases, and women receiving routine prenatal care for ongoing pregnancies at the same hospital as controls. Arsenic was measured in the drinking water the women consumed during pregnancy. This design is effective for studying rare diseases and allows exposure to multiple environmental agents to be considered in a single study. Yet, unlike the previously described cohort study design, incidence and prevalence are not estimable using a case–control study design, as the relative proportion of cases is predetermined during participant enrollment. Rather, the odds of exposure are estimated, and the quotient of the odds among cases relative to controls, the “odds ratio” (OR), is computed. The OR is interpreted similarly to the RR, and estimates the underlying population RR if the disease is rare in the study population (i.e., < 10%).

Cross-sectional designs

Cross-sectional, or “prevalence,” studies characterize associations between exposure to environmental agents and disease at a point in time, a “snapshot” so to speak (Table 2). A random, or specifically targeted, sample of the source population is enrolled as the study population; exposure and disease status are simultaneously queried (Fig. 1).
Thus, the prevalence of a disease can be estimated as the proportion experienced by the study population; incidence, however, cannot. In fact, only prevalent cases are captured by the cross-sectional design. Cross-sectional studies provide prevalence proportion ratios (PPRs), the prevalence of disease in exposed relative to unexposed participants, interpreted analogously to RRs (Table 1). Yet the PPR introduces a “prevalence bias,” in which associations with the duration of a disease are difficult to partition. Further, “reverse causality,” in which disease counterintuitively causes exposure to an environmental agent, is a threat; no temporal sequence is established between exposure and disease in a cross-sectional study. For example, in a cross-sectional study of women attending an infertility clinic, low levels of estrogen were associated with high levels of the plastic monomer bisphenol A (BPA) in urine specimens. However, it was not clear whether higher BPA led to lower estrogen, or whether lower estrogen limited excretion of BPA, leading to higher levels. Still, the cross-sectional design is usually less resource intensive than other individual-level epidemiologic study designs, and so is used frequently for hypothesis-generating environmental epidemiology studies.
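The measures in Table 1 reduce to a few lines of arithmetic. A minimal sketch (in Python), using the hypothetical cohort figures above (6 vs. 2 cases per 100 participants) plus illustrative case–control counts that are not from the text:

def relative_risk(incidence_exposed, incidence_unexposed):
    # RR: ratio of incidence in exposed vs. unexposed participants.
    return incidence_exposed / incidence_unexposed

def risk_difference(incidence_exposed, incidence_unexposed):
    # RD: excess incidence attributable to exposure, among the exposed.
    return incidence_exposed - incidence_unexposed

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    # OR: odds of exposure among cases divided by odds among controls.
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

print(relative_risk(6 / 100, 2 / 100))    # 3.0: threefold higher risk
print(risk_difference(6 / 100, 2 / 100))  # 0.04: 4 extra cases per 100 exposed
print(odds_ratio(40, 60, 20, 80))         # ~2.67 (hypothetical counts)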

Threats to Validity

Epidemiologic associations between environmental agents and disease may reflect an underlying causal process. However, associations from epidemiologic studies may also reflect bias, confounding, or chance occurrence. To ensure accurate study results, or “internal validity,” the roles of bias, confounding, and chance should be considered for any statistical association.

Bias

Bias refers to systematic errors, or trends, in participant enrollment or retention, data collection, data analysis, or reporting that lead to inaccurate conclusions. Biases “toward the null” spuriously underestimate the magnitude of an effect in a study population relative to the true value in the source population. Biases “away from the null,” in contrast, spuriously overestimate the magnitude of an effect in a study population. Unfortunately, once present, bias is very difficult to correct, and so is best addressed a priori, during study design. While bias takes myriad forms, various selection and information biases pose the greatest threats to environmental epidemiology, contingent on study design (Table 2).


Selection bias

Bias arising from systematic differences in study participant enrollment is known as “selection bias.” Often, selection bias is insidious, introduced by unforeseen factors that disproportionately impact the likelihood for exposed or unexposed cases and noncases to enroll or remain in a study. For example, in a hypothetical case–control study of lung cancer and air pollution, cases were recruited from among patients treated by a large urban cancer hospital serving a wide geographic area, including urban, suburban, and rural communities. Out of convenience, controls were enrolled from cancer-free neighborhood residents. Yet, this strategy is tantamount to selecting cases and controls based on the exposure of interest, as urban residents tend to experience higher average air pollution than residents of suburban and rural areas. The ensuing selection bias is likely to underestimate differences in air pollution exposure between cancer cases and controls, as the latter were inadvertently “selected” to have a similar exposure profile as the former. A “lost to follow-up bias” might be introduced into a cohort study if exposed or unexposed participants with higher or lower baseline disease risks were more or less likely to drop out of the study during follow-up.

Information bias

“Information bias” arises from systematic differences in the collection, analysis, or reporting of data. Information bias may be introduced by differential recall of past exposures among diseased and nondiseased participants. For example, the mother of an infant with a birth defect might be more motivated to recall modest or transient environmental exposures during pregnancy than the mother of an infant without a birth defect, leading to a “recall bias.” Similarly, a study interviewer might interrogate known cancer cases more thoroughly than controls, differentially capturing exposure detail and leading to an “observer bias.” Differential accuracy, or differential potential to capture exposure data, can spuriously create or obfuscate associations in the study population, despite the absence or presence, respectively, of associations in the source population.

Exposure misclassification

While the aforementioned information biases threaten many epidemiologic studies, threats from “exposure misclassification” are pervasive in environmental epidemiology. In fact, exposure assessment is often considered the “Achilles’ heel” of environmental epidemiology. Environmental exposure, the quantity of an agent that contacts an individual, is frequently characterized by route (e.g., ingestion, inhalation, dermal absorption, injection), magnitude (e.g., average, cumulative, or peak), frequency (e.g., hourly, daily, weekly), and duration (e.g., episodic, short-term, long-term). However, myriad “fate and transport” processes, including an agent’s physical characteristics and meteorological conditions, introduce uncertainty into exposure estimates after an environmental agent is released from its source, such as fine particulate air pollution from diesel trucks. Following exposure, toxicokinetic processes, including absorption, distribution, metabolism, and excretion, introduce additional uncertainty into the “dose” estimate, the quantity of an agent that crosses epithelial surfaces and enters the body, and more importantly, the amount interacting with target molecules to initiate disease (i.e., the “biologically effective dose”). Given these complexities, exposure data are rarely collected with perfect accuracy and reliability, and so study participants may be inadvertently assigned lower or higher values than actually experienced, leading to “exposure misclassification.” Exposure misclassification may result in over- or underestimates of the magnitude of epidemiologic associations. For example, use of a dietary questionnaire to capture exposure to persistent organic pollutants (POPs) via consumption of sport-caught fish would likely underestimate exposure for some participants and overestimate it for others. Indirect exposure assessment strategies, such as questionnaires and environmental sampling, and personal exposure monitoring may be limited by heterogeneity in space and time, the presence of unconsidered exposure sources, and variability in human behavior and recall, leading to bias. The aforementioned example of sport-caught fish consumption and POPs exposure might be improved by collecting a blood specimen, followed by laboratory analysis of POPs. Generally, the approximation of the biologically effective dose of an environmental agent improves, and exposure misclassification declines, with more personalized exposure assessments.
Biomarkers of exposure, measurable agents in human tissues, such as environmental agents detected in urine, blood, and breast milk, accommodate the aforementioned environmental fate and transport processes, as well as toxicokinetic processes, and integrate multiple sources of exposure to provide a more direct estimate of dose. Still, biomarkers are not a panacea, and they require proper validation prior to use in epidemiologic studies so that analyte-specific nuances are characterized. For instance, the in vivo half-lives of environmental agents are critical to appropriate interpretation of biomarker data. Urinary phthalates, for example, have half-lives measured in hours, and are thus poorly suited to retrospective exposure assessments, such as those used in case–control studies. PCBs, in contrast, have half-lives measured in months to years, and so are less vulnerable to exposure misclassification under retrospective exposure assessment strategies. Additional biomarker limitations include challenges to compliance for collection of invasive specimens, such as venipuncture for blood, strict collection protocols to preclude exogenous contamination by widely distributed environmental agents, such as lead, and high costs for laboratory analysis.

Confounding and Effect Modification

Confounding

Confounding is a mixing of associations leading to spurious results from epidemiologic studies. Potential confounders are traditionally defined as independent predictors of a disease that are also associated with an exposure of interest, and not intermediate in the causal pathway between exposure and disease. Other definitions of confounding leverage multivariable regression approaches (i.e., “collapsibility”) and causal graph theory (i.e., “back door pathways”). Age, sex, race, ethnicity, adiposity, socioeconomic status, cigarette smoking, and alcohol consumption are frequent confounders in environmental epidemiology. Unmitigated confounding threatens the validity of epidemiologic results, and so quantitative methods, including stratification, matching, and multivariable regression modeling, are frequently used to measure and adjust for confounding during data analysis. For example, a hypothetical study indicated that higher urine cadmium concentration was associated with a twofold higher odds for prostate cancer compared to lower urine cadmium concentration (i.e., OR = 2.0). However, this association was likely to have been confounded by age, as older men are at greater risk for prostate cancer than younger men, independent of cadmium exposure, and cadmium bioaccumulates with age. In contrast, the “age-adjusted” statistical association suggested that there was no association between urine cadmium concentration and prostate cancer (i.e., OR = 1.0). Importantly, confounder data must be collected with the same level of quality given to the main exposure of interest. Confounding that remains due to insufficient statistical adjustment is referred to as “residual confounding.”
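The cadmium example can be reproduced with a stratified analysis. In the sketch below, the 2 × 2 counts in each age stratum are invented so that the within-stratum odds ratio is exactly 1.0; pooling the strata nonetheless produces a confounded crude odds ratio above 2, which the Mantel–Haenszel summary estimator, one standard adjustment method, corrects.

```python
# Hypothetical age-stratified 2x2 tables as tuples:
# (cases_hiCd, controls_hiCd, cases_loCd, controls_loCd),
# where hiCd/loCd denote high/low urine cadmium.
strata = {
    "<65":  (10, 90, 40, 360),   # within-stratum OR = (10*360)/(90*40) = 1.0
    ">=65": (80, 120, 40, 60),   # within-stratum OR = (80*60)/(120*40) = 1.0
}

def crude_or(strata):
    # Collapse all strata into a single 2x2 table, ignoring age
    a = sum(s[0] for s in strata.values()); b = sum(s[1] for s in strata.values())
    c = sum(s[2] for s in strata.values()); d = sum(s[3] for s in strata.values())
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    # Mantel-Haenszel summary OR: sum(a*d/n) / sum(b*c/n) across strata
    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

print(f"crude OR:       {crude_or(strata):.2f}")           # ~2.25, confounded by age
print(f"MH-adjusted OR: {mantel_haenszel_or(strata):.2f}")  # 1.00, no association
```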

Effect modification

Heterogeneity of effects, or “effect modification,” is present when the statistical association between disease and an environmental agent varies by levels of a third factor. For example, age was an effect modifier of the association between high concentrations of air pollutants and respiratory disease during the infamous Donora, Pennsylvania smog incident of 1948; residents 55 years of age and over were at greater risk for hospitalization and death than younger residents. Common effect modifiers in environmental epidemiology include age, sex, race, genetic polymorphisms, and nutritional status. Unlike confounding, which is a nuisance to be eliminated, effect modification reflects pathophysiology and should be characterized and reported.
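Effect modification is likewise detected by stratification, but with the opposite goal: rather than pooling strata into one adjusted estimate, divergent stratum-specific estimates are reported separately. The sketch below uses invented counts loosely patterned on the Donora example.

```python
# Hypothetical counts among exposed residents by age group: (cases, total)
strata = {
    "under 55": (30, 3000),
    "55+":      (90, 3000),
}
baseline_risk = 0.005  # assumed risk among unexposed residents in both age groups

for group, (cases, total) in strata.items():
    rr = (cases / total) / baseline_risk
    print(f"{group}: RR = {rr:.1f}")
# The divergent stratum-specific RRs (2.0 vs. 6.0) signal effect modification by
# age and should be reported separately rather than averaged into one estimate.
```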

Chance

Random population sampling errors, or “chance,” raise the possibility of “false positive” results: associations identified in the study population that do not exist in the source population (i.e., type 1 error). The probability of a type 1 error is frequently quantified using formal hypothesis testing. Generally, an observed value (e.g., a difference between exposed and unexposed members of the study population) is compared to an expected value under the null hypothesis, given the observed variance, and the likelihood of generating a test statistic value more extreme than that observed is determined. The P-value is the probability, under the null hypothesis, of obtaining a result at least as extreme as that observed; it is loosely interpreted as “chance.” The Student t-test, χ2-test, Mann–Whitney U test, analysis of variance, Kruskal–Wallis test, and Wald test are frequently used hypothesis tests in environmental epidemiology. Investigators often define “statistical significance” as a P-value < 0.05, indicating that the observed result would occur in fewer than 5% of study population samples enrolled from the source population under the null hypothesis; chance is then unlikely to explain the result. However, this threshold is an arbitrary convention, and a priori selection of a P-value threshold should depend on the nature of the research question. Although employed widely, the validity of P-values is increasingly debated, given widespread misuse and misinterpretation. Confidence intervals are recommended to address the limitations of the P-value. A confidence interval describes the range of values within which the true magnitude of an association (i.e., the association in the source population) lies with a certain degree of confidence, often 95%, and its width reflects the precision of the estimate. For example, a 95% confidence interval of RR = 1.10–3.36 suggests that the true RR is likely to lie between 1.10 and 3.36, a heightened risk.
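As a worked illustration of the arithmetic, the sketch below computes a relative risk and its 95% confidence interval from hypothetical cohort counts, using the customary large-sample standard error of log(RR) and exponentiating back to the ratio scale.

```python
import math

# Hypothetical cohort counts (not from any actual study)
a, n1 = 24, 1000   # exposed group: cases, total
c, n0 = 12, 1200   # unexposed group: cases, total

rr = (a / n1) / (c / n0)
# Large-sample standard error of log(RR)
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lower = math.exp(math.log(rr) - 1.96 * se)
upper = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI = {lower:.2f}-{upper:.2f}")  # RR = 2.40, CI ~ 1.21-4.78
```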

Sufficient-Component Cause Model of Disease

Identifying causal factors, in which every change in the level of exposure to an environmental agent leads to a change in the frequency of disease, is the goal of environmental epidemiology. However, this goal is complex, as most diseases are “multifactorial” in nature, having several simultaneous causes. The “sufficient-component cause” model of disease posits that various permutations of risk factors, called “component causes,” interact to form “sufficient causes,” which initiate disease. “Necessary causes” are those component causes required in every sufficient cause. The model explains why different people experience disease despite disparate exposure histories, and why people with similar exposures may not experience disease. For example, three hypothetical sufficient causes for disease are demonstrated in Fig. 2. Whereas arsenic exposure is a necessary cause for disease in sufficient causes “A,” “B,” and “C,” disease is initiated only when exposure occurs in conjunction with various component causes.
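Because the model is essentially set logic, it is easy to express directly: disease is initiated only when an individual’s component causes complete at least one sufficient cause. The minimal sketch below encodes the three causal pies of Fig. 2; the component names are taken from the figure caption, and the groupings themselves are hypothetical, for illustration only.

```python
# Each sufficient cause is a set of component causes that jointly initiate disease.
# "arsenic" appears in every set, making it a necessary cause in this model.
sufficient_causes = {
    "A": {"arsenic", "cigarette smoking", "phthalates", "metabolic gene polymorphism"},
    "B": {"arsenic", "triclosan", "psychosocial stress"},
    "C": {"arsenic", "infection", "lack of exercise", "alcohol"},
}

def develops_disease(exposures: set) -> bool:
    # Disease is initiated if any sufficient cause is fully completed
    return any(cause <= exposures for cause in sufficient_causes.values())

print(develops_disease({"arsenic", "triclosan", "psychosocial stress"}))  # True: completes B
print(develops_disease({"arsenic", "infection"}))                         # False: no set complete
print(develops_disease({"triclosan", "psychosocial stress"}))             # False: lacks necessary cause
```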

Fig. 2 Causal pies demonstrating three hypothetical sufficient causes for disease. In this hypothetical example, arsenic exposure is a necessary causal component, which in conjunction with three different permutations of other component causes, will be sufficient to initiate disease. These include cigarette smoking, phthalates exposure, a metabolic gene polymorphism, triclosan exposure, psychosocial stress, infection, lack of exercise, and alcohol consumption.

Bradford-Hill Guidelines

Even in the absence of bias, confounding, and random error, epidemiologic associations between environmental agents and disease do not prove causation. Without the benefit of experimental conditions, epidemiologic studies cannot demonstrate causal associations, given the likelihood of unrecognized biases, confounding, and sampling error. Rather, causality is indirectly “inferred” from observational data, upon achieving a persuasive “weight of evidence.” In 1965, Sir Austin Bradford Hill offered guidelines for assessing the weight of observational evidence to establish causal associations between environmental agents and disease, including the strength of association (i.e., stronger associations are more likely to be causal), the consistency of results across studies (i.e., similar results using various study designs and study populations suggest causality), specificity of associations (i.e., exclusive associations suggest causality), temporality (i.e., exposure must precede disease for causality), biological gradient (i.e., linear dose–response curves, in which stronger effects occur with higher exposure, suggest causality), biologic plausibility (i.e., experimental models explaining the observed association substantiate causality), coherence (i.e., agreement with existing knowledge corroborates causality), experimental evidence (i.e., if the introduction or elimination of an exposure elicits a change in disease frequency, this is evidence for causality), and analogy (i.e., previous causal associations established for related environmental agents and diseases support causality). While the “Bradford Hill criteria” are well ensconced in the canon of environmental epidemiology, nonlinear and low-dose effects reported for “endocrine disrupting” environmental agents, and multiple adult-onset diseases with common pathophysiology, such as the “testicular dysgenesis syndrome,” undermine these traditional guidelines. Modern developments in -omics technologies and high-throughput sequencing, including genetics and epigenetics, now allow for a far more nuanced characterization of biologic mechanisms and the opportunity to formally “integrate” experimental with observational evidence in drawing causal inferences.

Future Directions

Environmental epidemiologists will continue to face challenges in characterizing associations between environmental agents and human disease; however, the field is rapidly evolving to meet these challenges. Recent developments in analytic chemistry allow for exceptional sensitivity and reliability when measuring trace and ultratrace levels of environmental agents in minute samples of environmental and biological media, which may have modest, yet clinically meaningful, health effects. The importance of synergistic and antagonistic effects, within the increasingly heterogeneous mixtures of agents to which human populations are exposed, is now recognized. In fact, the “exposome” paradigm was proposed to characterize the totality of exposure to all environmental agents, simultaneously. Causal paradigms are shifting, with transient intervals of heightened biological sensitivity, so-called “critical windows,” including the fetal period, pregnancy, adolescence, and senescence, identified and targeted for investigation. The “developmental origins of health and disease” hypothesis posits that gestational exposures have lifelong implications for health and wellness. Furthermore, appreciation for psychosocial and economic stressors that can potentiate otherwise benign exposures to environmental agents has shifted focus to race-related environmental health disparities, and underscores the importance of “environmental justice” in the prevention of disease. The extraordinary growth in microprocessor power and statistical computing, coupled with remarkable developments in “big data” collection, will allow for more comprehensive and robust analyses of epidemiologic data than previously feasible. Together, these advances and others herald a new era for environmental epidemiology, in which a multidisciplinary approach will foster the identification of causal disease factors to inform the design of more effective interventions to prevent disease.

See also: Bias in Environmental Epidemiology; Environmental Cancers: Environmental Lung Cancer Epidemiology.

Further Reading

Adami, H.O., Berry, S.C., Breckenridge, C.B., et al., 2011. Toxicology and epidemiology: Improving the science with a framework for combining toxicological and epidemiological evidence to establish causal inference. Toxicological Sciences 122, 223–234.
Aschengrau, A., Seage, G.R., 2014. Essentials of epidemiology in public health, 3rd edn. Jones & Bartlett Learning, Burlington, MA.
Buck Louis, G.M., Bloom, M.S., Gatto, N.M., et al., 2015. Epidemiology’s continuing contribution to public health: The power of “then and now”. American Journal of Epidemiology 181, e1–e8.
Calafat, A.M., 2016. Contemporary issues in exposure assessment using biomonitoring. Current Epidemiology Reports 3, 145–153.
Fedak, K.M., Bernal, A., Capshaw, Z.A., Gross, S., 2015. Applying the Bradford Hill criteria in the 21st century: How data integration has changed causal inference in molecular epidemiology. Emerging Themes in Epidemiology 12, 14.
Gore, A.C., Chappell, V.A., Fenton, S.E., et al., 2015. EDC-2: The Endocrine Society’s second scientific statement on endocrine-disrupting chemicals. Endocrine Reviews 36, E1–E150.
Hill, A.B., 1965. The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine 58, 295–300.
James-Todd, T.M., Chiu, Y.-H., Zota, A.R., 2016. Racial/ethnic disparities in environmental endocrine disrupting chemicals and women’s reproductive health outcomes: Epidemiological examples across the life course. Current Epidemiology Reports 3, 161–180.
Keyes, K., Galea, S., 2015. What matters most: Quantifying an epidemiology of consequence. Annals of Epidemiology 25, 305–311.
Morgenstern, H., Thomas, D., 1993. Principles of study design in environmental epidemiology. Environmental Health Perspectives 101 (Supplement 4), 23–38.
Porta, M.A., 2014. A dictionary of epidemiology, 6th edn. Oxford University Press, New York, NY.
Rothman, K.J., Greenland, S., Lash, T.L., 2008. Modern epidemiology, 3rd edn. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia, PA.
Sexton, K., Selevan, S.G., Wagener, D.K., Lybarger, J.A., 1992. Estimating human exposure to environmental pollutants: Availability and utility of existing databases. Archives of Environmental Health 47, 398–407.
Wasserstein, R.L., Lazar, N.A., 2016. The ASA’s statement on p-values: Context, process, and purpose. American Statistician 70, 129–133.
Wild, C.P., 2005. Complementing the genome with an “exposome”: The outstanding challenge of environmental exposure measurement in molecular epidemiology. Cancer Epidemiology, Biomarkers and Prevention 14, 1847–1850.

Relevant Websites

National Report on Human Exposure to Environmental Chemicals (https://www.cdc.gov/exposurereport/), Centers for Disease Control and Prevention, U.S. Department of Health and Human Services.

Environmental Epidemiology and Human Health: Biomarkers of Disease and Genetic Susceptibility

PB Tchounwou, College of Science, Engineering and Technology, Jackson State University, Jackson, MS, United States
WA Toscano, School of Public Health, University of Minnesota, Minneapolis, MN, United States

© 2011 Elsevier B.V. All rights reserved.

Abbreviations

ATM ataxia telangiectasia mutated
CEA carcinoembryonic antigen
COMT catechol-O-methyltransferase
COX cyclooxygenase
DNA deoxyribonucleic acid
FISH fluorescence in situ hybridization
GSTM1 glutathione-S-transferase M1
NAT N-acetyl transferase
NSAID nonsteroidal anti-inflammatory drug
PAHs polycyclic aromatic hydrocarbons
PCBs polychlorinated biphenyls
PSA prostate-specific antigen
SCE sister chromatid exchange
TUNEL terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling
UPS ubiquitin-proteasome system

Introduction

Epidemiology is the study of the distribution and determinants of diseases and other health-related conditions in humans. Application of epidemiology to the prevention and control of health problems is a powerful tool in the practice of public health. Traditionally, epidemiology was linked with disease prevention, in that it has the potential of identifying risk factors that can be modified or avoided in order to prevent or control adverse health outcomes. Hence, environmental epidemiology is a logical extension of this field of scientific endeavor, expanding its scope to focus on the assessment of specific environmental (biological, physical, and chemical) factors that may be associated with specific patterns of health-related conditions in human populations. Thus, the purpose of environmental epidemiology is threefold: first, to identify the potential health risks associated with exposure to various biological, physical, and chemical factors; second, to develop and implement appropriate strategies to prevent or control exposures; and third, to ultimately reduce the burden of disease and promote public health.

There are four main goals of disease control: (1) to reduce the incidence of disease through prevention, (2) to delay the onset of disability, (3) to alleviate the severity of disease, and (4) to prolong an individual’s life. Understanding the natural history of diseases is critical for designing effective preventive methods and strategies. Disease prevention is classified as primary, secondary, or tertiary depending on where it is applied in the disease continuum. While primary prevention reduces the incidence of disease, secondary prevention shortens the duration and severity of disease, and tertiary prevention reduces complications from the disease. Table 1 shows the levels of prevention, the related strategies, and their potential impact on disease control.

During the twentieth century, environmental health hazards became a major concern not only to public health professionals but also to society at large because of their tremendous health, sociocultural, and economic impacts.

Table 1    Categories of disease prevention and control

Level of prevention | Disease status in people | Strategies | Effects/outcomes of prevention
Primary | Susceptible | Environmental, occupational, and regulatory controls; lifestyle and behavioral modifications | Reduction in disease incidence
Secondary | Asymptomatic | Diagnosis/screening and appropriate treatment | Reduction in disease prevalence and health consequences
Tertiary | Symptomatic | Palliative care and hospice | Reduction in disease complications and disabilities

Water and food contamination, as well as air pollution, by various biological agents (bacteria, viruses, and protozoa), chemical compounds (heavy metals, pesticides, pharmaceuticals, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and other synthetic compounds), and physical agents (electromagnetic radiation from high-tension wires and ionizing radiation from natural and synthetic sources) is of major concern. In effect, environmental factors have been implicated in the development of a wide variety of both acute and chronic diseases including cardiovascular disease; kidney, liver, and lung diseases; cirrhosis; diabetes; musculoskeletal disease; skin disease; neurological disorders; and cancers.

In general terms, disease is defined as a cluster of signs, symptoms, and laboratory findings linked by a common pathophysiological sequence, which causes human distress. At the international level, the World Health Organization has classified human diseases into 22 classes, as shown in Table 2. Most, if not all, of these systemic and carcinogenic health effects are associated with specific environmental exposures. However, because of the complex nature of such environmental exposure, it is usually difficult to establish causation. For example, several environmental factors have been found to cause skin cancer, including radiation and arsenic exposure. Also, the incidence of leukemia has been associated with benzene and radiation exposure. Nevertheless, a variety of criteria have been established to characterize the causal relation between exposure and disease. These criteria of judgment include the following:

• Biological gradient: There is a gradient of risk associated with the degree of exposure. The greater the exposure, the stronger the effect.
• Strength of the association: The stronger the association, the less likely that the disease is due to chance. Hence, the strength of association measures the size of the risk attributed to exposure to the causal agent.
• Biological plausibility: There is a known or postulated mechanism by which exposure may alter disease risk.
• Coherence: The observed data should not conflict with known facts about the natural history of the disease.
• Experiment: Data from well-designed experiments must be used to support the association. If a disease is due to a causal agent, removing the agent should reduce or eliminate the disease.
• Analogy: In some cases, it is fair to judge causal relationship by analogy.
• Consistency: A similar association must be observed by different people in different places, circumstances, and times.
• Specificity: The relationship should support causation. Hence, the specificity of the association indicates that the disease is caused by the suspected agent.
• Temporal sequence: The exposure itself should precede the development of symptoms/outcomes.

Many environmental agents, acting either independently or in combination with other toxics, may induce a wide range of adverse health outcomes. These can include malignant neoplasms associated with exposure to radiation and various environmental, industrial, and agricultural chemicals; neurobehavioral and mental outcomes associated with lead, methyl mercury, or other neurotoxic agents; respiratory diseases associated with air contamination; reproductive and developmental outcomes associated with exposure to teratogenic and developmental toxicants; and other organ/tissue-specific effects or responses associated with environmental exposures.

Table 2    Major categories of the international classification of diseases

Category | Blocks | Description
I | A00-B99 | Certain infectious and parasitic diseases
II | C00-D48 | Neoplasms
III | D50-D89 | Diseases of the blood and blood-forming organs and certain disorders involving the immune system
IV | E00-E90 | Endocrine, nutritional, and metabolic disorders
V | F00-F99 | Mental and behavioral disorders
VI | G00-G99 | Diseases of the nervous system
VII | H00-H59 | Diseases of the eye and adnexa
VIII | H60-H95 | Diseases of the ear and mastoid process
IX | I00-I99 | Diseases of the circulatory system
X | J00-J99 | Diseases of the respiratory system
XI | K00-K93 | Diseases of the digestive system
XII | L00-L99 | Diseases of the skin and subcutaneous tissue
XIII | M00-M99 | Diseases of the musculoskeletal system and connective tissue
XIV | N00-N99 | Diseases of the genitourinary system
XV | O00-O99 | Pregnancy, childbirth, and the puerperium
XVI | P00-P96 | Certain conditions originating in the perinatal period
XVII | Q00-Q99 | Congenital malformations, deformations, and chromosomal abnormalities
XVIII | R00-R99 | Signs, symptoms, and abnormal clinical and laboratory findings, not elsewhere classified
XIX | S00-T98 | Injury, poisoning, and certain other consequences of external causes
XX | V01-Y98 | External causes of morbidity and mortality
XXI | Z00-Z99 | Factors influencing health status and contact with health services
XXII | U00-U99 | Codes for special purposes (new diseases of uncertain etiology and bacterial agents resistant to antibiotics)


Understanding the health risks associated with exposure to environmental factors requires an understanding of human biology and a thorough assessment of how environmental agents affect biological systems at the molecular, cellular, and target organ levels. In recent years, biomarker evaluation has become a very useful tool in identifying biological changes and alterations that are indicative of abnormalities and diseases. Hence, biomarkers are becoming extremely important in achieving the goals of biomedical research, and as a result, several innovative laboratory techniques and sensitive analytical technologies are now being deployed to detect and identify changes and alterations in chemical, physiological, and biochemical functions in support of molecular epidemiology. This article provides relevant information on various types of biological markers and discusses their importance with regard to genetic polymorphisms and cancer susceptibility in human populations.

Biomarkers in Environmental Epidemiology and Human Health

The National Research Council defines biological markers as “indicators signaling events in biological systems or samples” and “indicators of variation in cellular and biochemical components or processes, structures or functions which are measurable in a biological system or sample.” In general, a biomarker is commonly defined as any measurable chemical, biochemical, cytological, physiological, morphological, or other biological parameter that is directly or indirectly associated with exposure to an environmental toxicant. Depending on the magnitude, this alteration can be recognized as a health impairment or disease. Figure 1 presents the characteristics of the exposure–disease continuum and indicates related biomarkers. As shown in this figure, three groups of biomarkers (exposure, susceptibility, and effect) have been identified. A biomarker of exposure is an exogenous substance, its metabolite, or the product of an interaction between a xenobiotic agent and some target molecule or cell that is measured within an organism. A biomarker of susceptibility is an indicator of an inherent or acquired limitation of an organism’s ability to respond to the challenge of exposure to a specific xenobiotic compound. A biomarker of effect is a measurable biochemical, physiological, or other alteration within an organism that, depending on the magnitude, can be recognized as an established or potential health impairment or disease. An ideal biomarker is agent-specific, available for analysis via relatively noninvasive methods, detectable in trace concentrations, inexpensive to identify and measure, quantitatively relatable to the degree of exposure, quantitatively predictive of a health effect, and appears early in the exposure–disease continuum. As illustrated in Figure 2, the magnitude of biological response is strongly related to the degree of toxic exposure. Therefore, understanding the nature and function of exposure, susceptibility, and effect biomarkers is critical for avoiding mistakes in exposure and disease classification, refining the pathways and mechanisms of disease causation, and identifying high-risk persons and populations. Biomarkers have also been categorized into three major interrelated types, as defined and described in the following sections.

Type 1 Biomarkers

Type 1 biomarkers relate to the measurement of parent compounds, their metabolites, and adducts derived from their interaction with macromolecules such as nucleic acids, proteins, compound–receptor complexes, and other structures.

[Figure 1: Characteristics of the exposure–disease continuum and related sites for exposure, susceptibility, or effect biomarkers. The continuum runs from exposure through internal dose, biologically effective dose, early biological effects, and altered structure/function to clinical disease; exposure biomarkers map to the earlier stages, effect biomarkers to the later stages, and susceptibility biomarkers act throughout, shaping disease etiology and mode of action.]

[Figure 2: Relationship between the degree of toxic exposure and the magnitude of biological response. As toxic exposure increases from low to high, the biologic response progresses from DNA/protein modification and altered gene/protein expression through immunological and behavioral changes to decreased reproduction, disease, and death.]

Hence, type 1 biomarkers are indicative of both chemical exposure and body burden. Humans may be exposed to various physical, chemical, and biological agents through ingestion of contaminated water or food, inhalation of contaminated air, dermal contact with toxic compounds and ionizing radiation, and parenteral routes through injection. Once absorbed, parent compounds may be metabolized and distributed throughout the body. Hence, a biomarker of exposure may be the chemical itself (as is the case for most heavy metals) or its metabolite. Metabolites of carcinogens such as aflatoxin, benzene, and PAHs have been detected in urine. Although the presence of a parent compound or its metabolite provides evidence of exposure, it does not provide evidence that toxicological damage has occurred. Hence, measurement of chemical–DNA/protein adducts is of strong interest because these adducts are direct products of damage to critical macromolecules such as DNA and proteins and reflect an integration of the toxicokinetic processes of absorption, distribution, metabolism, and excretion. They are often referred to as measures of ‘biologically effective dose.’

Type 2 Biomarkers

Type 2 biomarkers are chemically nonspecific, as they relate to specific responses to absorbed dose at the molecular, cellular, and target organ levels. Such responses may be physiological (enzyme inhibition or alteration in homeostasis), genotoxic (DNA damage, sister chromatid exchange (SCE), micronucleus formation, chromosomal aberration, and mutation in reporter genes, oncogenes, and tumor suppressor genes), or oncogenic (oncogene activation and tumor formation) at the cellular level. Organ-level responses are also considered type 2 biomarkers and may be characterized by dysfunction, hyperplasia, and polyp formation.

Biomarkers of genotoxicity: A number of biomarkers, including structural chromosomal aberrations, SCEs, micronucleus formation, DNA strand breakage, and mutation, have been used to assess the genotoxicity of environmental compounds. Both chromosome-type aberrations (acentric rings, centric rings, inversions, reciprocal translocations, terminal deletions, minutes, and dicentric and polymeric aberrations) and chromatid-type aberrations (breaks, gaps, minutes, acentric rings, centric rings, inversions, symmetrical exchanges, isochromatid aberrations, and asymmetric interchanges) have been reported. Chromosomal aberrations are detected by cytological methods and have been classified as structural (based on the changes in structure and morphology of chromosomes) and numeric, including aneuploidy when there is a gain or loss of one chromosome and polyploidy when there is a loss or gain of a whole set of chromosomes. SCEs constitute an abnormality that occurs during cell replication as a chromosome duplicates its genetic material, forming sister chromatids attached at the centromere. Although this process is not linked to a disease end point, it is an indication that chromosomal alterations have occurred. Micronucleus formation is another biomarker of genotoxicity. It is a measure of chromosome fragments in interphase cells, resulting from their exclusion from the nucleus. It is hence an indication of chromosome breakage that occurred during the previous cell division or, when caused by damage to the mitotic spindle, of aneuploidy. DNA strand breakage is a genotoxicity biomarker commonly detected with the single-cell gel electrophoresis (comet) assay. The degree of DNA migration on the agarose gel is proportional to the amount of DNA damage.


Other assays, such as tandem-label cell-fluorescence in situ hybridization (FISH) and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL), have been used for measuring DNA double-strand breakage. Gene mutations often involve point mutations and reflect changes in the nucleotide sequences of DNA, as represented by base-pair substitutions or frameshifts. Mutations may also involve larger changes in DNA, including duplications, deletions, and rearrangements. Several mutational assays have been developed to test the genotoxicity of various environmental compounds. Current testing protocols include the Salmonella-based (Ames) assay, the Neurospora (fungal-based) assay, mammalian cell culture assays (the mouse lymphoma L5178Y assay and the human HeLa assay), mammalian in vivo assays (the mouse-specific locus assay), the classic assay in Drosophila, and plant system assays (the Tradescantia and Zea assays).

Biomarkers of oncogenic response and carcinogenesis: Carcinogenesis is a complex multistep process that involves epigenetic events, such as the inappropriate expression of certain cellular genes, and genetic events that include mutational activation of oncogenes and the inactivation of tumor suppressor genes. This process has been divided into at least three stages known as initiation, promotion, and progression. During the initiation phase, a normal cell undergoes an irreversible change characterized by an intrinsic capacity for autonomous growth. However, this capacity remains latent for a period of time during which the initiated cell is morphologically similar to the normal cell. Initiation involves alteration of cellular DNA, and this effect may be triggered by exposure to a known carcinogen or to an ultimate carcinogen generated by metabolic activation of a procarcinogen. Initiation is hence a genotoxic or DNA-damaging event, one in which an alteration in the DNA sequence is produced. It has also been reported that important epigenetic mechanisms of gene regulation, such as DNA methylation and histone modifications, play essential roles in tumor initiation and progression, both independently and cooperatively.

In the promotion stage of carcinogenesis, specific agents, known as promoters, enhance the development of neoplasms in initiated cells. Promoters do not cause cancer by themselves. The temporal sequence of exposure is important in characterizing promotion, a process that is considered a nongenotoxic or epigenetic event. If an agent potentiates carcinogenesis through coexposure with an initiator, it is called a cocarcinogen rather than a promoter. The third stage of carcinogenesis, progression, involves additional genetic damage and enhances the development of malignant tumors from benign tumors. Associated with progression is the development of an increased degree of karyotypic instability and of aneuploidy leading to chromosomal rearrangements, especially in leukemia.

Early detection in cancer prevention and control has been one of the primary stimuli for the quest to discover cancer at an early and treatable stage. Hence, several biomarkers have been used for cancer screening, including prostate-specific antigen (PSA) for prostate cancer, alpha-fetoprotein for liver cancer, thyrocalcitonin for medullary carcinoma of the thyroid, 5-hydroxy-indolacetic acid in the urine for carcinoid tumors, carcinoembryonic antigen (CEA) for tumors at several sites, and the Papanicolaou cytology test as a marker of preclinical cervical cancer. Both oncogenes and tumor suppressor genes are involved in the regulation of carcinogenesis. Research has indicated that the expression levels of both oncogenes and tumor suppressor genes tend to correlate with tumor stage. Hence, the assessment of their expression levels has been used as a biomarker of tumorigenesis. The current state of knowledge shows that cancer results from changes in the structure or expression of specific genes, by various mechanisms that may include point mutation, gene amplification, translocation, chromosomal aberrations, somatic recombination, DNA methylation, or gene conversion. Oncogenes are dominant-acting structural genes that encode protein products capable of transforming the phenotype of a cell. They are known to encode growth factors, growth factor receptors, regulatory proteins in signal transduction, nuclear regulatory proteins, and protein kinases. The activation of these gene products, including c-onc, c-myc, c-abl, N-myc, K-ras, c-H-ras, c-N-ras, c-K-ras, myc, myb, erbB, mos, Ink1 and Ink2, Jun, and others, contributes to the neoplastic process. Tumor suppressor genes, however, are regulatory genes that function to suppress or limit growth by inhibiting the activity of structural genes that are responsible for cellular growth. Although protooncogenes have to be activated to influence carcinogenesis, tumor suppressor genes have to be inactivated for the transformed phenotype to be expressed. Hence, several human cancers have been associated with mutations in various tumor suppressor genes including p53, RB1, APC, WT-1, NF-1, p16INK4, BRCA1, BRCA2, TSC2, MSH2, MLH1, VHL, and PTC.

The ubiquitin-proteasome system (UPS) has been reported to play a significant role in cell fate and carcinogenesis. UPS inhibition has been found to be a prerequisite for apoptosis and is already clinically exploited with the proteasome inhibitor bortezomib in multiple myeloma. Several classes of environmental toxins, including pesticides, heavy metals, and pharmaceutical drugs, have been reported to induce UPS dysfunction, leading to the pathogenesis and progression of both chronic and degenerative diseases. The cyclooxygenase (COX) enzymatic system has also been implicated in carcinogenesis and apoptosis. It includes two isoenzymes, COX-1 and COX-2, that convert arachidonic acid to prostaglandins. COX-1 is constitutively expressed and considered to be a housekeeping gene, whereas COX-2 is not usually detectable in normal tissues, but can be readily induced in processes like inflammation, reproduction, and carcinogenesis. The mechanisms by which COX-2 is thought to be involved in carcinogenesis include resisting apoptosis, increasing cell proliferation, stimulating angiogenesis, and modulating the invasive properties of cancer cells. Hence, COX-2 inhibition by nonsteroidal anti-inflammatory drugs (NSAIDs), such as aspirin, has been reported as one of the molecular mechanisms involved in their chemoprevention against colon cancer, breast cancer, and cancers at other sites.

Disruptions of the control mechanisms of the cell cycle can initiate carcinogenesis and play a role in progression to cancer. Research has shown that the increase in cell proliferation during endometrial carcinogenesis parallels the progressive derailment of cyclin B1, cyclin D1, cyclin E, p16, p21, p27, p53, and cdk2, indicating the importance of these cell cycle regulators in endometrial carcinogenesis. Inactivation of the p53 tumor suppressor gene through a point mutation (a G-to-T transversion being the most common) or loss of heterozygosity is one of the most common genetic changes found in various types of human tumors.


Type 3 Biomarkers

Type 3 biomarkers are susceptibility factors that render specific individuals and population subgroups more vulnerable to the toxic effects of environmental compounds. Individual susceptibility is influenced by both genetic and environmental factors. Although acquired susceptibility indicators such as age, diet, lifestyle, and previous exposure to toxic compounds, infections, and diseases appear to play a role, genetically determined biomarkers associated with various types of polymorphisms in biotransformation enzymes have gained significant interest in molecular/cancer epidemiology. Recent advances in molecular biology, genomics, pharmacology, and toxicology, and improved understanding of molecular mechanisms of action, have assisted in identifying population subgroups that are susceptible to diseases in general, and cancers in particular. Hence, susceptibility biomarkers have been used in environmental epidemiology to stratify populations into specific subgroups so that disease risks/rates for both susceptible and nonsusceptible groups can be estimated more accurately.

Genetic variation may contribute to significant individual differences in the biological responses from exposure to xenobiotic compounds. Studies investigating the relationship between common genetic variants and human diseases such as cancer are gaining a lot of interest. Several studies have reported specific alleles of different genes that are associated with various cancer types. Variants of several metabolizing enzymes, including the inducible aryl hydrocarbon hydroxylase, the noninducible N-acetyl transferase (NAT), and the glutathione-S-transferase (GST), have been well characterized. Given the same level of environmental exposure, individuals will vary in their biological response depending on their genetic makeup, acquired characteristics, and other previous exposures.

Polymorphisms in phase I biotransformation enzymes: Common genetic traits that control the biotransformation of xenobiotics into toxic or nontoxic metabolites appear to play a very important role. The superfamily of cytochrome P-450 enzymes catalyzes the oxidative metabolism of both endogenous compounds (steroids or fatty acids) and exogenous chemicals (drugs, PCBs, PAHs, or aromatic amines). Many of the cytochrome P-450 genes exist in variant forms or polymorphisms that have different levels of biological activity. Hence, genetic variation in this enzyme system is important in the modulation of various types of toxicity. Members of the cytochrome P-450 superfamily, such as CYP1A1, catalyze the oxygenation of PAHs, producing intermediates that are highly reactive with DNA. CYP1A1 is induced by environmental compounds such as PAHs, PCBs, and dioxins. Research has pointed out significant variations in CYP1A1 expression in various populations in response to such environmental exposure. For example, a 10% variation is observed in Caucasian populations. CYP1A1 activity has been reported to vary more than 50-fold in lung cancer, whereas its inducibility varies by 20-fold in human liver. An increased risk of lung cancer has been associated with smokers expressing a high level of CYP1A1 inducibility. A polymorphism in the CYP1A1 gene, known as Msp and characterized by an exon 7 mutation, has been reported in Japanese populations, showing a threefold increase in lung cancer risk. Variations in other cytochrome P-450 enzymes have also been associated with other types of cancer. CYP1A2 is known to metabolize aflatoxins, arylamines, and heterocyclic amines. Polymorphism in the CYP1A2 gene, especially in combination with the variant NAT2 gene, has been associated with colon cancer risk. It has been reported that there is more than 40-fold variation in CYP1A2 expression in the liver. P-450 CYP2E1 is known to metabolize benzene, butadiene, carbon tetrachloride, and N-nitrosamines. Its activity in the liver has been reported to vary by 50-fold in humans. P-450 CYP2D6 also varies significantly among human populations: approximately 90–95% of the US Caucasian population is considered to be extensive metabolizers, showing a 10- to 200-fold higher rate than slow metabolizers.

Polymorphisms in phase II biotransformation: In contrast to phase I biotransformation-activating enzymes, phase II metabolizing enzymes such as GST, NAT, glucuronyl transferase, epoxide hydrolase, and sulfotransferase generally detoxify xenobiotic compounds by conjugating them with endogenous substances (glutathione, acetyl, sulfate, or glucuronide) to yield products that are more water-soluble and readily excreted. Toxicity risk is determined by the balance between phase I and phase II reactions. GSTM1 is known to detoxify a number of reactive, electrophilic substances, including PAHs. It has been reported that 50% of Caucasians have a deletion in the GSTM1 gene, and this mutation has been associated with increased risk of bladder and lung cancer. Research on lung biopsies has shown a strong correlation between PAH–DNA adducts, lung cancer, and mutation in GSTM1. Another phase II biotransformation enzyme, NAT, is a noninducible liver enzyme that detoxifies carcinogenic aromatic amines, including some of the main carcinogenic components of tobacco smoke and cooked meat, through acetylation. It has been reported that 50–60% of Caucasians and 30–40% of African Americans are slow acetylators. Fast acetylators possess the NAT1 gene, and slow acetylators contain the NAT2 gene. Research has shown that slow acetylators are at high risk of developing bladder cancer due to exposure to environmental compounds such as 2-naphthylamine and 4-amino-benzo(a)pyrene. Research has also demonstrated that among women who smoke, slow acetylators are at higher risk of developing breast cancer. However, the action of NATs on carcinogens can also produce electrophilic ions from the metabolism of heterocyclic amines, leading to a high risk of colon cancer in rapid acetylators.

Overwhelming evidence suggests strong associations between environmental exposures and human cancers. However, several investigations have also demonstrated that specific polymorphisms in a wide spectrum of genes have a significant impact in modifying the susceptibility and health effects associated with these exposures. Hence, studies on susceptibility genes and genetically susceptible population subgroups in the context of gene–environment interactions are important in understanding both environmental and genetic risk factors, as well as disease outcomes. They are also essential for developing new clinical and public health strategies for disease prevention and control. Using breast and lung cancers as examples, the following sections illustrate the role played by genetic polymorphism in environmentally induced diseases.


Breast Cancer and Genetic Polymorphism

Breast cancer is the most common type of cancer among women in the United States, accounting for approximately 200 000 new cases and 40 000 deaths in 2006. Globally, over 1 million new cases were diagnosed, and there were approximately 400 000 deaths in 2002. It is the third most common type of cancer, after cancers of the lung and stomach. Breast cancer susceptibility has been associated with genetic factors. Although 10–15% of breast cancer cases have been associated with family history, approximately 5–7% can be explained by rare, highly penetrant mutations in breast cancer genes including BRCA1 and BRCA2. Other rarer, high-penetrance genes such as TP53 and PTEN account for less than 5% of the risk. Mutations in the TP53 and BRCA1 genes have been associated with a high lifetime risk of breast cancer and other cancers. p53 is a tumor suppressor gene whose protein is produced in response to exposure to genotoxic agents or radiation, resulting in cell cycle arrest at the G0/G1 checkpoint of the cell cycle, stimulation of DNA repair mechanisms, and apoptosis. TP53 regulates the cellular stress response by controlling apoptosis and cell cycle regulation through arrest at the G2 checkpoint. Mutation in this tumor suppressor gene may result in decreased p53 protein translation, leading to increased replication of cells with damaged DNA. Mutations in the BRCA1 gene (on chromosome 17q12–21) also result in reduced protein expression and increased breast cancer risk. Hence, under similar exposure conditions, women with p53 and BRCA1 mutations present greater susceptibility to breast cancer. Women with germ line BRCA1 mutations are estimated to have an 80–90% risk of developing breast cancer. However, published research on many cancer cell lines has also suggested that p53 mutation may lead to the inactivation of endogenous wild-type p53 protein in a dominant-negative fashion, resulting in an increase in p53 expression due to stabilization. Hence, dominant-negative p53 mutants may accelerate tumor development and growth.

Genetic association studies have also been performed with other common alleles of specific genes, including those of the phase I biotransformation enzyme system (especially CYP1A1 and CYP2D6) and the phase II metabolizing system (GSTM1 and GSTT1, and NAT1 and NAT2). CYP1A1 has been shown to play a key role in phase I metabolism of PAHs and in estrogen metabolism. Several mutations in CYP1A1 have been described, and four specific polymorphisms (3801T/C, Ile462Val, 3205T/C, and Thr461Asp) have been studied in relation to breast cancer. Although most studies found no evidence of an association between the 3801T/C (CYP1A1*2A and CYP1A1*2B) and 3205T/C (CYP1A1*3) polymorphisms and breast cancer, others reported an increased breast cancer risk for the Ile462Val (CYP1A1*2B and CYP1A1*2C) and Thr461Asp (CYP1A1*4) polymorphisms. As indicated earlier in the text, a genetic change that increases the expression level of metabolizing proteins may either increase the amount of reactive metabolites, leading to an increase in cancer risk, or reduce the level of reactive intermediates, leading to a reduction in cancer risk.

GSTs play an important role in the detoxification of xenobiotic compounds by catalyzing the conjugation of the glutathione moiety to the substrate. Six major gene families (GSTA, GSTM, GSTP, GSTS, GSTT, and GSTZ) encoding six families of enzymes (α, μ, π, σ, θ, and ζ) have been characterized. Polymorphisms exist in many of the GST genes, in some cases resulting from the absence of the gene in some individuals (the GSTM1 null genotype). The GSTM1 and GSTT1 genes are located on chromosome 1 and chromosome 11q, respectively. The GSTP1 gene has also been characterized. Although a few studies have shown an association between possession of the GSTM1 null genotype and increased breast cancer risk, others have shown no significant relationship. Also, there is a clear lack of association between GSTT1 and GSTP1 and breast cancer. NATs catalyze the detoxification of xenobiotic compounds through N-acetylation (NAT1) or O-acetylation (NAT2). At least 15 NAT1 or NAT2 alleles have been characterized. Although NAT1*4 is the wild type, NAT1*14 and NAT1*15 have been shown to be defective alleles with low catalytic activity. By contrast, possession of NAT1*10 has been associated with high activity in the colon and bladder, but not in the breast and placenta, and hence may increase the risk of breast cancer among smokers. NAT2*4 and NAT2*12A have been associated with fast acetylation status.

Breast cancer risk has also been reported to be highly modulated by steroid hormone-metabolizing genes. Factors such as age at first pregnancy, age at menarche, number of pregnancies, and age at menopause are known to modulate both endogenous hormones and breast cancer susceptibility genes, including CYP17, CYP19 (aromatase), and the gene for 17-beta-hydroxysteroid dehydrogenase type 2. Both CYP17 and CYP19 are involved in steroid hormone metabolism forming testosterone or estrogen and have been associated with increased breast cancer risk. Another important gene encodes the catechol-O-methyltransferase (COMT) enzyme that metabolizes catechol estrogens during their conjugation and inactivation. A genetic polymorphism in the COMT gene (guanine to adenine, creating a valine-to-methionine substitution at codon 158) is associated with decreased activity, reduced catechol–estrogen conjugation, and increased breast cancer risk. Catechol estrogens have been reported to directly or indirectly cause oxidative DNA damage, lipid peroxidation, and DNA adducts through their quinone metabolites.

Very high inherent susceptibility has been associated with dominant genetic disorders such as the rare Li–Fraumeni syndrome, in which persons inheriting a germ line deletion of one allele of the p53 tumor suppressor gene are at extremely high risk (100%) of developing breast cancer and other types of cancer. It has been reported that mutations of highly penetrant breast cancer susceptibility genes such as TP53 (Li–Fraumeni syndrome), PTEN (Cowden syndrome), MSH2 (Muir–Torre syndrome), and STK11 (Peutz–Jeghers syndrome) are extremely rare in the general population (less than 1%), and unlikely to be manifested in families with inherited breast cancer susceptibility. Table 3 presents the genes conferring a high risk of breast cancer and other cancers.

Lung Cancer and Genetic Polymorphism

Lung cancer is the leading cause of cancer deaths in the United States, accounting for 29% of all cancer deaths in 2006. From a global perspective, it is also the most common cancer worldwide. Lung cancer has been strongly associated with exposure to tobacco smoke. Hence, genetic susceptibility has been investigated in terms of an individual’s ability to metabolize potential carcinogens such as PAHs, PCBs, and heterocyclic amines present in cigarette smoke.

Table 3    Highly penetrant genes in breast cancer and other cancers

Gene | Chromosome | MIM | Risk | Associated syndrome | Clinical manifestation
BRCA1 | 17q21 | 113 705 | 56–87% (by age 70) | Hereditary breast/ovarian cancer | Breast cancer and ovarian cancer
BRCA2 | 13q12.3 | 600 185 | 84% (by age 70) | Hereditary breast/ovarian cancer | Breast cancer and ovarian cancer
TP53 | 17p13.1 | 191 170 | 50–89% (by age 50) | Li–Fraumeni syndrome | Sarcoma, brain tumors, and breast cancer
PTEN/MMAC1 | 10q23 | 601 728 | 30–40% | Cowden disease | Multiple hamartomatous skin lesions, mucous membrane lesions, and cancer of the breast and thyroid
STK11 | 19p13.3 | 602 216 | High | Peutz–Jeghers syndrome | Melanocytic macules of lips; multiple polyps; tumors of the intestinal tract, breast, ovaries, etc.
MLH1 | 3p21.3 | 120 436 | 12% (lifetime) | Muir–Torre syndrome – hereditary nonpolyposis colorectal cancer | Colorectal cancer and tumors of the endometrium, ovaries, intestinal tract, breast, etc.
MSH2 | 2p21.22 | 120 435 | 12% (lifetime) | Muir–Torre syndrome – hereditary nonpolyposis colorectal cancer | Colorectal cancer and tumors of the endometrium, ovaries, intestinal tract, breast, etc.
ATM | 11q22.3 | 208 900 | OR (1–6.8) | Ataxia telangiectasia | Progressive cerebral ataxia, hypersensitivity to radiation, and increased cancer risk

MIM, Mendelian inheritance in man; OR, odds ratio.

Individual susceptibility is likely to be modified by the genotype for biotransformation enzymes involved in either detoxification or bioactivation of xenobiotic compounds such as those present in tobacco smoke. It has been reported that tobacco smoke is the largest single cause of lung and oral cancers. Genetic studies have shown that polymorphisms in phase I biotransformation genes play an important role in the development of lung cancer from exposure to xenobiotic compounds. It has been demonstrated that variants of cytochrome P-450 genes, alone or in combination, constitute important risk modifiers of tobacco-related cancers, and especially lung cancer. Extensive studies have been conducted to identify lung cancer-associated genotypes, especially CYP1A1, CYP2D6, and CYP2E1. For example, different lung cancer risks have been associated with alleles of CYP1A1 in studies of Japanese populations, showing an association between increased cancer risk and the prevalence of both the m1 and m2 alleles. This risk is particularly high for the development of squamous cell carcinoma among light smokers. In African Americans, the m3 allele does not seem to confer high lung cancer risk; however, an increase in adenocarcinoma has been reported. Also, the m4 allele has been related to the m1 genotype in Caucasians, but no significant increase in lung cancer risk has been recorded with this m4 allele.

Polymorphisms in other phase I metabolizing genes, such as CYP2D6 and CYP2E1, which catalyze the metabolism of amine-containing compounds, have also been examined. However, conflicting results have been reported with regard to the influence of these specific genotypes on lung cancer risk. Polymorphisms in the phase II biotransformation genes, including those of the GST and NAT superfamilies, have also been studied. Among the four different families (α, μ, π, and θ) of GST that have been identified, three are polymorphic in humans. Significant ethnic differences have been associated with the GSTM1 null genotype, which is found in 22–35% of African Americans, 38–67% of Caucasians, and 33–63% of East Asians. Genetic studies on NAT polymorphisms have not shown any significant associations with lung cancer risk. Research has also shown that gene–gene interactions play a key role in lung cancer susceptibility. For example, a higher lung cancer risk is observed, especially for squamous cell carcinoma in Japanese populations, with the combination of CYP1A1*2 or CYP1A1*1 and the null GSTM1 gene in susceptible populations compared to those having other combinations of genotypes.

Conclusions

Recent developments in molecular biology and bioanalytical technology have provided new tools for use in environmental health research and human health risk assessment. New analytical methodologies have significantly improved the validity of many biomarkers (of exposure, susceptibility, or effect) that are currently being used in health and disease management. There is compelling evidence from molecular epidemiological research that environmental factors play a key role in human cancers and that the prevalence and incidence rates of these diseases are strongly modulated by genetic and acquired susceptibility. Genetic polymorphism is known to play an important role in understanding the variability in the biological response to carcinogens. This suggests that certain population subgroups with specific genetic traits are more likely to have greater risks of cancer from selected exposures than other members of the population.

Acknowledgments

This work has been supported by a grant (#1G12RR13459) from the National Institutes of Health through the RCMI Center for Environmental Health at Jackson State University (Jackson, Mississippi, USA).

See also: Biomarkers of Environmental Exposures in Blood; Environmental Epidemiology and Human Health: Biomarkers of Disease and Genetic Susceptibility; Hair for Biomonitoring of Environmental Exposures; The Exposome: An Approach Toward a Comprehensive Study of Exposures in Disease; Toenails for Biomonitoring of Environmental Exposures; Tooth Biomarkers in Environmental Health Research.

Further Reading

Bapat, B., Esufali, S., 2000. The role of genetic modifiers, both mutations and polymorphisms, and environmental factors in cancer susceptibility. In: Ehrlich, M. (Ed.), DNA Alterations in Cancer – Genetic and Epigenetic Changes. Eaton Publishing, Natick, MA, pp. 85–101.
Brownson, R.C., Remington, P.L., Davis, J.R., 1998. Chronic Disease Epidemiology and Control. American Public Health Association, Washington, DC, p. 546.
Buskens, C.J., Ristimaki, A., Offerhaus, G.J., Richel, D.J., van Lanschot, J.J., 2003. Role of cyclooxygenase-2 in the development and treatment of oesophageal adenocarcinoma. Scandinavian Journal of Gastroenterology 239 (supplement), 87–93.
De Ruyck, K., Szaumkessel, M., De Rudder, I., et al., 2007. Polymorphisms in base-excision repair and nucleotide-excision repair genes in relation to lung cancer risk. Mutation Research 631, 101–110.
Dunning, A.M., Healey, C.S., Pharoah, P.D.P., Teare, M.D., Ponder, B.A.J., Easton, D.F., 1999. A systematic review of genetic polymorphisms and breast cancer risk. Cancer Epidemiology, Biomarkers & Prevention 8, 843–854.
Friis, R.H., Sellers, T.A., 1999. Epidemiology for Public Health Practice, 2nd edn. Aspen Publishers, Inc., Gaithersburg, MD, p. 506.
Hung, R.J., Boffetta, P., Brockmoller, J., et al., 2003. CYP1A1 and GSTM1 genetic polymorphisms and lung cancer in Caucasian non-smokers: A pooled analysis. Carcinogenesis 24 (5), 875–882.
Jain, M., Kumar, S., Rastogi, N., et al., 2006. GSTT1, GSTM1, GSTP1 genetic polymorphisms and interaction with tobacco, alcohol and occupational exposure in esophageal cancer patients from North India. Cancer Letters 242, 60–67.
Kawajiri, K., 1999. Molecular epidemiology of lung cancer. In: Puga, A., Wallace, K.B. (Eds.), Molecular Biology of the Toxic Response. Taylor & Francis, Philadelphia, PA, pp. 53–62.
Kiyohara, C., Otsu, A., Shirakawa, T., Fukuda, S., Hopkin, J.M., 2002. Genetic polymorphisms and lung cancer susceptibility: A review. Lung Cancer 37 (3), 241–256.
Krajinovic, M., Richer, C., Sinnett, H., Labuda, D., Sinnett, D., 2000. Genetic polymorphisms of N-acetyltransferases 1 and 2 and gene–gene interaction in the susceptibility to childhood acute lymphoblastic leukemia. Cancer Epidemiology, Biomarkers & Prevention 9, 557–562.
Li, L.-C., Carroll, P.R., Dahiya, R., 2005. Epigenetic changes in prostate cancer: Implication for diagnosis and treatment. Journal of the National Cancer Institute 97 (2), 103–115.
Marutani, M., Tonoki, H., Tada, M., et al., 1999. Dominant-negative mutations of the tumor suppressor p53 relating to early onset of glioblastoma multiforme. Cancer Research 59, 4765–4769.
Mechanic, L.E., Millikan, R.C., Player, J., et al., 2006. Polymorphisms in nucleotide excision repair genes, smoking and breast cancer in African Americans and whites: A population-based case-control study. Carcinogenesis 27 (7), 1377–1385.
Meyer, U.A., 1999. Polymorphisms of genes of toxicologic significance. In: Puga, A., Wallace, K.B. (Eds.), Molecular Biology of the Toxic Response. Taylor & Francis, Philadelphia, PA, pp. 63–71.
Miller, A.B., Bartsch, H., Boffetta, P., Dragsted, L., Vainio, H. (Eds.), 2001. Biomarkers in Cancer Chemoprevention. International Agency for Research on Cancer, Lyon, France, p. 294. IARC Scientific Publication No. 154.
Millikan, R.C., 2000. NAT1*10 and NAT1*11 polymorphisms and breast cancer risk. Cancer Epidemiology, Biomarkers & Prevention 9, 217–219.
Moorman, P.G., Sesay, J., Nwosu, V., et al., 2005. Cyclooxygenase 2 polymorphism (Val511Ala), nonsteroidal anti-inflammatory drug use and breast cancer in African American women. Cancer Epidemiology, Biomarkers & Prevention 14 (12), 3013–3014.
Moran, E.M., 2002. Epidemiological and clinical aspects of nonsteroidal anti-inflammatory drugs and cancer risks. Journal of Environmental Pathology, Toxicology and Oncology 21 (2), 193–201.
Mostorides, S., Maronpot, R.R., 2002. Carcinogenesis. In: Haschek, W.M., Rousseaux, C.G., Wallig, M.A. (Eds.), Handbook of Toxicologic Pathology. Academic Press, New York, pp. 3–122.
Nauman, C.H., Griffith, J., Blancato, J.N., Aldrich, T.E., 1993. Biomarkers in environmental epidemiology. In: Aldrich, T., Griffith, J., Cooke, C. (Eds.), Environmental Epidemiology and Risk Assessment. Van Nostrand Reinhold, New York, NY, pp. 152–181.
Nebert, D.W., 2002. Ecogenetics: Genetic susceptibility to environmental adversity. In: Wilson, S.H., Suk, W.A. (Eds.), Biomarkers of Environmentally Associated Disease: Technologies, Concepts, and Perspectives. CRC Press, Boca Raton, FL, pp. 39–53.
Ozoren, N., El-Deiry, W.S., 2000. Introduction to cancer genes and growth control. In: Ehrlich, M. (Ed.), DNA Alterations in Cancer – Genetic and Epigenetic Changes. Eaton Publishing, Natick, MA, pp. 3–43.
Perera, F.P., 1996. Molecular epidemiology: Insights into cancer susceptibility, risk assessment and prevention. Journal of the National Cancer Institute 88 (8), 496–509.
Rebbeck, T.R., 2000. Inherited susceptibility to breast cancer in women: High-penetrance and low-penetrance genes. In: Ehrlich, M. (Ed.), DNA Alterations in Cancer – Genetic and Epigenetic Changes. Eaton Publishing, Natick, MA, pp. 253–270.
Semenza, J.C., Ziogas, A., Largent, J., Peel, D., Anton-Culver, H., 2001. Gene-environment interactions in renal cell carcinoma. American Journal of Epidemiology 153 (9), 851–859.
Smart, R.C., Akunda, J.K., 2001. Carcinogenesis. In: Hodgson, E., Smart, R.C. (Eds.), Introduction to Biochemical Toxicology, 3rd edn. Wiley, New York, pp. 343–395.
Smith, T.R., Levine, E.A., Perrier, N.D., et al., 2003. DNA-repair genetic polymorphisms and breast cancer risk. Cancer Epidemiology, Biomarkers & Prevention 12, 1200–1204.
Sun, F., Kanthasamy, A., Anantharam, V., Kanthasamy, A.G., 2007. Environmental neurotoxic chemicals-induced ubiquitin proteasome system dysfunction in the pathogenesis and progression of Parkinson's disease. Pharmacology and Therapeutics 114, 327–344.
Tchounwou, P.B., Centeno, J., 2008. Toxicologic pathology. In: Gad, S.C. (Ed.), Preclinical Development Handbook. Wiley, New York, NY.
Terry, P.D., Goodman, M., 2006. Is the association between cigarette smoking and breast cancer modified by genotype? A review of epidemiologic studies and meta-analysis. Cancer Epidemiology, Biomarkers & Prevention 15 (4), 602–611.
Voutsadakis, I.A., 2007. Pathogenesis of colorectal carcinoma and therapeutic implications: The roles of the ubiquitin–proteasome system and Cox-2. Journal of Cellular and Molecular Medicine 11 (2), 252–285.
Young, K.E., Robbins, W.E., Xun, L., Elashoff, D., Rothmann, S.A., Perreault, S.D., 2003. Evaluation of chromosome breakage and DNA integrity in sperm: An investigation of remote semen collection conditions. Journal of Andrology 24 (6), 853–861.
Zienolddiny, S., Campa, D., Lind, H., et al., 2006. Polymorphisms of DNA repair genes and risk of non-small cell lung cancer. Carcinogenesis 27 (3), 560–567.

Environmental Factors in Children's Asthma and Respiratory Effects
PD Sly and A Chacko, The University of Queensland, Brisbane, Australia; PG Holt, The University of Queensland, Brisbane, Australia; and The University of Western Australia, Perth, Australia
© 2015 Elsevier Inc. All rights reserved.

Abbreviations

AR Airway responsiveness
CO Carbon monoxide
CpG Cytosine and guanine separated by a phosphate
DCs Dendritic cells
DEPs Diesel exhaust particles
DNA Deoxyribonucleic acid
ETS Environmental tobacco smoke
FEF25–75 Forced mid-expiratory flow rate
FEV1 Forced expiratory volume in one second
FVC Forced vital capacity
GSH Glutathione
GSTs Glutathione S-transferases
HDM House dust mite
HDMA House dust mite aerosol
IAP Indoor air pollution
Ig Immunoglobulins
IL Interleukin
LPS Lipopolysaccharide
LRI Lower respiratory infections
MHC Major histocompatibility complex
mVOCs Microbial volatile organic compounds
NAC N-acetyl cysteine
NO Nitric oxide
NO2 Nitrogen dioxide
O3 Ozone
OC Organochlorines
OP Organophosphates
OR Odds ratio
PM2.5 Particulate matter less than 2.5 μm
PM10 Particulate matter less than 10 μm
PUFA Polyunsaturated fatty acids
RNA Ribonucleic acid
ROS Reactive oxygen species
RR Relative risk
RTDC Respiratory tract dendritic cells
SOD Superoxide dismutase
Th T-helper cells
TLRs Toll-like receptors
VOC Volatile organic compounds

Development of Asthma

Childhood asthma is a condition characterized by airflow obstruction that varies over time, spontaneously, in response to various environmental stimuli and in response to treatment. Asthma is more common in childhood than in adulthood and is more correctly thought of as a syndrome than as a discrete condition. A number of epidemiologically distinct phenotypes are recognizable in childhood; however, these are best defined from longitudinal cohort studies and are of limited clinical use. The most commonly recognized phenotypes are: transient infantile wheeze, where children have recurrent wheeze during the first three years of life but rarely afterwards; viral-associated wheeze, where children typically have episodic wheeze associated with respiratory viral infections and may not wheeze at other times; and atopic asthma, where children have wheeze associated with allergic sensitization to aeroallergens and frequently have features of other atopic diseases such as atopic dermatitis and allergic rhino-conjunctivitis.

Asthma as a Developmental Disease

Asthma can be thought of as a failure of development, in which the normal development of the respiratory and immune systems is altered by environmental exposures acting on underlying genetic predispositions. From this point of view, it makes sense first to review briefly the basics of normal respiratory and immune system development, and then to consider how alterations of these by genetic and environmental factors result in the development of asthma. The consequence of a given environmental exposure is largely determined by the developmental stage during which the exposure occurs. Exposures occurring during organogenesis may result in structural abnormalities, whereas the same exposure occurring after organ maturation is complete may have no effect, or may affect organ function rather than structure. This concept is known as "windows of susceptibility", which differ between organs and between exposures. While many organ systems are essentially fully developed at birth or soon after, the respiratory, immune and central nervous systems are not, and have prolonged periods of postnatal development and maturation. Thus, these three organ systems are particularly vulnerable to adverse environmental exposures. In the context of childhood asthma, the developmental-stage susceptibility of the respiratory and immune systems is important in the pathogenesis of the disease.

Normal Development of the Respiratory System

Knowledge of the phases of normal lung development allows an understanding of how the timing of adverse environmental exposures determines their effects on lung structure and function.

Pre-natal Development

Lung development begins early in gestation, with the primary organogenesis occurring in the embryonic and pseudoglandular periods. Environmental exposures occurring during these periods are likely to have structural consequences. Airway development is essentially complete before birth: airway branching is complete to the terminal bronchioles by 16 weeks gestation, and the pulmonary vasculature develops along with the airways. Airway smooth muscle development begins around 8–10 weeks gestation and has extended to the respiratory bronchioles by 26 weeks gestation. Cartilage development is essentially complete by 28 weeks. Alveolar development begins around 24 weeks gestation, and at birth approximately 30–50% of the final complement of alveoli is present. Lamellar bodies, the structures responsible for secreting and storing surfactant, appear within type II alveolar epithelial cells by 24 weeks gestation.

Post-natal Development

After birth, alveolarization continues rapidly for the first 18–24 months. While the timing of cessation of alveolar development is not known with certainty, and alveolar formation may continue into later life, the rate is most rapid in early post-natal life. The pulmonary microvasculature largely develops during this secondary phase of alveolarization. Lung volume increases along with somatic growth, approximately doubling from birth to 18 months, doubling again by 5 years of age, and doubling again by adult life. Airway calibre also increases with somatic growth. Growth in lung function continues longer in boys, extending into the early 20s, whereas lung growth appears to stop in the late teen years in girls. Boys are thought to have relatively smaller airways for the size of their lungs than girls in early life, and this is thought to contribute to the increased prevalence of wheezing in boys during infancy and the preschool years. Lungs grow along trajectories set in early life, similar to percentiles for somatic growth. This means that adverse influences on lung growth in early life, such as maternal smoking during pregnancy, are likely to have life-long consequences. Maternal exposure to air toxicants, including ambient air pollutants, household chemicals and personal care products, especially those in aerosol form, has also been shown to limit growth of lung function.

Normal Development of the Immune System

The immune system has two major arms, the innate immune system and the adaptive immune system. The innate immune system in the lungs represents the first line of defence against invading organisms, consisting of non-specific responses triggered primarily by resident macrophages, dendritic cells, and epithelial cells that recognise conserved molecular patterns carried on the surface of micro-organisms. These cells secrete cytokines and chemokines that recruit inflammatory cells to the lungs. The innate immune system relies on a limited number of pattern recognition receptors located on macrophages, dendritic cells, epithelial and inflammatory cells. Secreted receptors, such as CD14 or lipopolysaccharide (LPS)-binding protein, bind to microbes and facilitate their destruction by phagocytosis or via the complement system. Toll-like receptors (TLRs) induce antimicrobial genes and inflammatory cytokines within a variety of cells, and additionally activate dendritic cells (DCs), the major professional antigen-presenting cells in the airways, to initiate adaptive immune responses.

The adaptive immune system adds specificity to host defense responses through the recognition of individual antigens, resulting in the subsequent activation of both humoral (antibody-mediated) and cellular responses, mediated respectively by B- and T-cells. A fundamental characteristic of the adaptive immune system is the development of immunologic memory, in which a rapid response is mounted on subsequent re-infection with individual pathogens. While the strength of such a mechanism in providing resistance to infection is important for survival in the face of infectious diseases, inadequately regulated immunological memory is also the basis for immunopathology in allergic disease and, in some cases, during reinfection with viruses that elicit strong memory responses.

Pre-natal Development

Both the innate and adaptive immune systems undergo considerable development in utero, but both are immature at birth. Circulating T-cells can be demonstrated by 15 weeks gestation and are capable of proliferating in response to mitogen stimulation in vitro by 17 weeks gestation. Surface markers characteristic of T-cells, i.e. CD3, CD4 and CD8, have been demonstrated by 18 weeks gestation, as has the surface expression of Major Histocompatibility Complex class II molecules, which bind antigen fragments for presentation to T-cells, resulting in their subsequent activation. T-cell responses to antigen, demonstrated using ex vivo stimulation protocols, have been shown to occur as early as 22 weeks gestation. However, as discussed later, considerable doubt exists over the specificity of these responses. Fetal and placental tissues secrete cytokines in utero, and measuring these in cord blood can give an indication of the maturational state of the fetal immune system.

Post-natal Development

The immune system is functionally immature at birth, and considerable maturation of both the innate and adaptive arms occurs after birth. Monocytes circulating in neonates respond less well to a variety of bacterial and viral signals than do adult monocytes. DC function is also immature at birth in several important ways: neonatal DCs have a reduced ability to present antigen and to induce T-cell differentiation; their ability to secrete bioactive interleukin (IL)-12, a key cytokine for inducing T-cells to differentiate into T-helper (Th)-1 cells, is deficient at birth and matures slowly through childhood; and they also show a reduced ability to secrete type I interferons, an important part of the innate anti-viral response.

Circulating T-cell numbers are increased in infancy relative to later life, but many of these cells show characteristics of functionally immature cells known as recent thymic emigrants, including expression of the surface markers CD1 and CD38 and co-expression of the T-cell markers CD4 and CD8, while very few express classical activation markers such as CD25, CD69, or CD154. T-cells from neonates and infants are not able to sustain responses to stimuli in vitro: while initial rapid proliferative responses associated with cytokine production are seen, these responses are not maintained, with most cells undergoing apoptosis. They fail to generate true memory responses, further questioning the specificity of allergen-induced T-cell responses in early life. Thus, the results of studies stimulating cord blood mononuclear cells with specific allergen stimuli must be interpreted with caution and should not be construed as evidence of in utero allergen priming. T-cell responses in early life are characterized by the production of Th-2 cytokines, which appears to be related to an active suppression of Th-1 cytokine secretion in utero and an inability of neonatal DCs to induce Th-1 differentiation in early life. The ability to produce a wide variety of Th-1 cytokines is reduced in early life, and maturation is not complete until adolescence.

Postnatal maturation of the immune system is driven by environmental exposures, especially to microbial products. Postnatal colonization of the gut and skin with bacteria is thought to provide potent maturational signals. Other maturational signals are thought to be provided by exposure to components of microbial cell walls, such as LPS from gram-negative organisms, lipoteichoic acids from gram-positive organisms, glucans from fungi and many others. These maturational signals result in an increased expression of MHC class II on DCs and an increased ability of T-cells to produce Th-1 cytokines with age.

Mechanisms Underlying the Development of Allergic Sensitization

When a protein antigen is encountered at a mucosal surface, such as the airway epithelium, for the first time, a series of events is initiated that results in the development of either immunological memory or tolerance to that antigen. In the lungs, respiratory tract DCs (RTDCs) form a network within the epithelium, with dendrites protruding between epithelial cells that "sample" the luminal environment via their specific surface receptors. Protein antigens are taken up by the RTDCs through these dendrites by endocytosis and are processed. The RTDCs then undergo a transformation from cells specializing in the uptake of antigen to cells specializing in antigen presentation. These differentiated cells leave the lungs and traffic to the regional lymph nodes via the lymphatic system. On the first occasion protein antigens are encountered, they are presented to naïve T-cells, together with maturational signals from the RTDCs that determine how the T-cells differentiate. If the protein antigen is completely inert and the RTDC containing the protein has not recently encountered any inflammatory signal and is in a baseline "resting" state, the RTDC will trigger the induction of a state of immunological unresponsiveness ("tolerance") to that antigen by stimulating the differentiation of populations of T-regulatory cells. If, however, the RTDCs have been activated by an environmental signal (e.g. via TLR stimulation), they will provide a strong IL-12 signal in conjunction with the processed antigen, and the recipient T-cells will differentiate as Th-1 cells and produce Th-1 cytokines. Some of the progeny of these activated Th-cells differentiate further into long-lived "memory" cells. In the absence of a strong IL-12 signal, or in the presence of an IL-10 signal (the default signal delivered by resting DCs), the T-cells will be more likely to differentiate into Th-2 cells, produce Th-2 cytokines such as IL-4 and IL-5, and give rise to Th-2 memory cells.

When airborne protein antigens are encountered repeatedly by individuals who do not develop tolerance at first exposure, a progressive expansion of the antigen-specific T-memory cells occurs, accompanied by production of a range of cytokines. Antigen is also presented to B-cells, which triggers the production of antibodies directed against the antigen. B-cells require cytokine-mediated T-cell help for the efficient production of antibody. The default response of B-cells is to produce IgG antibodies, especially of the IgG1 subclass; however, in the presence of IL-4, class switching can occur, resulting in the production of IgG4 and IgE antibodies. The active suppression of Th-1 cytokine production during fetal life leads to Th-2 biased T-cell responses. This Th-2 bias persists into early postnatal life and, in those who do not develop allergic sensitization, switches to a low-level Th-1 bias as the immune system matures under the influence of environmental exposures. In those who develop allergic sensitization to environmental protein antigens during this period, the Th-2 bias remains imprinted in specific areas of the immune response repertoire into later life. Viewed in this light, allergic sensitization can be considered a maladaptive response to exposure to protein antigens, generally known as allergens.

Genetic predisposition is recognized as an important component of the risk of developing allergic sensitization, as reflected in the observation that allergies run in families. Children at high risk of developing allergies, based on a strong family history of allergy, show sluggish postnatal immune maturation, as evidenced by decreased T-cell proliferation in response to stimulation in vitro and a decreased ability to secrete cytokines, especially Th-1 cytokines. Sluggish postnatal immune system maturation is itself a major risk factor for the development of allergic sensitization and atopic asthma in later life.

Impact of Intrauterine Exposures on Respiratory and Immune System Development and the Risk of Asthma

The intrauterine milieu is the first environment a child is exposed to, and adverse exposures have the potential to alter the development of the respiratory and immune systems. The best-studied adverse intrauterine exposure is maternal smoking, although there is increasing interest in maternal exposures to bioaerosols, indoor air pollution, infections and other toxic substances, including heavy metals, pesticides, polychlorinated biphenyls and persistent toxic substances present in our environment.

Maternal smoking during pregnancy results in the direct exposure of the developing fetus to nicotine and carbon monoxide (CO). These components of cigarettes are thought to be responsible for the adverse effects of maternal smoking on the fetus. Nicotine is concentrated in the fetal circulation, reaching concentrations many times higher than those in the maternal circulation. Nicotine and CO constrict the uteroplacental circulation, compromising oxygen delivery to the fetus. The adverse consequences of maternal smoking on fetal development include: alteration in airway growth, resulting in lower lung function at birth; increased deposition of collagen in both large and small airways; decreased immune maturation, demonstrated by lower levels of cytokines in cord blood; lower birth weight; and abnormal control of breathing, with blunted ventilatory responses to hypoxia. Maternal smoking during pregnancy is a major risk factor for wheeze in infancy and is an independent risk factor for asthma in childhood. The combined exposures to maternal smoking and maternal alcohol intake during pregnancy have additive effects in suppressing peripheral blood mononuclear cell proliferative responses over the first 2–3 months of postnatal life. This is likely to further compromise the infant's ability to mount an effective response to respiratory infections. Indeed, in one small study, infants born to smoking mothers had more chronic upper respiratory symptoms in the first year of life than infants born to non-smokers.

Maternal smoking during pregnancy also increases the risk of asthma in grandchildren, through epigenetic mechanisms. When the maternal grandmother smoked during pregnancy but the child's mother did not, the risk of the child developing asthma was 80% higher (odds ratio 1.8, 95% CI 1.0–3.3). This risk increased to 2.6-fold (odds ratio 2.6, 95% CI 1.6–4.5) if the child's mother also smoked during pregnancy. This can be explained by an effect of tobacco smoke products on the development of ova in female fetuses, which are formed in utero, and is an example of an epigenetic phenomenon increasing the risk of asthma. The precise mechanism is unknown but is likely to involve gene silencing by methylation of CpG sites in the promoter regions of susceptible genes. It is interesting to speculate whether epigenetic gene silencing also underlies the lower levels of cord blood cytokines (interferon-γ and IL-4) reported in babies born to smoking mothers.

Maternal bioaerosol exposure, in the form of bacterial lipopolysaccharides and dust from animal-holding barns, has recently revealed interesting findings.
There is now strong evidence that the offspring of northern European "farmer mothers", who are heavily exposed to these bioaerosols, have significantly higher levels of resistance to both asthma and allergy relative to populations exposed only to household dust. This resistance was independently associated with exposure of the children themselves to barn dust postnatally and with exposure of their mothers during pregnancy.
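Estimates such as the grandmaternal smoking odds ratios quoted above are conventionally reported with a 95% confidence interval computed on the log scale (Woolf's method). As a minimal sketch, assuming hypothetical 2×2 counts chosen only so that the output resembles the published 1.8 (1.0–3.3) figure, not the counts behind that study:

```python
import math

# Hypothetical counts: asthma cases/controls by grandmaternal smoking.
# Purely illustrative; these are not the data behind the published estimate.
a, b = 36, 20    # exposed cases, exposed controls
c, d = 100, 100  # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)                 # point estimate
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR), Woolf's method
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f} (95% CI {lower:.1f}-{upper:.1f})")
# -> OR = 1.8 (95% CI 1.0-3.3)
```

A lower confidence bound at or just above 1.0, as here, indicates an association of borderline statistical significance.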

Indoor air pollution, in particular from biomass fuel used for cooking and heating, and infections during pregnancy requiring antibiotics have also been associated with acute and chronic respiratory diseases, including wheeze, in offspring.

Impact of Postnatal Exposures on Respiratory System Development and the Risk of Asthma

Normal lungs grow along trajectories; however, exposure to inflammatory or irritant stimuli can retard lung growth. Lung function can also be damaged by postnatal exposures in early life, such as environmental tobacco smoke or viral infections. Inflammatory stimuli from environmental exposures, e.g. to allergens, air toxics or pollutants, could also limit lung growth. Children who grew up in a "clean" environment, as a result of a strict environmental control regimen to reduce house dust mite (HDM) allergens to very low levels, had better lung function at 3 years of age than children who grew up in a conventional environment. Lung function was not different between the groups at 4 weeks of age, showing that this was a postnatal effect. The implication of these data is that the inflammatory stimuli present in a normal home environment are sufficient to impair lung growth. While the mechanism(s) responsible have not been studied, the most likely candidates are direct inflammatory or oxidant damage to the airways. Maternal smoking has been associated with increased levels of urinary F2-isoprostanes, a by-product of lipid peroxidation in the lungs, in infants. The levels of this biomarker of oxidative stress injury were correlated with levels of cotinine in the infants' urine, suggesting that environmental tobacco smoke (ETS) exposure was responsible for the oxidant injury. Epigenetic mechanisms are also likely to be involved.

Rhesus monkeys reared in controlled laboratory conditions show that postnatal exposures to the irritant stimuli O3 and house dust mite allergen can alter airway growth. Infant monkeys were exposed to filtered air, house dust mite aerosol (HDMA), O3, or HDMA + O3 over a 6-month period; half of the monkeys had previously been sensitised to HDM. Repeated exposure to 0.5 parts per million of O3 during the first 5 months of life resulted in structural alterations in the distal airways, with an earlier transition from terminal to respiratory bronchioles. This resulted in terminal bronchioles that were shorter (by 45%) and narrower (by 38%). In addition, the orientation of the airway smooth muscle bundles to the airway axis was altered, with bundles lying more perpendicular. These changes would be expected to increase the likelihood of lower airway disease in the presence of respiratory viral infections. Repeated exposures to HDMA and O3, either sequentially or simultaneously, resulted in exaggerated structural and functional changes in the airways. While these data support the concepts presented above, no detailed assessments of lung growth have been undertaken, and the effects of early-life viral infections have not been studied by direct measurement.

Air pollution, both outdoor and indoor, has been identified as a potential risk factor for both the initiation/induction and the exacerbation of asthma. One potential mechanism is the induction of pulmonary inflammation, with resultant effects on postnatal lung growth. Pollutants or irritants that may induce pulmonary inflammation include: combustion-related products formed by the burning of organic fuels, including nitrogen dioxide, particulate matter and diesel exhaust particulates; bioaerosols, including molds, allergens and bacterial products (e.g. LPS); and air toxics, including formaldehyde and other volatile organic compounds. The effects of exposure to individual pollutants are discussed further later in this article.
Recurrent viral lower respiratory infections (LRI) may also alter lung growth. While no definitive data exist demonstrating adverse effects of recurrent viral infections on lung growth, the concept is not fanciful: recurrent LRI in early life are a major risk factor for the subsequent development of asthma.

The impact of ETS exposure (now known as secondhand tobacco smoke exposure in the USA) on the development of childhood asthma is somewhat controversial. Maternal smoking during pregnancy is associated with lower lung function at birth and is a major risk factor for transient infantile wheeze. There is also evidence that maternal smoking during pregnancy is an independent risk factor for asthma, and for increased bronchial responsiveness and asthma persisting into adolescence. There is also evidence that postnatal exposure to tobacco smoke is associated with reduced lung growth and childhood asthma. However, separating the effects of pre- and postnatal ETS exposure, especially from smoking mothers, is difficult, as women who smoke rarely refrain during pregnancy only to recommence later. Studies conducted in countries where smoking among men is common but smoking among women is uncommon do demonstrate adverse effects of postnatal ETS exposure on lung function. A study measuring lung function in 360 Turkish children demonstrated that paternal, but not maternal, smoking was associated with a reduction in various indices of lung function when the children were 9–13 years old.

Longitudinal studies have also demonstrated the effect of exposure to ambient air pollution on lung growth in children. The Children's Health Study in southern California recruited 1759 children (average age 10 years) from elementary schools in 12 communities. Pollution measures showed significant correlations between NO2, acid vapor, and particulate matter (PM) across the 12 communities, demonstrating that exposures to these pollutants occurred as a "package"; in this study, O3 was not correlated with the other pollutants. Lung function was measured sequentially over 8 years, and its growth was slower in the more polluted communities. This was most marked for FEV1, with deficits in growth associated with exposure to NO2 (p = 0.005), acid vapor (p = 0.004), PM2.5 (p = 0.04) and elemental carbon (p = 0.007).

A meta-analysis of studies published between 1970 and 2005 on the effect of ETS exposure on the development of asthma in children included 38 epidemiologic studies that met the inclusion criteria, from 300 potentially relevant articles. The summary relative risk of incident asthma in children 6–18 years of age was 1.33 (95% CI 1.14–1.56) for ETS exposure, once atopic history (familial or child) and the child's own smoking had been controlled for. This estimate was 1.27 times the estimate from studies of younger children. The authors concluded that the duration of ETS exposure may be an important factor in the induction of asthma. Similarly, airway macrophages obtained from 8- to 15-year-old children in induced sputum specimens show increased elemental carbon content with increased pollution exposure. This also translates into lower lung function in more heavily exposed children, with a decrease of 17% in FEV1, 12.9% in FVC and 34.7% in FEF25–75 for every 1 μm² increase in the carbon content of the macrophages.
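Taken at face value, the per-unit decrements quoted above describe an approximately linear exposure-response relation. The sketch below simply applies those coefficients; the helper function is hypothetical, and extrapolating linearity beyond the carbon loads actually observed in the study would not be justified:

```python
# Reported lung function decrements per 1 um^2 of macrophage carbon content,
# taken from the figures quoted above; linearity is assumed for illustration.
PERCENT_DECREASE_PER_UM2 = {"FEV1": 17.0, "FVC": 12.9, "FEF25-75": 34.7}

def predicted_deficit(carbon_um2):
    """Predicted percent decrease in each lung function index (hypothetical helper)."""
    return {index: rate * carbon_um2
            for index, rate in PERCENT_DECREASE_PER_UM2.items()}

print(predicted_deficit(0.5))
# {'FEV1': 8.5, 'FVC': 6.45, 'FEF25-75': 17.35}
```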

Impact of Postnatal Exposures to Environmental Pollutants on Immune System Development and the Risk of Asthma

Early postnatal life represents an important period for functional maturation of the immune system. This is particularly the case for the elements of the mucosal immune system situated within the respiratory tract. A longstanding literature, involving both experimental animal models and human data, demonstrates that acute and chronic exposure to airborne toxicants, gaseous and/or particulate, can exert profound effects on the functions of T-cells, B-cells, macrophages and professional antigen-presenting cells in lung and airway tissues and in downstream draining lymphoid organs. Importantly, these effects include both immunostimulatory and immunosuppressive outcomes, the former most likely resulting from disturbance of the subtle immunoregulatory circuits, controlled by resident macrophages and circulating T-regulatory cells, that normally maintain immunological homeostasis in the lungs. An additional target for air pollutants in this context is the airway mucosal dendritic cell network, which in adult experimental animals is highly sensitive to inhaled irritants. While effects on immune function in children have been documented, there is a paucity of information on the effects of air pollutants on respiratory mucosal immune function in the crucial period of infancy, during which the immune system programs long-term memory (or protective tolerance) to inhaled allergens. These latter responses depend crucially on the functions of airway mucosal dendritic cells, which operate as "sentinels" for the acquisition of inhaled antigens and their subsequent translation into appropriate tolerogenic or stimulatory signals for presentation to the T-cell system. This cellular network is poorly developed at birth and is progressively established within airway tissues between birth and weaning. This maturation process is "driven" by exposure to irritant stimuli (in particular antigenic stimuli, but also chemical irritants) from the external environment. It is thus highly likely, but as yet unproven, that air pollutants can modify this process in infancy, with potentially major long-term effects on immune responder phenotypes.

Synergistic Interactions Between Allergic Sensitization and Lower Respiratory Viral Infections in Early Life on the Development of Asthma

Major risk factors in early life for the development of asthma include allergic sensitization and lower respiratory viral infections (LRI), especially those occurring in the first year of life. Longitudinal cohort studies conducted in Perth, Australia have demonstrated that the risk of asthma at 6 years of age is doubled by having a family history of asthma and allergies; is increased fourfold by having two or more LRI associated with wheeze in the first year of life; but is increased ninefold in infants who have both risk factors. This synergistic interaction occurs only in children who are sensitized early, i.e. by 2 years of age. These data have led to a conceptual model for the induction of asthma in children: allergic sensitization alone is insufficient to produce persistent asthma, and a "second hit" is required. Viral LRI can provide the second inflammatory stimulus, especially when it occurs contemporaneously with allergic inflammation in sensitized children. Theoretically, inflammation resulting from environmental exposure to irritant stimuli, such as formaldehyde and other volatile organic compounds, could also provide the "second hit".
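The ninefold joint risk quoted above exceeds the product of the individual risks (2 × 4 = 8), and epidemiologists often test for such synergy on the additive scale using the relative excess risk due to interaction, RERI = RR11 − RR10 − RR01 + 1. A minimal sketch, treating the fold-increases quoted above as relative risks for illustration rather than as exact published estimates:

```python
# Approximate relative risks taken from the Perth cohort figures quoted above,
# treated here as illustrative inputs rather than exact published estimates.
rr_family_history = 2.0  # family history of asthma/allergy alone
rr_wheezy_lri = 4.0      # two or more wheezy LRI in the first year alone
rr_both = 9.0            # both risk factors together

# Relative excess risk due to interaction (additive-scale synergy).
reri = rr_both - rr_family_history - rr_wheezy_lri + 1
print(f"RERI = {reri:.1f}")  # 4.0 > 0: the risks combine super-additively

# On the multiplicative scale, independence would predict 2 x 4 = 8;
# the observed 9 is slightly supra-multiplicative as well.
```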

Factors Modifying the Induction of Asthma

Genetic Susceptibility

It is clear that asthma is not caused by a single gene. Data from twin studies have documented that up to 70% of asthma is heritable. The number of novel asthma genes identified is increasing rapidly, and disease association studies have identified over 100 replicated genes. The most frequently replicated genes include: interleukin-4 (IL4), interleukin-13 (IL13), β2 adrenergic receptor (ADRB2), human leukocyte antigen DRB1 (HLA-DRB1), tumor necrosis factor (TNF), lymphotoxin-alpha (LTA), high-affinity IgE receptor (FCER1B) and IL-4 receptor (IL4RA). These loci likely represent true asthma or atopy susceptibility loci, or genes important for disease modification. However, little is known about the mechanisms by which variation in these genes leads to abnormal gene function and the pathogenesis of asthma, or about the environmental factors that interact to influence gene expression. Genome linkage and association studies have identified several chromosomal regions and loci, such as chromosomes 2q, 5q, 6q, 11q, 12q, 13q, 16q and 17q, that harbor asthma susceptibility genes. There is also increasing evidence of separate genetic predispositions for a number of phenotypic features of asthma and putative asthma risk factors, including low lung function, increased airway responsiveness, increased susceptibility to lower respiratory tract infections in early life, delayed immune maturation, allergic sensitization, airway remodeling and decline in lung function. Again, how each of these contributes to asthma has not been well studied and likely varies between individuals. Thus, asthma is a complex genetic disease with multiple, distinct genetic determinants that interact with environmental exposures, resulting in phenotypes which, although clinically similar, have distinct pathogeneses.

Epigenetic Phenomena

Over the last two decades, it has become clear that the development or persistence of disease within a community does not depend only on genetic continuity (the transmission of defective genes) or on environmental continuity (the persistence of a defective environment or of infectious agents). Complex diseases such as asthma depend on an intricate interplay between genetic predisposition and environmental factors that leads to altered epigenetic states, which may be initiated early in development, thus setting up the trajectory along which the individual develops and progresses to the disease state. Studies have also found ancestral influences on the interplay of genes and environment and have demonstrated that the life conditions of the parents have physiological effects on subsequent generations. Thus, epigenetic inheritance, the transmission from one generation to the next of phenotypic variations that are not dependent on DNA sequence, has become increasingly recognized as having an important role in disease causation. Environmental toxicants have the potential to alter gene expression and modify disease susceptibility through a variety of mechanisms, including inducing methylation of CpG dinucleotide sequences in the promoter regions regulating common genes, in transposable elements adjacent to genes with metastable epialleles, and in the regulatory elements of imprinted genes. Epigenetic epidemiological studies have reported cell-to-cell transmission during an individual's lifetime as well as transgenerational inheritance, whereby effects that were environmentally induced in the parents are transmitted and have persistent effects in the following generation. Although both male and female parents can transmit epigenetic defects, maternal transmission offers many more routes, including the ovum, the placenta, the uterine environment, breast milk and maternal behavior, whereas paternal transmission can be achieved only through the sperm and paternal behavior.

Free Radicals and Oxidative Stress

Free radicals are atoms or groups of atoms with unpaired electrons, formed when oxygen interacts with certain molecules. They are highly reactive and can initiate cellular or oxidative damage when they react with important cellular components such as proteins, membranes or DNA. A primary site of free radical damage is the DNA found within the mitochondria; over time, DNA damage accumulates, causing irreversible damage to the mitochondria and resultant cell death. Free radical damage can thus disrupt all levels of cellular function, resulting in cellular damage and tissue injury. External factors such as pollution, sunlight, and heavy and toxic metals trigger the production of free radicals, and the toxic effects of lead, cadmium, pesticides, alcohol and cigarette smoke are thought to be initiated by free radical activity.

Antioxidants

Antioxidants are molecules or compounds that bind with free radicals to neutralize and inactivate them, thus protecting against oxidative stress and cellular damage. An imbalance of free radicals relative to protective antioxidant activity, leading to oxidative stress, has been implicated as the cause of many diseases, including cancer and Alzheimer's disease, and has been demonstrated to have an impact on the body's aging process. Endogenous and exogenous sources of antioxidants include:

• Endogenous: (a) glutathione (GSH), alpha-lipoic acid, coenzyme Q10 and thioredoxin; (b) intracellular enzymes, including superoxide dismutase (SOD) and glutathione peroxidase.
• Exogenous: (a) essential nutrients – vitamin C, vitamin E, beta-carotene, selenium and N-acetyl cysteine (NAC); (b) dietary compounds – bioflavonoids and proanthocyanidins.

The major non-enzymatic antioxidants of the lungs are glutathione, vitamins C and E, and beta-carotene; the enzymatic antioxidants include superoxide dismutases, peroxidases and catalases. These antioxidants are the first line of defence against the deleterious effects of oxidants, and aberrations of the antioxidant/oxidant balance have been reported to be associated with airflow limitation. Reactive oxygen species (ROS), formed in various biochemical reactions, are normally scavenged by antioxidants. They play an important role in airway hyperresponsiveness and airway inflammation, and recent studies have demonstrated that antioxidants such as alpha-lipoic acid are able to reduce airway inflammation and hyperreactivity.

Glutathione (GSH), a tripeptide consisting of the three amino acids cysteine, glycine and glutamic acid, is a very important intracellular antioxidant, produced and stored predominantly in the liver. It is found in most cells, especially those of the lungs, liver and intestinal tract. GSH levels have been demonstrated to decline with age, and low levels have been associated with several disease states, including Alzheimer's disease, Parkinson's disease and cancer, as well as with aging. Glutathione is a crucial factor in the development of the cellular immune response and has been reported to play an important role in the development of allergic sensitization and asthma. Glutathione is found in almost all fruits and vegetables, which are thus excellent dietary sources. Dietary supplementation with GSH itself is not effective because its gastrointestinal absorption is poor, but diets high in glutamine, such as lean meats, eggs and whole grains, can stimulate hepatic production of GSH. Cysteine, an amino acid required for glutathione synthesis, is found in whey protein, and whey protein concentrate is an effective delivery system for GSH replenishment. Intracellular GSH levels in healthy young adults can be increased using a whey-based oral supplement, and whey protein supplementation has also been shown to stimulate the immune system, with increased polymorphonuclear cell counts.

Glutathione S-transferases (GSTs) are a family of antioxidant enzymes that detoxify endogenous compounds, such as peroxidised lipids, as well as metabolise xenobiotics and reactive oxygen species found in secondhand smoke and diesel exhaust particles. When these enzymes are missing or genetically defective, the capacity of the lungs to detoxify these hazardous compounds is markedly diminished, exacerbating the inflammatory responses induced by these substances. Genetic polymorphisms in GSTs may thus influence the susceptibility of asthmatics to oxidative stress. Mould has been demonstrated to deplete intracellular levels of GSH; this may be a mechanism contributing to the increased airway inflammation and airway symptoms seen when allergic subjects are exposed to mould.

Diet and Defense Mechanisms

Although there are several enzyme systems within the body that scavenge free radicals, the principal micronutrient antioxidants are vitamin C, vitamin E, selenium and beta-carotene. The body cannot manufacture these micronutrients, so they must be supplied in the diet. The effectiveness of antioxidants such as vitamins C and E depends on the availability of GSH. Oxidative stress may reduce levels of glutathione as well as the activity of enzymes such as superoxide dismutase, resulting in increased apoptosis and airway remodeling. Low dietary intake of foods high in the antioxidant vitamins C and E, selenium and bioflavonoids may further perturb the oxidant/antioxidant balance.

Studies have reported protective associations between diets high in fish, fruit and vegetables and asthma in children. In observational studies, children who eat fish regularly, and thus have a high dietary intake of omega-3 fatty acids, have a 30–50% reduction in childhood asthma. Fish oil supplements, which are rich in the omega-3 polyunsaturated fatty acids eicosapentaenoic acid and docosahexaenoic acid, have been found to be beneficial for children with bronchial asthma. Consumption of fruit and vegetables was associated with reduced asthma and respiratory symptoms in women in a French cohort study and in a cross-sectional study of children in Crete, who eat a traditional Mediterranean diet rich in fruit and vegetables. In addition, a randomized controlled trial in adults revealed that those consuming a diet high in antioxidants were less likely to have asthma exacerbations. Vitamin C and E supplementation has been reported to protect asthmatic children against the effects of ozone, and vitamin C supplementation has been shown in a randomized controlled trial to improve the childhood asthma control score among young children. Adolescents with low dietary vitamin C intake have lower pulmonary function and more respiratory symptoms than those with higher intakes. Selenium is a trace element required for the proper function of the body's antioxidant enzyme systems, and children with a higher dietary intake of selenium have less asthma.

Conditions during the prenatal period, particularly maternal nutrition, have been shown in epidemiological studies and animal models to have long-term effects on health. Interestingly, in a recent meta-analysis, maternal obesity or high gestational weight gain was associated with higher odds of asthma or wheeze in the child. This could be a result of the chronic low-grade inflammatory state of obesity affecting the developing fetus; obese women are also likely to have a diet low in antioxidants, fruit or vegetables. These long-term effects have been attributed to metabolic imprinting, whereby the offspring adapts to the conditions experienced in utero, leading to persistent changes in cell numbers, the structure and function of organs, and metabolic, endocrine and immunological functions. Changes in DNA methylation or other chromatin effects induced in one generation can persist beyond that generation: a grandfather's diet can affect the diseases contracted by his grandchildren, and a grandmother's smoking can increase the asthma risk of her grandchildren through epigenetic mechanisms. Vitamins D and E and zinc have been shown in animal models to modify fetal lung development, and vitamins D and E, zinc and PUFA to modulate T-cell responses.
High maternal intake of the antioxidants vitamin E and zinc during the first and second trimesters of pregnancy has been associated with reduced early childhood wheezing, and other studies have reported associations between low maternal vitamin E intake and increased wheezing and asthma in early childhood. High maternal intake of fruit, vegetables and fish has been associated with lower asthma prevalence in offspring. In a cohort of high-risk children, maternal fish oil supplementation during pregnancy was associated with a reduction in infant allergy. No associations were found for vitamin C or selenium.

The gastrointestinal microbiota is a key component of human homeostasis, and "peripheral metabolism" (i.e. metabolism occurring in the gut) increases energy extraction from food. Alterations to the gastrointestinal microbiota (dysbiosis) have been described in a variety of chronic inflammatory diseases, such as inflammatory bowel disease, obesity and asthma. However, we cannot state with certainty whether dysbiosis results from these inflammatory conditions or their treatments, or is involved in disease initiation and/or progression. The changes observed in the gastrointestinal microbiota commonly involve a reduction in so-called probiotic species, including lactobacilli and bifidobacteria, as well as an outgrowth of potentially pathogenic bacteria. The gastrointestinal microbiota is susceptible to environmental influences, including the place and mode of delivery and the presence of siblings and pets in the home in early life. The composition of the microbiota may protect or predispose individuals to obesity. The infant bowel is sterile at birth, and the microbiota is established in early post-natal life. The composition of the microbiota differs between breast-fed and formula-fed infants, and the timing of cessation of breastfeeding is an important event in establishing the microbiota. Infants with more short-chain fatty acid-producing bacteria have a more rapid increase in BMI in early life. The gastrointestinal microbiota is also involved in the biotransformation of environmental toxicants and may increase or decrease the toxicity of a chemical; there is little direct knowledge of what contribution such biotransformation may make to human disease.

Exposure to Viral Pathogens

Two consistent risk factors identified from collective birth cohort studies are lower respiratory viral infections and sensitization to perennial aeroallergens (discussed in the following section). Rhinoviruses, particularly rhinovirus C, and, to a lesser extent, respiratory syncytial virus, parainfluenza virus, adenovirus, metapneumovirus and influenza infections occurring in the first two years of life have been identified as risk factors.

Bacterial Infection

Bacterial infections are known to impair the lung's mucociliary clearance and to increase mucus production. It has been proposed that certain bacteria may cause chronic lower airway inflammation. Current evidence indicates that the microbiome of asthmatics differs from that of healthy controls, suggesting that bacteria interact in a crucial way with pathogen recognition receptors in the lower airways to program the immune response. However, whether such differences contribute to the causation of asthma or result from asthma and its treatment is uncertain. Certain bacteria, primarily Chlamydia pneumoniae and Mycoplasma pneumoniae, may also contribute to disease chronicity, severity and instability.

Gene-environmental Interactions

Although significant advances have been made in understanding the role of genetic variation in asthma, there are still large gaps in our knowledge of the complex and intricate web of gene-environment interactions in disease pathogenesis. Discrepancies are evident in the literature on the relationships between various dietary components, found in fruit and vegetables, fish and antioxidant vitamins and minerals, and the prevalence of childhood asthma. The genetic status of the individual is likely to determine the role of dietary antioxidants as defense mechanisms and the beneficial effects of antioxidant supplementation in asthma prevention.

Environmental Exposures: Bioaerosols

Bioaerosols that have been associated with the development and/or exacerbation of asthma in children include aeroallergens, mould and bacteria (particularly endotoxin). Aeroallergens include house dust mite, cockroach, animal dander, pollens and some mould species. There are many species of mould, some of which are allergenic and others that may affect respiratory health via other mechanisms, such as infection. Both the indoor and outdoor environments are important sources of exposure to bioaerosols.

Asthma Exacerbation

Allergens: There is a large body of literature on the acute effects of allergens on asthma symptoms. Most asthmatic children are sensitised to one or more allergens, although this does not necessarily mean that the allergens caused their asthma. However, exposure to the relevant allergen may exacerbate symptoms in these children. Exposure to pollen and fungal allergens in the ambient air has been associated with increased asthma hospitalisations. A similar association with indoor allergens has been harder to demonstrate, although there is some evidence that sensitised children exposed to high levels of the relevant indoor allergen were more likely to be hospitalised, or re-admitted after a previous hospitalisation, for asthma. The role of indoor allergens in asthma severity remains unclear. There has been considerable research into house dust mite, and while some studies have reported associations between exposure and disease activity in dust mite-sensitised asthmatic children, others have not. Studies showing improved symptoms in asthmatic children after long-term allergen avoidance provide some evidence that allergens may trigger asthma symptoms.

Moulds: Moulds and fungi are ubiquitous in both the indoor and outdoor environments. There are many species of mould, most of which are not pathogenic. However, some species are allergenic, and moulds are common and important, although not dominant, allergens in some environments. Non-allergenic products of moulds may also contribute to respiratory symptoms. Mycotoxins and volatile substances (microbial volatile organic compounds, mVOCs) can cause airway irritation and inflammation independent of allergic reactions. Further, the cell walls of most fungi contain (1→3)-β-D-glucan, an inflammatory agent that may contribute to symptoms in asthmatics. There is some evidence that damp housing, a risk factor for mould proliferation, is associated with wheeze in children, although the data are not consistent. Data on the respiratory effects of (1→3)-β-D-glucan are also mixed. Currently there is only suggestive evidence of an association between moulds, their components and metabolites, and asthma severity.

Endotoxin: Endotoxin is a pro-inflammatory LPS that forms part of the outer cell wall of gram-negative bacteria. Depending on the timing of exposure, endotoxin may be either a risk factor for asthma symptoms or protective against the development of atopy. Endotoxin is considered a cause of occupational asthma, and in the home environment it has been associated with increased asthma severity in adults. The effect of domestic endotoxin exposure on asthmatic children is less clear: endotoxin in the home has been associated with increased asthma symptoms in children, including wheeze in young children, but other studies have not found associations between endotoxin and peak flow variability in asthmatic children.


Environmental Exposures: Air Pollutants

There is now considerable evidence that air pollutants can trigger asthma symptoms in children. The role of air pollution in the development of disease, however, is controversial, although evidence is accumulating that early life exposures may be important in the pathogenesis of asthma. Although children's exposure to air pollution is continuous across the indoor and outdoor environments, research has generally focused on one or the other and, therefore, they will be discussed separately.

Outdoor Air Pollution

Asthma exacerbation

Short-term changes in outdoor air pollutants are associated with increased symptoms and hospitalisations, as well as lung function decrements, in asthmatic children. The acute effects of air pollution have been demonstrated in various time-series and panel studies. Pollutants that have most often been associated with short-term respiratory effects in children include particulate matter (PM10 and PM2.5), ozone, nitrogen dioxide and sulphur dioxide. Interestingly, significant impacts of air pollution have been observed in various geographical locations, including cities in North and South America, Europe, Asia and Oceania, where there are diverse pollution mixes and concentrations. Although not all studies have found adverse effects of air pollution on children's respiratory health, effects have been reported in areas of relatively low urban air pollution. There has been considerable interest in the impact of traffic-related air pollutants on asthma exacerbations. Motor vehicle exhausts are a major contributor to urban air pollution, and concentrations of many pollutants are greatest near roadways. Many studies have now reported increased asthma symptoms in children living close to major roads. There has been particular interest in fine particles emitted from diesel engines, which are a major contributor to the fine particulate matter in many cities worldwide. Experimentally, diesel exhaust particles can increase airway inflammation and may also have an adjuvant role in the development of allergic sensitisation. However, other traffic pollutants are likely to be important, and the specific components of motor vehicle emissions that contribute to health effects remain uncertain.

Lung growth

One of the most extensive studies of the impact of air pollution on lung growth has been the Children's Health Study in Southern California, which, in two separate cohorts, demonstrated that lung function growth was slower in children living in the most polluted cities. Within this study the researchers also measured lung function changes over a 5-year period in children who had moved between areas with different air pollution levels. Children who moved from areas of high to low pollution (PM10) had improved lung function growth rates, while those who moved to areas of higher PM10 concentrations had reduced growth. Residential proximity to freeways (<500 m) has been associated with reduced lung function growth in children regardless of baseline traffic levels. The impact of long-term exposure to ambient air pollution on lung function growth has also been observed in children from Austria and Poland.

Allergic sensitisation

There is good experimental evidence that air pollution contributes to allergic sensitisation. There is particular interest in the adjuvant role of diesel exhaust particles (DEPs), which have been shown to enhance responses to allergens in both animal and human models. Further, some allergens can bind to DEPs, which may then act as carriers for these proteins. Some, but not all, epidemiological studies have found evidence of increased allergic sensitisation in children living in high-pollution compared with low-pollution areas.

Asthma initiation

The role of ambient air pollution in the development of asthma is contentious. Asthma rates are often low in countries with high concentrations of air pollution, and in many countries asthma prevalence has increased as outdoor air quality has improved. However, although regional levels of air pollution may have improved, the relative contribution of traffic-related pollutants has increased, and there is now growing evidence that, within countries, asthma prevalence may be greater in areas with higher local concentrations of air pollutants. Increased asthma prevalence near busy roads has been found in some, but not all, studies. Two studies have investigated asthma incidence in school-aged children. In a case-control study, children in the highest exposure tertile during the first 3 years of life had increased odds of asthma in later childhood compared with those in the lowest tertile (OR 2.28, 95% CI 1.1–4.6). In a study of 2506 school-children followed over a 4-year period, the incidence of asthma was significantly increased in boys living in roadside areas relative to rural areas (OR 3.75, 95% CI 1.00–14.06), with a similar, but non-significant, trend for girls (OR 4.06, 95% CI 0.91–18.10). Finally, in a large longitudinal birth cohort, an association between traffic-related air pollution and increased wheeze, respiratory infections and asthma onset in children at ages 2 and 4 years has been reported. More recent studies have shown that both intrauterine and early postnatal exposure to ambient air pollution reduce lung function in preschool-aged children. When sophisticated modelling was used to estimate individual exposures to benzene and nitrogen dioxide, as surrogates for exposure to traffic-related pollution, significant reductions in FEV1 were seen with both pollutants. A dose-response relationship was demonstrated, with the risk of low lung function, defined as FEV1 <80% predicted, increasing with increasing pollutant exposure. In addition, the effects of exposure were most pronounced during the second trimester.
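
The odds ratios and confidence intervals quoted above are conventionally computed from a 2x2 exposure-by-disease table. As a minimal illustration (the counts below are hypothetical and are not taken from the cited studies), the standard Woolf log-normal method is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-normal) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, chosen only to illustrate the calculation:
print(odds_ratio_ci(40, 160, 20, 180))  # OR = 2.25, 95% CI approx. (1.26, 4.01)
```

A confidence interval whose lower bound exceeds 1.0, as in the roadside-boys estimate above, is what makes such an association "significant" at the 5% level.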


Indoor Air Pollution

Although the majority of air pollution research has focused on outdoor air, there is growing recognition that indoor air pollution (IAP) is of equal or greater significance to human health. Indoor air pollution poses different risks in developed and developing countries, and these will therefore be discussed separately.

IAP in developed countries

In most Western countries, children spend the majority of their time indoors. For many air pollutants, concentrations are often considerably higher indoors than outdoors and, therefore, the indoor environment is a major contributor to personal exposure to these pollutants. Indeed, studies have demonstrated the importance of the indoor environment for personal exposure to pollutants such as NO2, formaldehyde and VOCs.

Asthma exacerbation

Despite the importance of the indoor environment to pollution exposure, it is very difficult to conduct long-term, regular monitoring of indoor pollutants in a large sample of homes. Therefore, there have been very few time-series or panel studies to determine the short-term health effects of indoor pollutants, and studies of IAP have predominantly been cross-sectional or case-control. There is, however, an increasing body of literature demonstrating associations between individual indoor air pollutants and asthma or asthma-like symptoms in children. Although environmental tobacco smoke (ETS) remains one of the most important indoor pollutants, it has been discussed above and will not be included in the following section.

Nitrogen dioxide: NO2 is emitted primarily from unflued gas appliances, and increased respiratory symptoms in children have been associated with exposure to either gas appliances or indoor NO2 concentrations. A meta-analysis of a range of longitudinal and cross-sectional studies estimated that a long-term increase in NO2 exposure of 15 ppb increases the odds of lower respiratory tract illness in children by about 20% (a simple rescaling sketch is given below). NO2 is one of the few indoor pollutants that have been monitored repeatedly in longitudinal studies, and the results of these studies have varied. Significant associations have been shown between daily variations in personal NO2 and asthma symptoms. Increased severity of virus-induced asthma has also been reported in children exposed to increased levels of NO2 in the week prior to infection. Most recently, in the ACHAPS panel study, the most consistent results were seen with NO2: consistent relationships were shown between NO2 and night-time and daytime symptoms in asthmatics (cough, wheeze and shortness of breath). The effects were greater for 24-h than for 1-h NO2 levels. Asthmatic children were also more likely to use reliever medications for asthma on days with higher NO2 concentrations.

Formaldehyde: Formaldehyde is a strong irritant of the eyes and upper airways. However, its role in lower respiratory symptoms and asthma is controversial. Formaldehyde is considered a potential agent of occupational asthma, although this seems to be rare. Exposure of children to formaldehyde in homes and schools has been associated with asthma, asthma severity and airway inflammation, although the data are not consistent.

Volatile organic compounds and other household chemicals: Common household products, such as building materials, cleaning agents, aerosol sprays, floor and wall coverings and plastics, can emit a range of chemicals. These include VOCs, chlorine, ammonia, surfactants, acids, bases and oxidants. Reactions among these compounds, as well as between these pollutants and oxidative gases, can create highly irritant secondary pollutants. There are limited, but expanding, data on the association between these indoor chemicals and respiratory health in children. Associations between either measured VOC concentrations or the presence of household chemical products and poor respiratory outcomes have been reported for infants, preschool and school children. Phthalates emitted from plastic surface materials have also been associated with bronchial obstruction in infants and asthma-like symptoms in preschool children. Not all studies, however, have found adverse impacts of domestic household chemicals on the respiratory health of children.
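
The meta-analytic NO2 estimate above (about a 20% increase in odds per 15 ppb) is often rescaled to other exposure increments by assuming a log-linear exposure-response. A minimal sketch of that convention follows; the log-linear extrapolation is a modeling assumption, not a result reported in the text:

```python
def rescale_or(or_ref: float, x_ref: float, x: float) -> float:
    """Rescale a per-increment odds ratio under an assumed log-linear
    exposure-response: OR(x) = or_ref ** (x / x_ref)."""
    return or_ref ** (x / x_ref)

# OR of 1.20 per 15 ppb NO2 (the meta-analytic figure quoted above):
print(round(rescale_or(1.20, 15.0, 30.0), 2))  # ~1.44 for a 30 ppb increase
print(round(rescale_or(1.20, 15.0, 5.0), 2))   # ~1.06 for a 5 ppb increase
```
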
Particulate matter: In developed countries the main contributors to indoor PM levels are outdoor concentrations, combustion sources and general activity. ETS is a major indoor source of PM, but cooking and heating are also important contributors to the fine particle mass (PM2.5) in homes, while general activities such as cleaning are associated with increased levels of particulates in the coarser range (PM2.5–PM10). The contribution of outdoor sources to indoor PM levels depends on outdoor concentrations, the ventilation of the building and the strength of indoor sources. Although there are many data on the relationship between indoor and outdoor PM concentrations, there are very few data, in developed countries, on the health effects of exposure to PM indoors. One study found a significant inverse relationship between personal PM2.5 concentration and FEV1 in asthmatic children, and the ACHAPS panel study found associations between PM10 and respiratory symptoms, primarily cough and wheeze.

Pesticides: Strictly speaking, pesticides are more than just an 'air pollutant'. Considerable exposure of children to pesticides can occur through diet, hand-to-mouth activities (young children) and dermal absorption. However, pesticide use in homes can be widespread, and inhalation is an important route of exposure. There are many different pesticides, including organophosphates (OPs), carbamates, pyrethrins and organochlorines (OCs, banned in most Western countries), and health effects vary between compounds. The most common household pesticides include OPs (e.g., chlorpyrifos, malathion and diazinon) and carbamates (e.g., propoxur and Baygon). Measurable levels of these insecticides have been found in the blood of pregnant women, together with a good correlation between maternal and newborn blood insecticide levels, suggesting materno-fetal transfer during pregnancy. There is little evidence of an effect of pesticides on asthma symptoms in children. Pesticide use in or near homes has been associated with respiratory disease and chronic respiratory symptoms in Lebanese school children.


Conversely, neither asthma emergency visits nor hospital admissions were increased at a New York hospital after city-wide spraying of OP insecticides as part of a mosquito eradication program. There is some evidence that early exposure to pesticides may increase the risk of asthma and allergies in children. Children exposed to herbicides and pesticides in the first year of life had an increased risk of physician-diagnosed asthma by age 5. Pesticide exposure during pregnancy has not been associated with respiratory symptoms, but suggestive evidence of an association with allergies and hay fever, particularly in boys, has been reported. Although data in children are limited, evidence from occupational studies suggests that OP and carbamate insecticides may contribute to the development of asthma.

Lung growth

A number of cross-sectional studies have measured the association between indoor pollutants and lung function in children, with conflicting results. Most of this research has focused on the impact of either gas stoves or indoor NO2. Recent studies suggest that adverse effects of gas stoves on lung function in children may be evident in girls but not boys, and that this relationship is modified by the presence of asthma or atopy. One of the few longitudinal studies of lung function growth and indoor NO2 found no effect of indoor NO2 on lung function growth over a 2-year period, although NO2 was measured only at the start of the study. A more recent study has reported an association between maternal exposure to household chemicals during pregnancy and reduced lung function in the children when they were 8 years old.

Allergic sensitisation

In animal models, exposure to some of the important indoor pollutants, such as ETS, NO2 and formaldehyde, can enhance sensitisation to inhaled allergens. Experimentally, prior exposure to air pollutants can increase bronchial responsiveness to inhaled allergens (see above), and this was recently demonstrated for low-dose exposure (100 µg/m3) to formaldehyde. Indoor concentrations of formaldehyde have been associated with atopy, increased specific IgE and bacteria-specific IgG levels in children. Further, renovations in the homes of very young children have been associated with eczema during early childhood. However, based on laboratory and occupational studies, there seems to be little evidence that VOCs or other common household chemicals are strong respiratory sensitisers. Given the importance of the indoor environment for exposure to allergens and pollutants, there are surprisingly few data on the interaction of allergens and pollutants in either asthma development or symptoms. However, two interesting epidemiological studies have provided some evidence that interactions between indoor allergens and indoor pollutants may be important for the development of allergic sensitisation in children.

Asthma initiation

A causative role for IAP in asthma is difficult to demonstrate, and there are few data on the association between early life exposure to IAPs and the development of childhood asthma. Infants spend most of their time indoors at home, and this is likely to be an important environment for exposure to environmental pollutants. There have now been three studies that have either purposefully or inadvertently investigated household exposures in very early life and asthma in childhood. An association between exposure to gas heaters in the first year of life and asthma at 7 years (RR = 1.92, 95% CI 1.33–2.76) has been reported in Australia. An association between the presence of fume-emitting heaters in the homes of children during their first year of life and asthma by age 8 years (RR for recent wheeze and increased AR 2.08, 95% CI 1.31–3.31) has also been reported. An increased risk of chronic wheeze in 7-year-old children whose mothers had increased exposure to household chemicals during pregnancy has been reported from a birth cohort; in this study, increased use of household chemical products, particularly air fresheners and aerosols, was associated with higher total VOCs in homes.

Childhood asthma and indoor air pollution in developing countries

Worldwide, IAP remains a major contributor to the global burden of disease and is one of the major environmental causes of ill health in both children and adults. Most of this burden is due to the burning of biomass fuel in the homes of poorer residents of developing countries. For children, much of the increase in disease burden is associated with acute respiratory infections, such as pneumonia. Little attention has been paid to the impact of biomass burning on asthma in children, and the studies that have investigated it have produced conflicting results. The prevalence of asthma in many developing countries is low compared with developed countries. In a recent large study in Guatemala, however, children from homes using open fires had an increased prevalence of asthma and asthma symptoms compared with children from homes using improved stoves with chimneys. This suggests that open biomass burning inside homes can contribute to asthma symptoms and severity in children, but the problem is yet to be fully explored.

Further Reading

Adcock, I.M., Tsaprouni, L., Bhavsar, P., Ito, K., 2007. Epigenetic regulation of airway inflammation. Current Opinion in Immunology 19, 694–700.
Gauderman, W.J., Avol, E., Gilliland, F., Vora, H., et al., 2004. The effect of air pollution on lung development from 10 to 18 years of age. New England Journal of Medicine 351, 1057–1068.
Gilliland, F.D., et al., 2006. Glutathione S-transferases M1 and P1 prevent aggravation of allergic responses by secondhand smoke. American Journal of Respiratory and Critical Care Medicine 174 (12), 1335–1341.
Holt, P.G., Rowe, J., 2003. The developing immune system and allergy. In: Leung, D.Y., Sampson, H.A., Geha, R., Szefler, S.J. (Eds.), Pediatric Allergy: Principles and Practice. Mosby, St Louis, pp. 69–79.
Holt, P.G., et al., 1999. The role of allergy in the development of asthma. Nature 402 (Suppl.), B12–B17.
Holt, P.G., Strickland, D.H., Wikstrom, M.E., Jahnsen, F.L., 2008. Regulation of immunological homeostasis in the respiratory tract. Nature Reviews Immunology 8, 142–152. https://doi.org/10.1038/nri2236.
Holt, P.G., Upham, J.W., Sly, P.D., 2005. Contemporaneous maturation of immunological and respiratory functions during early childhood: Implications for development of asthma prevention strategies. Journal of Allergy and Clinical Immunology 116 (1), 16–24.
Kusel, M.M.H., de Klerk, N.H., Kebadze, T., et al., 2007. Early life respiratory viral infections, atopic sensitisation and risk of subsequent development of persistent asthma. Journal of Allergy and Clinical Immunology 119, 1105–1110.
Lothian, J.B., Grey, V., Lands, L.C., 2006. Effect of whey protein to modulate immune response in children with atopic asthma. International Journal of Food Sciences and Nutrition 57 (3–4), 204–211.
Macaubas, C., de Klerk, N., Holt, B.J., et al., 2003. Association between antenatal cytokine production and the development of atopy and asthma at age 6 years. Lancet 362, 1192–1197.
Martinez, F., 2007. Genes, environments, development and asthma: A reappraisal. European Respiratory Journal 29 (1), 179–184.
Martinez, F.D., Wright, A.L., Taussig, L.M., et al., 1995. Asthma and wheezing in the first six years of life. New England Journal of Medicine 332, 133–138.
Moreno-Macias, H., Romieu, I., 2014. Effects of antioxidant supplements and nutrients on patients with asthma and allergies. Journal of Allergy and Clinical Immunology 133, 1237–1244.
Romieu, I., et al., 2007. Maternal fish intake during pregnancy and atopy and asthma in infancy. Clinical and Experimental Allergy 37, 518–525.
Rowe, J., Kusel, M., Holt, B.J., et al., 2007. Prenatal versus postnatal sensitization to environmental allergens in a high-risk birth cohort. Journal of Allergy and Clinical Immunology 119, 1164–1173.
Stern, D.A., Morgan, W.J., Guerra, S., Martinez, F.D., 2007. Poor airway function in early infancy and lung function by age 22 years: A non-selective longitudinal cohort study. Lancet 370, 758–768.
Stick, S.M., Burton, P.R., Gurrin, L., Sly, P.D., Le Souef, P.N., 1996. Effects of maternal smoking during pregnancy and a family history of asthma on respiratory function in newborn infants. Lancet 348, 1060–1064.
Thornton, C.A., Upham, J.W., Wikstrom, M.E., et al., 2004. Functional maturation of CD4+CD25+CTLA4+CD45RA+ T-regulatory cells in human neonatal T-cell responses to environmental antigens/allergens. Journal of Immunology 173, 3084–3092.
Vork, K.L., Broadwin, R.L., Blaisdell, R.J., 2007. Developing asthma in childhood from exposure to secondhand tobacco smoke: Insights from a meta-regression. Environmental Health Perspectives 115, 1394–1400.
Walker, M., Holt, K.E., Anderson, G.P., Teo, S.M., Sly, P.D., 2014. Elucidation of pathways driving asthma pathogenesis: Development of a systems-level analytic strategy. Frontiers in Immunology 5, 447. eCollection 2014.
Williams, G., Marks, G., Denison, L., Jalaludin, B., 2012. Australian Child Health and Air Pollution Study (ACHAPS).
Woodcock, A., Lowe, L.A., Murray, C.S., et al., 2004. Early life environmental control: Effect on symptoms, sensitization, and lung function at age 3 years. American Journal of Respiratory and Critical Care Medicine 170 (4), 433–439.
World Health Organization, 2006. Environmental Health Criteria 237: Principles for Evaluating Health Risks in Children Associated with Exposure to Chemicals. International Programme on Chemical Safety.
Yerkovich, S.T., Wikstrom, M.E., Suriyaarachchi, D., et al., 2007. Postnatal development of monocyte cytokine responses to bacterial lipopolysaccharide. Pediatric Research 62, 547–552.

Relevant Websites

http://www.cape.ca/children/ – Canadian Association of Physicians for the Environment (CAPE).
http://www.cehn.org/ – Children's Environmental Health Network.
http://www.dea.org.au/node/46 – Doctors for the Environment Australia.
http://www.iceh.org/ – Institute for Children's Environmental Health.
http://www.inchesnetwork.net/ – International Network on Children's Health, Environment and Safety.
http://www.niehs.nih.gov/health/topics/population/children/index.cfm – National Institute of Environmental Health Sciences.
http://phpartners.org/environmentalhealth.html – Partners in Information Access for the Public Health Workforce.
http://www.who.int/ceh/en/ – World Health Organisation – Children's Environmental Health.
http://www.who.int/heca/infomaterials/en/ – World Health Organisation – Healthy Environments for Children Alliance.

Environmental Health and Bioterrorism
Vladan Radosavljevic, Military Medical Headquarter, Belgrade, Serbia; and University of Defence, Belgrade, Serbia
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
CDC  Centers for Disease Control and Prevention
FMD  Foot-and-mouth disease
GMO  Genetically modified organism
MDR-TB  Multidrug-resistant tuberculosis
SARS  Severe acute respiratory syndrome
TBE  Tick-borne encephalitis
WHO  World Health Organization
WNV  West Nile virus

Introduction

Biological warfare has complex and permanent relations with the environment in comparison with other types of warfare (conventional, nuclear, and chemical). Changes in the environment interfere with many of the major determinants of biological warfare. Bioterrorism interfaces with the environment as a source of bioterrorism agents, as a means of bioterrorism, and as a target; these three bioterrorism-environment interfaces offer points at which to prevent bioterrorism and to protect the environment.

The most probable type, and the key issue, of biological war is bioterrorism. Bioterrorism is defined as a release of biological agents or toxins that affect human beings, animals, or plants with the intent to harm or intimidate. The essence of bioterrorism is a biological attack, which requires four components: perpetrators, agents, mediums/means of delivery, and targets. To clarify the correlation and impact of the environment on bioterrorism, each component of a biological attack, and its correlation with environmental health, is analyzed here. Biological weapons might act on many different targets; could easily be disseminated by food and water, by insect vectors, or by an aerosol; might have many means of penetrating targets; and might be used even by low-qualified terrorists. Considering these facts, it is practically impossible to have a unique doctrine for each eventual threat. The "global environment" (the political, social, economic, and psychological environments) and the mass media are also very important, adding a new dimension to natural epidemics and biological attacks.

Discoveries that certain bioterrorist (emerging and reemerging) pathogens have their origin in environmental changes have given rise to an urgent need to understand how these environmental changes affect bioterrorism. An environmental change manifests itself through a complex web of ecologic and social factors that may ultimately affect bioterrorism activities, and the transmission dynamics of infectious pathogens mediate these effects. A bioterrorist event could thus be the outcome of the interplay between environmental change and the transmission cycle of a pathogen. Environmental changes include anthropogenic changes that affect landscape ecology, human ecology, and human-created environments, as well as natural perturbations and natural disasters. Environmental characteristics are defined as directly measurable physical, chemical, biological, or social components of the environment, including populations and traits of relevant organisms. Every environmental perturbation influences the ecological balance and the context within populations in which disease manifests itself. Many outbreaks are related to global and local changes caused by climate change, human-induced landscape changes, or the direct impact of human activities. Landscape impacts such as de(re)forestation, human settlement sprawl, industrial development, road construction (e.g., linear disturbances), large water control projects (e.g., dams, canals, irrigation systems, and reservoirs), and climate change have been accompanied by the spread of pathogens into new areas. Changing environmental processes might affect the transmission cycles of infectious pathogens: they affect the hosts or vectors of disease and the pathogens and parasites that breed, develop, and transmit disease.
Vector-borne zoonoses tend to be the most ecologically complex infectious diseases, and for them environmental change may have the greatest number and diversity of effects, some promoting transmission and others diminishing it. Habitat and species losses may reduce the normal buffering within ecosystems, leading to disease outbreaks. Finally, the juxtaposition of new vectors, hosts, and parasites within disturbed ecosystems provides a potential for the evolution of novel transmission pathways and thus new "emerging diseases." More needs to be learned about the underlying complex causal relationships, and this information should be applied to the prediction of future impacts using more complete, better validated, integrated models.

Change History: August 2018. Vladan Radosavljevic made changes to the text and references. This is an update of V. Radosavljevic, Environmental Health and Bioterrorism. In: Nriagu, J.O. (Ed.), Encyclopedia of Environmental Health. Elsevier, 2011, pp. 392–399.


The majority of human infectious diseases are of animal origin (zoonoses). For many emerging pathogens, wildlife and sometimes even domestic animals show no signs of infection and play the role of asymptomatic reservoirs (the most dangerous examples being Ebola virus and Crimean-Congo hemorrhagic fever virus). The OIE (World Organisation for Animal Health) estimates that 60% of existing human infectious diseases are zoonotic; that at least 75% of emerging infectious diseases of humans (including Ebola, HIV, and influenza) have an animal origin; that five new human diseases appear every year, three of them of animal origin; and that 80% of agents with potential bioterrorist use are zoonotic pathogens. Habitat fragmentation causes a reduction in biodiversity within host communities, increasing disease risk through the increase in both the absolute and the relative density of the primary reservoir.

A relatively new and clear example of a biological agent that could be extracted from the environment is provided by the permafrost (ground that remains permanently frozen below the surface) of Siberia, Canada, and Alaska. Due to global warming, infected bodies have emerged from the permafrost and intact biological agents have re-emerged with them. An anthrax outbreak was described in Siberia in 2016, after 75 years without outbreaks: an exceptionally warm summer thawed a reindeer carcass buried in a permafrost pit, which became a source of spores. Because human bodies infected with smallpox virus lie frozen in the permafrost, the re-emergence of this virus after a permafrost thaw, and its subsequent use for bioterrorism, is considered a valid possibility by several medical intelligence departments.

Since urban growth in many countries occurs without planned sanitation, water treatment, and sewerage, increased exposure to mosquitoes, rodents, and other vermin provides more opportunities for diseases such as tuberculosis and hantavirus infection. Mining, the damming of rivers, and increased irrigation for agriculture also give mosquitoes more standing water to breed in. Man, in so doing, makes himself his own major bioterrorist. Carefully controlled use of resources would greatly benefit the struggle against bioterrorism and other threats from natural pathogens. In the context of bioterrorism, infectious diseases are not only a public health issue but also an issue of national and international security.

Bioterrorism Related to Humans

Perpetrators

The first link in conducting a biological attack is the perpetrator. The most prevalent and dangerous bioterrorist is man himself, through his numerous activities (auto-bioterrorism). Auto-bioterrorism can be performed through many environmental changes (Table 1). Some changes affect disease only through a series of causal linkages. For example, a dam does not interfere with health directly; instead, it causes changes in water flow, which affect mosquito habitats, and that, in turn, can affect the transmission potential of a biological agent. A new road may affect disease through major demographic shifts. The real threats are terrorist/disaffected groups, which makes the environment a very important source of agents. Highly motivated perpetrators (mainly poor terrorists/fanatics with suicidal tendencies, e.g., suicidal bio-bombers) are the most probable candidates for obtaining biological agents from nature. At least 22 countries are believed to have had active biowarfare research programs. Several major international terrorist organizations, such as the Al Qaeda network, are believed to have the financial resources and political contacts needed to access state-of-the-art bioweapon disease cultures and production technologies. Aum Shinrikyo was also involved in developing terrorist bioweapons employing anthrax bacilli, botulinum toxin, Ebola virus, and Q fever (Coxiella burnetii).

Table 1  Environmental changes caused by humans and bioterrorism-related diseases they may impact

Urbanization (increasing migration to and growth within towns): influenza (pandemic), severe acute respiratory syndrome, plague, diseases caused by fecal-oral pathogens (Entamoeba histolytica, Giardia lamblia), multidrug-resistant tuberculosis.

Agricultural intensification (changing crop and animal management practices, fertilization, increased interplay between humans and domesticated animals): avian flu, brucellosis, psittacosis, Q fever, salmonellosis, anthrax, Nipah virus infection.

De(re)forestation (loss of forest cover, large fires, changing water flow patterns, reforestation, and human encroachment along and into forested areas): tick-borne hemorrhagic fevers, mosquito-borne encephalitis complex, hantavirus hemorrhagic fevers.

Water projects (water flow changes due to dam construction and irrigation networks): infections caused by Escherichia coli, pathogenic vibrios, Shigella sp., Cryptosporidium parvum, norovirus infections, hepatitis A.

Climate changes (changes in temperature and precipitation): yellow fever, West Nile fever, and some other vector-borne diseases.

Notes: Mosquito-borne encephalitis complex: Venezuelan equine encephalitis, Eastern equine encephalitis, and Western equine encephalitis, La Crosse and California encephalitis, Japanese encephalitis, West Nile encephalitis; tick-borne hemorrhagic fevers: Kyasanur Forest hemorrhagic fever, Crimean/Congo hemorrhagic fever, Omsk hemorrhagic fever, Alkhurma hemorrhagic fever.


Agents

The World Health Organization (WHO) defines a biological agent as an agent that produces its effect through multiplication within a target host and that is intended for use in war to cause disease or death in human beings, animals, or plants. Biological agents also include protein biotoxins produced by microorganisms, poisonous animals, and plants. Some plants naturally produce highly toxic compounds (cyanogenic molecules, cardiotoxic alkaloids, ricin, etc.). When ingested or inoculated, small amounts of these compounds can cause severe, even fatal, damage to the health of human beings or animals, including cardiotoxicity, neurotoxicity, cytotoxicity, metabolic disorders, and inhibition of cell division. For example, ricin can be extracted from the castor bean; Al Qaeda in Yemen attempted this in 2011, without success, but the attempt shows the interest terrorists take in the possibility of extracting bioagents from the environment.

The Centers for Disease Control and Prevention (CDC) has classified critical biological agents into three major categories (A, B, and C). The category A agents include Variola major (smallpox), Bacillus anthracis (anthrax), Yersinia pestis (plague), Clostridium botulinum toxin (botulism), Francisella tularensis (tularemia), and viruses related to Ebola and Marburg hemorrhagic fevers, Lassa fever, and Argentine hemorrhagic fever. The category B agents comprise approximately 30 potential weapons of bioterrorism (the majority of them ubiquitous agents), including a wide variety of bacteria, viruses, protozoa, and toxins. The category C agents include Nipah virus, hantavirus, tick-borne hemorrhagic fever viruses, the tick-borne encephalitis (TBE) virus complex, yellow fever virus, and multidrug-resistant tuberculosis (MDR-TB). Additionally, there are several emerging pathogens with potential for bioterrorism: severe acute respiratory syndrome virus (SARS virus), pandemic and avian influenza viruses, West Nile virus (WNV), and monkeypox virus. Since approximately 80% of potential bioweapons are zoonoses, animals are likely to be at high risk, and the surveillance of animals may thus provide early warning of a bioterrorist attack.

Traditional biological weapons include naturally occurring organisms or toxins characterized by easy production, high toxicity, stability, and an abundance of modes of transmission. The dangers associated with conventional agents can be enhanced by genetic modification (increased virulence, antibiotic resistance, toxin production, enhanced aerosol stability, and improved survival in the environment). Many infectious microorganisms considered suitable for bioterrorism could be obtained from natural sources, such as infected animals, patients, or even contaminated soil (anthrax spores). SARS-CoV-like viruses were isolated from Himalayan palm civets and a raccoon dog in an animal market in southern China, which suggests that SARS-CoV may originate from these or other wild animals. Since 2003, the highly pathogenic H5N1 strain of avian influenza A has spread to poultry in 17 countries in Asia and Eastern Europe and is now considered endemic in some of these countries. Notably, the pig trachea contains receptors for both avian and human influenza viruses and supports the growth of viruses of human and avian origin.
Genetic reassortment between human and avian influenza viruses may therefore occur in pigs, leading to a novel strain against which there would be little or no population immunity and which could be highly pathogenic, capable of human-to-human transmission, and of pandemic potential. In 2003, monkeypox virus emerged for the first time in the Western Hemisphere when an outbreak of human monkeypox occurred in the Midwestern United States. Most of the patients fell ill through direct contact with pet prairie dogs that had been infected by being housed with rodents imported from Ghana, in West Africa. In August 1999, West Nile virus was detected for the first time in North America, causing an outbreak in New York City; the virus may have been imported to North America by infected birds, infected mosquitoes, or viremic humans.

Natural pathogens vary widely in virulence, and many strains isolated from nature may have low virulence. Microbiologists have catalogued more than 77 different strains of B. anthracis, only a minority of them highly virulent. A terrorist would therefore almost certainly have to isolate many different strains before finding one sufficiently potent to be used as a weapon. Since obtaining virulent microorganisms from nature is technically difficult, it would probably be easier for a terrorist to steal well-characterized strains from a research laboratory, or to purchase known pathogenic strains from a national culture collection or a commercial supplier while claiming to be engaged in legitimate biomedical research. From 1985 to 1989, the Iraqi government ordered virulent strains of anthrax and other lethal pathogens from culture collections in France and the United States, ostensibly for public health research, a purpose that was legal at the time and indeed approved by the Department of Commerce.

Mediums/Means of Delivery

The medium of delivery could be air (airborne pathogens); dissemination of an agent through ventilation/air-conditioning systems is a very powerful mode of attack for terrorists. An aerosolized release of 100 kg of anthrax spores upwind of Washington, DC, could result in approximately 130,000 to 3 million deaths, making such a weapon as deadly as a hydrogen bomb.

Other means of delivery are food and water (food-borne and water-borne pathogens); human exposure to waterborne infections occurs through contact with contaminated drinking water, recreational water, or food. This may result from human action, such as improper disposal of sewage wastes, or from weather events. Heavy rainfall and runoff influence the transport of microbial and toxic agents from agricultural fields, human septic systems, and toxic dumpsites. Rainfall can alter the transport and dissemination of microbial pathogens (such as Cryptosporidium and Giardia), and temperature may affect their survival and growth. This group includes infectious diseases for which the environment (e.g., food and water) plays a significant role in the pathogen's transmission cycle.


For these diseases, transmission occurs directly between humans and the environment (cholera, hepatitis A, enteroviruses, noroviruses, shigellosis), and the pathogens survive in the environment for long periods of time. Fomites, introduced through the personal infiltration of suicidal bio-bombers into targets or through facilities such as the mail, might also be a means of delivery. Even animals, such as birds infected with avian influenza, might serve as vectors of infectious diseases.

In vector-borne diseases, transmission occurs through contact between humans and vectors (defined here as arthropods that carry pathogens from one host to another). Transmission cycles share common attributes: all are affected by the population levels of host and vector, and all are driven by a transmission potential governed by a number of biological and environmental characteristics. Environmental changes can affect population levels of the host, of the vector (vector survival and reproduction), or of the environmental stage of the pathogen (the pathogen's incubation rate within the vector organism), as well as the transmission rate (the vector's biting rate) at which pathogens move between hosts, vectors, and the environment (a standard way of combining these quantities is sketched at the end of this section). Vectors, pathogens, and hosts each survive and reproduce within a range of optimal climatic conditions: temperature and precipitation are the most important, although wind and daylight duration also matter.

The crossover of Nipah virus to humans is related to a host of changes that created more favorable conditions for its spread. The often fatal Nipah virus, normally found in Asian fruit bats, is believed to have passed to humans when bats lost their habitats owing to forest fires in Sumatra and the clearance of land for palm plantations. Searching for new food, bats came into contact with pigs, which, in turn, passed the disease to their human handlers in the late 1990s. Nipah infection causes severe encephalitis in humans, with a 40% mortality rate recorded among infected patients in Malaysia and Singapore. At least 109 people died as a result of the epidemic, and more than 1 million pigs were destroyed in an effort to control the disease.

Soil can also be a medium of transmission (B. anthracis, Giardia lamblia, Burkholderia mallei and B. pseudomallei, C. burnetii). For some transmission groups, the impact of environmental change on disease is primarily mediated by social processes; the initial spread of SARS or other respiratory diseases, for example, depended mainly on the social connectivity of the first (index) case in a community. As public health moves toward examining how both ecological and social processes affect disease transmission, and more specifically toward examining the fundamental role of environmental change in creating the landscape of human disease, a systems theory framework is needed to integrate data from disparate fields.
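
The biological quantities named above (vector density, biting rate, vector survival, extrinsic incubation) are conventionally combined in the Garrett-Jones vectorial capacity formula. The sketch below uses that standard formula with hypothetical parameter values; it is offered as an illustration, not as a calculation from this article:

```python
import math

def vectorial_capacity(m: float, a: float, p: float, n: float) -> float:
    """Garrett-Jones vectorial capacity:
    m: vectors per host; a: bites per vector per day;
    p: daily vector survival probability; n: extrinsic incubation (days).
    Returns potential new infections arising per infective case per day."""
    return (m * a ** 2 * p ** n) / (-math.log(p))

# Hypothetical values: 10 vectors per host, 0.3 bites per day, 90% daily
# survival, 10-day extrinsic incubation:
print(round(vectorial_capacity(10, 0.3, 0.9, 10.0), 2))  # ~2.98
```

The p**n term shows why vector survival dominates: pathogens are transmitted only by vectors that outlive the extrinsic incubation period, so small environmental changes in daily survival produce large changes in transmission potential.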

Targets

There are two types of targets: direct (biological) and indirect (political/economic). Biological/direct targets can be "hard" or "soft." The US anthrax attack in 2001 comprised both types: the Hart Senate Building in Washington was a "hard" target and the US Postal Offices were "soft" targets. The estimated cost of decontaminating parts of the Hart Senate Building was $23 million; the economic impact of potential exposure to anthrax was estimated at $26.2 billion per 100,000 persons exposed (indirect/economic target); and the cost and resources needed to decontaminate the environment must be added to this.

Health damage can be both somatic and psychological. Biological attacks therefore cause two types of epidemics: an epidemic of infectious disease and an epidemic/pandemic of fear and panic. The epidemic/pandemic of fear and panic multiplies the economic damage (losses in tourism, investment, and export). The main objective of bioterrorists is to propagate fear, anxiety, uncertainty, and depression within the population, induce mistrust of authorities/government, inflict economic damage, and disrupt travel and commerce; causing physical disease is the second most important objective.

Even the use of biological weapons for a small-scale attack on "soft" targets (airports, railway stations, food production industries) can bring about devastating losses with strategic dimensions. A single case of SARS or avian influenza is enough to cause catastrophic economic consequences: the world airline industry lost $10 billion in 2003 due to SARS. The developed Western countries have intensive food production and a centralized food industry, which means that a single successful bioterrorist action can contaminate huge amounts of food and threaten the lives of thousands or hundreds of thousands of inhabitants. Preharvest threats target livestock and crops, carrying the risk of economic devastation compounded by direct costs (international trade restrictions, slaughter of animals, loss of production) and indirect costs throughout related communities (tourism). Postharvest threats affect the food industries (processing, transportation, delivery) and public health (possible human illness and death). Detrimental social, political, diplomatic, and even military consequences could follow an agroterrorist attack.

Prevention

Basic prevention of a biological attack includes impeding the access of bioterrorists and biological agents to the target territory. These activities could be improved by better international cooperation and control and by better border control. From the environmental point of view, basic prevention should also improve the ability to understand and control the potential dynamics of disease transmission within human and animal populations, as well as of plant diseases, in both industrialized and developing country settings. This should enhance the capacity to combat the effects of biological weapons and emerging diseases on biological communities and biodiversity.

The primary prevention of a biological attack comprises monitoring and surveillance of potential internal/indigenous sources of biological agents and of bioterrorists. Animals in many habitats can be studied to monitor health hazards in the environment. Chickens have been used for the surveillance of arboviruses such as West Nile virus (WNV), western equine encephalomyelitis (WEE) virus, and St. Louis encephalitis (SLE) virus, making them excellent sentinel animals for arboviruses. Mussels, clams, and oysters are particularly suitable as surveillance tools because they concentrate microbial organisms and pathogens more than 1000-fold.


In the Sverdlovsk outbreak in 1979, livestock 60 km away from the plant died, whereas human cases occurred within 4 km downwind of the facility. The ideal case regarding environmental health is the eradication of diseases (reservoirs destroyed with minimal environmental change) together with their elimination (no people affected). The outbreak of a disease could also occur through accidental infection during the testing and research of biological weapons: a Soviet field test of weaponized smallpox killed three people (two of them children) and involved the disinfection of homes, the quarantine of hundreds of people, and the administration of 50,000 vaccine units.

Subtle differences between the usual and unusual occurrence of diseases must be recognized (detection of unusual diseases, or of diseases spreading in unusual ways). A developed network for data collection, rapid data transmission to the relevant public health decision-makers, and careful data analysis are the priorities. Early detection could save many lives by triggering an effective containment strategy such as vaccination, treatment, and, if necessary, isolation and quarantine. Active monitoring of domestic or wild animals for biomarkers could also be very useful. The choice of surveillance type depends on the characteristics of the pathogen and the objective of the program. Passive surveillance is best employed when the objective is the early detection of outbreaks or monitoring the extent of disease to inform decisions on control strategies. Active surveillance is best employed when a disease is targeted for elimination. A sentinel is a naïve animal intentionally placed in an environment of potential infection and monitored at short time intervals to detect infection (a simple detection-probability sketch follows at the end of this section). There are three fundamental components of the sentinel framework: the pathogen under surveillance, the target population, and the sentinel population.

The nature of a particular biological weapon could also have a consequential impact on recovery efforts. For example, anthrax spores can persist in the environment for decades; this could make decontamination efforts problematic and lead to persistent health concerns. Viable, infectious anthrax bacilli have been cultured from animal bones recovered from archeological sites dating back 150–250 years. Moreover, after the destruction of a community by a biological attack, people are displaced and experience additional stress, loss of dignity from being forced into public shelters, and anxiety and fear because of strange environments and the disruption of former social networks. Unexplained epidemic illness (also known as mass sociogenic illness, mass psychogenic illness, or mass hysteria) may occur, involving the rapid spread of medically unexplained signs and symptoms that are often misinterpreted as signs of a serious physical illness. The potential for new, larger, and more sophisticated attacks has created a sense of vulnerability, and biological weapons induce a loss of confidence in authorities. People have to learn to live with the threat of bioterrorism.
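
The value of a sentinel flock can be expressed with elementary probability: if each sentinel independently has probability p of becoming infected during a monitoring interval, a flock of n sentinels detects a circulating pathogen in that interval with probability 1 - (1 - p)^n. This is standard probability, not a figure from this article, and the values below are hypothetical:

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that at least one of n independent sentinels is infected
    (and hence the pathogen detected) in a single monitoring interval."""
    return 1.0 - (1.0 - p) ** n

# Hypothetical example: 20 sentinel chickens, 2% per-bird infection
# probability per interval:
print(round(detection_probability(0.02, 20), 3))  # ~0.332
```

The same arithmetic shows why flock size and monitoring frequency trade off: doubling the number of sentinels or halving the interval between checks both raise the chance of early detection.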

Agroterrorism

Agroterrorism implies a deliberate attack against commercial crops or livestock populations. It can be carried out using a variety of viruses, bacteria, and fungi, either as targets in their own right or as vehicles to attack humans or animals. Agroterrorism is a multidimensional threat, involving a wide range of motives and perpetrators and encompassing a wide range of actions, from a single act of sabotage to strategic wartime programs, with potentially disastrous "spillover" effects on susceptible wildlife and endangered species populations. Traditional governmental responses to a deliberate attack with foreign livestock pathogens (sweeping quarantines, mass slaughter, and the burning or burial of millions of carcasses under the ceaseless eye of television), together with the staggering financial losses triggered by international trade embargoes, are exactly what terrorists want to see. The consequences of such an attack would be lasting damage to the rural economy and to public confidence in government, and enormous costs for taxpayers. And should the foreign disease infect humans as well as livestock, families would also be at risk, all of which would greatly embolden and encourage terrorists.

Throughout history, outbreaks of crop diseases have been associated with famine. The agriculture of any country is particularly vulnerable to foreign diseases, to which domestic animals and plants have not built up natural resistance. In addition, with crops and animals concentrated in fewer production facilities, and with the frequent transportation of animals among these facilities, a single pathogen introduction could cause very widespread infection. A country's capabilities to detect a disease and respond to it might be overwhelmed by a deliberate attack, especially one involving a foreign disease or several simultaneous outbreaks. The public reaction to an agroterrorist attack might further amplify these financial losses if food safety concerns prompt voluntary boycotts of domestic agricultural products. Agroterrorism thus remains an attractive option because:

1. Animal and plant pathogens are easy to acquire. Due to the endemic nature of many of these disease agents in large geographic regions worldwide, samples are readily available.
2. In general, these agents can easily be disseminated. Animals and plants provide the primary means of transmission, and sophisticated weaponization is not required.
3. Many animal and plant diseases are not zoonotic. They are therefore not harmful to the perpetrator, and there is no requirement for elaborate personal protective equipment or containment procedures.

Bioweapons and emerging disease outbreaks could result in severe erosion of genetic diversity in populations of wild and domestic animals and plants, leading to the extinction of endangered species. The threat lies in the release and proliferation of a broad spectrum of diseases of domesticated livestock and crops among naive, susceptible populations of wildlife and plants. The threat of an agroterrorist attack depends on the motivations and technical requirements of agroterrorism.

Technical Requirements of Agroterrorism

Technical barriers to agroterrorism are lower than those to human-targeted bioterrorism. Bioweapon attacks against agriculture do not require specialized knowledge, sophisticated technologies, or laboratory disease cultures.


A perpetrator with a basic understanding of microbiology could simply visit an area where foot-and-mouth disease (FMD) occurs naturally, obtain diseased tissue, culture an infectious substance, and clandestinely infect a herd; an even larger program of sabotage could use this method for multiple, simultaneous attacks. Certain livestock and poultry viruses can travel great distances on their own. In 1981, 3 days after an outbreak of FMD in Brittany, France, single cases appeared across the English Channel on the Isle of Wight, and prevailing wind patterns corroborated the hypothesis that the virus had traveled a distance of 175 miles as an airborne aerosol. Biotechnology techniques and equipment available on the open commercial market permit the large-scale production of bioweapons in small-scale facilities at relatively low cost; the cost of developing smaller-scale bioweapons facilities and arsenals falls within the range of $10,000–$100,000.

Motivations for Agroterrorism

Terrorists' motives vary widely; the two most common are the profit motive and the anti-GMO (genetically modified organism) motive. Handling human pathogens is extremely dangerous: a terrorist puts himself in danger when developing or dispersing bioweapons against humans. Animal and plant pathogens, however, do not usually affect humans, so the psychological barrier is lower when targeting animals or plants, and killing plants and animals is generally not as ethically objectionable as killing people. Agricultural targets are also "soft targets," maintaining such a low level of security that a terrorist could carry out an attack unobserved. A terrorist may therefore choose to use bioweapons against agriculture simply because it is the easiest and cheapest way to cause large-scale damage.

FMD has long been considered the most dangerous foreign disease that might be inadvertently introduced into a country, and it is also the most likely terrorist threat. Because of its high level of virulence, FMD is particularly expensive to eradicate, and it triggers immediate export restrictions. In the Canadian outbreak of FMD in 1951–53, 2000 animals had to be destroyed, at a cost of approximately $2 million; trade restrictions, however, decreased the value of Canadian livestock by $650 million, and the total economic impact of the international embargoes was about $2 billion. An outbreak of FMD in Italy in 1993 produced costs in market disruption 10 times higher than the costs of its eradication. In 1996, an FMD outbreak among swine in Taiwan caused the killing of 4 million hogs, and the long-term losses to swine-related industries were projected to reach $7 billion. The direct costs of containing the 2001 FMD epizootic appear to have been far less than the indirect costs associated with the consequent loss of income and investment in nonagricultural sectors of the economy: losses to the tourism industry due to travel restrictions in the affected areas were estimated at $350 million per week, 25 times higher (2500%) than the concurrent direct losses in the agricultural sector ($14 million per week); a check of this arithmetic is given below. The FMD hysteria and the highly publicized slaughter and burning of animal carcasses (the "CNN factor") severely affected the entire industry of the United Kingdom, with economic losses estimated at more than $4 billion.

Vaccines can keep animals from acquiring diseases, but in most cases they do not keep animals from being carriers. A cow vaccinated against FMD can carry the pathogen in her throat tissues for two and a half years after exposure. To eradicate a pathogen completely, both infected and vaccinated animals have to be destroyed.
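
The 2001 FMD cost comparison quoted above can be checked directly; the weekly loss figures are from the text, and the snippet merely verifies the stated ratio:

```python
tourism_loss_per_week = 350e6      # USD, indirect losses (tourism)
agriculture_loss_per_week = 14e6   # USD, direct losses (agriculture)

ratio = tourism_loss_per_week / agriculture_loss_per_week
print(ratio, f"{ratio:.0%}")  # 25.0 2500% - indirect losses 25x the direct losses
```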

Agents

Zoonotic disease organisms known to have been cultivated and tested for bioweapon applications include anthrax (B. anthracis), bubonic plague (Y. pestis), brucellosis (Brucella abortus), tularemia (F. tularensis), Clostridium botulinum, C. burnetii, Burkholderia spp., Fusarium spp., Morbillivirus spp., Staphylococcus spp., Venezuelan equine encephalomyelitis virus, and several hemorrhagic fever viruses (Ebola, Marburg, Lassa fever, and Rift Valley fever). Genetically modified zoonotic and epizootic diseases (plague, tularemia, and anthrax) and cultivated diseases of livestock (FMD, rinderpest, brucellosis) are potentially very serious threats to livestock, wildlife, and endangered species populations. New biological weapons include many diseases that are highly infectious and contagious: zoonoses that are easy to produce, antibiotic-resistant, vaccine-subverting, and able to cause severe morbidity or mortality. Organisms of particular concern in this regard are the viruses of Newcastle disease, CSF, avian influenza, African swine fever, and African horse sickness. There are concerns that plant diseases developed for use against cereal crops, opium poppies (Papaver somniferum), and coca (Erythroxylon spp.; e.g., Fusarium spp. and Pleospora papaveraceae) might infect and proliferate among nontarget plant species. The genetic diversity of local crop varieties and traditional livestock breeds is a critically important asset of global agriculture that may suffer severe damage from deliberate or accidental bioweapon releases. There is a growing but still insufficient recognition of the importance of disease control for the conservation of biodiversity and endangered species populations. Disease outbreaks caused by the release of weapons-grade rinderpest virus or anthrax bacilli could have an even greater impact than historical examples might indicate, given the enhanced virulence and resilience of cultivated disease strains and the accelerated dispersal of disease vectors and infectious materials by motor vehicles and aircraft. They could have disastrous consequences for endemic and endangered populations of wild and domestic ungulates in many areas of the globe. Once established in a new locality, introduced diseases may not be recognized rapidly and may be difficult or impossible to eradicate. Known but formerly uncommon diseases (Ebola and Marburg fever) are emerging as major threats to human, livestock, and wildlife populations as the result of progressive human-mediated changes in the ecology of host-pathogen and human-wildlife interactions.

Breakdowns in medical and veterinary support systems during wars and civil conflicts have resulted in epidemic outbreaks of diseases within and among human, livestock, and wildlife populations (monkeypox, Marburg fever, Ebola, and bubonic plague). The Iran-Iraq War and the Arabian Gulf War precipitated rinderpest among livestock populations in the region, probably caused or aggravated by war-related displacements of pastoralists and their flocks. Disruption of government veterinary services during the Rhodesia-Zimbabwe civil war contributed to epizootic outbreaks of anthrax and rabies among wild and domestic animals in Zimbabwe. Anthrax mortality among humans and livestock reached epidemic proportions in 1979–80 and continued to proliferate for more than 4 years after the end of the war; anthrax ultimately spread through six of the eight provinces of Zimbabwe, with many recorded human cases, before effective control of the disease was finally reestablished in 1987. The threat of catastrophic impacts from disease epidemics resulting from agricultural bioweapon releases is proportionally higher in developing countries, owing to severe limitations in the availability of doctors, veterinarians, and medical facilities for treatment and quarantine. The spillover of weaponized livestock diseases into susceptible wildlife populations could amplify and exacerbate the effects of initial attacks and create situations in which disease containment and control become extremely difficult, and total eradication nearly impossible. Rinderpest could have particularly devastating spillover effects on susceptible wildlife species. Should FMD become established within wildlife populations, control efforts might include the attempted extirpation of some of the large wild and feral deer populations in some areas. Many formerly ubiquitous diseases that have been eradicated from livestock populations in the United States and Western Europe are still common in other areas of the globe (anthrax, rinderpest, and FMD) and are readily accessible to political fringe groups and terrorist organizations. Vaccines for many diseases still common in Third World countries have been phased out in Europe and North America, and these, along with drugs for treatment, may not be readily available in sufficient quantities to suppress large-scale disease outbreaks.

Prevention

The above-mentioned examples demonstrate the critical importance of early detection and reporting of disease outbreaks. The international reporting system for wildlife diseases initiated by the OIE Working Group on Wildlife Diseases has thus been of great importance for alerting national veterinary services to the necessity of monitoring and reporting specified wildlife diseases. The environment can provide useful signals and indicators for early warning and health monitoring (birds for West Nile virus and other viral encephalitis agents, pets for anthrax and plague, cattle for anthrax and Rift Valley fever, etc.). The threat of an agroterrorist attack can be countered at four levels:

1. at the organism level, through animal or plant disease resistance;
2. at the farm level, through facility management techniques designed to prevent disease introduction or transmission;
3. at the agricultural sector level, through disease detection and response procedures;
4. at the national level, through policies designed to minimize the social and economic costs of a catastrophic disease outbreak.

These four levels are not independent of each other, and the threat of agroterrorism cannot be fully countered at any one level. A disease that is introduced deliberately may be indistinguishable from one that is introduced inadvertently or one that arises naturally. The key questions are: who would carry out such an attack, and for what reasons; who has developed antiagriculture bioweapons in the past; who has actually used bioweapons against agriculture; and what are the technical requirements of an agroterrorist attack. To control the spread of disease, exposed animals must also be destroyed. Control measures for zoonotic diseases often entail efforts to eradicate certain wildlife species that are potential reservoirs, intermediate hosts, or vectors for disease transmission to humans or domestic animals. Wild species that are naturally rare, and species that have been severely depleted in numbers by overharvesting or habitat degradation, are particularly at risk of extinction from introduced diseases of domestic animals. The traditional livestock breeds and varieties that constitute the most critical reservoirs of genetic diversity for domesticated animal species are also highly susceptible to severe losses or extinction from even highly localized disease outbreaks. Containment of bubonic plague outbreaks necessitates the control or eradication of rodent populations within affected areas, to prevent transmission of the disease from rodents to humans. Populations of many wildlife species are already routinely subject to stringent control or local extirpation in attempts to control the transmission of diseases to domestic animals, in some instances without adequate data to validate the actual need for, or efficacy of, such efforts. In the United States, the control of brucellosis in cattle has resulted in the culling or attempted eradication of populations of bison (Bison bison), wapiti (Cervus canadensis), and white-tailed deer (Odocoileus virginianus). There appears to be little possibility of preventing bioweapon attacks against domesticated animals, or of preventing the subsequent spillover of weaponized livestock diseases into wildlife populations. People's ability to understand and control the spread of diseases within human and animal populations is increasing, but it is still insufficient to counter the existing threats presented by bioweapons and by a growing number of newly recognized and highly virulent infectious diseases, such as Ebola and Marburg fever, as well as less devastating but potentially fatal human and animal diseases, such as West Nile virus. Interdisciplinary and international efforts to increase the surveillance, identification, and reporting of disease pathogens, and to better understand the dynamics of disease transmission within and among human and animal populations, will enhance the ability to combat the effects of bioweapons and emerging diseases on biota and biodiversity.

See also: Biological Agents and Infectious Diseases in War and Terrorism; Political and Social Violence: Health Effects.

Further Reading

Hunger, I., Radosavljevic, V., Belojevic, G., Rotz, L.D. (Eds.), 2013. Biopreparedness and public health. Springer, Heidelberg.
Neo, J.P.S., Tan, B.H., 2017. The use of animals as a surveillance tool for monitoring environmental health hazards, human health hazards and bioterrorism. Veterinary Microbiology 203, 40–48.
Radosavljevic, V., Banjari, I., Belojevic, G. (Eds.), 2018. Defence against bioterrorism: Methods for prevention and control. Springer, Heidelberg.

Environmental Health and Leishmaniasis by Indication on Afghanistan: A Review

Sayed Hussain Mosawi, Ghalib University, Kabul, Afghanistan; and Afghanistan Development Studies Centre, Kabul, Afghanistan
Zabih Zarei, Tehran University of Medical Sciences, Tehran, Iran
Morteza Shams, Ilam University of Medical Sciences, Ilam, Iran
Khanali Mohammadi and Sayed Abdulqayum Sajjadi, Khatam Al Nabieen University, Kabul, Afghanistan
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

ACL Anthroponotic cutaneous leishmaniasis
ARMA Autoregressive moving average
VL Visceral leishmaniasis
ZCL Zoonotic cutaneous leishmaniasis

Introduction

Leishmaniasis is a common disease present both in the New World (South and Central America, Mexico) and in the Old World (Europe, Africa, Central Asia, and the Indian subcontinent). The disease has three major forms: cutaneous, mucocutaneous, and visceral. Although cutaneous leishmaniasis is not fatal, disfiguring skin lesions and mental and psychological disturbances are complications of this form. Visceral leishmaniasis (VL), on the other hand, is fatal, with a case-fatality rate as high as 85%. Unfortunately, most cases of visceral leishmaniasis occur in populations suffering from malnutrition and poverty, and social, political, and climatic factors all contribute to increasing the number of cases. Annually, more than 2 million people are affected by the disease, and currently 10 million people in the world suffer from it. More than 90% of cases are seen in Afghanistan, Pakistan, Iran, Iraq, Syria, Jordan, Algeria, Tunisia, Morocco, and Saudi Arabia (Old World) and in Brazil and Peru (New World). About 12 million people in 98 countries are infected with leishmaniasis; nearly 1 million new infections occur each year, and between 50,000 and 100,000 people lose their lives (Alvar et al., 2012). About 200 million people in Asia, Africa, South and Central America, and southern Europe are at risk of this disease. Unfortunately, despite the identification of the infectious agent, vectors, and modes of transmission, and despite very important research, leishmaniasis remains endemic in many countries of the world and is predicted to spread further. Because the vectors and reservoirs of the disease are found almost everywhere in Afghanistan, the disease is widespread there, and cutaneous leishmaniasis is one of the country's major health problems. In 2002, more than 200,000 cases of leishmaniasis were reported from Kabul, and 67,000 cases are reported annually (Faulde et al., 2008). Although large numbers of kala-azar cases are reported from Afghanistan's neighboring countries, information on this form from Afghanistan is unfortunately very scarce. The aim of this study is to examine the factors involved in leishmaniasis and the impact of the environment on them, with an emphasis on Afghanistan.

Pathogenic Agents, Modes of Transmission, and the Reservoirs of Disease

Leishmaniasis is a neglected disease caused by a unicellular parasite, Leishmania. In Afghanistan there are two types of cutaneous leishmaniasis, caused by Leishmania major (ZCL: zoonotic cutaneous leishmaniasis) and Leishmania tropica (ACL: anthroponotic cutaneous leishmaniasis); most cutaneous leishmaniasis in Afghanistan is due to Leishmania tropica (Mosawi and Dalimi, 2015). The disease is transmitted by the bite of the sand fly from infected animals (rodents and dogs) or from infected humans to healthy humans, and symptoms usually develop 3 months to 1 year later. Visceral leishmaniasis is caused by Leishmania donovani and Leishmania infantum, also transmitted by the bite of sand flies; kala-azar occurs in both zoonotic and anthroponotic forms. The most important mode of transmission is the bite of the female sand fly, which has a limited flight range and lives in dark and wet places, especially near rodent nests and garbage collection sites, and also in rainy deserts and forests. Semi-domestic sand flies live around towns, under rocks. Each flight covers at most about 1 m, but sand flies can be found up to several hundred meters from their breeding sites. Human-to-human transmission of the disease is possible. In cutaneous leishmaniasis of the urban or dry type (ACL), dogs, like humans, can be infected and are thought to be one of the reservoirs of the disease. Two genera transmit the disease: Phlebotomus in the Old World and Lutzomyia in the New World.


Clinical Manifestations

A few weeks to a few months after the bite of an infected sand fly, a small red pustule appears that gradually grows larger, with a central lesion. Lesions are usually single but may be multiple. In cutaneous leishmaniasis caused by Leishmania major, there is a purulent discharge on the wound, with a prominent red margin, which is why it is called the wet type. In leishmaniasis due to Leishmania tropica, the lesion is usually without secretion and pus, so it is called the dry type. Secondary bacterial infection is possible and may cause complications; the lesion itself is usually painless but itchy. Kala-azar is a chronic, systemic disease characterized by symptoms such as fever, splenomegaly, hepatomegaly, lymphadenopathy, weight loss, and, ultimately, death.

Causes of Leishmaniasis Spreading

The factors that contribute to the spread of leishmaniasis can be described under the following headings:

Managing and Executive Factors

- Inadequate appreciation by authorities of the importance of, and priority that should be given to, disease control.
- A shortage of trained personnel and, as a result, inadequate sanitary coverage, especially in poor countries and endemic areas.
- Inadequate awareness among health workers of the importance of educating patients about covering lesions and about ways to combat vectors and reservoirs and to treat patients.
- Physicians' lack of awareness of the necessity of referring patients for laboratory confirmation of the diagnosis.
- Implementation of control methods without proper planning (e.g., spraying, use of insecticide-treated nets), which leads to false satisfaction among the authorities despite the ineffectiveness of these measures and their costs.
- Inadequate cooperation of related organizations, including television, environmental protection organizations, police, municipalities, education authorities, etc.
- Lack of attention to active screening and diagnosis of the disease, especially in ACL and among at-risk groups, especially students.
- Migration of susceptible people to endemic areas.
- In ACL, the possibility that residents of, or travelers to, other areas transmit the disease (Shirzadi, 2012).

Health Education and Community Awareness

- Inadequate knowledge among people about cutaneous and visceral leishmaniasis, especially its pathogenesis.
- Inadequate community awareness of the importance of insect repellents, insecticides, and insecticide-treated nets, and of their methods of use, especially in endemic areas.
- Inadequate community knowledge about protective clothing (long sleeves, closed collars, long pants, etc.), especially in infected areas.
- Patients' lack of knowledge about covering lesions to prevent infection of vectors and the consequent continuation of the transmission chain in ACL.
- Patients' lack of knowledge about wound dressing to prevent infectious complications.
- Lack of awareness among society, health experts, and patients about complications of infection, which may be dangerous and even fatal.
- Patients' failure to present on time for diagnosis and treatment.
- Lack of regular education of the community (Shirzadi, 2012).

Vector Factors

- The occurrence of natural disasters such as earthquakes and storms, which create environments suitable for the reproduction of the disease vector.
- Lack of timely information about the vector species and the methods to combat it.
- Keeping birds at home, which makes the environment suitable for the reproduction of the disease vector (Shirzadi, 2012).

Factors Relating to Reservoirs, Treatment, and Personal Protection

- Lack of active screening, especially in urban areas (patients are considered reservoirs of infection).
- Lack of treatment monitoring to follow patients until recovery and prevent complications of the disease, and consequently to eliminate the reservoir of the disease in the urban type.
- Failure to complete the course of treatment with glucantime, owing to painful injections (especially around the lesion), the length of treatment, and the complications of treatment.
- Increasing parasite resistance to glucantime, especially in ACL.
- Lack of timely information about the reservoirs of the disease, especially in rural areas, and about methods to combat them, especially in newly infected areas.
- Unavailability of topical or oral medicines.
- Failure of the community to practice personal protection (Shirzadi, 2012).

Environmental Factors

- Living in unhealthy conditions, especially on the outskirts of cities.
- Inadequate environmental health measures, including collection of garbage and construction waste, the accumulation of which causes a significant increase in the disease vectors.
- Environmental changes, including agricultural development, urbanization, dam building, and so on.
- Building residential areas adjacent to rodent nests.
- Construction of residential houses near livestock stalls.
- Keeping livestock near residential houses.
- The lack of a proper sewage system (Shirzadi, 2012).

Environmental Health and Leishmaniasis in Afghanistan

Environmental factors are among the most important causes of leishmaniasis, and there is a direct relationship between the disease and the environment. Contributing factors include the presence of houses near sand fly breeding and resting sites and near rodent nests, the accumulation of garbage in these places, and unfinished construction works, all of which provide conditions for infection with leishmaniasis. In this section, we point out the most important factors that lie within the environmental health framework.

Migration and Urbanization

Many studies have shown that one of the causes of leishmaniasis is the migration of vulnerable people from villages to cities and their settlement in environments such as old houses lacking basic facilities. Those who migrate to major cities like Kabul are forced, by financial problems, to build low-cost houses on the outskirts of the city, close to mountainous areas, in unhygienic conditions (Fig. 1), as in the Itifagh township in Kabul and the Jireil township in Herat. On the other hand, migration to major cities like Kabul, Kandahar, and Herat, which are the main foci of leishmaniasis in Afghanistan, has increased their populations, which in turn is increasing the number of ACL cases. Infected people who come from endemic regions to susceptible cities act as a reservoir for others and will make the area an endemic focus in the long run. With the advancement of technology and the industrialization of cities, many infectious diseases have been eliminated, but leishmaniasis seems to keep pace with development. In the case of visceral leishmaniasis, foraging by reservoir animals close to human dwellings around garbage collection sites, especially by nocturnal animals and stray dogs, establishes transmission cycles by attracting vectors. In areas of pilgrimage and tourism, nonindigenous, susceptible visitors who become infected with leishmaniasis can act as reservoirs and establish the disease in their home cities on return.

Fig. 1 Accumulation of waste and life on the outskirts of the city in old houses lacking basic facilities; Ghazni province, Ghazni, Afghanistan.

Destroyed Infrastructure and Waste Management

Because the disease is linked to natural and man-made disasters such as earthquakes, war, and environmental and agricultural changes, its epidemiological profile is constantly changing. In many parts of the world, significant increases in leishmaniasis have followed earthquakes in endemic areas. Nearly half a century of war and political instability in Afghanistan has destroyed much of the country's infrastructure. One of the most important problems in controlling leishmaniasis is the accumulation of waste, which provides very favorable conditions for the reproduction and growth of sand flies (Fig. 1). Sand flies tend to lay their eggs in animal manure, a suitable medium for larval growth, so keeping livestock in cities has a significant effect on the distribution of sand flies in residential places. Because of the abundance of food in urban waste, carnivores and rodents are strongly attracted to it, especially from sunset to sunrise, which makes them available to sand flies; indeed, waiting for a blood meal in such places is a behavioral pattern of sand flies. The most important solution is to educate people to prevent the accumulation of waste, to keep the environment waste-free, and to keep livestock away from residential areas. The government also has the responsibility of collecting waste promptly and regularly and removing it to a suitable distance from residential areas. Rebuilding and repairing cracks in the walls of residential areas, rebuilding open outdoor canals, providing a proper sewage system, disposing of construction waste and debris, and demolishing and leveling abandoned and ruined places should also be taken into consideration by governments.

Reservoirs: Dogs (ACL and VL) and Mice (ZCL)

Stray and domestic dogs are among the most important animals affecting ACL and VL (Fig. 2). Infected dogs may present intense cutaneous parasitism by Leishmania amastigotes or may be asymptomatic; about 50% of dogs infected with visceral leishmaniasis are asymptomatic, and these are as epidemiologically important as dogs with symptoms. Therefore, identifying infected dogs and treating them, or sterilizing stray dogs (an approach that has been ethically rejected because of its low impact on the incidence of leishmaniasis in humans), can help reduce disease. In ZCL, rodents can cause outbreaks of the disease and help maintain the circulation of infection in human communities. Rodents such as Rhombomys opimus and Meriones spp. are the main reservoirs of ZCL in Afghanistan (Fig. 3); combating rodents can therefore reduce the disease, and their population density should be controlled continuously with a strong commitment from all parties involved (Faulde et al., 2006). Rodents live in colonies and have high reproductive potential (9–12 newborns per litter, with another litter possible after a month), so their populations can increase explosively. The presence of several infected rodents during the year can create epizootic centers among rodents, leading to epidemics in human settlements near these colonies. Meriones hurrianae has entered Sistan and Baluchistan (eastern Iran) from Pakistan and has created an endemic focus of ZCL. Environmental sanitation and the renovation of residential areas are effective in controlling rodents. Inexpensive and accurate molecular tools such as the LAMP assay are promising and effective for detecting a wide range of Leishmania infections in vectors, reservoirs, and human hosts (Karani et al., 2014). Vaccines, especially DNA vaccines, are appropriate tools for controlling the disease in reservoirs, especially dogs (Tabatabaie et al., 2018).
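
To make the "explosive increase" concrete, the following is a minimal back-of-the-envelope sketch (not from the source) of colony growth under the figures quoted above: roughly 10 newborns per litter and a new litter possible after a month, plus, as simplifying assumptions of our own, a 50% female share, daughters breeding from 1 month of age, and no mortality.

```python
def breeding_females(months, litter_size=10, female_fraction=0.5, start=2):
    """Breeding females after `months` months, assuming one litter per
    female per month, daughters breeding after one month, no mortality."""
    adults = float(start)   # females currently able to breed
    juveniles = 0.0         # female newborns of the current month
    for _ in range(months):
        newborns = adults * litter_size * female_fraction
        adults += juveniles          # last month's daughters mature
        juveniles = newborns
    return adults

for m in (3, 6, 12):
    print(m, "months:", round(breeding_females(m)))
# From 2 founding females: 22 breeders after 3 months, about 600 after
# 6 months, and on the order of hundreds of thousands after a year.
```

Even with substantial mortality discounted from these idealized numbers, the month-on-month multiplication explains why control of rodent population density has to be continuous rather than one-off.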

Vectors: Sand Flies

The main vectors of leishmaniasis are sand flies. In Afghanistan, since most leishmaniasis cases are ACL, the main vector is Phlebotomus sergenti, while Phlebotomus papatasi is the main vector of ZCL (Fig. 4). These insects grow and breed in places where waste is not properly disposed of and in houses where livestock are kept near the residential quarters. Given the lack of an effective vaccine, vector control is an important part of antileishmaniasis programs; measures such as indoor residual spraying, insecticide-treated nets, topical repellents, and insecticide applications in reservoir nests are used as disease control measures (Alexander and Maroli, 2003). Unfortunately, because building codes are not complied with, it is impossible to install nets on the windows and entrance doors of the rooms in most of Kabul's houses. The adaptability of Leishmania parasites and their vectors has allowed the disease to spread into suburban and urban regions, a fact that demands increased efforts in the fields of urban entomology, civil engineering, disease surveillance, and tropical urban ecology.

Fig. 2 (A) Stray dogs as an important reservoir of ACL and VL, Kabul, Afghanistan; (B–D) symptomatic dogs with VL.

Climate

The prevalence and incidence of leishmaniasis are affected by several factors, of which climate is one of the most important. In Afghanistan, leishmaniasis is present in all seasons, but the highest incidence occurs in spring and autumn, which indicates that sand flies are active in all seasons, with the highest bite rates occurring in summer and winter. Studies have shown that, among climatic elements, humidity, temperature, and precipitation have the greatest effect on disease incidence, in that order. It can therefore be expected that in years when temperature, precipitation, and humidity are high, the number of patients will increase. Combining meteorological information (climate variables) with statistical prediction models such as ARMA is a reliable tool for predicting CL cases. In this context, the Afghan Ministry of Public Health should consider further measures in cooperation with the meteorological organization and the municipality; this cooperation should assist public health services in preparing for the future and in designing appropriate intervention strategies. Global warming is an important issue that increases the incidence of leishmaniasis worldwide; notably, most cases of the disease occur in the warmer parts of the world. Owing to global warming and the lack of a regular electricity supply in countries like Afghanistan, people sleep outdoors in the warm seasons without mosquito nets, and the probability of infection consequently increases with the number of infective bites.
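
As a concrete illustration of the ARMA-with-climate-covariates approach mentioned above, here is a minimal sketch, not taken from the cited studies: the covariate names (`temp`, `humidity`, `rain`), the monthly resolution, and the model order are all illustrative assumptions, and the data are synthetic placeholders standing in for surveillance and meteorological records.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("2015-01-01", periods=60, freq="MS")
t = np.arange(60)

# Synthetic climate covariates with a yearly cycle (placeholders only).
temp = 15 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 60)
humidity = 40 + 15 * np.cos(2 * np.pi * t / 12) + rng.normal(0, 2, 60)
rain = np.clip(rng.normal(30, 10, 60), 0, None)
exog = pd.DataFrame({"temp": temp, "humidity": humidity, "rain": rain},
                    index=months)

# Synthetic monthly CL case counts, loosely driven by the covariates.
cases = pd.Series(50 + 1.5 * temp + 0.8 * humidity + 0.3 * rain
                  + rng.normal(0, 5, 60), index=months, name="cl_cases")

# ARMA(2, 1) with climate covariates; in a real study the order (p, q)
# would be selected by AIC/BIC or inspection of the ACF/PACF.
fit = ARIMA(cases, exog=exog, order=(2, 0, 1)).fit()

# Forecast 6 months ahead, given assumed future climate values
# (here the last 6 observed months are reused as placeholders).
future_climate = exog.iloc[-6:].to_numpy()
print(fit.forecast(steps=6, exog=future_climate))
```

In this framing, the autoregressive terms capture month-to-month persistence in case counts while the covariates carry the climate signal; lagging the covariates by the incubation period would be a natural refinement.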

Political Development, the Environment, and Health

Enjoying a healthy environment is an important index of political development, which shows the logical relationship between the environment and the health of individuals and society. Although environmental issues are nowadays directly addressed in development debates, given the direct bearing of environmental issues on health, governments' commitment to health affairs and to the healthy living of citizens makes this matter even more important. Welfare and health are indices of political development that are directly affected by the political situation and the type of governance. The relationship between politics and health can be explained from two different aspects: first, the environmental and development policies of governments have a direct impact on the expansion or deterioration of health affairs; second, the politics and policy of the state in offering health and health services likewise explain the impact of politics on health.

Fig. 3 (A) Rhombomys opimus; (B) Meriones genus, from different parts of Iran.

The first aspect expresses the indirect impact of government policy on health, while the second expresses the direct relationship between health and politics. The most important aspects of the impact of politics on health can be put forward in two dimensions:

a. Direct impacts of politics on health:
1. The health policies of the government and the government's plans for providing health services.
2. Political deterioration, which causes the lack of a standard health system.
3. The ineffectiveness of government policies, which causes increases in disease and a lack of preventive and curative measures.
4. Political deterioration of the government, which prevents the offering of balanced health services.

b. Indirect impacts:
1. Government recklessness toward environmental health, which spreads disease in the community.
2. A policy of neglecting the environment, which destroys the safe environment and clean, healthy life.
3. The government's inability to prevent environmental deterioration, which causes the spread of disease and the loss of healthy life.

In view of the above, it can be said that the policy of the Afghan government in the health sector directly affects the health of citizens, while the environmental policies of the government indirectly affect the health of the community and the people. Environmental deterioration is aggravated by state indifference and the government's incompetence in preventing this inauspicious phenomenon. The government of Afghanistan needs to review its health and environmental policies and adopt effective and appropriate plans (Sajjadi, 2018).

Fig. 4 (A) Sand fly on the wall of a house; (B) sand fly on the human body; (C) Leishmania promastigotes in the body of a sand fly; (D) male P. papatasi; (E) P. sergenti spermatheca; and (F) male P. sergenti, from different parts of Iran.

Leishmaniasis was first identified by Leishman in a British soldier stationed at Dum Dum, India, which itself indicates the relation between war and the disease. In many parts of the world, leishmaniasis has infected many soldiers. Four decades of war and political instability in Afghanistan have prevented the government from accessing remote areas, and where access exists, it does not have the funds to intervene. The war in Afghanistan has forced people to flee from villages and small cities to populous cities such as Kabul, Herat, and Kandahar, spreading the disease among people in the community. The noise of war and human presence in rodent colonies force rodents to move and settle in safer areas, namely urban and rural settlements. War between two countries causes the movement of animals from a common zone to both sides, and diseases prevailing among animals will, in the long run, find a zoonotic aspect and adapt to humans. A soldier infected with ACL is a reservoir of the disease in the camps, and the return of infected soldiers to their disease-free cities can play a significant role in the endemicity of leishmaniasis. Unfortunately, there are no data on the rate of infection in Afghan soldiers, but there are cases of infected US soldiers in Afghanistan and in the Persian Gulf War (Woodrow et al., 2006). During the Iran-Iraq War, 80% of Iranian soldiers in Mehran, western Iran, were infected with CL (Haddad et al., 2016). The war in Afghanistan has caused nomadic displacement and the abandonment of dogs, whose contact with wild carnivores such as wolves leads to the exchange of Leishmania infantum parasites; these dogs later stray into cities and villages and cause illness in children. Because of the war and the lack of political development, people migrate to adjacent areas of neighboring countries, become infected with leishmaniasis, and the disease then emerges when they return to their homeland. The psychological effects of the scars can drive sensitive people to the brink of death. The huge cost of importing expensive, specialized drugs for this disease drains foreign currency and deepens the country's dependence. During war, many people are exposed to chemical bombs, and owing to the side effects of these weapons, leishmaniasis lesions can become extensive and catastrophic (Hajjaran et al., 2013).

Conclusions

Leishmaniasis is a neglected tropical disease. The two terms, tropical and neglected, emphasize the strong association of this disease with environmental factors. Global warming has caused climate change, which has widened the distribution of the disease across the globe. In countries facing poverty and destruction of infrastructure, such as Afghanistan, the environment has undergone extensive deterioration. Migration to large cities has produced marginal settlements and buildings near rodent nests, and uncontrolled migration has swelled the population of big cities, which increases the risk of ACL. The lack of attention to environmental issues has increased the population of stray dogs in the cities, which increases the risk of VL and ACL. Failure to manage waste, its long-term accumulation in cities, and living alongside livestock create favorable conditions for sand fly breeding, increasing the number of people at risk. All of the above suggests that governments in countries like Afghanistan should place a special focus on environmental issues and infrastructure reconstruction.

Acknowledgement

We are grateful to Mr. Ghulam Mojtaba and Mr. Murtaza Hadari for their assistance.

References

Alexander, B., Maroli, M., 2003. Control of phlebotomine sandflies. Medical and Veterinary Entomology 17, 1–18.
Alvar, J., Velez, I.D., Bern, C., Herrero, M., Desjeux, P., Cano, J., Jannin, J., Den Boer, M., WHO Leishmaniasis Control Team, 2012. Leishmaniasis worldwide and global estimates of its incidence. PLoS One 7, e35671.
Faulde, M.K., Heyl, G., Amirih, M.L., 2006. Zoonotic cutaneous leishmaniasis, Afghanistan. Emerging Infectious Diseases 12, 1623.
Faulde, M.K., Schrader, J., Heyl, G., Amirih, M., Hoerauf, A., 2008. Zoonotic cutaneous leishmaniasis outbreak in Mazar-e Sharif, northern Afghanistan: An epidemiological evaluation. International Journal of Medical Microbiology 298, 543–550.
Haddad, M.H.F., Ghasemi, E., Maraghi, S., Tavala, M., 2016. Identification of Leishmania species isolated from human cutaneous leishmaniasis in Mehran, western Iran using nested PCR. Iranian Journal of Parasitology 11, 65.
Hajjaran, H., Mohebali, M., Akhavan, A., Taheri, A., Barikbin, B., Soheila, N.S., 2013. Unusual presentation of disseminated cutaneous leishmaniasis due to Leishmania major: Case reports of four Iranian patients. Asian Pacific Journal of Tropical Medicine 6, 333–336.
Karani, M., Sotiriadou, I., Plutzer, J., Karanis, P., 2014. Bench-scale experiments for the development of a unified loop-mediated isothermal amplification (LAMP) assay for the in vitro diagnosis of Leishmania species' promastigotes. Epidemiology and Infection 142, 1671–1677.
Mosawi, S.H., Dalimi, A., 2015. Molecular detection of Leishmania spp. isolated from cutaneous lesions of patients referred to Herat regional hospital, Afghanistan. Eastern Mediterranean Health Journal 21, 878.
Sajjadi, A., 2018. Foreign policy of Afghanistan. Khatam al Nabieen University Press, Afghanistan.
Shirzadi, M.R., 2012. Leishmaniasis care guide in Iran. The Ministry of Health and Medical Education Press, Tehran, Iran.
Tabatabaie, F., Samarghandi, N., Zarrati, S., Maleki, F., Ardestani, M.S., Elmi, T., Mosawi, S.H., 2018. Induction of immune responses by DNA vaccines formulated with dendrimer and poly (methyl methacrylate) (PMMA) nano-adjuvants in BALB/c mice infected with Leishmania major. Open Access Macedonian Journal of Medical Sciences 6, 229.
Woodrow, J.P., Hartzell, J.D., Czarnik, J., Brett-Major, D.M., Wortmann, G., 2006. Cutaneous and presumed visceral leishmaniasis in a soldier deployed to Afghanistan. Medscape General Medicine 8, 43.

Further Reading

Ashford, R.W., 1996. Leishmaniasis reservoirs and their significance in control. Clinics in Dermatology 14, 523–532.
Berry, I., Berrang-Ford, L., 2016. Leishmaniasis, conflict, and political terror: A spatio-temporal analysis. Social Science & Medicine 167, 140–149.
Kamhawi, S., 2006. Phlebotomine sand flies and Leishmania parasites: Friends or foes? Trends in Parasitology 22, 439–445.
Reithinger, R., Aadil, K., Kolaczinski, J., Mohsen, M., Hami, S., 2005. Social impact of leishmaniasis, Afghanistan. Emerging Infectious Diseases 11, 634.

Environmental Health: An overview on the Evolution of the Concept and its Definitions

O Santos, A Virgolino, RR Santos, J Costa, A Rodrigues, and A Vaz-Carneiro, University of Lisbon, Lisbon, Portugal
© 2019 Elsevier B.V. All rights reserved.

Introduction

During the last decades, especially from the 1970s onwards, a substantial amount of evidence about the impact of environmental factors on human health has been produced. The main concern with regard to the interplay between environment and human health has been exposure to chemical, biological, and physical factors in soil, water, and air. Less frequently, environmental health (EH) professionals, researchers, and relevant stakeholders in this area have also considered factors from the psychosocial environment as determinants of health status. More recently, research has broadened its focus to embrace the digital world as an additional environmental 'layer', with implications both for human health and for the sustainability of natural resources. The concept of digital pollution is aptly summarized in the following excerpt from Judy Estrin and Sam Gill, published in Washington Monthly: "digital pollution is more complicated than industrial pollution. Industrial pollution is the by-product of a value-producing process, not the product itself. On the internet, value and harm are often one and the same. It is the convenience of instantaneous communication that forces us to constantly check our phones out of worry that we might miss a message or notification. It is the way the internet allows more expression that amplifies hate speech, harassment, and misinformation than at any point in human history. And it is the helpful personalization of services that demands the constant collecting and digesting of personal information. The complex task of identifying where we might sacrifice some individual value to prevent collective harm will be crucial to curbing digital pollution. Science and data inform our decisions, but our collective priorities should ultimately determine what we do and how we do it." This new vista is linked to environmental health because of the health impacts arising from human-digital interactions, in terms of both physical health (e.g., weight gain, sleep disorders) and psychological health (e.g., addictive behavior, depression, hikikomori syndrome). On the other hand, with regard to the impact of human behavior on the environment, it is also relevant to address digital pollution in terms of the ecological footprint associated with the use of digital (namely, internet-based) systems. Indeed, the continuous increase of massive digital consumption has serious implications in terms of increased demand for energy, depletion of natural resources, toxic waste production, air pollution, and global heating. In 2015, Andrae and Edler (see the Further Reading section) estimated that digital communication technologies may contribute up to 23% of globally released greenhouse gas emissions by 2030. Several efforts by different academic and non-academic institutions, including governmental organizations, have been made to define and operationalize the object of study and the area(s) of intervention of EH. These endeavors have their roots in the contemporaneous projection of the 'biopsychosocial paradigm of health', first proposed by Engel in 1977, but also in the increasing prominence of health promotion (e.g., the Ottawa Charter for Health Promotion as a key milestone, in 1986) and health protection as strategic pillars in the fight against disease and for providing longevity with quality of life and adequate functionality.
Against this background, a parallel between the evolution of the concepts of health and of environments (as applied to health) can also be traced. Health departed from a strong biomedical perspective, mostly centered on toxic agents and physiological parameters, and moved towards a biopsychosocial perspective (mainly throughout the second half of the 20th century), which also considers social and psychological (cognitive, emotional, behavioral) pathogenic agents and parameters. This paradigm shift is associated with gains in human longevity and a dramatic modification of disease profiles, from acute to chronic diseases, and of the main causes of death, from infectious diseases to behavior-related diseases (e.g., cancer, diabetes, or obesity), especially in the second half of the 20th century. Adding complexity to health as a construct implied the above-stated expansion of the list of environments currently recognized as potentially affecting health, complementing the traditional toxic-related environmental factors with psychosocial and, more recently, digital (mainly internet-based) factors. Global changes, such as world population growth and the shortening of distances through the globalization of cultures and consumption behaviors, together with efforts to meet standard levels of quality of life across the planet and the increase in human longevity, are achievements of humankind that have implied significant costs for natural resources. Therefore, although the main focus of EH has been on reducing exposure to environmental hazards and the associated risks to human health, it is now increasingly consensual that sustainable human health, as well as the long-term wellbeing of individuals and communities, is totally dependent on sustainable and healthy natural ecosystems. Within this scope, the Rockefeller Foundation-Lancet Commission on Planetary Health highlighted in 2014 the need to enlarge the meaning of EH towards a more comprehensive and bidirectional interpretation of the terms 'environment' and 'health'. Accordingly, the public health and epidemiology perspectives, mainly concerned with the identification of environmental hazards to human health and of pathways to avoid or reduce environmentally induced disease, should be complemented by guaranteeing that human health and survival do not imply the exhaustion of natural supporting systems.


When Health Meets Environment

The influence of the environment on the lives of human beings has probably been acknowledged since the beginning of our history. All living beings depend on the environment they inhabit for the provision of a wide range of natural resources such as energy, clean air, drinking water, nutritious food, and safe places to live, just to name a few. It is undeniable that access to these resources is a precondition for any species to prosper on Earth, humans included, and this consciousness goes back to prehistoric hunter-gatherer societies. Interestingly, going back in time, responsibility for the provision of food, or for any event compromising resource availability, was attributed to the will of spirits or gods or even 'Mother Nature', revealing a strong human sense of devotion and gratitude towards nature. This more or less mystical human-nature connection has not gone extinct, being nowadays most prominent among the remaining nomadic and indigenous communities that still follow a simple rationale: nature offers, and humans receive and give thanks for that offer. For example, rainmaking rituals (e.g., "Who brings rain, brings life", say the Zulu) are still performed by some indigenous communities. A detailed analysis of the relation between humans and nature through time is beyond the scope of this article; however, it is worth mentioning that different rituals, beliefs, and attitudes towards nature throughout history perfectly illustrate the perception of human dependence on nature and, more recently, point to an interdependence between the two. Living in community, whether nomadic or sedentary, is a serious challenge, especially concerning the maintenance of a housing environment free of health risks. A paradigmatic example from the natural world is that of ants, social insects that live in extremely organized groups. Ants are the most widely distributed group of insects on Earth, and their ecological success relies on the survival benefits of living in groups. However, this evolutionary advantage comes with major costs, one example being the increased risk of infection and disease caused by parasites and pathogens (bacteria, viruses, protozoans, helminths, and fungi). Although lacking an immune system at the colony level, ants have developed multiple chemical and behavioral mechanisms to ensure a collective defense against environmental risks. These mechanisms are broadly referred to as 'social immunity' and include the use of antimicrobial substances as nest material as well as specific sanitation behaviors, such as the elimination of waste products from the nest and the removal of corpses from the nest's social area. The latter is a very interesting and, at the same time, intriguing behavior. The sanitary risks of corpses are well known, and these are even greater for dense populations of social animals living in an enclosed area. Among several other mechanisms to counter the threat of epidemic disease caused by corpses, ants have developed complex processes of corpse management in terms of recognition (e.g., accumulation of oleic acid in dead bodies), assessment (i.e., postmortem time, infection status, and origin), and response (i.e., cannibalism, burial, and corpse removal). Overall, these behaviors reveal an essential adaptation for systematically coping with environmental hazards, in particular those arising from living in a group, and with their impacts on community health.
These fixed behavioral patterns of ants perfectly illustrate the concerns that have been driving changes in human behavior over the last 12,000 years. It is argued that the Neolithic Revolution was a wide-scale transition from hunter-gatherer societies to settled, agricultural societies, creating the conditions for world population growth to take place. However, the inefficient sanitary conditions of the first human settlements (unlike those of ants) might have decelerated population growth. And this happened despite the efforts of those societies: remains of water pipes, toilets, and sewage lines, some dating from more than 4000 years ago, are found in the ruins of ancient civilizations in India, Rome, Greece, and Egypt. More recently, UNICEF and WHO reported that, in 2015, approximately 350,000 children under age five died from diarrheal diseases caused by drinking non-potable water, around 1.8 billion people consumed water contaminated with feces, and more than 2 billion people lacked access to basic sanitation facilities. These data show that unsafe drinking water and poor sanitation and hygiene conditions are still responsible for a variety of infectious, sometimes deadly, diseases. The Industrial Revolution contributed substantially to the growth of the world population and, consequently, to the growth of highly dense urban environments during the 18th and 19th centuries. This growth of urban population clusters was accompanied by numerous and continuous challenges, namely with regard to the exploitation of resources to meet population needs (e.g., safe food, clean water) and exposure to environmental hazards (e.g., the cholera outbreak in New York in 1849), challenging communities to construct efficient management systems, including for waste. At the beginning of the 20th century, the world population was approximately 1.65 billion. In the first half of that century, the world faced two devastating global wars, encountered the horror of the Holocaust, and experienced the destructive effects of the nuclear bombings of Hiroshima and Nagasaki. This was perhaps the first time humanity realized the power and extent of its destructive capacity, and recognized the two-way direction of such behaviors: humans influence the environment, and in turn the environment influences human life. This idea was expressed by the American ecologist Aldo Leopold in his book "A Sand County Almanac, and Sketches Here and There", published in 1949. Leopold defined "land health" as a condition under which "the land could be humanly occupied without rendering it dysfunctional"; that is, he argued that the degree to which the landscape satisfies human needs necessarily involves an assessment of the land's health. This insight was quite remarkable, because it (1) implies recognition of the cultural value of wildlife and (2) acknowledges humans' ethical responsibility for keeping natural ecosystems intact and healthy. It was followed by the development of an ethical model that recognized the ecological interdependence between land and humans, usually known as a model centered on ecosystems, or ecocentric ethics. Later, in 1957, to fight the defoliation of deciduous trees, the U.S. Federal Government implemented a gypsy moth eradication program based on the use of DDT (an organochlorine pesticide) and other pesticides.
A few years later, in 1962, the marine biologist Rachel Carson published the well-known book "Silent Spring", which focused on the effects of DDT on ecosystems while stressing the lack of regulation of its widespread use. The author also explained how DDT enters the food chain and accumulates in the adipose tissues of animals, ultimately increasing the risk of cancer and congenital disorders in humans. Although Carson is usually referred to as the 'mother' of environmental activism, her main contribution was to place human health in the context of larger environmental processes. In her own very insightful words, "Man's attitude toward nature is today critically important simply because we have now acquired a fateful power to alter and destroy nature. But man is a part of nature, and his war against nature is inevitably a war against himself." Leopold's and Carson's contributions have been followed by several others, and we would like to finish this section by pointing out the "Gaia hypothesis", proposed in 1972 by the chemist James Lovelock. This hypothesis is mostly based on the concept of organismal health and is essentially an extension of Leopold's metaphor of "land health". Lovelock advanced the concept of "planet health": the planet has an intrinsic capacity for self-regulation through a combination of complex defense mechanisms at the service of homeostasis and, consequently, of assuring its survival. Despite the emergence of environmental activism, this is far from enough, given the societal changes towards extraordinarily more demanding societies in terms of individual, social, and technological living needs. Mass societies created mass production and mass consumption. In fact, as pointed out by Gilles Lipovetsky, hyperconsumption and hypermodern individualism mark the new era prevailing in most developed societies, which are made of increasingly artificial places called cities, more and more disconnected from nature and with an ever more liquid form of social life, in the words of Zygmunt Bauman. Notwithstanding, these societies keep nature symbolically, as a piece of grass in the backyard, a community park with a well-aligned set of trees, or a wall painted green. Even so, social and political awareness of the environment's need for protection persisted and resisted over time, and eventually contributed to the United Nations Conference on the Human Environment, held in Stockholm in 1972, marking a turning point in the development of international environmental policies by placing human health in the context of larger ecological processes (i.e., health is not just a sanitary attribute of the place we live in). Twenty years later, at the United Nations Conference on Environment and Development, held in Rio de Janeiro in 1992, it was acknowledged that sustainable development is achieved by considering both environmental protection and human health as priorities for the survival and prosperity of future generations. The establishment of a link between health and environment was based on a simple idea: local human activities also have a negative global impact on the environment, which will sooner or later affect human health.

Linking Human Health to Ecosystem Services

The concept of EH has evolved in different directions, as previously discussed. In line with the change to a more comprehensive view of the interplay between humans and environmental resources and threats, a new paradigm is becoming more relevant as a transdisciplinary effort to bring human activities and the environment together from a sustainability standpoint. Following this new paradigm, several other terms and concepts have emerged, such as EcoHealth, One Health and, more recently, Planetary Health. The concepts of One Health and EcoHealth, though close to Planetary Health, are less focused on environmental sustainability, centering research efforts on how changes in the earth's ecosystems affect human health. Indeed, One Health has a strong biomedical dimension and addresses more health-directed questions. EcoHealth goes further in applying an ecosystem approach to health and devotes its attention to both environmental and socioeconomic health determinants. Both concepts offer, arguably, a rather unidirectional and anthropocentric perspective on the interplay between health and environment. Planetary Health, the newest of the three concepts, goes even further than EcoHealth in acknowledging the intricate links between human health and the health of our planet. It has theoretical roots in the fields of epidemiology and public health (traditionally concerned with the health of human communities and less focused on the surrounding natural ecosystems), raising the need to consider emerging threats to the natural and human-made systems that support humanity, within a long-term and sustainable perspective. This is undoubtedly a multidisciplinary and multilevel approach that recognizes our dependence on a healthy planet Earth. Nevertheless, there has been much discussion of the need to consider these three concepts as equally valuable and complementary in their focus. Indeed, a working group focused on the promotion, support, and implementation of interventions that may potentially improve Planetary Health, One Health, and Environmental Health outcomes was created at the 2019 annual conference of the Consortium of Universities for Global Health. The main difference between these approaches and earlier attempts to understand the interplay between environment and health is the grounding of the former in complexity theories, which encompass the need to better understand (1) the effect of cumulative exposure, (2) the interaction between environmental determinants and how this interaction, rather than each determinant considered in isolation, impacts human health, and (3) human physiological changes throughout life. The acknowledgement of such complexity in the interaction between exposure-related mechanisms and health is closely associated with the growth of exposomics, particularly after 2005. Exposomics deals with cumulative measurements of exposure to all kinds of environmental factors, and the associated biological responses, within a lifespan perspective. The three pillars of exposomic research are (1) the general external environment, including factors such as the urban environment, climate factors, social capital, and stress inductors; (2) the specific external environment, such as specific contaminants, diet, physical activity drivers or obstacles, tobacco, and infection sources, among others; and (3) the internal environment, which includes internal biological factors such as metabolic factors, gut microflora, inflammation, and oxidative stress.
Inevitably, the knowledge created by assuming an exposomic perspective depends on the development of other -omic techniques (e.g., genomics, transcriptomics, proteomics, metabolomics, epigenomics).


Exploring Perspectives and Definitions of Environmental Health

Over the last centuries, there has been a growing interest in the intersection between health and environment. A reflection of this movement was the emergence of the concept of ‘environmental health’, an apparently straightforward term resulting from the combination of two now closely connected dimensions, each intrinsically complex in itself. But what do we really mean by EH? What are we referring to when we discuss and plan EH actions? Several definitions have been proposed for EH. An overview of some of the proposals is presented in Table 1, and a comprehensive analysis of the main components of each definition is provided in Table 2. Eight out of nine of the identified perspectives come from large-scale institutions or organizations conducting research or playing a leading role as policy-making advisors in this area. Most of the definitions rest on the concepts of environmental exposure and health effects, therefore assuming a cause-to-effect relation between environment and health: environmental factors and their modification act as determinants of health status. Environmental hazards can be expressed at three levels: individual (e.g., at home), local/community (e.g., air, food, water contaminants) and global (e.g., climate change). Although the individual level is not totally disregarded, EH is not typically centered on the person, but instead targets the population or local community, as is usually the case in the field of public health. Indeed, EH is frequently proposed as a branch of public health and is sometimes referred to as “environmental public health”. Interestingly, almost all definitions have a narrow scope regarding the type of environment under consideration: most of them only include factors from the physical, chemical and biological environments, as well as related behaviors. These definitions reflect the main concerns of EH practitioners when the discipline was first created: deforestation and soil erosion caused by population growth and related human activities, the threat of air and water pollution to populations’ health, occupational hazards in working environments, sanitation issues, and pesticides used in agriculture and their potentially harmful effects after food consumption. Consequently, most institutions are devoted to the study of environmental contaminants in air, soil, water and food. Moreover, EH research topics are typically aligned with the five domains identified by the New Zealand Institute of Environmental Health (NZIEH; air quality, land management, building habitability, water quality, and food safety) and include sanitation facilities, drinking water contamination, waste treatment and disposal, air pollution (e.g., the transportation sector, industries), indoor air quality, soil contamination (e.g., metals, waste disposal), pesticide use, food safety, and agricultural methods (e.g., irrigation), among others. It can be assumed that the prevailing perspective on EH is rather conservative, and only a few definitions consider environment in a broader sense. Accordingly, three definitions given in Table 1 (see also Table 2) go further and define EH as a discipline that addresses factors from the physical, chemical, biological and psychosocial environments. In spite of this, all definitions refer to factors that are external to human beings and have the potential to affect their health.
Although not clearly stated in all definitions, the environmental factors under consideration are viewed as modifiable in the short-to-long term via human interventions. Broadly speaking, these factors and hazards can be grouped into natural (i.e., naturally occurring, such as natural radiation) or human-made (i.e., resulting from human activities, such as air pollution from coal combustion). The objectives of EH are commonly expressed as “disease prevention”, “protection against environmental hazards”, “improvement of people’s health”, “health and wellbeing promotion” or “prevention of human injury and illness”. To meet these general aims, EH practitioners assess population exposure to environmental hazards, searching for data to study and interpret the intricate links between environment and health. This is obviously a multidisciplinary task involving professionals from different, though complementary, areas: medical doctors, epidemiologists, chemists, biochemists, sanitary engineers, biologists, physicists, climatologists, geographers, urbanists, architects, psychologists, communication experts and sociologists, among others. The term “environmental epidemiology” is often associated with research teams devoting their efforts to EH questions, which is entirely aligned with the process described above: assessments of the population in order to study health effects resulting from exposure to environmental factors. Although EH works at the population level with no restriction on age groups, more vulnerable sub-groups of the population, such as children, pregnant women and the elderly, may be targeted by specific EH programs. Finally, data from exposure assessments are used to produce recommendations on how to reduce human exposure to environmental factors that have the potential to cause harmful health effects.

Public Perception of Environmental Health

In the previous section, we addressed the efforts towards a definition of EH from the EH practitioners’ point of view. However, a more complete understanding of EH cannot be achieved without insights into its social representation among nontechnical audiences, i.e., the public perception of EH. This is especially important because EH and environmental public health promotion depend on effective communication of risk, both to the general public and to non-expert stakeholders. As noted by Per Stoknes in 2014 (see Further Reading), the effects of climate change on human health are usually neglected or undervalued by the public, not least because the effects of climate change on individuals’ health or wellbeing seem too distant in time, space and influence. There seems to be a detachment between individuals’ attitudes and actions towards the environment and the promotion of their own (and their close relatives’) health. Thus, the question arises: if environmental issues are so often absent from public thought, how is EH interpreted by the public? To our knowledge, there are no studies exclusively addressing the social representations of EH. Previous studies assessing public perception in this context focused on specific environmental hazards and the risks they pose to health.

Table 1 Definitions of environmental health (EH). Definitions for related disciplines, i.e., global EH, environmental epidemiology and public EH, are also provided

World Health Organization (WHO)a: “Environmental health addresses all the physical, chemical, and biological factors external to a person, and all the related factors impacting behaviors. It encompasses the assessment and control of those environmental factors that can potentially affect health. It is targeted towards preventing disease and creating health-supportive environments. This definition excludes behavior not related to environment, as well as behavior related to the social and cultural environment, and genetics.”

Institute of Medicine US Committee on Enhancing Environmental Health Content in Nursing Practice (IM-US CEEHCNP)b: “The environmental hazards of concern in this report fall into four widely accepted classes: chemical, physical, biological, and psychosocial. Such hazards may be naturally occurring, such as radon or ultraviolet light from the sun, or they may be manmade (or “constructed”), such as particulates and gases released into the environment from automotive exhaust, industrial sources or tobacco smoke. As these examples demonstrate, environmental hazards may be encountered in the home, workplace, and community environments. (…) Taken in this context, use of the term environmental health (…) refers to freedom from illness or injury related to exposure to toxic agents and other environmental conditions that are potentially detrimental to human health.”

Gordonc: “Environmental health and protection refers to protection against environmental factors that may adversely impact human health or the ecological balances essential to long term human health and environmental quality, whether in the natural or human-made environment. These factors include but are not limited to air, food and water contaminants, radiation, toxic chemicals, wastes, disease vectors, safety hazards and habitat alterations.”

National Institute of Environmental Health Sciences (NIEHS)d: “Environmental health sciences (EHS) research is aimed at discovering and explaining how factors, including chemical, physical, synthetic, and infectious agents; social stressors; diet and medications; and our own microbiomes, among others, affect biological systems. The knowledge generated by EHS, inclusive of interactions between humans, animals, and our natural and built environments, provides a critical component of our understanding of human health and disease.”

National Environmental Health Association (NEHA)e: “Environmental health is the science and practice of preventing human injury and illness and promoting well-being by a) identifying and evaluating environmental sources and hazardous agents and b) limiting exposures to hazardous physical, chemical, and biological agents in air, water, soil, food, and other environmental media or settings that may adversely affect human health.”

New Zealand Institute of Environmental Health (NZIEH)f: “A simple definition for Environmental health is that it is the study and management of environmental factors and their impact on human health, and more particularly the health of communities. (…) Environmental Health has been responsible for improving our life expectancy and quality of life. Practitioners have been instrumental in reducing air pollution, improving standards in housing and food safety, and mitigating infectious disease and effects of disasters. The components that make up Environmental Health can be grouped (…) as: air quality, land management, building habitability, water quality and food safety.”

The Royal Environmental Health Institute of Scotland (REHIS)g: “Environmental health is that area of Public Health activity which strives to improve, protect and maintain health and well being through action on the physical environment and on life circumstances.”

American Public Health Association (APHA)h: “Environmental health is the branch of public health that: focuses on the relationships between people and their environment; promotes human health and well-being; and fosters healthy and safe communities. Environmental health is a key part of any comprehensive public health system. The field works to advance policies and programs to reduce chemical and other environmental exposures in air, water, soil and food to protect people and provide communities with healthier environments.”

Environmental Health Institute (ISAMB)i: “Environmental health can be defined as the assessment and management of ‘modifiable’ environmental influences from chemical, physical, biological, digital, social, psychological factors on human health and wellbeing, as well as all behaviors related to the physical, social and cultural environment. As a science, Environmental Health is concerned with all aspects of the natural and built environment that may affect human beings, from the earliest stages of development throughout life. It is also concerned with individuals’ and communities’ actions with implications for the quality and sustainability of natural environment systems. When managed and implemented effectively, Environmental Health can promote health, reduce disease burden, increase productivity and reduce the demand on the health services.”

National Institute of Environmental Health Sciences (NIEHS)j: Global environmental health: “Research, education, training, and research translation directed at health problems that are related to environmental exposures and transcend national boundaries, with a goal of improving health for all people by reducing the environmental exposures that lead to avoidable disease, disabilities and deaths.”

Institute at Brown for Environment and Society (IBES)k: “Public environmental health confronts major challenges to protect populations from the effects of physical, chemical and biological agents in both developed and developing countries.”

CDC’s National Center for Environmental Health (NCEH)l: “Environmental public health focuses on protecting groups of people from threats to their health and safety posed by their environments. Protecting people from environmental health threats requires an understanding of basic human needs and how the environment can affect them.
• Basic physical needs that are required for life: Air, Water, Food, Shelter
• Needs for community that make life easier: Family, Church or other social group, Access to medical care, Jobs, Resources, Safety, Sanitation
• Emotional, spiritual, relational needs that contribute to personal happiness: A sense of control of life choices and events, Fulfillment, Ability to be close to others”

Institute for Advanced Biosciences (IAB), Environmental Epidemiology applied to Reproduction and Respiratory healthm: “Environmental epidemiology is one of the main disciplines of environmental health research. Environmental epidemiology relies on state of the art tools in population sampling, exposure assessment, biochemistry (to assess exposure biomarkers), molecular biology (e.g. to identify epigenetic changes in human subjects) and biostatistics. The general aim of our research group in environmental epidemiology is to identify preventable risk factors of altered reproductive and respiratory health, focusing on environmental risk factors, and specifically air pollutants, in a life-course epidemiology approach.”

a WHO (2019). Environmental health. http://www.searo.who.int/topics/environmental_health/en/ (accessed March 2019).
b Institute of Medicine (US) Committee on Enhancing Environmental Health Content in Nursing Practice, Pope, A. M., Snyder, M. A., Mood, L. H. (eds.) (1995). Nursing, health & the environment: strengthening the relationship to improve the Public’s Health. Washington, DC: National Academy Press.
c Gordon, L. J. (1993). The future of environmental health, part 1. Journal of Environmental Health 55, 28–32, cited from Kotchian, S. (1997). Perspectives on the place of environmental health and protection in public health and public health agencies. Annual Review of Public Health 18, 245–259.
d National Institute of Environmental Health Sciences (2018). 2018–2023 Strategic plan: Advancing environmental health sciences, improving health. https://www.niehs.nih.gov/about/assets/files/niehs_strategic_plan_20182023_508.pdf (accessed March 2019).
e National Environmental Health Association (2019). Definitions of environmental health. https://www.neha.org/about-neha/definitions-environmental-health (accessed March 2019).
f New Zealand Institute of Environmental Health (2019). About environmental health. https://www.nzieh.org.nz/about-us/about-environmental-health/ (accessed March 2019).
g The Royal Environmental Health Institute of Scotland (2019). What is REHIS? https://www.rehis.com/about/whats-rehis (accessed March 2019).
h American Public Health Association (2019). Environmental Health. https://www.apha.org/topics-and-issues/environmental-health (accessed March 2019).
i Environmental Health Institute (2019). ISAMB. http://isamb.medicina.ulisboa.pt/en/isamb-2/ (accessed March 2019).
j National Institute of Environmental Health Sciences (2019). Global environmental health. https://www.niehs.nih.gov/research/programs/geh/index.cfm (accessed March 2019).
k Institute at Brown for Environment and Society (IBES) (2019). Environmental Health. https://www.brown.edu/academics/institute-environment-society/research/environmental-health (accessed March 2019).
l CDC’s National Center for Environmental Health (NCEH) (2019). What is environmental public health? https://blogs.cdc.gov/yourhealthyourenvironment/2014/04/22/what-is-environmental-public-health/ (accessed March 2019).
m Institute for Advanced Biosciences (IAB), Environmental Epidemiology applied to Reproduction and Respiratory health (2019). https://iab.univ-grenoble-alpes.fr/research/department-prevention-and-therapy-chronic-diseases/team-slama-environmental-epidemiology-applied-reproduction-and-respiratory-health-e2r2h (accessed March 2019).

Interestingly, EH professionals often find that the public perception of an EH risk factor differs widely from that of the experts. This reveals the challenging character of the public perception of EH, determined by at least seven dimensions (Fig. 1). Although evidence on public perception of EH is limited, it is worth mentioning an interesting study conducted in 2000 by Ross F. Conner and Sora Park Tanjasiri (University of California, Irvine), which described a community-based health improvement program involving large and small urban and rural communities in Colorado. The authors found that those communities often perceived EH as a key aspect of the concept of a “healthy community”. Moreover, urban communities seemed to incorporate more environmental concerns than rural and frontier communities, including land use, the environment as a risk factor, impacts on the environment, sustainable communities, resource allocation and energy use, among others. This is not, of course, a lay definition of EH, but it nicely captures the public perception of a healthy environment. Studies addressing the perception of EH risks are subject to a bias based on ‘fear’. This means that the public reaction towards an environmental hazard is not necessarily related to the likelihood of exposure, but instead to the feared consequences. This perception is strongly focused on land and air contamination, and heavily constrained by social and cultural contexts.


Table 2 Environmental health definitions: environment(s), environmental factors/hazards, and areas/topics of intervention

WHOa
• Environment(s): chemical, physical, biological
• Environmental factors/contaminants: chemical, physical and biological environmental factors; modifiable environmental factors
• EH practice/topics: exposure assessment; disease prevention; health-supportive environments creation

IM-US CEEHCNPb
• Environment(s): chemical, physical, biological, psychosocial; natural and human-made (built) environments
• Environmental factors/contaminants: natural or human-made
• EH practice/topics: exposure assessment; human injury and illness prevention

Gordonc
• Environment(s): natural or human-made
• Environmental factors/contaminants: contaminants in the air, food and water; radiation; toxic chemicals; wastes; disease vectors; safety hazards and habitat alterations that impact human health and ecological balances
• EH practice/topics: protection against environmental hazards

NIEHSd
• Environment(s): chemical, physical, biological, social, behavior
• Environmental factors/contaminants: chemical, physical, synthetic, and infectious agents; social stressors; diet and medications; and our own microbiomes
• EH practice/topics: exposure assessment; public health action information and support

NEHAe
• Environment(s): chemical, physical, biological
• Environmental factors/contaminants: contaminants in the air, water, soil, food, among others
• EH practice/topics: human injury and illness prevention; well-being promotion; exposure assessment

NZIEHf
• Environment(s): chemical, physical, biological
• Environmental factors/contaminants: air pollution, housing and food hazards, infectious disease and disasters
• EH practice/topics: health improvement; exposure assessment; policy/actions development; protection against environmental hazards

REHISg
• Environment(s): physical, life circumstances
• Environmental factors/contaminants: modifiable factors; physical environmental contaminants
• EH practice/topics: health and wellbeing improvement, protection and maintenance

APHAh
• Environment(s): chemical, physical, biological
• Environmental factors/contaminants: contaminants in the air, water, soil and food
• EH practice/topics: human health and wellbeing promotion; healthy and safe communities/environments creation; advancing policies and programs to reduce exposure to environmental contaminants

ISAMBi
• Environment(s): chemical, physical, biological, psychosocial, cultural, behavior; natural and human-made (built) environments
• Environmental factors/contaminants: modifiable environmental factors that impact human and community health
• EH practice/topics: lifespan perspective; health promotion; disease burden reduction, productivity increase and demand on the health services reduction

Notes:

(a) EH, Environmental Health; WHO, World Health Organization; IM-US CEEHCNP, Institute of Medicine (US) Committee on Enhancing Environmental Health Content in Nursing Practice; NIEHS, National Institute of Environmental Health Sciences; NEHA, National Environmental Health Association; NZIEH, New Zealand Institute of Environmental Health; REHIS, The Royal Environmental Health Institute of Scotland; APHA, American Public Health Association; ISAMB, Environmental Health Institute. (b) In italics are the environments identified in the definitions and that fall outside the widely accepted categories: chemical, physical, biological, psychosocial, cultural and behavior. (c) Letters in superscript are references for each definition and the list is provided at the bottom of Table 1.

However, this also makes it clear that risk perception is an important component in setting priorities for health promotion and intervention. A national survey on public perceptions, attitudes and values towards the environment conducted in Ireland in 2006 is a particularly telling example of this perspective. It found that almost half of Irish adults considered waste management the most important environmental issue in Ireland at that time, followed by a generally low level of satisfaction with water quality. According to the Special Eurobarometer 340 (Science and Technology), conducted in 2010, European citizens are moderately interested in environmental problems and new medical discoveries, and also perceive themselves to be moderately informed about these areas. Curiously, the Special Eurobarometer 419 (Public perceptions of science, research and innovation), applied in 2014, sought to compare the impact of people’s actions with the impact of science and technological innovation over the following 15 years on selected areas in the EU-28. Generally speaking, European citizens expected science and technological innovation to have a much more positive impact than people’s actions in all selected areas (health and medical care, protection of the environment, availability and quality of food, energy supply, the fight against climate change, and quality of housing). Overall, differences in public perception depend on the scale (local versus global) and are affected by a communication gap between the authorities’ response and public expectations. Another important component of the public perception of EH is the emergence of the concept of ‘environmental health literacy’. According to the Society for Public Health Education, EH literacy is defined as “the wide range of skills and competences that people need in order to seek out, comprehend, evaluate, and use environmental health information to make informed choices, reduce


Fig. 1 Public perception of environmental health. This is a challenging construct to which at least seven dimensions contribute: media narrative, sociocultural context, risk perception, personal and collective beliefs, the authorities’ capacity to communicate efficiently, expert narrative, and environmental health literacy. Because public perception is affected by all these items, it is plural and therefore quite variable. It also means that, to effectively measure the public perception of environmental health, all of these items should be taken into consideration.

health risks, improve quality of life and protect the environment”. In a recent review of the representations of EH literacy by Kathleen M. Gray, three dimensions of EH literacy were suggested: (1) awareness and knowledge, (2) skills and self-efficacy and (3) community change. Their integration is expected to prompt effective individual and community-level actions aimed at health protection. In light of the above, we strongly believe that the public perception of EH must be considered both for a broad definition of EH and for the design of EH actions. Greater investment in research in this area should be made.

Final Remarks

The interface between environment and health is a relevant topic, both due to its contemporaneity (according to the World Health Organization, almost a quarter of all deaths are attributable to exposure to environmental risk factors) and to its comprehensiveness. Throughout the years, the concept of EH has evolved to accommodate the rapid development of science and technology. However, ongoing global environmental changes, mostly human-induced, challenge the traditional and widely accepted definitions of EH. In view of this scenario, the scope of EH needs to be expanded so as to satisfactorily include some of the currently pressing questions of the environmental agenda, namely climate change, human mobility, digital and robotic systems, and the worldwide spread of disease vectors and infectious agents, among others. Moreover, we have witnessed large-scale transformations with immediate and future health effects that require the adoption of a global perspective with regard to EH. More recently, some efforts have been made in this respect, one of them being the establishment of a new concept, ‘global environmental health’, defined as “Research, education, training, and research translation directed at health problems that are related to environmental exposures and transcend national boundaries, with a goal of improving health for all people by reducing the environmental exposures that lead to avoidable disease, disabilities and deaths.” We should bear in mind that the development of EH actions is still largely country-specific. The acknowledgment of the deleterious effects of adverse environmental conditions on human health is now global, in line with the relevance attributed to EH in several of the United Nations’ Sustainable Development Goals. However, this global awareness should not keep governments from developing national agendas that meet national environmental needs (e.g., basic sanitation, drinking water, reduction of plastic consumption, among others). The same is true for science. In fact, notwithstanding the global asymmetry in the number of institutions devoted to the study of EH in developing versus developed countries, the areas of interest are also diverse. While recognizing that EH should be expanded, developing countries are still mostly focused on building basic structures for safe water supply, improving agricultural productivity or tackling poor hygiene conditions. Thus, it is our conviction that EH should harmoniously combine local and global approaches.


Further Reading

Andrae, A.S.G., Edler, T., 2015. On global electricity usage of communication technology: Trends to 2030. Challenges 6, 117–157. https://doi.org/10.3390/challe6010117.
Carson, R., 2002. Silent spring. Boston, MA: Houghton Mifflin.
Conner, R.F., Tanjasiri, S.P., 2000. Communities defining environmental health: Examples from the Colorado (U.S.A.) healthy communities initiative. Reviews on Environmental Health 15, 215–229.
European Union, 2010. Special Eurobarometer 340 “Science and technology”. http://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_340_en.pdf (accessed March 2019).
European Union, 2014. Special Eurobarometer 419 “Public perceptions of science, research and innovation”. http://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_419_en.pdf (accessed March 2019).
Gray, K.M., 2018. From content knowledge to community change: A review of representations of environmental health literacy. International Journal of Environmental Research and Public Health 15, 466.
Stoknes, P.E., 2014. Rethinking climate communications and the “psychological climate paradox”. Energy Research & Social Science 1, 161–170.
UN Environment, 2019. Global environment outlook – GEO-6: Healthy planet, healthy people. Nairobi: UN Environment.
United Nations, 2015. Transforming our world: The 2030 agenda for sustainable development. A/RES/70/1. https://sustainabledevelopment.un.org/content/documents/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf (accessed March 2019).

Environmental Health Concerns in Cameroon

Wilfred A Abia, University of Yaounde 1, Yaounde, Cameroon; Institute for Management and Professional Training (IMPT), Yaounde, Cameroon; and Integrated Health for All Foundation (IHAF), Yaounde, Cameroon
Denis M Jato, Institute for Management and Professional Training (IMPT), Yaounde, Cameroon; and Integrated Health for All Foundation (IHAF), Yaounde, Cameroon
Emmanuel N Mfotie, University of Yaounde 1, Yaounde, Cameroon; and Institute for Management and Professional Training (IMPT), Yaounde, Cameroon

© 2019 Elsevier B.V. All rights reserved.

Abbreviations and Acronyms

ADB African Development Bank
C2Cl4 Tetrachloroethylene
CO Carbon monoxide
H2CO Formaldehyde gas
ICMWM Inter-Ministerial Commission for Municipal Waste Management in Cameroon
NTD Neglected tropical diseases
VOC Volatile organic compounds
WASH Water, sanitation, and hygiene
WHO World Health Organization

Introduction

Cameroon is located in Central and West Africa, on the Gulf of Guinea and the Atlantic Ocean. It has a surface area of 475,442 km2 and a population of about 22.83 million according to the 2015 census, with a density of 49.58 people per square km. The World Bank reported an urban population of 54.94% in 2016, with 1.29% practicing open defecation, while the annual urban population growth rate stands at 3.63%. Cameroon is bordered by Nigeria to the west; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon, and the Republic of the Congo to the south. There are two seasons, the dry and the rainy season, spanning an Equatorial and a Tropical climatic zone. Temperatures range from 20°C to 28°C and increase northward, sometimes reaching 40°C. Cameroon lies between latitudes 1° and 13°N, and longitudes 8° and 17°E. The country rises from the low marshy coastal area into the rain forest plateau, and from there into the central Adamawa plateau. It then slopes into a savanna plain that extends to the shores of Lake Chad. There are high mountains and plateaus in the center and western parts, respectively. The high western area varies in elevation, the highest point being Mount Cameroon at 4100 m. Volcanic and tectonic activities affect both the western and central high plateaus, giving rise to faults, volcanic cones, and volcanic lakes. The dominant plateaus (500–900 m) in the south slope gently to the Congo basin in the east. The defeat of Germany during the First World War saw Cameroon partitioned between Britain and France, and with Britain and France as colonial masters, Cameroon today has English and French as official languages. The population is extremely heterogeneous, consisting of approximately 250 ethnic groups, the most dominant being the Cameroon Highlanders (31%), followed by the Equatorial Bantu (19%), Kirdi (11%), Fulani (10%), Northwestern Bantu (8%) and Eastern Nigritic (7%), with other African (13%) and non-African (<1%) groups. Christianity, Islam and traditional religions are the three main religions in Cameroon. Catholics dominate (38.4%), followed by Protestants (26.3%), while the rest of the population is made up of other Christians (4.5%), Muslims (20.9%), animists (5.6%), others (1%) and nonbelievers (3.2%), as estimated by the World Bank in 2005. Despite economic growth in some regions, poverty is on the rise and is most prevalent in rural areas, which are especially affected by a shortage of jobs, declining incomes, poor school and health care infrastructure, and a lack of clean water and sanitation. Underinvestment in social safety nets and ineffective public finance management probably contribute in part to Cameroon’s high rate of poverty. Likewise, unemployment, poverty, the search for educational opportunities, and corruption, leading to significant brain-drain, may have partly driven international migration. Death and birth rates were estimated at 9.8 deaths/1000 population and 35.8 births/1000 population, respectively, with a population growth rate of 2.58%. The dominant population group is children up to the age of 14 years (42.6%), followed by adults between 25 and 54 years of age (30.71%), youths from 15 to 24 years (19.55%), and the elderly (55–64 years: 3.97%



and 65 years: 3.18%). According to a World Bank estimate in 2015, the total dependency ratio is over 84, with the youth dependency ratio alone (78.4) far exceeding that of the elderly (5.9); a worked definition is sketched after this paragraph. In general, there are more than 1000 government-operated health facilities, including a teaching hospital, two referral hospitals, three central hospitals, eight provincial hospitals, 38 divisional hospitals, 132 district hospitals, and more than 847 health centers. Cameroon also has very active private health services, both faith-based and individually managed. There is a very limited number of practicing medical doctors: the World Health Organization (WHO) estimated a doctor-to-population ratio of almost 1:40,000 inhabitants, far short of the recommended 1:10,000. The Cameroonian government spends about 1.5% of GDP on health, and health accounts for 8.2% of total government expenditure, below the WHO recommendation of 10% and the 2001 Abuja commitment of 15%. This ranks Cameroon 5th highest in out-of-pocket payments among 37 sub-Saharan African countries. The majority of quality medications are available in pharmacies but are very expensive; hence, most patients resort to street drugs, which enter the country through very porous borders. Infant mortality in Cameroon was 61 deaths per 1000 live births in 2012, with an under-five mortality rate of 95 deaths per 1000 live births. However, efforts within the Millennium Development Goals led to a consistent reduction in the under-five mortality rate. Due to socio-economic hardship, Cameroon is now experiencing the double burden of infectious and chronic noncommunicable diseases (NCDs), with the infectious burden largely driven by malaria, HIV/AIDS, and tuberculosis. Cameroon has a large youth population, with more than 60% of the populace under the age of 25. Fertility is falling but remains relatively high, especially among poor, rural, and uneducated women, in part because of inadequate access to contraception. Life expectancy remains low, at about 55 years, due to the prevalence of killer diseases such as malaria, tuberculosis, and HIV/AIDS, and to an elevated maternal mortality rate, which has remained high since 1990. Cameroon’s economy has also been strongly affected by political instability, with the Boko Haram insurgency spilling over from Nigeria into the north of Cameroon, Seleka rebels entering the east from the Central African Republic and, most recently, internal socio-political instability in the North West and South West Regions. Consequently, the influx of refugees fleeing conflicts has partly led to food insecurity in the receiving communities, most likely due to reduced cross-border trade with neighboring countries, government mismanagement, corruption, high production costs, inadequate infrastructure, and natural disasters. The major cities are Yaounde (the administrative capital of Cameroon and headquarters of the Centre region) and Douala (the economic capital and headquarters of the Littoral region), with populations of 3,066,000 and 2,943,000, respectively, based on a World Bank estimate in 2015. The annual rate of change of the urbanization rate from 2010 to 2015 was estimated at 3.6%, while the net migration rate was 0.1 migrants/1000 population (World Bank, 2016 estimate).
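For readers unfamiliar with the measure, the following is a minimal worked sketch of the dependency-ratio arithmetic behind the figures above, assuming the standard World Bank age bands (youth 0–14, elderly 65+, working age 15–64):

\[
\text{total dependency ratio} \;=\; \frac{\text{population aged 0--14} \;+\; \text{population aged 65+}}{\text{population aged 15--64}} \times 100 .
\]

Because the youth and elderly ratios share the same denominator, the total is simply their sum: \(78.4 + 5.9 = 84.3\), consistent with the “over 84” reported above.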

Environmental Management

Cameroon’s Ministry of Environment and Forestry, created in 1992, and the development of a National Environmental Management Plan are outgrowths of the global initiative for sustainable development, and marked a new phase in the regulatory framework of waste management in Cameroon. However, the systematic devolution of waste management powers to six different ministerial departments has partly resulted in a lack of coordination, planning and regulation, and in failure to deliver sustainable waste management. The Inter-Ministerial Commission for Municipal Waste Management in Cameroon (ICMWM), created by Prime Ministerial Decree No. 95/230/PM of 31st April 1995 and charged with formulating and developing appropriate policy for the management of municipal wastes, is the highest body responsible for municipal solid waste management. Despite these efforts, the challenges of environmental management and poor town planning remain major concerns in Cameroon, resulting in an unhealthy environment. Poorly constructed houses and poor drainage systems have partly led to urban disorder, further complicating waste management and resulting in flooding, destruction of property and the spread of diseases.

Poorly Constructed Houses

Cameroon’s major cities of Yaounde and Douala are experiencing a population explosion, with an increasing demand for adequate and decent housing amid inadequate resources and land scarcity. The majority of houses in major cities, especially those readily affordable by the common man, are poorly constructed, with poor ventilation and drainage. Rooms are often very small, and windows are undersized, too few for the size of the house, face a very close wall or corridor, or open onto very narrow verandas. This obstructs the free flow of air and light in and out of houses. Building materials such as carpets and plywood release formaldehyde (H2CO) gas, paints and solvents emit volatile organic compounds (VOCs) as they dry, and lead paint degrades into inhalable dust. The use of firewood and stoves adds significant amounts of smoke, while air fresheners, incense, body lotions, perfumes, pesticides and other indoor chemical sprays add particulates and strong scents to the air. Indoor burning of charcoal causes carbon monoxide (CO) poisoning, while tetrachloroethylene (C2Cl4) emitted from dry-cleaned clothes can also cause poisoning and fatalities. Very old and poorly constructed houses produce indoor biological sources of air pollution (dust from minute human skin flakes) and methane emitted by inhabitants, while mold in walls, soil and surrounding gardens produces pollen, dust, and mycotoxins. With poor ventilation, these particles do not circulate freely, so inhabitants inhale them, with a greater risk of disease. According to the WHO, air pollution poses a significant risk of respiratory infections, heart disease, and lung cancer; it causes difficulty in breathing, wheezing and coughing, and aggravates existing respiratory and cardiac conditions. In Cameroon, the WHO reported 1965 deaths from indoor smoke in 2004 and 11,400 deaths from indoor air pollution.

Fig. 1 House destroyed by flood waters, which often flow in quantities larger than the visible canal can carry.

Also, many houses in the major cities are constructed in environmentally risky zones, some on landslide-prone slopes, while those in marshy areas are destroyed by floods (Fig. 1).

Poor Drainage Systems

A lot has been, and is still being, done on urban restructuring and urban innovation (Cameroon Law No. 2004/003 of April 21st, 2004, laying down rules for city planning in Cameroon, Article 53) to improve drainage systems. However, noncompliance with, and inadequate enforcement of, town planning laws have led to the anarchic construction of houses that block drainage systems, obstructing the free flow of water when it rains. Waste management is essentially collect-and-dump, without any recycling plan. The inability to recycle or properly manage nonbiodegradable plastic containers also presents a hazard, as they block drainage systems (Fig. 2), diverting rain water into homes in the neighborhood. This is further compounded by the incivility of city inhabitants, leading to total urban disorder. The absence of free flow also causes water to stagnate, producing foul odors and forming breeding habitats for mosquito larvae that eventually metamorphose into malaria vectors. Such mismanagement of wastes also contaminates wells and streams with wastewater, with a potential risk of infections. This has partly led to spiraling rates of waterborne diseases such as cholera and dysentery, with Cameroon’s Ministry of Public Health reporting more than 250 deaths from cholera in 2012, as well as disability, structural damage, and widespread inconvenience. To combat flooding in Yaounde, the government, in partnership with the African Development Bank (ADB), constructed a 3.5 km canal that runs across Yaounde along the Mfoundi River. However, during an inspection tour, the ADB team expressed concern over the blockage of some sections of the canal by heaps of metal waste, other debris and pollutants, blamed on a lack of adequate waste disposal facilities. With littering, the new canal is instead becoming an eyesore that breeds mosquitoes and increases the spread of malaria, thereby compounding population health problems. Consistent exploitation of Cameroon’s natural resources and continuous land deterioration, along with rapid environmental change, have further compounded the situation, making inundation and flooding constant nightmares in Cameroon.

Fig. 2 Nonbiodegradable plastic containers and garbage block drainage systems, often causing flooding in neighboring areas.


Bar Operation

Many streets and junctions in Cameroon are lined with bars or drinking spots, with as many as 10 bars at some major junctions in the main cities. The majority of these bars do not have lavatories (toilets and washrooms). Even where lavatories exist, they are often in a sorry state, causing users to resort to open urination and/or defecation (along the walls of houses, in streets or into gutters). Much of this ends up being washed by rain into people’s homes during flooding. Apart from the social and economic burden caused by excessive alcohol consumption, the infiltration of polluted effluents into streams and homes predisposes inhabitants to disease. Toilets are, moreover, often close to, and can infiltrate, groundwater sources such as wells, which some households in Yaounde use as an alternative source of water. Notwithstanding, strict follow-up on the November 9, 1990 decree regulating the bar sector and the November 22, 1993 decree on its implementation may reduce these effects. This was the case in the Mfoundi Division, where over 31 bars were closed down for violating the provisions of the 1990 decree, including noncompliance with closing hours, malpractices by on-licenses and off-licenses, loud music, poor locations, and love-making rooms or corners. Other consequences include the littering of used condoms every morning, to the dismay of neighbors whose children become acquainted early with immorality, prostitution, drugs, and drunkenness.

Waste Disposal

The burden of waste management, particularly of solid waste, weighs on towns and cities in Cameroon, as is characteristic of many African cities. This is often due to inadequate finances, poor legislation on environmental protection, nonenforcement of regulations, bad governance and the lack of a sustainable town and housing plan. Cameroon’s municipal solid waste management policy encompasses three major principles: municipal solid waste management integrated as an element of governmental organization; waste management as a specialist business requiring an industrial approach; and waste management requiring specially allocated funds.

Sustainable solid waste management

Sustainable solid waste management is imperative for minimizing global environmental and public health risks. Unsanitary conditions resulting from rampant dumping of waste around cities, villages and other inhabited areas are responsible for the prevalence of parasites, tetanus, malaria, hookworm, cholera, and diarrhea in many African countries. Unsafe water, sanitation and hygiene (WASH) practices were reported to have killed 1908 globally and 18,300 in Cameroon (diarrhea only) in 2009. With the many diseases linked to poor management of the environment, Cameroon falls among the countries with the highest (30%–35%) environmental burden of disease as a percentage of total disease burden.

Sewage

The uncontrolled and disorderly manner in which houses are constructed has made proper disposal of sewage difficult. Some houses empty sewage from showers, bathtubs, washing machines, dishwashers, kitchen sinks, toilets, and bathrooms into open gutters or along walls (Fig. 3), often visible as “black water” with a very nauseating odor. Close to 80% of households in Yaounde discharge domestic wastewater primarily into gutters, and many into streams. Such sewage ends up being washed into homes, especially in marshy areas during floods. The majority of Cameroon’s cities use pit toilets, which are neither easy to drain nor to replace due to land scarcity. As a result, many inhabitants release their sanitary sludge at the slightest drop of rain. Cameroon’s solid waste management company, commonly called “HYSACAM,” has continued to play a great role in the proper management of waste, thereby working toward a healthy environment for all. Notwithstanding, it is currently unable to access

Fig. 3 (A) Sewage disposal into gutters between houses in Yaounde, and (B) industrial sewage in the Bonaberi industrial zone in Douala.

Fig. 4 Garbage overflows and produces sewage after rains in Yaounde.

many quarters in major cities due to a lack of access roads. This leads to random waste disposal and waste accumulation, forming sewage when it rains (Fig. 4). Such wastewater often pollutes water sources, leading to waterborne diseases such as cholera, diarrhea, amoebic dysentery and typhoid, as well as malaria and skin diseases. The Far North region of Cameroon has been particularly heavily affected by cholera compared with other regions. Epidemiological and endemo-epidemic data from the Community Health Unit of the Ministry of Public Health reveal higher risks of waterborne diseases in Yaounde, particularly in households bordering rivers and in zones with stagnant wastewater.

Garbage

The types of food sold in Cameroonian markets produce a lot of garbage that users find difficult to manage. Partly due to delays by “HYSACAM” and the proximity of wastes to vendors, vendors usually pick out used plastic containers and other items (Fig. 5). Plastic containers picked from “HYSACAM” vans by scavengers are often washed (but not disinfected) and used to sell water, locally produced juices/drinks like “folere” and other liquids, posing a great threat to human health. Industrial and domestic wastes remain poorly managed and hugely challenging in Cameroon. Douala’s lagoon essentially functions as Cameroon’s waste dump, with high contamination of the water leading to high mortality of aquatic organisms. Cameroon, like many African countries, serves as a dumping ground for imported low-quality or second-hand items such as cars, electrical appliances, clothes, household utensils and shoes, which further aggravate the waste management problem when they go out of use. Although lauded for its services, “HYSACAM” arguably faces waste volumes that far outweigh its capacity, partly leading to delays in waste collection. Although a large percentage of households in Yaounde discharge refuse into “HYSACAM” vans, a good percentage in some quarters still dispose of waste into surrounding streams and rivers. Wastes have generally been dumped in the open air with inadequate regard for environmental protection: about six dumpsites or “landfills” have been established in different parts of Limbe in the South West region alone, not counting other parts of Cameroon, in the past 15 years. Notwithstanding, with the existence of a law on environmental protection (Law No. 96/12 of August 5, 1996, the Framework Law on Environmental Management), there is a likelihood of a healthier environment for healthier humans.

Fig. 5 (A) Scavengers pick containers from dustbins and (B) wash them for reuse in selling locally made drinks like “folere,” or water.


Environmental Pollution in Cameroon

Generally, the Cameroonian population, like others elsewhere in sub-Saharan Africa, seldom has access to adequate sanitation services, in both rural and urban areas. It is estimated that over 90% of sewage in many developing countries, including Cameroon, is discharged untreated, polluting rivers, lakes, and coastal areas. Environmental pollution, for example from the poor handling of industrial chemicals and waste or gas emissions from imported vehicles, as well as natural factors such as volcanoes releasing poisonous gases, represents a serious concern for land, water and air quality. The poor handling of pesticides and nondegradable plastics also has a negative impact on soil quality, reducing farmland and agricultural production.

Water Pollution

Access to quality potable water is still a challenge for the Cameroonian population, even in the large metropolises of the country, where the water supply is interrupted from time to time. Besides accessibility, there is the issue of the quality of the available potable water. Water available for household use in some communities is often contaminated by waste from multiple sources: chemical waste from industries, household waste and feces released during the rains, waste plastics and empty bottles carried by rains and streams that block small bridges, poorly handled pesticides, and so on. Most of the wastes in the environment are related to lifestyle and activities. Rivers receive domestic, industrial, and agricultural wastes. Domestic wastes come from everyday human activities such as baths, excreta, food preparation, laundry, and dishes. Human activities produce biological wastes, urinary and fecal, that contribute to water pollution if a good waste management system is not put in place. In Yaounde, some sewage treatment stations are already operating (e.g., the Messa station). Similar initiatives have been undertaken in other cities such as Limbe and Garoua. However, much more effort is needed, especially in Douala, the economic capital of the country (with more than 3,000,000 inhabitants). In addition, effluents from collective installations such as barracks, hospitals, markets, schools, hotels and standpipes consist of all kinds of solids, including bottles, old aerosol cans, old appliances, carcasses, ceramics and stainless metals, among others. These different types of wastes result in mineral, organic and physical pollution of rivers. Industrial effluents also contribute significantly to water pollution despite the country’s low level of industrialization. In Cameroon, as in most African countries, several industries, such as breweries, sugar refineries, food transformation and processing industries and tanneries, discharge large quantities of organic wastes. Domestic and industrial wastes are likely sources of potentially pathogenic bacteria that pose risks to the health and wellbeing of users of large water bodies such as the Douala lagoon. Chemicals such as pesticides, insecticides, weed killers, various destroyers of plant pests, and even nitrogen fertilizers are commonly used for agricultural purposes. These chemicals can be drained into rivers and may constitute a fundamental factor in the pollution of runoff and groundwater. Efforts should therefore be made by all sectors, as in the fight against water pollution by the tobacco and cigarette manufacturers in Yaounde through the use of good purification installations for both air and wastewater.

Air Pollution

Air pollution is a major driver of global climate change through greenhouse gas emissions; some pollutants also deplete the ozone layer, which protects us from direct ultraviolet radiation. Climate change is experienced in Cameroon through the variation of seasons and the many floods observed in cities such as Douala and Yaounde. Reports also indicate that Cameroon has experienced a considerable decrease in rainfall in the past 10 years. In the West of Cameroon, many water points have dried up, and the problem of access to water in many parts of this region is becoming more recurrent and persistent. In the north of the country, drought is degrading pastures, which means that the price of meat is constantly rising. Finally, there is a decrease in agricultural productivity in the south of the country due to variation in the duration of seasons. Air pollution in Cameroon has different origins (Fig. 6), the most important being the fumes released by heavy old trucks and cars imported from Western countries and by the motorcycle taxis found in almost every part of the country, especially in metropolitan cities. For example, 45,000 motor-taxis operate in Douala according to a 2014 estimate from the Douala Urban Municipality, although the unions claimed more than 50,000. According to some statistics, transport is responsible for 61% of CO2 emissions in Cameroon, against 11% for manufacturing and construction. Other sources of air pollution include the use of firewood for cooking, which not only contributes to global warming but, through the destruction of bushes, shrubs and trees, greatly reduces the protection that natural vegetation affords. Another important source of pollution in Cameroon is the waste emitted by industries (although industrialization is still at a primary level), which mostly use nonrenewable sources of energy. Local industries produce large amounts of waste containing toxic materials such as carbon monoxide, particulate matter, sulfur dioxide, oxides of nitrogen and lead, which constitute some of the major indoor and outdoor air pollutants dangerous to the environment. In addition, agriculture, the mainstay of Cameroon’s economy as in many low-income countries, is a serious source of air pollution through the use of fertilizers, pesticides and insecticides on farms. Although these materials help to improve agricultural productivity, they can be very dangerous, as some contain ammonia, one of the most hazardous gases in the atmosphere. A natural factor that also contributes to some extent to air pollution in Cameroon is volcanic and seismic activity, which pollutes the atmosphere with dust. Mount Cameroon, in the South West Region, is part of

Fig. 6 Sources of air pollution: (A) old heavy truck; (B) public transport by motorcycle taxis; (C) cooking with firewood; (D) industrial smoke.

Mount Cameroon, in the South West Region, is part of the area of volcanic activity known as the Cameroon Volcanic Line, which also includes Lake Nyos, the site of a disaster on 21 August 1986 in which huge amounts of carbon dioxide were released into the atmosphere, suffocating 1,746 people and more than 3,000 livestock in nearby towns and villages.

Soil Pollution This is the presence of toxic pollutants or contaminants in the soil, mainly due to artificial wastes produced by humans. Soil pollution is a serious problem because high concentrations of these chemicals may pose a risk to human health and/or the ecosystem. Wastes produced by nature itself, such as dead plants, animal carcasses, and rotten fruits and vegetables, only add to the fertility of the soil; waste products from humans, however, are full of chemicals not originally found in nature, and these lead to soil pollution. Most cases of soil pollution are found around oil and mining operations. Cameroon has one of the richest subsoils in sub-Saharan Africa, and this potential is not yet fully exploited, although the government has announced new mining projects. Previously, gold exploration in Cameroon was carried out by the artisanal sector, but major mining exploration companies are now involved. The environmental impacts of small-scale mining have been studied worldwide; the main impacts are deforestation and soil degradation. One of the most significant impacts derives from the use of mercury (Hg), a pollutant of growing concern because of its long-term effects on ecosystems and human health. Another type of soil contaminant is iron, especially with the upcoming Mballam project, where byproducts of the exploitation may pollute the soil in the area of exploration, making it unsuitable for any use. Oil exploration in areas like the Bakassi Peninsula is another case in which soil can be highly polluted; the oil-bearing areas have faced many environmental problems, including destruction of wildlife and biodiversity and loss of fertile soil. In these areas, the movement of metals through the soil is reduced by the presence of organic matter and by solubility limitations. Waste from oil exploration that cannot be reused or recycled must be stored or disposed of properly; otherwise, the growing land area affected by oil extraction may prove disastrous.

Health Implications Human exposure to harmful chemicals, pesticides, and pathogens that may be present in the environment (e.g., in food, water, air, and untreated wastes), whether at the workplace or at home, is a possible cause of many environmental diseases. The WHO estimates that approximately 25% of the global disease burden is caused by factors such as unsafe drinking water, poor sanitation, and indoor and outdoor air pollution. In Africa, this proportion is even higher, with about 35% of all preventable illnesses being caused by environmental factors.

Common Diseases Linked to Environmental Pollution Unsanitary living conditions are responsible for the occurrence of many diseases caused either by pollutants or by untreated discharges containing various pathogenic microorganisms that are life-threatening to humans. The most common diseases in developing countries are the so-called neglected tropical diseases (NTDs), most of which can be prevented or controlled if the necessary measures are taken to guarantee a safe environment and appropriate living conditions for the population. NTDs are caused by a variety of pathogens, including viruses, bacteria, protozoa, and helminths. Cholera, amoebiasis, dysentery, diarrhea, malaria, poliomyelitis, hepatitis A, trachoma, typhoid fever, and schistosomiasis (bilharzia) are among the most common diseases related to poor sanitation, limited access to potable drinking water, and, above all, poverty. Other environment-related diseases include lung diseases (asthma, bronchitis, lung cancer, silicosis) and kidney diseases.

Food Intoxication Food-related pathogens are common in the environment, and with the large volumes of poorly managed waste in many food transformation and processing industries, the environment may well serve as a reservoir of bacterial and other pathogens that can lead to massive food intoxication. Food intoxication refers most readily to illnesses caused by toxins present in the food we eat. Bacteria, especially Salmonella sp., Staphylococcus sp., Bacillus sp., and Escherichia coli, are common causes of foodborne illness.

Diseases (Cholera, Typhoid, Diarrhea, Amoebiasis, River Blindness) The water distributed by the Cameroon Water Utilities Corporation, known in French as the "Camerounaise des Eaux" (CDE), is sometimes not of the best drinking quality. In addition, domestic water sources may contain microbes that increase the prevalence of waterborne diseases. Water-related diseases have been estimated to account for two-thirds of diseases and about 50% of deaths in Cameroon.

Cholera is one of the water-related diseases commonly found in heavily populated places with poor sanitation and limited water resources; the infection of a single individual can affect an entire population. In Cameroon, cholera was first reported in 1971, and the main outbreaks have occurred in the Far North, North, and Littoral regions. In the Littoral region, which hosts the economic capital Douala, several environmental factors favor the survival of the pathogen (Vibrio cholerae): shallow, dirty, polluted, foul-smelling groundwater; vast expanses of swamp; streams and drainage ditches infested with algae; and high temperatures with low rainfall and drought during certain periods of the year.

Amoebiasis, commonly called amoebic dysentery, is another disease caused by water pollution. Poor hygiene, use of water contaminated by sewage or untreated water, the presence of flies, and person-to-person contact are the main risk factors for its dissemination. Water contaminated by the amoeba Entamoeba histolytica can infect the large intestine as well as the liver; symptoms range from mild to severe diarrhea with blood and mucus. A study of groundwater pollution in the Mingoa watershed (Yaounde) found cysts of Entamoeba histolytica, Ascaris lumbricoides, and Entamoeba coli, confirming the substandard hygiene and environmental conditions.

Diarrhea is among the most common conditions caused by water pollution. Infection may result from contamination of water with viruses, bacteria, or parasites from feces. Diarrhea causes frequent passage of loose, watery stool, which can lead to dehydration and death in young children and infants. Eleven species of bacteria, including Bacteroides fragilis, Pseudomonas aeruginosa, Aeromonas hydrophila, Klebsiella pneumoniae, and E. coli, were isolated from water samples collected in the Douala Lagoon; these pathogens were noted to pose a serious threat to the health and wellbeing of users of the lagoon.

Typhoid fever is of serious public health concern in Cameroon, and recent reports suggest that its diagnosis (by the Widal test) is becoming more frequent in the country's health facilities. Typhoid fever is common in regions with poor sanitary conditions, particularly in Africa. In Cameroon, annual reports from the Ministry of Public Health show a rise in the number of cases diagnosed, from about 1,800 patients in 1987 to about 5,300 in 1989.

River blindness, also known as onchocerciasis, occurs mostly in sub-Saharan Africa. People living near fast-moving streams are at high risk, since these areas are breeding grounds for the Simulium black fly. These blackflies breed in fast-flowing rivers and streams, mostly near remote villages located on fertile land where people rely on agriculture. The infection spreads where poor hygiene and sanitation result from the lack of an adequate safe water supply.
Studies show that providing people with adequate water supply can significantly reduce infection rates.

Measures to Address Environmental Health The Cameroon government has undeniably made considerable efforts toward securing a healthy environment for its people. Nevertheless, given ever-changing environmental conditions, for example climate change, driven mainly by anthropogenic activities, environmental management strategies need constant review to ensure that they are working, and modification where they are not. Curbing environmental health problems in Cameroon requires a holistic, multi-stakeholder approach, the keys being strict application of governing texts, respect of international treaties and conventions on environmental protection, and involvement of the population. Failure to deliberately ensure healthy environments would lead to the spread of diseases and the consequent impoverishment of families; identifying and properly implementing measures to keep the environment healthy would reduce disease, create jobs, and improve the economy. The following measures are proposed to that effect.

Improve Waste Management Systems and Services With the rapid growth of major cities like Yaounde and Douala resulting from massive rural-urban migration, waste management companies need to keep pace with the increase in waste dumped at allocated sites. To cope with narrow roads and avoid traffic and accidents, waste management companies like HYSACAM already collect waste at night (e.g., in Yaounde). However, wastes should also be separated into plastic containers, plastic bags, paper, tins, etc., and a recycling plant established to convert wastes into usable forms. This would create jobs, ease the circulation of rainwater, clean the environment, improve human health, and improve soil fertility.

Reinstitute Sanitation Officers to Instill Order As was practiced many years ago, there is a need to reinstitute Sanitation Officers, whose role would be to ensure that each family owns a properly constructed toilet, dustbins, and bathrooms and keeps its surroundings clean. They would play a supervisory role and make clean-up campaigns mandatory in all towns, ensuring that all quarters are kept clean and that both garbage and sewage are disposed of in an orderly manner. This would promote community participation in environmental management and have a nationwide positive effect.

Ensure Respect of Regulations on Operation of Drinking Spots The government should enact stricter laws to regulate alcohol consumption, with more stringent preconditions for operating a bar. As with pharmacies, there should be defined distances between bars, stipulated operating hours, and obligatory standard lavatories for bar owners. To ensure implementation, bar owners whose surroundings are littered with urine or feces should face heavy sanctions. Additionally, bars should be located within the quarters or along minor roads rather than along main roads (sometimes highways), where they create high chances of accidents; bars should therefore be kept a good distance from the main road.

Formulation and Implementation of Town Planning Regulations There is a need for a holistic development plan for the entire country, directed toward the construction of befitting houses and the decongestion of major cities like Yaounde and Douala, which are already experiencing a population explosion. Such measures should be backed by strategic development plan maps for all cities, with strict monitoring and evaluation to ensure implementation and compliance. Plans should take population growth into consideration and map out streets and major drainage systems before houses are constructed. All marshy or flood-prone areas should be delineated and the construction of houses there prohibited; where necessary, such areas can be reclaimed for appropriate human use. Control teams that compel house owners to respect construction norms should be set up or reinforced. Owners of houses with very old architecture (without septic tanks, with waste pipes emptying into gutters, with very small windows, rooms without lighting, or no access roads for evacuating patients in the event of sickness) should be compelled to make corrections to ensure safety to human health. These suggestions, adopted fully or in part, would go a long way to complement existing efforts to provide low-cost or affordable council housing as a way to address the immediate needs of existing inhabitants, for example in Douala, the iconic city of Cameroon.

Provision of Enough Trash Cans for Both Private and Public Use Major cities like Yaounde and Douala have very few trash cans, dotted in very few places; the longer the distance between cans, the less likely people are to use them. Sanitation Officers should also be assigned to ensure that people use the dustbins, making examples of those who disobey, as is done in other countries. The media should be used to sensitize the population to the benefits of, and sanctions attached to, waste management. School Environmental Health and Waste Management Clubs should be encouraged and promoted to help build capacity for waste handling and management in schools and communities.


Ensure Respect of the Ban on Plastic Bags The Cameroon government's decision to ban nonbiodegradable plastic bags was a laudable initiative, but it has not been accompanied by measures to ensure its strict application. Inadequate access to biodegradable alternatives may, in the long term, encourage the continued use of nonbiodegradable bags whenever people can lay hands on them. There is therefore a need to provide biodegradable options, with strict follow-up to ensure that the ban is applied.

Genuine Decentralization of Town Planning and Motivation To ease the application of these measures in all cities, the processes for ensuring environmental sanitation should be rolled out nationally and decentralized; each council will then find its own innovative ways of ensuring environmental health. In addition, a competition for the cleanest city, council, town, school, hospital, etc., at all levels, with an attractive prize attached, would strongly motivate citizens to implement the stipulated measures.

Strict Control of Importations Cameroon, like many developing countries, serves as a dumping ground for used products from highly industrialized countries. There is therefore a need for stringent measures to control and prohibit the importation of outdated, obsolete, and highly degraded products such as clothes, shoes, electronics, and cars. Despite being considered a source of employment, the multitude of motorbikes produces serious environmental hazards (smoke from exhaust pipes, disorderly driving, and indiscriminate dumping of broken-down parts) that endanger the environment and human health. The sector needs to be significantly controlled, and motorbike use reduced or possibly phased out by creating cleaner jobs. The case of e-waste cannot be overemphasized: awareness of e-waste is needed in Cameroon, considering the ever-increasing dependence on modern technologies that generate it, as well as the seas and oceans serving as a free route for e-waste from the Western world to Africa.

Complete Devolution of Power to Local Councils Lack of funding, limited human resources, political interference, implementation of inappropriate technologies, and relative cost have contributed to failures to implement adequate environmental health policies at the local level. Powers, roles, responsibilities, and financial resources need to be devolved from the plethora of top management bodies, such as ministerial departments, to bottom management bodies, such as local councils, with full autonomy in line with the 1996 constitution. This will prevent duplication of effort and the wastage of both human and limited capital resources.

Strict Implementation of Legal Framework Guiding Environmental Management The existence of laws, and Cameroon's endorsement of international conventions on environmental management, are laudable but not enough. There is a need to raise awareness of existing efforts by government and other stakeholders, and likewise to reinforce existing legislation and policy frameworks on sustainable waste management and environmental protection for optimal and sustainable outcomes. A gap analysis of existing laws, drawing on the experience of the appropriate government experts, may also suggest revisions where appropriate. This will lead to effective engagement of industry, commerce, and the general public in more sustainable waste management practices.

Strengthening of Private–Public Partnerships Such partnerships should involve material recovery and community composting, and should encourage private investment in the delivery of waste-related recycling and recovery facilities. To succeed, there is a need for participative consultation through facilitated workshops with all key stakeholders (municipal councillors, government agencies, private-sector businesspeople, NGOs, waste contractors, and representatives of community groups). In such workshops, sustainable waste management perspectives can be developed and consensus built. With all stakeholders on board, waste could be recycled by conversion into other forms and uses.

Environmental Health Education To successfully implement the above measures, awareness must be raised through education of households on environmental management, including techniques of water, sanitation, and hygiene (WASH) and environmental protection. This will improve community participation in reducing the medical risks associated with pollution and their health consequences. Involvement of the local population in development projects is a prerequisite to their active engagement in environmental management, and can be achieved through messages on environmental health and safety communicated through public and private TV and radio stations, billboards, schools, social gatherings, social media, banners, student clubs, and markets.


Conclusion Environmental health, which encompasses all aspects of the natural and man-made environment that may affect human health, remains a major concern in Cameroon, as elsewhere in sub-Saharan Africa. Industrial chemical spills and the disposal of industrial debris into water or open fields harm the environment, causing water, air, and soil pollution in particular. The direct consequences of these bad practices are reflected in common diseases, food intoxication, and climate change. Poorly constructed houses and poor drainage systems are among the factors contributing to poor disposal of both sewage and garbage. Improving waste management services, reinstituting Sanitation Officers to instill order, and enforcing regulations on the operation of drinking spots (bars) can help curb this disturbing trend. In addition, the formulation and implementation of town planning regulations, the provision and promotion of trash cans in both private and public areas, and stricter implementation of the ban on nonbiodegradable plastic bags are among the key measures. Improving environmental management through effective devolution and decentralization of town planning, strict control of factories and importation, and the strengthening of public–private partnerships will also contribute significantly toward a healthier environment with fewer health implications. Generally, tackling environmental health concerns requires a participatory approach involving all stakeholders to ensure quality health for all.

Further Reading
Abdourahimi, Saïdou, Fantong, W.Y., Aka, F.T., Kwato, N.M.G., 2016. Environmental pollution by metals in the oil-bearing Bakassi Peninsula, Cameroon. Carpathian Journal of Earth and Environmental Sciences 11 (2), 529–538.
Aka, E.L., 2002. Caractérisation de l'abiegue et évaluation des effets potentiels sur les populations riveraines de Nkolbikok à Nkolbisson (Yaoundé). MSc thesis in water management. Dschang: Faculté d'Agronomie et Sciences Agricoles, Université de Dschang.
Akoachere, J.F., Oben, P.M., Mbivnjo, B.S., Ndip, L.M., Nkwelang, G., Ndip, R.N., 2008. Bacterial indicators of pollution of the Douala lagoon, Cameroon: Public health implications. African Health Sciences 8 (2), 85–89. Available from: https://www.ajol.info/index.php/ahs/article/view/7055/58341.
Akoachere, J.F., Omam, L.A., Massalla, T.N., 2013. Assessment of the relationship between bacteriological quality of dug-wells, hygiene behavior and well characteristics in two cholera endemic localities in Douala, Cameroon. BMC Public Health 13, 692.
Baok, G., 2007. Pollution des eaux et rivières et impact sur les populations riveraines: cas de la rivière Mgoua dans la zone industrielle de Douala-Bassa. Master's thesis in water management (environment option), Université de Dschang-FASA.
Calamari, D., 1985. Review of the state of aquatic pollution of West and Central African inland waters. CIFA Occasional Paper 12, 26.
Funoh, K.N., 2014. The impacts of artisanal gold mining on local livelihoods and the environment in the forested areas of Cameroon. Working Paper 150. CIFOR, Bogor.
Ghangha, E., 1991. Clinical and epidemiological study of typhoid fever as seen in Banso Baptist Hospital. MD thesis, University Teaching Hospital, Yaounde, Cameroon.
Guevart, E., Noeske, J., Essomba, J.M., Edjenguele, M., Bita, A., Mouangue, A., Manga, B., 2006. Factors contributing to endemic cholera in Douala, Cameroon. Médecine Tropicale 66 (3), 283–291.
Kuitcha, D., Kamgang, K.B.V., Sigha, N.L., Lienou, G., Ekodeck, G.E., 2008. Water supply, sanitation and health risks in Yaounde, Cameroon. African Journal of Environmental Science and Technology 2 (11), 379–386. Available from: http://www.academicjournals.org/AJest.
Manga, V.E., Osric, T.F., Adam, D., 2007. Waste management in Cameroon: A new policy perspective? Resources, Conservation and Recycling. https://doi.org/10.1016/j.resconrec.2007.07.003.
Nelson, A.W., Feazel, L.M., Robertson, C.E., Spear, J.R., Frank, N.R., 2012. Microbiological water quality in a resource-limited urban area: A study in Cameroon, Africa. Journal of Public Health in Africa 3 (2). https://www.publichealthinafrica.org/index.php/jphia/article/view/jphia.2012.e19.
Rouwet, D., Tanyileke, G., Costa, A., 2016. Cameroon's Lake Nyos gas burst: 30 years later. Eos, American Geophysical Union. https://eos.org/meetingreports/cameroons-lake-nyos-gas-burst-30-years-later.
Tyler, J.G., Joshua, Y., Rebecca, G., Arabi, M., Jiyoung, L., 2017. Water access, sanitation, and hygiene conditions and health outcomes among two settlement types in rural far North Cameroon. International Journal of Environmental Research and Public Health 14, 441. Available from: https://www.ncbi.nlm.nih.gov/pubmed/28425935.
Wilson, D.C., Pow, S., Read, A., Kolganov, D., 2005. Regional waste management planning in the Kaliningrad Oblast of Russia: A case study of technical assistance to achieve sustainable improvements in waste management. In: Proceedings Sardinia 2005, Tenth International Waste Management and Landfill Symposium.
World Health Organization (WHO), 1999. Environmental burden of disease globally. Available from: http://www.who.int/heli/risks/en/.
Youmbi, J.G.T., Feumba, R., Njitat, V.T., Marsily, G., Ekodeck, G.E., 2013. Water pollution and health risks at Yaoundé, Cameroon / Pollution de l'eau souterraine et risques sanitaires à Yaoundé au Cameroun. Comptes Rendus Biologies 336 (5–6), 310–316.

Environmental Health Engineering: Rationale, Technologies and Practices for Various Needs
SE Mbuligwe, Ardhi University, Dar es Salaam, Tanzania
© 2019 Elsevier B.V. All rights reserved.
Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11707-5
Change History: October 2018. The section editor revised the title of this chapter for the second edition. This is an update of S.E. Mbuligwe, Diverse Options for Diverse Environmental Health Engineering Needs: Rationale, Technologies and Practices, Editor(s): J. O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 147–157.

Abbreviations
EHE Environmental health engineering
SWOT Strengths, weaknesses, opportunities, threats
UASB Upflow anaerobic sludge blanket
VIP Ventilated improved pit latrine
WHO World Health Organization

Introduction Environmental health engineering (EHE) pertains to the application of engineering science and technology to modify and manage the human environment in order to prevent and control the transmission of communicable diseases and improve the health of the community in general. EHE contributes to the reduction of the disease burden by reducing environmental risks to health. More broadly, EHE is the engineering of modifiable physical, chemical, biological, and social-cultural factors that influence or affect human health.

The potential utility of EHE is underscored by global health status statistics. The World Health Organization (WHO) estimates that 24% of the entire global disease burden and 23% of all deaths are attributable to environmental factors. For diarrhea, the global burden attributable to environmental factors is 94%, whereas the corresponding figure for malaria is 42%. The WHO estimates further that the entire global disease burden of schistosomiasis, intestinal nematode infections, and trachoma is attributable to environmental factors.

Traditionally, EHE focuses on domestic water supply and excreta management plus other engineering utility and infrastructure services such as stormwater drainage, solid waste management, and indoor environmental quality. Operationally, EHE focuses on domestic water supply and excreta management because these have a larger potential role in the transmission of communicable diseases among the majority of the world's most disadvantaged groups, mostly in developing countries. It also focuses on eliminating conditions that are conducive to the proliferation of disease etiological agents, their vectors, and their influencing factors.

Two key EHE considerations that are often underrated or neglected in the planning and provision of EHE services, especially in developing countries, are (1) the diversity of EHE challenges and service needs and (2) the evolutionary nature of most EHE challenges and service needs. The diversity of EHE challenges and service needs pertains to their large number as well as the wide spectrum of their types, both of which call for diverse solution options and flexible implementation approaches. The evolutionary nature of the EHE challenges and service needs pertains to the fact that they continually change as the EHE needs of individuals and communities evolve with time. The EHE challenges and service needs change not only with time but also in direct response to their influencing factors, including the sociocultural and economic status of the service beneficiaries. Therefore, the fixed nature of the solutions often prescribed for EHE challenges, and of the provisions made for EHE needs, especially in developing countries, contradicts reality. Most of the solutions prescribed as remedies for existing EHE deficiencies, and those provided for future EHE service needs, are fixed in time as well as in space with respect to both service standards and service levels. As a result, over time they can easily fall out of step with the needs they are meant to cater for. This partly explains the failure of most efforts expended over the past three decades to solve sanitation problems, especially in low-income communities where sanitation problems are most serious and their effects most harmful and far-reaching. However, the persistence of the sanitation problems, especially among the poor, both explains and justifies the renewed interest in these issues worldwide.
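As a back-of-the-envelope illustration of how the attributable fractions quoted above translate into counts, the following minimal Python sketch applies them to a burden figure; the fractions are the WHO estimates from the text, while the 10,000-case figure and the function name are hypothetical, introduced here only for illustration.

# Illustrative only: WHO attributable fractions quoted in the text;
# the case count passed in below is an assumed example, not source data.
ENV_ATTRIBUTABLE_FRACTION = {
    "all disease burden": 0.24,
    "all deaths": 0.23,
    "diarrhea": 0.94,
    "malaria": 0.42,
}

def environmental_burden(total, condition):
    """Portion of a burden figure attributable to environmental factors."""
    return total * ENV_ATTRIBUTABLE_FRACTION[condition]

# Of 10,000 hypothetical diarrhea cases, roughly 9,400 would be
# attributable to environmental factors under the 94% estimate.
print(environmental_burden(10_000, "diarrhea"))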

Water Supply and Excreta Management in Relation to Human Health Science has established the health implications of the interactions between excreta management and water supply since the groundbreaking epidemiological studies of Dr. John Snow in London in the mid-1800s; much more recently, science has also established the link between indoor environmental quality and human health.



Furthermore, science has established the influence of other infrastructure and utility services on the effectiveness of water supply and excreta management services. Diseases related to the interactions between water supply and excreta management are usually classified according to their modes of transmission. In 1999, Duncan D. Mara and Richard G. A. Feachem presented a comprehensive review of water- and excreta-related pathogens and diseases based on their own work and that of other researchers in this field. The review covered work dating as far back as 1972 and encompassed several separate publications on the classification of water- and excreta-related pathogens and water-related diseases, and on the environmental classification of water-related and excreta-related diseases. On the basis of the review, they presented a unitary environmental classification of water- and excreta-related diseases. The strongest point of their classification system, which sets it apart from most previous attempts, is that it combines the classification systems for water-related and excreta-related diseases into a single system. To enhance the understanding of this discussion, a classification system for water-related and excreta-related diseases based on the unitary environmental classification system proposed by D. D. Mara and R. G. A. Feachem is outlined in Table 1. The main advantage of categorizing diseases according to this classification is comprehensiveness. Notably, like previous classification attempts, the system categorizes diseases primarily on the basis of their modes of transmission. This makes it easy to understand the connection between each disease and its causative factor, which in turn makes it possible to design suitable engineering and other intervention measures. Taking advantage of this, Table 1 incorporates applicable EHE and social engineering preventive and control measures for the identified disease categories. Although it is clear from Table 1 that some diseases are not amenable to direct, easily applicable engineering preventive and control measures, engineering can play an important role in lessening the impacts of their influencing factors. For example, low-cost biogas technology that uses wastes can reduce the cost of energy, indirectly making excreta management systems and other amenities more affordable and accessible. More directly, it can provide energy for disinfecting drinking water through boiling or for cooking food adequately, with consequent prevention of diseases transmitted through contaminated water and poorly cooked food. Engineering in general can play both direct and indirect roles in disease prevention and control through the improvement of physical infrastructure and utility services such as roads, stormwater drainage, solid waste management, power supply, and communication. More generally, engineering can enhance positive environmental health-influencing factors, including social-cultural, economic, geospatial, technical and technological, and environmental ones. Also noteworthy is the role of engineering in enhancing the overall quality of life of individuals and communities.
Table 1 Environmental classification of water-related and excreta-related diseases and applicable preventive and control measures (based on the system proposed by Mara and Feachem)

Disease category | Individual disease types | Applicable engineering interventions | Social engineering/other interventions
Feco-oral waterborne and water-washed | Viral, bacterial, protozoan, helminthic | For waterborne diseases, improve water quality; for water-washed diseases, improve water quantity, availability, and reliability | Provide hygiene education
Non-feco-oral water-washed | Skin infections, eye infections, louseborne fevers | Improve water quantity, availability, and reliability | Provide hygiene education
Geohelminthiases | Ascariasis, trichuriasis, hookworm infection | Provide adequate management of excreta and other wastes before reuse | Provide hygiene education; practice effective domestic hygiene
Taeniases | Beef and pork worm infections | Provide adequate management of excreta and other wastes before reuse | Provide hygiene education; practice effective domestic hygiene
Water-based | Bacterial, helminthic, fungal | Improve domestic plumbing; provide adequate management of excreta and other wastes before reuse; provide effective stormwater drainage systems | Minimize contact with contaminated water; provide public education
Insect-vector diseases | Water-related, excreta-related | Eliminate breeding sites by filling up depressions; drain stagnant water ponds; provide effective stormwater drainage systems | Minimize exposure by avoiding breeding sites; use biological control; use larvicides; provide public education
Rodent-vector diseases | Rodent-borne excreted infections, leptospirosis, tularemia | Secure houses/living areas from rodent infestation | Minimize contact with contaminated water; use rodent control; provide hygiene and public education

For the sake of clarity, additional explanations of the diseases pertinent to the different categories listed in Table 1 are given here. Feco-oral waterborne and water-washed diseases include hepatitis A, E, and F, poliomyelitis, and rotaviral as well as adenoviral diarrhea. Bacterial diseases in this category include cholera, typhoid, and paratyphoid. Protozoan diseases include amoebiasis, cryptosporidiosis (caused by Cryptosporidium sp.), and giardiasis (caused by Giardia lamblia); concern over cryptosporidiosis and giardiasis focuses more on wastewater destined for reuse. Helminthic diseases include ascariasis. Non-feco-oral water-washed diseases include skin infections such as scabies, eye infections such as trachoma, and louseborne fevers, which still plague many developing countries. Geohelminthiases include hookworm infection, whereas taeniases include pork and beef tapeworm infections. Bacterial water-based diseases include legionellosis, whereas helminthic ones include schistosomiasis and guinea worm infection; fungal diseases in this category include pulmonary hemorrhage. Water-related insect-vector diseases include malaria, African sleeping sickness, and bancroftian filariasis. It is noteworthy that malaria is the leading killer disease in sub-Saharan Africa and many other parts of the world. Excreta-related insect-vector diseases include fly-borne and cockroach-borne excreted infections. Rodent-vector diseases include tularemia and rodent-borne excreted infections.

In addition to the diseases covered in Table 1, the so-called emerging and reemerging (previously common) diseases and contaminants are of EHE interest. These are outlined in Table 2. The emerging contaminants are of special interest partly because some of them pass through conventional wastewater treatment systems unaffected, which raises serious concerns regarding their proven and suspected health and environmental effects. Some of the emerging contaminants did not attract attention much earlier because until recently there were no reliable detection methods for them; this, coupled with their low concentrations in the environment, limited their detection. Ignorance of their health and environmental implications also contributed to their being ignored.

Table 2 Emerging and other contaminants of environmental health engineering interest and applicable preventive and control measures

Disease/contaminants | Medium | Applicable engineering interventions | Social engineering and other interventions
Pharmaceuticals | Water, wastewater, animal wastes | Manage excreta properly; improve wastewater treatment; remediate contaminated sites (water, soil) | Use the pharmaceuticals with moderation; return excess or remainders to vendors; provide public education
Indoor air pollutants | Air | Improve ventilation systems; treat contaminated air; provide suitable protective gear | Use less polluting fuels; minimize exposure to polluted indoor environments
Reemerging diseases | Water, excreta, wastewater | Improve excreta management; improve wastewater treatment; improve water quality | Improve hygiene; provide hygiene education; provide public education
Cancer-causing viruses | Water, excreta, wastewater | Improve wastewater treatment; improve excreta management | Improve hygiene; provide public education
Accidentally released pollutants | Water, soil, air | Confine contamination; clean up contaminated sites | Provide public education
Other emerging contaminants | Water, sludge, soil | Improve treatment of wastewater and sludge | Provide public education
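One practical virtue of a transmission-based classification like that of Table 1 is that it maps mechanically from disease category to candidate interventions. The following minimal Python sketch (not from the source; the category keys and intervention strings are abridged from Table 1, and the function name is our own) shows the idea:

# A minimal sketch: encode Table 1 so a disease category looks up
# its applicable engineering interventions. Keys/strings abridged.
INTERVENTIONS = {
    "feco-oral waterborne": ["improve water quality"],
    "feco-oral water-washed": ["improve water quantity, availability, and reliability"],
    "water-based": ["improve domestic plumbing",
                    "manage excreta and other wastes before reuse",
                    "provide effective stormwater drainage"],
    "insect-vector": ["eliminate breeding sites by filling depressions",
                      "drain stagnant water ponds"],
    "rodent-vector": ["secure houses/living areas from rodent infestation"],
}

def plan_interventions(category):
    # Hygiene/public education applies across all categories in Table 1,
    # so it is appended as a default measure.
    return INTERVENTIONS.get(category, []) + ["provide hygiene/public education"]

print(plan_interventions("insect-vector"))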

Excreta Management Options Excreta management facilities can be classified in different ways to suit specific purposes. Table 3 characterizes excreta management facility options in terms of key technical and socioeconomic characteristics that influence their applicability, implementation, use, operation and maintenance, and sustainability. Considering the principal improved excreta management technologies currently in use, the hierarchy of the technologies in order of increasing complexity is as follows: ventilated improved pit (VIP) latrines, pour flush toilets, pour flush toilets discharging into a sewer (sewered pour flush toilets), conventional septic tank systems (with flush toilets), and conventional sewerage systems (with flush toilets). It is important to recognize that the septic tank system has most of the benefits of the sewerage system; for this reason, people who aspire to conventional sewerage opt for the septic tank system in its absence. Vault toilets are recommended by some sanitation experts as an intermediate technology between the pour flush and the flush toilet. They are not included in the rest of this discussion because they are neither as popular nor as widely accepted worldwide as the pour flush toilet, to which they are comparable; that they offer no particular advantage over the pour flush toilet further justifies leaving them out. From a practical standpoint, a complex technology or complicated facility is more likely to fail than a simple one. However, a very simple technology may not be compatible with the requirements of other technologies used together with it. It may even contradict the social aspirations of the target users. Worse still, it may be too inflexible for evolving needs and social-cultural demands. For example, a pit latrine, improved or not, is neither compatible with nor amenable to easy and convenient upgrading to the same level as a self-contained house with internal plumbing. Generally, it is unwise to impose new technologies and practices, no matter how good or suitable they might be, on first-time users of improved sanitation facilities, especially those whose hygiene knowledge is low.

Table 3 Characterization of excreta and wastewater management facility options

Facility characteristics | Range of variation of the characteristics
Facility technology complexity | Simple-to-complex technologies; simple-to-complicated technologies
Facility technology novelty | Established/tried-and-tested options to emerging options; from conventional/traditional to novel
Organizational, operational, and functional centralization | Completely decentralized individual facilities to completely centralized community facilities
Facility water use need | Dry to fully waterborne/water-carried
Facility site location | On-site to off-site location; excreta receptacle and disposal components at one or separate locations
Separation of excreta/wastewater constituents | Separate conveyance, treatment, and disposal of sludge and excreta to comanagement
Facility ease of use | Easy-to-use to difficult-to-use facilities
Facility relative cost | Low-cost to high-cost facilities
Facility affordability level | Affordable-to-unaffordable facilities (by the target beneficiaries)
Facility and target user group characterization | Small to large in size; private (household) to public in use

With first-time technology users, the established, tried-and-tested excreta management options are more likely to succeed. By the same token, novel, experimental, and emerging technologies stand a better chance of succeeding when introduced to experienced users with good hygiene knowledge. The established, tried-and-tested excreta and wastewater management technologies are typically (from the lower to the higher end of the hierarchy) VIP latrines, pour flush toilets, sewered pour flush toilets, septic tank systems (with flush toilets), and conventional sewerage (with flush toilets). Unsurprisingly, this hierarchy is the same as the one for the complexity of sanitation technologies.

Degrees of decentralization of excreta and wastewater management systems range from on-site systems for individual single-household units, through condominium systems serving blocks of houses and small sewerage systems for semi-independent communities such as academic institutions or housing estates, to large centralized sewerage systems for cities. Decentralized sanitation systems are flexible, and hence easier and cheaper to customize to specific needs; they are easier to implement too. However, if not planned and implemented properly, they can result in chaos. Additionally, the adoption of decentralized sanitation systems can be thwarted by space limitations, especially in high-density areas, and the applicability of some decentralized technologies may be limited by geospatial site conditions such as a high groundwater table or soils with low infiltration capacities. Nonetheless, decentralized systems are cheaper and easier to design and implement, and their application is flexible in terms of technology choice and implementation scheduling.

With respect to water use, communities that do not have an adequate water supply cannot opt for waterborne systems. Even those with an adequate water supply can opt for waterborne systems only if suitable effluent disposal facilities are feasible at the site of interest; disposal facilities may be ruled out by limitations of available area, site surface and subsurface constraints, and regulatory restrictions. The treatment and disposal components of centralized sanitation systems are always located off-site. The actual excreta receptacles must be located on-site, and as such must be compatible with the buildings they serve and with other on-site physical infrastructure services.

Generally, facilities based on simple technologies are easier to use than complicated ones, and established technologies and practices are easier to adopt and use than new ones. Sanitation facilities that are too demanding on the user are almost certainly bound to fail; as a rule, ease of use is proportional to the functional sustainability of a facility. Expensive facilities have a lower chance of being adopted, regardless of their merits, especially among the poor, because excreta management has to compete with other equally or even more pressing needs such as food, energy, education, and healthcare. Undoubtedly, of all the principal improved sanitation facilities, VIP latrines are the cheapest, whereas flush toilets (connected to sewerage or septic tank systems) are the most expensive.
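The feasibility logic just described (adequate water supply and feasible effluent disposal gating waterborne options, and cost gating adoption) can be sketched as a small selection routine. In the following Python sketch, the relative cost values and the decision rule are illustrative assumptions, not source data; only the technology hierarchy itself comes from the text.

# Hypothetical decision sketch: pick the highest-standard technology
# that the stated constraints allow. Relative costs (1-5) are invented
# placeholders for illustration.
LADDER = [  # (technology, waterborne?, relative cost)
    ("VIP latrine", False, 1),
    ("pour flush toilet", True, 2),
    ("sewered pour flush toilet", True, 3),
    ("septic tank system with flush toilet", True, 4),
    ("conventional sewerage with flush toilet", True, 5),
]

def highest_feasible(budget, adequate_water, disposal_feasible):
    for tech, waterborne, cost in reversed(LADDER):  # try highest standard first
        if cost > budget:
            continue
        if waterborne and not (adequate_water and disposal_feasible):
            continue
        return tech
    return "no improved option feasible"

# Water is available but effluent disposal is not feasible on-site,
# so all waterborne options are ruled out and the VIP latrine remains.
print(highest_feasible(budget=3, adequate_water=True, disposal_feasible=False))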
Based on prevalence and user acceptance rates, the typical sanitation facilities for different levels of sanitation standard aspirations are (in order of rising standards) unimproved pit latrines, VIP latrines, pour flush toilets connected to a septic tank, pour flush toilets connected to a sewerage system, squatting-type flush toilets, and sitting-type flush toilets. It is noteworthy that in some countries, sitting-type flush toilets may turn away potential users if installed in public places because of the fear of contracting diseases perceived to be acquired through contact with contaminated surfaces, including venereal diseases such as syphilis and gonorrhea. Partly for this reason, in many developing countries where the pertinent diseases are still prevalent, squatting-type flush toilets are preferable to sitting-type ones. As a general rule, where both squatting-type and sitting-type flush toilets are available and acceptable, public places should be provided with squatting-type flush toilets. Obviously, this recommendation does not apply in developed countries, where the use of sitting-type flush toilets in public places is almost the rule. Experience shows that potential users of sanitation facilities aspire to and adopt the excreta and wastewater management technologies and practices they are conversant with, weighing many factors with different levels of priority. The most outstanding of these are outlined in Table 4.


Table 4 Factors that users consider when aspiring for and adopting sanitation facilities

Consideration | Explanatory remarks
Feasibility and applicability | Compatibility of technology with site conditions; simplicity of the technology, including availability of construction materials; available land area size
Compatibility | Compatibility with home design and with other infrastructure services
Hygiene and cleanliness level | Actual and perceived hygiene; cleanliness evidenced by odor control; physical separation of the excreta receptacle (toilet) from the storage or final disposal facility; not being prone to being soiled easily
Cost/affordability | Construction/installation costs; operation and maintenance costs
Privacy and social-cultural conformity | Affording privacy to users at all times; conforming to social-cultural requirements, including beliefs, social norms, and taboos
Ease and convenience of use | Ease of use characterized by not being complicated and not requiring undue effort; distance; convenience of use characterized by accessibility even at night, for example
Status associated with it | Social-economic status associated with the facility or technology: a rich or a poor person's facility?
Safety for users | Characterized by not posing safety risks to users, such as injury or death
Planning and regulatory conformity | Conformity with planning guidelines; conformity with environmental laws and public health protection regulations
Durability and sustainability | Characterized by how long it lasts before it needs to be renewed, replaced, or reconstructed
Operation and maintenance needs | Characterized by the attention needed from the user to keep it operating in an acceptable condition, including removal of sludge and switching between pits
The considerations outlined in Table 4 apply mostly to developing countries, where many people still have no access to good sanitation facilities. In developed countries, especially in urban areas, most people already have access to sanitation facilities of the highest standard and at the highest possible service level; in rural areas and other isolated settlements, however, sanitation facility choices still have to be made, and for these, Table 4 applies even in developed countries. Generally, all people aspire to sanitation facilities and services of the highest standard possible. The factors that dictate the actual realization of these aspirations include (1) income/affordability, (2) awareness of the availability and merits of the sanitation facilities and services, (3) general education and level of hygiene sensitization, (4) availability and reliability of supporting utility services, such as septic tank desludging services and central sewerage systems, (5) planning and regulatory restrictions, and (6) site conditions (surface and subsurface). Reflecting the considerations outlined in Table 4, the aspiration for and adoption of the principal sanitation facilities (and technologies) can be illustrated schematically as in Fig. 1. In Fig. 1, the highest-ranked sanitation facility is the flush toilet; the lowest-ranked is the unimproved pit latrine, which is nonetheless better than having no sanitation facility at all.

Fig. 1 Schematic illustration of the sanitation improvement aspiration and adoption ladder. Vertical axis: sanitation facility type aspired for, rising from unimproved pit latrine through ventilated improved latrine, pour flush toilet connected to a septic tank system, pour flush toilet connected to a sewerage system, and flush toilet (squatting type) to flush toilet (sitting type). Horizontal axis: social-economic status level (income/affordability, awareness/hygiene education level, etc.). Note that the list for the horizontal axis represents all the factors that apply as discussed.


Fig. 1 indicates that as one's social-economic status rises, so does the tendency to move up the sanitation service standard ladder: a higher social-economic status urges one to seek sanitation facilities and services of a higher standard. This implies that aspirations for better sanitation facilities and services evolve, as pointed out earlier in the text. It is noteworthy that the level of social-economic status also increases with the factors indicated on the horizontal axis of Fig. 1.

Interestingly, in one settlement in Dar es Salaam City, Tanzania, most residents who installed pour flush toilets when their houses were new replaced them with flush toilets not long after moving in, as soon as they could afford the new type of toilet. The pour flush toilets were replaced with flush toilets because the latter were considered to be (1) associated with a higher status, (2) more convenient to use, (3) cleaner and aesthetically more appealing, (4) easier to use (their use requires virtually no effort), and (5) more effective at removing excreta deposited in the toilet bowl. Some of these reasons may not be completely convincing to everyone, but such arguments shape people's aspirations and influence their decisions; as such, they are important and have to be taken seriously when making sanitation decisions in practice.

It is recognized that there are more sanitation facility types than the ones shown in Fig. 1. Some of the additional facilities can serve as alternatives to those shown in the figure, whereas others are variations of them; still others fit in between some of the facilities shown. Even sewerage systems come in many variations, including the so-called conventional sewerage system, the small-bore sewer system, and the condominium sewerage system. The latter two are innovative variations of the conventional sewerage system, differing from it more in the collection network than in the treatment component. The small-bore sewer system uses sewers of smaller sizes than the conventional sewerage system and is designed so that the sewers can accommodate localized mild and even negative gradients; it incorporates interceptor tanks between the house sewer and the main sewer to retain solids that might cause blockage downstream. Condominium sewerage systems are designed to serve blocks of houses. Unlike in the conventional and small-bore sewer systems, the main sewers of a condominium system are located behind the houses served in order to shorten the house sewers. Condominium systems use sewers of smaller sizes, just like the small-bore sewer system, but blockage is avoided because the shorter house sewers improve flow characteristics, and the flushing effect of discharges is increased by two lines of houses discharging into the same main sewer within short distances.

Latrines and toilets also come in many types, including aqua-privies, double-vault compost latrines, and vault toilets. An aqua-privy is designed like a septic tank located just below the drop hole used for discharging excreta into the receptacle.
A drop pipe extends from the drop hole to 100–150 mm below the liquid level, so that the extra depth serves as a water seal to minimize odor problems. A vault toilet is a variation of the pour flush toilet that discharges into a watertight vault, which requires regular emptying. Innovative toilet and latrine technologies include ecological sanitation, waterless toilets, and chemical toilets.

Pit latrines and septic tanks are designed to be emptied every 3–5 years, because during operation they accumulate solid material (sludge) and after some time fill up, necessitating emptying (desludging). Although biodegradation breaks down the organic matter deposited in the pit, reducing the rate of filling, this is not enough to prevent the pit from eventually filling up. The average contribution of one user of a septic tank or pit latrine to the net accumulation of sludge is 0.03–0.06 m³ per person per year. This implies that a VIP latrine or septic tank serving five people can accumulate between 0.75 and 1.50 m³ of sludge over a 5-year period.

Sludge emptied from pit latrines, vault toilets, septic tanks, and similar facilities can be treated in sludge treatment facilities, including sludge ponds coupled to sludge-drying beds for final disposal. Fresh sludge can be composted for use as a soil conditioner or used as a feed material for biogas plants. Composting to produce organic fertilizers and anaerobic digestion to produce biogas are beneficial uses of sludge; they can contribute to poverty alleviation in developing countries by reducing expenditure on fertilizers and energy while also protecting health and the environment. It has been established that mixtures of sludge and solid waste are suitable feed materials for both composting and biogas production, which means composting and biogas production can simultaneously solve excreta and solid waste management problems.

To render the effluent from the septic tank system safer, engineered wetland systems coupled to sand filters can replace the soakaway pit. The soakaway pit, the most common means of disposing of septic tank effluent, often does not treat the effluent adequately, with consequent groundwater pollution risks. It is also possible to replace the septic tank with an upflow anaerobic sludge blanket (UASB) reactor, which treats wastewater more efficiently than the septic tank while also producing biogas; a UASB reactor also has a smaller footprint and a lower sludge production rate than a comparable septic tank. Despite these advantages, a UASB reactor still produces an effluent that requires additional treatment to remove nutrients and pathogens, which can be achieved in engineered wetland systems coupled to sand filters or similar treatment systems. Properly designed sand filters can remove most pathogens from wastewater. At Ardhi University (Tanzania), experimental septic tanks and UASB reactors have been treating wastewater from the University's campus in Dar es Salaam City since the late 1990s with satisfactory results; effluents from the septic tanks and UASB reactors are post-treated in engineered wetland systems. For absolute confidence that the final effluent is free of pathogens, a chlorine-based disinfection unit can be added.
Solar disinfection units are also applicable because filter effluent is usually practically clear.
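As an illustration of the sludge accumulation arithmetic quoted above, the following minimal Python sketch (not part of the original text; the function name and the rate bounds are our own rendering of the per-capita rate given earlier) computes the accumulated volume for a given number of users and design period:

```python
# Illustrative sketch only: net sludge accumulation in a pit latrine or
# septic tank, using the 0.03-0.06 m3 per person per year range quoted above.

def sludge_volume_m3(users: int, years: float, rate_m3_per_person_year: float) -> float:
    """Net sludge volume (m3) accumulated by `users` people over `years`."""
    return users * years * rate_m3_per_person_year

# Five users over a 5-year design period, at the low and high ends of the range:
low = sludge_volume_m3(5, 5, 0.03)   # 0.75 m3
high = sludge_volume_m3(5, 5, 0.06)  # 1.50 m3
print(f"Accumulated sludge after 5 years: {low:.2f}-{high:.2f} m3")
```

The resulting 0.75–1.50 m³ matches the 5-year figure given in the text and can guide the sizing of the pit or tank and the choice of desludging interval.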


Excreta Management Needs Analysis

As pointed out earlier in the text, the following two key considerations are important when planning and providing EHE services, especially in developing countries: (1) the diversity of EHE challenges and service needs and (2) the evolving nature of most EHE challenges and service needs. Therefore, to be able to carry out an objective excreta management needs analysis, it is necessary to look at the diversity and evolutionary nature of these challenges and service needs. The diversity that characterizes EHE challenges and service needs stems from the factors exemplified in Table 5. The main aspects and influencing factors that characterize the evolutionary nature of EHE challenges and service needs are outlined in Table 6. Tables 5 and 6 emphasize the fact reiterated earlier in the text that EHE challenges and service needs have inherent diversity and that they change with time, which leads to their evolution. The diversity of the challenges and service needs is explained by the fact that at any one time, different individuals and communities face different EHE challenges, and hence have different perceived or actual EHE needs. The EHE challenges faced by individuals or communities change with time. Even when the EHE challenges do not actually change, the way they are perceived changes. Consequently, the EHE needs also change with time. The way the diversity of the EHE needs relates to time is illustrated in Fig. 2.

Fig. 2 shows that as time passes, the diversity of needs in a community decreases, whereas the evolution of the needs first rises rapidly and then falls at a progressively decreasing rate. The decrease in diversity is attributable to several factors. First, the long-term aspirations of individual members of a community are similar, being shaped and influenced by the same factors; even though they start off differently, with time they tend to converge, reducing in number as well as variety. Second, the excreta management technologies regarded as the best and aspired to by most people are few, so over time the EHE needs of different individuals converge toward the same preferences. Third, the engineering and technology curricula of most countries are similar, and the commonalities of the education systems and of the technologies and practices these impose on people exert a strong influence. Additionally, in the global village that the world is becoming, communities and individuals are increasingly influenced by similar factors: people across continents are being socioculturally shaped by common influences propagated by the global reach of the international media and the Internet.

In addition to the observations already made, it is important to note that in Fig. 2 the evolution of the EHE challenges and service needs affects the diversity of the EHE service needs. As such, the diversity of the EHE service needs changes both with, and because of, the evolution of the needs. It is worth pointing out further that most developing countries belong to the left side of the graph in terms of both diversity and evolution of sanitation facilities, whereas developed countries belong to the right side.

Table 5  Sources and causes of diversity of EHE challenges and service needs

Sources and factors of diversity | Characterization of diversity influence and applicable contrasts
Country's development level | Dictates technology availability and accessibility and influences other social–cultural and economic factors; contrasts developed versus developing countries
Expertise availability/expert education/training and availability | Influences effectiveness and sustainability of EHE technologies and practices; contrasts well-trained versus poorly trained EHE service caretakers and low-tech versus high-tech EHE services
Water availability/accessibility | Dictates types of EHE technology and practices; contrasts water-rich versus water-poor countries and well water supplied versus poorly water supplied areas
Awareness and sensitization level of service beneficiaries | Influence applicability and acceptability of some EHE technologies and practices; contrast well-educated versus poorly educated, sensitized versus nonsensitized, and knowledgeable versus nonknowledgeable
Geospatial/relief/terrain factors | Influence applicability of EHE technologies; contrast easily accessible versus relatively inaccessible areas
Climatic/meteorological/seasonal factors | Influence climate-dependent factors; influence applicability of EHE technologies; contrast warm versus temperate areas
Social–cultural factors | Influence applicability and acceptability of EHE technologies and practices; contrast traditional versus modern and socially closed versus open
Economic/financial factors | Influence applicability of EHE technologies and practices; contrast rich versus poor, low-cost versus high-cost, and affordable versus unaffordable
Urbanization level | Influences acceptability, availability, and accessibility of EHE services; contrasts urban versus periurban and periurban versus rural
Governance/political factors | Influence efficacy of EHE service delivery; contrast appropriate and effective versus inappropriate and ineffective governance and empowered versus nonempowered EHE service beneficiaries
Background/historical factors | Depending on past experiences, influence perception of, and hence acceptance of, new EHE technologies and practices; contrast pleasant versus unpleasant experiences

Table 6  Characterization of evolutionary nature of EHE challenges and service needs

EHE aspects or influencing factors that evolve with time | Causes and influencing factors of their evolution
Country's economic level and growth rate/development level | Evolve with time and interactions with other countries; evolve with availability of material and humanpower resources; influence changes in social–economic status of EHE service beneficiaries
Beneficiary's economic status and income level | Evolve with country's economic level and political regime and governance, and with individual's exposure, education, and aspirations
Beneficiary's social–cultural status | Evolves with country's economic level and beneficiary's income level; evolves with beneficiary's exposure, sensitization and education, and aspirations; evolves with openness and exposure of society
Water (and other services) availability and accessibility | Evolve with economic development and with changes in beneficiary's income
EHE service levels and standards | Change with social–economic status of beneficiary, the general standard of living and housing conditions, and levels and standards of other infrastructure and utility services
EHE expertise availability and accessibility | Evolve with a country's general education status, political regime, and governance
Beneficiary's awareness or sensitization level, and general education level | Evolve with general education level and with individual's or community's exposure and interactions with others; change with political regime and governance
Urbanization level | Evolves with population growth, economic development, governance and administrative regimes, and town-planning practices
Location and geospatial characteristics | Evolve with urbanization; are influenced by urbanization, land use and development, and other infrastructure service development
Meteorological/climatic/seasonal factors and influences | Influences of these evolve with the technologies and practices adopted
Background/historical factors | Experiences and perceptions change with time; bad experiences and negative perceptions fade with time, exposure, and interactions with other people
Town planning/urban design | Is influenced by urbanization; evolves with governance and administrative and political regimes
Legislative and regulatory regime | Evolves with the political regime; is influenced by legal expertise availability and awareness of the public
Governance and administrative regime | Evolves with the political regime; is influenced by governance and administrative expertise availability

[Fig. 2 appears here: a schematic plot with time passage on the horizontal axis, showing the diversity of needs of a community decreasing with time while the evolution of challenges and needs first rises and then falls.]

Fig. 2 Variation in EHE needs diversity and evolution in a community. The starting point is the time when comprehensive knowledge on improved sanitation facilities and technologies is introduced in a community.

In the context of Tables 5 and 6 and Fig. 2, various aspects of the approaches used to handle EHE challenges and service needs can be examined. Based on a systematic SWOT (strengths, weaknesses, opportunities, and threats) analysis, the results can be summarized as shown in Table 7. The fact that all the EHE technologies and facilities give priority to the protection of public health and the environment is not surprising, because that is the very essence of EHE. It is, however, remarkable that the facilities are often implemented or proposed for improvement without due consideration for differences in user preferences and the other situation-specific influencing factors discussed earlier in the text.

Table 7  SWOT (strengths, weaknesses, opportunities, and threats) analysis of EHE challenges and service needs handling approaches

EHE diversity-related aspects
Strengths: All facilities and technologies used give priority to protection of public health and the environment.
Weaknesses: Implementation does not consider diversity of problems and needs; some technologies are complicated and demanding on users.
Opportunities: Many sanitation facilities and technologies are available; education and sensitization can reduce diversity of needs in the long term.
Threats: Complicated designs can discourage facility adoption; high costs of some facilities and low affordability slow adoption of some suitable sanitation technologies.

EHE evolution-related aspects
Strengths: Planning standards and regulations require use of highest standard sanitation facilities in urban areas in many countries.
Weaknesses: Follow-up on planning standards is poor and enforcement of regulations is lax; design and installation features of facilities proposed and implemented are fixed without consideration for future changes in user preferences or operating environments.
Opportunities: Technologies for both very low (cheap) and very high (expensive) sanitation standards are available; upgrading is possible for some facility designs.
Threats: In high-density areas, small plot areas limit upgrading.

For example, in Dar es Salaam City, Tanzania, septic tank systems incorporating soakaway pits are specified for use in both low and high groundwater table areas, despite the fact that soakaway pits are not effective in high groundwater table areas, where septic tank systems overflow, creating mosquito-breeding grounds and filthy conditions, contaminating groundwater, and cross-contaminating the piped water supply. It is also remarkable that EHE technologies are often implemented or proposed for improvement without considering that user preferences and operating environments change with time. For example, prescribing the use of pit latrines in periurban areas in developing countries, although well intended, disregards the changes in sanitation needs brought about by urbanization and by changes in the social–economic status of the inhabitants of these areas. Periurban areas tend to urbanize at a fast rate, and their inhabitants tend to acquire the social–economic profiles of urban areas much more easily because of their proximity to, and interactions with, people from the already urbanized adjacent areas.

Coupling Multiple Options to Diverse and Evolving Challenges

The need for diverse options to address diverse and evolving EHE needs has been established earlier in the text. Also, although there are many sanitation technologies and facility types, it has been established that the success of sanitation improvement is more assured with the principal tried-and-tested technologies and facility types. These are (in ascending order of importance) VIP latrines, pour flush toilets, and flush toilets. To cope with the diversity of the needs, the principal technologies and facility types have to be customized to user-specific and local condition-specific requirements in line with the factors that influence the diversity of the needs (Table 5). Customization of the sanitation technologies and physical facilities calls for flexibility in terms of design, service levels and standards, and construction materials and technology. Noncritical design aspects of sanitation facilities can be changed to accommodate local site condition limitations such as a high groundwater table and unsuitable soil types. Construction details can be changed to take advantage of locally available, cheap construction materials and construction skills. For example, in a place where clay soil suitable for making burnt bricks is plentiful and the community possesses brick- and roofing tile-making skills, burnt bricks and roofing tiles can be adopted to replace other construction materials used for walling and roofing. This can not only reduce the costs of the facilities but also enhance the degree of ownership of the sanitation facilities by the beneficiary community. In turn, this can enhance the adoption of the sanitation facilities and technologies. The local availability of the construction materials and construction skills ensures sustainability of the facilities and promotes wider dissemination of the technology. Use of locally available construction materials and construction skills can also contribute toward bringing down some of the social–cultural barriers to acceptance of new sanitation facility types.

Selection of an entry-level sanitation technology or facility must consider the sanitation facilities already in use as well as the considerations outlined in Tables 5 and 6. In the absence of any other criteria, it is reasonable to recommend as an entry-level facility type an improved version of the sanitation facilities already in use. Alternatively, an improved facility type one standard level higher than the one already in use can be selected. In both cases, the need for the selected sanitation facility type to be able to evolve with the changing sanitation needs must be given top priority. Experimental and innovative facilities should not be tested on first-time users of improved sanitation facilities. Where they are found to be fit for application, innovative sanitation technologies and facilities should be introduced with great care and be preceded by intensive public education campaigns.

To keep up with evolving sanitation challenges and service needs, sanitation service levels and standards must evolve as well. This calls for graduating sanitation service levels and standards to allow service beneficiaries to raise their environmental health standards along with their social–economic standards, which is possible if the sanitation technologies are selected, and the facilities designed, on the basis of flexible service levels and standards. One approach that has been found to work is to start with an affordable and feasible sanitation facility such as the VIP latrine. This applies if the higher standard, higher service-level sanitation facilities aspired to are either not feasible or not affordable initially. If the ultimate sanitation facilities aspired to are flush toilets, the house plan must include their rooms right from the beginning. The site plan must include sewers and a septic tank system for serving the flush toilets as well as other sources of wastewater, including kitchen sinks. Initially, only the VIP latrines and a soakaway system for disposal of wastewater from sinks need to be constructed; the toilet rooms and the space reserved for the septic tank system outside are left vacant. The flush toilets and septic tank system can be added in accordance with the original house and site plans once they become affordable. Unless they are needed for emergency use, the VIP latrines can be demolished or converted into storage rooms once the flush toilets are operational. If the design area is served by a central sewerage system, it is advisable to abandon the septic tank system altogether and install a house sewer to collect wastewater from all sanitary appliances and discharge it into the central sewerage system. Obviously, some alterations to the house will be needed to install the flush toilets and their plumbing. If the entry-level sanitation facility selected is the pour flush toilet instead of the VIP latrine, a variation of the approach described earlier in the text can be adopted. On the whole, it is important to recognize that in addition to meeting environmental health needs, sanitation facilities must be amenable to upgrading and retrofitting so that they can evolve with the owner's social status. Planning for sanitation improvement must take this into account and provide sufficient space to accommodate not only current but also future sanitation needs. Plot size specification and utility services provision must also consider this, among other things.
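As a rough illustration of the entry-level selection rule just described, the following Python sketch (our own construction, not from the source; the ladder list, the "unimproved latrine" base rung, and the function name are assumptions) encodes the idea of recommending either an improved version of the current facility or a facility one standard level higher:

```python
# Illustrative sketch only. The upper three rungs follow the article's
# principal tried-and-tested facility types; "unimproved latrine" is a
# hypothetical base rung added for completeness.

LADDER = ["unimproved latrine", "VIP latrine", "pour flush toilet", "flush toilet"]

def entry_level_facility(current: str, next_level_feasible: bool) -> str:
    """Recommend an entry-level facility: an improved version of the facility
    already in use, or one standard level higher if feasible and affordable."""
    rung = LADDER.index(current)
    if next_level_feasible and rung + 1 < len(LADDER):
        return LADDER[rung + 1]
    return f"improved {current}"

print(entry_level_facility("VIP latrine", next_level_feasible=True))
# -> "pour flush toilet"
```

In either branch, the facility selected should remain upgradable, in line with the priority the text places on evolving with changing sanitation needs.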

See also: Biotechnology and Advances in Environmental Health Research; Diffusive Gradients in Thin-Films (DGT): An Effective and Simple Tool for Assessing Contaminant Bioavailability in Waters, Soils and Sediments; Household Energy Solutions in Low and Middle Income Countries; Household Water Treatment and Safe Storage in Low-Income Countries; Power Generation and Human Health.

Further Reading

Cairncross, S., Feachem, R.G., 1983. Environmental health engineering in the tropics: An introductory text. Wiley, New York.
Cilimburg, A., Monz, C., Kehoe, S., 2002. Wild recreation and human waste: A review of problems, practices, and concerns. Environmental Management 25 (5), 587–598.
Derksen, J.G.M., Rijs, G.B.J., Jongbloed, R.H., 2004. Diffuse pollution of surface water by pharmaceutical products. Water Science and Technology 49 (3), 213–221.
Erickson, B.E., 2002. Analyzing the ignored environmental contaminants. Environmental Science & Technology 1, 142A–145A.
Feachem, R.G., Bradley, D.J., Garelick, H., Mara, D.D., 1983. Sanitation and disease: Health aspects of excreta and wastewater management. World Bank Studies in Water Supply and Sanitation 3. Wiley, New York.
Giesy, J.P., Kannan, K., 2002. Perfluorochemical surfactants in the environment. Environmental Science & Technology 1, 147A–152A.
Kolpin, D.W., Furlong, E.T., Meyer, M.T., et al., 2002. Pharmaceuticals, hormones, and other organic wastewater contaminants in U.S. streams, 1999–2000: A national reconnaissance. Environmental Science & Technology 36 (6), 1202–1211.
Mara, D.D., 2004. Domestic wastewater treatment in developing countries. Earthscan, London.
Mara, D.D., Clapham, D., 1997. Water-related carcinomas: Environmental classification. Journal of Environmental Engineering 123 (5), 416–422.
Mara, D.D., Feachem, R.G.A., 1999. Water- and excreta-related diseases: Unitary environmental classification. Journal of Environmental Engineering 125 (4), 334–339.
Moeller, D.W., 1997. Environmental health. Harvard University Press, Cambridge, MA.
Okun, D.A., 1996. From cholera to cryptosporidiosis. Journal of Environmental Engineering 122 (6), 453–458.
Prüss-Üstün, A., Corvalán, C., 2006. Preventing disease through healthy environments. World Health Organization, Geneva.
Suidan, M.T., Dionysiou, D.D., Sorial, G.A., 2002. Editorial: Why MTBE and gasoline oxygenates. Journal of Environmental Engineering 128 (9), 772.

Relevant Websites

http://www.worldbank.org/ (The World Bank)
http://www.who.int/en/ (World Health Organization)

Environmental Health Ethics in Study of Children
LE Knudsen, PW Hansen, and M Pedersen, University of Copenhagen, Copenhagen, Denmark
DF Merlo, Infrastruttura Ricerca e Statistica, IRCCS - Arcispedale S. Maria Nuova, Reggio Emilia, Italy
© 2017 Elsevier Inc. All rights reserved.

Children and Environmental Health: Special Concerns

Differential Exposure and Susceptibility of Children

In biomedical and environmental health research involving children, it is important to acknowledge that children are not small adults, in relation to both their everyday behaviors that affect exposure and their susceptibility. Rapid growth and development of the brain and nervous system, along with anatomical and physiological changes in several other organs and organ systems, differentiate children from adults when studying exposure and susceptibility to environmental factors. Children, especially young children, are particularly dependent on their environment and on their caregivers. Children also have a longer life span in which to express illnesses that may be related to exposures occurring early in life. Studies suggest that exposure to specific environmental factors in early life can increase the risk of adverse health effects that are present at birth or appear later in life (Box 1).

Box 1 Examples of special childhood vulnerability to environmental exposures

1. In Minamata Bay in Japan, methylmercury waste resulted in massive exposures of unborn and newborn children, producing an epidemic of neurodegenerative disease in the area; children, and later adults, were seriously affected by neurologic effects of methylmercury damage to the central nervous system (CNS).
2. Exposure to radiation as a result of the Chernobyl nuclear power plant accident, in which massive amounts of radioactive materials were released into the environment and large numbers of individuals living in Belarus, Russia, and Ukraine were exposed to radioactive iodines, primarily 131I. Iodine-131 concentrated in the thyroid gland of residents of the contaminated areas, with children and adolescents being particularly affected. In the decade after the accident, a substantial increase in thyroid cancer incidence was observed among exposed children in the three affected countries, supporting an association between pediatric thyroid cancer incidence and radiation exposure to the thyroid gland.
3. Consumption of alcohol during pregnancy is associated with a great potential for developmental defects in the unborn child. The critical target organs are the central nervous system and the liver. Fetal alcohol syndrome, fetal alcohol effects, partial fetal alcohol syndrome, alcohol-related neurodevelopmental disorders, static encephalopathy (alcohol exposed), and alcohol-related birth defects are all names for a spectrum of disorders caused when a pregnant woman consumes alcohol.
4. Smoking during pregnancy increases the risks of perinatal mortality and spontaneous abortion, lowers mean birth weight, and significantly influences the risk of premature delivery and placental function. Low birth weight, thinness, and short body length at birth have been associated with increased rates of cardiovascular diseases and noninsulin-dependent diabetes in adult life. Secondhand smoke causes health effects in children, including ear infections, respiratory infections and symptoms, more frequent asthma attacks, and sudden infant death syndrome.
5. Exposure to both outdoor and indoor air pollution may cause significant increases in respiratory symptoms and their intensity in sensitive persons.
6. Most substances, including environmental pollutants, food-generated toxicants, nicotine, alcohol, drugs, and bioaccumulating substances such as polychlorinated biphenyls (PCBs), lead, and cadmium accumulated in the body of the mother, may be redistributed during pregnancy and breastfeeding, thus leading to exposure of the fetus and the breastfed child.
7. Exposure to certain environmental pollutants, such as PCBs, plasticizers, perfluorinated compounds, and pesticides, during prenatal development may cause intrauterine growth restriction and delayed effects on the offspring's reproductive and immune system function.
8. Prenatal exposure to carcinogens may be associated with elevated cancer risk owing to the higher susceptibility of children.
9. Thalidomide was sold during the 1950s and 1960s as a sleeping aid and, to pregnant women, as an antiemetic to combat morning sickness and other symptoms. It was later (during 1960–61) found to be teratogenic to fetal development.
10. PINCHE (a European project assessing environmental health risks to children) concluded that outdoor air pollutants (especially traffic-related), environmental tobacco smoke, allergens, and mercury were high priorities with an urgent need for action. Brominated flame retardants, lead, PCBs and dioxins, ionizing and solar radiation, and some noise sources were classified as being of medium priority. Some toxicants were given low priority, based on few exposed children, relatively mild health effects, or an improving situation due to past policy measures.
11. Safety factors used in risk assessment, for extrapolating findings from experimental studies and for taking into account variation in vulnerability between individuals, may include an extra factor to account for the heightened vulnerability of children.

Change History: December 2016. LE Knudsen revised the text in Box 1, revised Figure 1, and added a new section, "Dynamic and broad consent". M Pedersen and DF Merlo reviewed the revised text and suggested changes.



Although children may share the same environment as adults, they can be more or less exposed than adults to a variety of chemicals and environmental substances because of behavioral and physiological differences. Although exposure during fetal life and infancy is strongly dependent on the exposure of the pregnant or lactating mother, exposure during childhood and adulthood is also influenced by differences in nutritional and energy requirements, activity level, and the location where activities take place. Children, in general, have a higher daily intake per unit body weight of food, water, and air, along with matter not meant to be eaten (e.g., sand or dirt), than adults (Table 1). It follows that children cannot be considered, and should not be treated, as one group, given the influence of differences in physiology and physical activity on the exposure pattern. It is generally recognized that there may be windows of vulnerability, short periods of human development when toxic exposures may substantially alter organ structure or function. Grouping into age groups, for example, infants (≤2 years of age), childhood (>2 to ≤14 years of age), and adolescence (>14 years of age), can be useful as a guide for the development of exposure scenarios and baseline values for children. Moreover, during infancy and childhood, substantial maturational changes take place in tissue composition, size, and function. The rates of uptake, distribution, metabolism, and excretion of toxicants, as well as the immune system, DNA repair processes, and cell proliferation and differentiation, differ in children compared with adults. Potentially vulnerable systems in infants and young children include the endocrine, reproductive, immune, respiratory, visual, and nervous systems. Anatomical, biochemical, and physiological differences from adults are more pronounced the younger the infant. These physiological changes affect children's exposures and their susceptibility to certain health effects. The age-related changes in anatomy and behavior are diverse even at a specified age.

Table 1  Developmental stages and their specific characteristics concerning exposures, vulnerability, and preventive interventions

Preconception
Developmental characteristics: Lack of awareness of gonadal exposure.
Exposure: All adverse environmental and occupational exposures.
Vulnerability: Potential for genotoxicity.
Preventive interventions: Regulations and control on possible sources; information for adolescents and the general population.

Pregnancy
Developmental characteristics: Pregnant women have on average a higher intake of food and air; physiological differences; most substances cross the placenta.
Exposure: All adverse environmental and occupational exposures and ad hoc diagnostic investigations.
Vulnerability: Potential for teratogenicity and for disturbance of growth, development, and gestational duration; potential for immediate and delayed effects.
Preventive interventions: Regulation on occupational and other environmental exposures during pregnancy and before reproductive age; health information on preventive measures to couples; health information to pregnant adolescents.

First three years
Developmental characteristics: Oral exploration, hand-to-mouth behavior, beginning to walk, and stereotyped diet.
Exposure: Food (milk and baby foods), air (indoor and outdoor), tap/well water, and mattresses/carpets/floors; environmental noise and radiation; lifestyle, including physical activity and, e.g., baby swimming in pools disinfected with chlorinated compounds.
Vulnerability: Potential for damage to brain (synapses) and lungs (developing alveoli), allergic sensitization, and injuries.
Preventive interventions: Regulations and control on air pollution; provision of safe water and adequate sanitation; anticipatory advice for injury prevention (parents and caregivers).

Preschool and school-age child
Developmental characteristics: Growing independence and playground activities.
Exposure: Food (milk, fruit, and vegetables), tap/well water, and air (indoor and outdoor); environmental noise and radiation; lifestyle, including physical activity.
Vulnerability: Potential for damage to brain (specific synapse formation and dendritic trimming) and lungs (volume expansion), and injuries.
Preventive interventions: Regulation and control on outdoor and indoor pollutants and food; information for parents, schoolteachers, and children.

Adolescence
Developmental characteristics: Puberty, growth spurt, risk-taking behavior, and youth employment.
Exposure: Food (any), air (indoor and outdoor), and water; occupational exposure; environmental noise and radiation; lifestyle, including physical activity.
Vulnerability: Potential for damage to brain (specific synapse formation and dendritic trimming) and lungs (volume expansion), and injuries.
Preventive interventions: Regulation on child labor, injury prevention, and tobacco smoke; health information and regulations in recreational areas for young people.


Some diseases are specific to children, and children may respond differently to common illnesses. Healing and compensation are particularly effective in children, and yet adverse environmental exposure and illness in childhood may have far-reaching consequences. Major improvements in the treatment of childhood diseases (e.g., vaccination), nutrition, water quality, and many other factors have significantly improved survival and lowered childhood mortality over recent decades. At the same time, increased incidences of, for example, certain birth defects, cancer, asthma, allergy, obesity, attention deficit hyperactivity disorder (ADHD), depression, and learning disabilities have been reported in developed countries. In general, understanding of the complex relationships between environmental exposures and children's health is limited and not easily accessible to researchers and administrators. Data on fetal, newborn, and childhood environmental exposure, and on short-term and delayed health effects, are especially scarce. There is therefore a need for environmental research that includes assessment of early life exposure and the participation of children. As study subjects, the parents/caretakers as well as the children participating in research have the right to know the aim of the research they participate in and the consequences of the research for their life and health, and the scientists carrying out the research must follow legal requirements and take ethical aspects into account. Ethics research guidelines promote the following four basic principles of biomedical ethics: autonomy, beneficence, nonmaleficence, and justice, and define the responsibilities of researchers to protect research participants and guarantee their rights and safety. Important ethical issues include informing participants and obtaining their consent to participate. Person-identifiable data must be protected. Follow-up and protection of samples, and of information derived from samples, should be discussed in the context of biobanks, where children obtain individual rights when they become adults. There are practical issues in how best to present information on a research study to study participants, especially to children, in a form they can understand. In planning the research, consideration must be given to the study population size, strategies for using noninvasive rather than invasive tissue sampling where possible, minimization of bias, harm, and fear among study participants, transparent protocols, and communication of the results.

Differential Autonomy

Autonomy is related to respect for the person and is commonly understood as his/her right to know or not to know, and as his/her freedom in making decisions (to participate or not participate in, or to withdraw from, the research). For persons with diminished capacity for self-determination (including children who, due to age-related physical, mental, and psychological development, may not be fully capable of understanding the research issues, benefits, and risks), the right to be protected is in line with this principle. It requires a written informed consent, which in turn can only be based on adequate and relevant information given to potential study participants. Indeed, only truly comprehended information can guarantee a free-will decision made with an understanding of all research implications (intentionality and voluntariness). Planning research on children inevitably requires that special attention be devoted to their capabilities and development. The ability to make independent decisions (understanding and competence) is strictly connected to the process of thinking. Given the variation in development during infanthood, childhood, and adolescence, understanding and determining objectively the child's capacity to understand complex research issues is a big challenge for researchers. Answering the question "at what age are children able to make decisions consciously and independently?" requires good knowledge of the development of abstract thinking. In early childhood, thinking does not constitute a separate and independent cognitive activity but is merely subject to practical acts (i.e., concrete actions taken by the child). Four age-related developmentally different phases can be identified. From a few months after birth up to 2½ years of age, motor-sensorial intelligence is predominant. From 2½ to 6 years of age, a child thinks by means of images and focuses on one directly perceived aspect of a given situation (preoperational intelligence). Concrete notions start appearing at the age of 6–11 years, resulting in what is called orientation in reality. Later in life, activities become intentional and planned. At the age of 12 years, abstract thinking appears, and it is built up by the age of 15 years. It enables a child to give independent opinions and to perceive a multidimensional situation. With younger children (aged <15 years) who are not able to fully understand all aspects of the research, the principle of respecting their way of understanding should be taken into due consideration. If a child perceives the research situation as negative, resulting in negative emotions, this should be respected and taken into account. Although, for the reasons mentioned earlier in the text, autonomy is not within the reach of small children, the opinions of older children are generally asked for and taken into account by obtaining assent. The parents/caretakers will in most cases give consent for the participation of themselves and their children in environmental research. Besides autonomy, most of the guidelines promote three other basic principles of biomedical ethics:

Beneficence/Nonmaleficence

The principles of beneficence and nonmaleficence imply the obligation to maximize possible benefits, protect participants from potential/predictable harm, and secure their well-being. The overall aim of the research should be of benefit to society and to the understanding of relationships needed for the protection of future generations.


Justice

Justice addresses the fairness of the distribution of research benefits and risks. Only reasons strictly related to research objectives, and not easy availability or other population-specific characteristics (e.g., ethnic minorities, the socioeconomically less advantaged, gender, etc.), should define the criteria for selection of participants. In environmental as well as therapeutic research, justice is directly linked to the validity of the study and to the possibility of extrapolating research findings from the study sample to the target population. Research with children should occur only when it cannot be performed on adults, when there is a need to know, and when the results of research with adults cannot be extrapolated to children. However, modern biomedical research, including molecular and genetic epidemiology with its complex designs, is difficult to comprehend even for an adult. The fact that it is not simple to formulate the information contained in highly technical research protocols in a truly understandable form for the target groups makes the whole issue of autonomy and informed consent even more important. Since informing participants and obtaining their consent to participate are crucial in research, researchers must carefully consider the changing conditions of health and research techniques in the information process used to obtain informed consent. Follow-up and protection of data (samples and information derived from samples) should be discussed in the context of biobanks, where children obtain individual rights when they become adults.

Children as Study Persons

Nowadays, it is clear that research with children and on children is necessary within both the clinical and environmental fields, to provide age-specific and relevant data regarding the efficacy and safety of medical treatments and regarding the assessment of risk from unintended or accidental environmental exposure. The inclusion of children in epidemiological studies and clinical trials was long avoided from the 'do no harm' perspective, resulting in a lack of appropriate data for risk assessment and for the dosing of medicinal compounds. In this context, the stakeholders are many, including children and their parents, physicians and public health researchers, and society as a whole, with its ethical, regulatory, administrative, and political components (Fig. 1, adapted from Pedersen et al.). Seeking consent/assent for participation in research is required by the application of traditional moral theory or principles such as those reported in the Code of Medical Ethics.

Fig. 1 Ethical considerations may be raised, by various groups of stakeholders, at different critical steps of research that includes children.


1. Seeking a person's consent respects their basic right to self-determination (Autonomy). Individuals are best placed to determine what is in their best interest, and the only justification for infringing this right is to prevent harm to others.
2. Obtaining consent confers benefit by encouraging the active participation of individuals in investigations and treatments that are intended to restore their health (Beneficence).
3. Obtaining consent protects patients from the physical and psychological harms that may occur as a result of illness or its treatment (Nonmaleficence).
4. Obtaining consent involves treating others in the way in which people would expect to be treated themselves. The universal need to obtain consent also involves treating people justly (Justice).
5. Society and social structures are essential to the existence of the individual. Obtaining consent will broaden the views of the community and its altruism (Community Spirit and Solidarity).

Each research project to be conducted in human beings must be carefully reviewed and approved by a research ethics committee (e.g., a Regional Ethics Committee, REC). Since it is unethical to carry out studies that cannot give scientific answers, it is important that these committees consider the following questions:

1. Does the study have a real question or questions?
2. Is the study designed in the best possible way to answer the questions (test the hypotheses)?
3. Will the study work in practice (feasibility)?
4. What are the risks and burdens for the research subjects involved?
5. Are the results of the study to be published in peer-reviewed journals?
6. Are data protection and the eventual future use of data and spare samples properly addressed?

When the REC is convinced that the study is properly designed and that the risks for participants will be null or acceptable (low or minimal, considering the benefits), the study can be approved. A REC should consist of members with different backgrounds, including pharmacists, statisticians, an ethicist, nurses, and, of course, medical doctors. The committee should have members with experience in pediatric research.

Informed Consent

Informed consent is the process by which an adequately informed person can participate in choices about his/her health care and participation in research. Its purpose is to enable potential participants to make informed choices about themselves and to safeguard their own best interests, in full knowledge of the risks versus the potential benefits. It originates from the legal and ethical right of the patient to direct what happens to his/her body, and from the ethical duty of the medical doctor to involve the patient in his/her health care. Consent is required for all medical care; for preventive, diagnostic, or therapeutic measures; and for research. Usually it is necessary to obtain express consent, in which an individual is specifically asked to consent to the procedure in question. Consent may be implied when an individual presents himself/herself, or is presented, for a procedure to which general agreement has been obtained or implied. Nevertheless, information about benefits and risks is mandatory. Consent may be written or verbal. Written consent provides a record that the procedure has been discussed but may have no more legal force in some European countries than verbal consent. Nevertheless, most European countries prefer written consent, and this is the case for clinical trials of medicines. There is an agreement, already respected in most countries, that whenever possible children should give their own opinion, in the form of written consent, on the studies in which they participate. However, children, especially unborn, newborn, and small children, are clearly unable to consent to research by themselves. Hence, they are dependent on the decisions of their parents or other legal guardians. Even older children, who can already express their own opinions, are naturally influenced by the people they trust the most. Obtaining informed consent from a child, according to the available guidelines, necessarily involves the child's assent and parental (or legal guardian's) consent (proxy consent). In the case of very young children who are unable to assent, parental consent is of course needed in the child's best interest. This is a challenge for researchers, who are responsible for ensuring informed consent. The notion of proxy consent has been rejected by many ethicists and legalists on the basis of the principle that a 'true consent' cannot be given by another person. However, consensus exists that permission from the guardians is in any case necessary when studying children. Specifically, informed permission should be obtained from a capable adult responsible for the child's participation in both therapeutic and environmental research. On top of this, it is generally recommended to seek willingness to participate (assent) from children, using an age-appropriate information process. The age of the child research participant is therefore critical in defining the appropriate approach and tools (e.g., information leaflets and assent forms) to obtain (1) the parents' (guardians') informed permission, (2) the child's assent, and (3) informed consent from older minors. In any case, refusal by a child to participate in a given research project should always be respected and taken into account. If a child under the legal age of consent, which may vary between countries, gives assent to participate in research, the parents' informed permission should be obtained, because parents know their child best and therefore can foresee the

consequences of participation for the child. There is in fact a chance that children particularly accustomed to obedience may have difficulties expressing a negative attitude to the researchers in an unfamiliar, possibly intimidating, environment (e.g., a hospital or research center). To prevent such a situation, children could give their assent to their parents, possibly at home, in a more comfortable environment; on the next day, the parents would give the answers to the researchers. According to this procedure, the enrollment of a child is possible only after the child's assent and the parents' informed permission have been obtained. This rule should be applied with the exception of the situation in which parents do not give permission for their child's participation in therapeutic research. In this case, deontological rules permit the researcher to resort to the guardianship court to authorize the child's participation in research that is either life saving or expected to give great benefits. Because children cover a broad age range (i.e., 0–18 years), obtaining dyad consent for long-term studies must be viewed as a continuous process, in which children recruited into studies at a young age or before their birth (such as in studies where cord blood is collected and stored) are asked for their assent repeatedly as they grow older, until they reach the legal age of consent. It is important to realize that autonomy is not simply determined by age. Religious, cultural, and ethnic differences may play an important role. Christian anthropology, for example, recognizes an embryo as a human being from conception, because of developmental continuity, and rejects the possibility of recognizing such status only at some later period of fetal development. According to certain philosophical conceptions, the status of a human being is acknowledged only when the full capacity of making a free decision is reached. However, when a child reaches such full capacity is not easily and objectively assessable, and it may be easier to rely on sociodemographic, age-based definitions of the developmental stages, which are, to some extent, subjective. It follows that the ability to give a truly informed consent can be expected from a child aged approximately 13–16 years. This argument is relevant nowadays in research on biomarkers and genetic research on children, given the growing number of biological specimens from people of various ages stored in specifically dedicated repositories (biobanks) across the world. Obtaining such consent from a capable child may or may not require informed permission from parents or guardians, depending on the local legislation, culture, ethnicity, religion, and the decision of the local or national RECs. Last but not least, a free-will decision is clearly communication-dependent. Obtaining consent requires that the relevant information concerning the research (purposes, benefits, risks, right to withdraw, etc.) be effectively transferred from researchers to participants so that the latter can make their decision. Unfortunately, there is evidence that this is not easily achievable, with either children or adults. Research terminology can be a communication barrier between potential participants and researchers. In addition, environmental research is necessarily multidisciplinary, and therefore various branch-specific jargons and technicalities appear in research documents.
Effective communication of complex concepts requires, whenever possible, the replacement of scientific and technical terms with others that are used in everyday life by the majority of the population. Descriptions of research projects can never be fully exhaustive. The Nuffield Council on Bioethics holds that, even with the best efforts, fully informed consent remains an unattainable ideal, and calls for genuine consent instead of complete consent. This puts extra weight on the ethics and honesty of the scientists carrying out the research: it is their task to make every effort toward the best possible understanding between the scientists and the research subjects. The most important goal of informed consent is that the participant has the opportunity to be an informed participant in the decision to participate or not participate in the research.

Fully valid informed consent has four components:

1. Competence: the person(s) giving consent must be deemed mentally competent to do so. In the case of research on children, the researcher has responsibility for determining whether or not the parents/legal representatives are in a fit state of mind to give consent.
2. Information: sufficient information must be given to the person to make an informed choice. It is through communication and the information sheet that the level of information provided is determined. The information sheet, prepared by the researcher, is assessed by the REC, although there are few guidelines as to a minimum standard of content.
3. Understanding: the person giving consent must be considered capable of making a reasoned choice. The researcher obtaining consent must judge the level of understanding of the patient(s).
4. Voluntariness: the person giving the consent must do so voluntarily and must recognize that withdrawal from the study is possible at any time without this affecting care.

It is generally accepted that the information given in the informed consent process, through the aid of the research information sheets, includes a discussion of the following elements:

1. An invitation to take part in the study.
2. A statement that the study involves research, a clear and understandable explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed (including random allocation to experimental or control treatments), and identification of any procedures that are experimental.
3. A description of any reasonably foreseeable risks or discomforts to the subject.
4. A description of any benefits to the subject or to others that may reasonably be expected from the research.
5. A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject.
6. A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained.


7. An explanation (for research involving more than minimal risk) as to whether there are any treatments or compensation if injury occurs and, if so, what they consist of, or where further information may be obtained (a risk is considered 'minimal' when the probability and magnitude of harm or discomfort anticipated in the proposed research are not greater, in and of themselves, than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests).
8. An explanation of whom to contact for answers to pertinent questions about the research and research subjects' rights, and whom to contact in the event of a research-related injury to the subject.
9. A statement that participation is voluntary, that refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and that the subject can withdraw at any time without penalty or loss of benefits to which the subject is otherwise entitled.
10. A description of the funding of the study and whether the lead investigator is being paid for enrolling subjects into the study.

It is essential that consent forms be written in plain language that the research subject can understand. Children and adults can agree (assent) or disagree (dissent) to participate in a clinical or environmental study. However, a child's 'consent' can be obtained only when they reach the age of maturity. Until then, they can be enrolled in a study by obtaining permission from their parents or legal guardians. The fundamental ethical principles governing medical research in humans have been available to the medical community since the Nuremberg Trial and, specifically for children, since 1989. In addition, the consent form should not contain any exculpatory language. That is, subjects should not be asked to waive (or appear to waive) any of their legal rights, nor should they be asked to release the investigator, sponsor, or institution from liability for negligence. Although consent is often perceived as a one-off event, it is better regarded as a continuing process. Moreover, studies involving children cannot rely on the conventional concept of 'informed consent,' which implies that a subject is fully capable of making an informed choice. Any study must seek the parents' and the child's agreement to research participation until the child is capable of making autonomous decisions. Therefore, to obtain a 'real' consent (child informed assent plus parental informed permission), it is critical to develop and correctly use adult- and child-specific tools (e.g., two information sheets may be required: one for the child and one for the parents/guardians, both addressing the study aims, potential risks, and potential benefits). The key word is 'informed.' An informed consent can be given only when truly comprehended information can guarantee a free-will decision made with an understanding of all research implications (risks, benefits, rights to opt out, etc.). The ability to give a truly informed consent can be expected from a child aged approximately 13–16 years. Unfortunately, there is no agreement on age, and there are differences between European countries, with the minor's will being considered necessary, or prevailing over that of the parents or legal representatives, at an age that ranges from 7 to 17 years.
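As a purely illustrative aid, the following Python sketch (our own construction, not from the source; the element labels are paraphrases of the ten points above, not a standard or regulatory vocabulary) checks a draft information sheet against the listed consent elements:

```python
# Hypothetical checklist sketch: the labels below paraphrase the ten consent
# information elements listed in this article; they are not an official API.

REQUIRED_ELEMENTS = [
    "invitation", "purpose_and_procedures", "foreseeable_risks",
    "expected_benefits", "alternative_procedures", "confidentiality",
    "compensation_for_injury", "contact_points",
    "voluntariness_and_withdrawal", "funding_disclosure",
]

def missing_elements(sheet_sections: set) -> list:
    """Return the consent elements not yet covered by an information sheet."""
    return [e for e in REQUIRED_ELEMENTS if e not in sheet_sections]

draft = {"invitation", "purpose_and_procedures", "foreseeable_risks"}
print(missing_elements(draft))  # the seven elements still to be drafted
```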

Informed Assent

As discussed in the previous section, children, especially unborn, newborn, and small children, are unable to consent to research by themselves and are dependent on the decisions of their parents or other legal guardians, while even older children are naturally influenced by the people they trust the most. Obtaining informed consent from a child therefore necessarily involves the child's assent and parental (or legal guardian's) consent (proxy consent); in the case of very young children who are unable to assent, parental consent alone is needed in the child's best interest. This means that there is a consensus agreement that a 'consent dyad' is required to conduct research on children, which is a challenge for the researchers responsible for ensuring informed consent. Informed assent means a child's agreement (acquiescence) to research procedures in circumstances where he or she is not legally authorized, or lacks sufficient understanding, to give consent competently. When blood sampling involves a child aged 7 years or older, permission must be obtained from the parent or legal representative and assent must be obtained from the child. Each institution (hospital, university, etc.) has its own responsibility to determine the necessity of obtaining assent from these children. The regulations also state that age, maturity, and psychological state should be considered in determining whether children are capable of assenting to the medical procedure. Assent should include the following elements:

1. Helping the patient (child) achieve a developmentally appropriate awareness of the nature of his/her condition.
2. Telling the patient what he or she can expect with tests and treatment(s).
3. Making a clinical assessment of the patient's understanding of the situation and the factors influencing how he or she is responding.
4. Soliciting an expression of the patient's willingness to accept the proposed care.

Regarding this final point, note that no one should solicit a patient's view without intending to weigh it seriously.

Proxy Consent

There is an agreement, already respected in most countries, that whenever possible children should give their own opinion, in the form of written consent, on the studies in which they participate.

Table 2 Schematic representation of different approaches to the need for informed permission and assent/consent according to different dominions

Developmental stage (demography, age in years) and possible forms of permission to research:
Prenatal (fetus, age 0): parents (b) give informed permission; the fetus gives implicit assent.
Infancy (age 0–2): parents (b) give informed permission; the infant gives implicit assent.
Childhood (age >2–14): parents (b) give informed permission; the child gives assent.
Adolescence (age >14): the child gives assent/consent.
Adulthood (a): full capability of free decision (age, maturity).

Dominions:
Philosophy: status as a human being is recognized only when full capability of free decision is reached.
Religion: status as a human being is recognized from conception on.
Ethics: ethical regulators' decisions apply to all ages.
Law: status as a human being, and the requirement/possibility of individual consent, depend on national legislation.

(a) Adulthood: usually but not always legally defined.
(b) Parents: mother or father or legally authorized person.
However, the notion of proxy consent has been dismissed by many ethicists and legal scholars on the basis of the principle that a 'true consent' cannot be given by another person. Nevertheless, consensus exists that permission is in any case necessary when studying children (Table 2, from Merlo et al.).

Biobanking

Biobanks are crucial for biomedical research. By including biological specimens such as blood, plasma, saliva, and purified DNA, they offer a unique opportunity to connect measurable molecular markers and lifestyle factors with identifiable individual health data. For example, the Icelandic Biobank Act defines a biobank as a "collection of biological samples, organic material from human beings, alive or dead, that is permanently preserved." Definitions such as this include three points which, when combined, create possible concern, especially in relation to children: (1) the identifiability of DNA samples, (2) the possibility of connecting them with health data, and (3) the permanent preservation of specimens. The management of such permanently stored, combined data, its implications for society, and the general ethical questions of how autonomy and individual rights are weighed against power and financial gain are not simple issues. It is self-evident that individual interests and societal and financial interests do not agree in many instances. Whose interests should come first? According to the Declaration of Helsinki, it is clear that it is always the individual who should be respected above everything else. It follows that biobanking faces four basic aspects encountered in biomedical research: the protection of individual rights, the right to know, the right to opt out, and the privacy of participants. Children have their whole future ahead of them, but they have no influence on the direction of research or on the use of samples and data. Identifiable genetic data are ethically in a special category because they can be permanently attached to a person. Use of such data may have long-term consequences years after the data are generated, and they should thus be treated with more care than other types of health data. Stored samples from children will, no doubt, be a part of biobanks, especially any national biobanks. Thus, all of the issues concerning the planning, development, management, and use of biobanks also concern children. In addition, as discussed earlier in the text, children and unborn children are more vulnerable physically and mentally than adults, having less capacity to understand complex issues and long-term consequences, a capacity that is totally lacking in babies, newborns, and unborn children. Furthermore, children have to live in the future that people create. Children cannot themselves choose to be born, or choose their destiny when they are small; this is in the hands of their parents, and even more in the hands of society, through the legislation defining, for example, the value of life. Issues of suffering and, by contrast, the financial costs of screening and care are often discussed in connection with serious hereditary conditions. What are discussed much less are the ethical costs and what kind of society is preferable. Children largely adopt the values and ways of the society they live in, especially through parents, school, and the media. It is, of course, not so much a question of what values are discussed and taught as of what values are actually practiced. Indeed, children can be considered 'autonomous subjects' at various ages according to the 'traditions' of the 'moral community' they live in. The development of children varies greatly, but all small children have very limited capacity to handle difficult issues.
It is partly a question of intelligence and cognitive development, but life experience should not be overlooked. Even in early adulthood, people are not fully developed in their inner values and needs. The minds of young children are easily adjusted to the will of beloved, respected, or sometimes feared adults. When fully grown and psychologically mature, people may even hold values that contrast with those of their early years, and this may change their earlier views on belonging to a biobank and on the use of their samples.


Because there is as yet no experience of the long-term storage and use of biobanks, it is difficult even for adults to consider the implications and what may be expected.

Dynamic and Broad Consent

During the last decade there has been an increase in the establishment of biobanks containing materials from newborns and children for further research. The traditional form of consent, given for children by their parents, must be obtained from the participants every time their data or biomaterial are used in a new project; this is time consuming and requires renewed approval by the ethics committee. Alternative ways of obtaining consent are therefore being discussed. Broad consent is consent to a range of research questions within certain limits, including upcoming research questions. Dynamic consent is an alternative to broad consent that places the participants at the centre: it is an ongoing process, facilitated by modern communication strategies, to inform, involve, and obtain consent for every research question based on biobank resources, thus giving the participants more control over "their" data and access to information about projects. Dynamic consent is also considered a way of informing participants about results that become available many years after sampling. Broad consent and dynamic consent are being debated worldwide with regard to ethical concerns. Broad consent is criticized as pragmatic, paternalistic, top-down governance that does not respect the autonomy of the participants. The need to re-contact participants for new projects is expensive and time consuming, may be difficult, and can result in high drop-out rates. Dynamic consent, on the one hand, is considered a better alternative because it can increase recruitment through greater user participation, with participants becoming more committed to research interests and altruism. Participants manage their own consent preferences; drop-out and the need for anonymization of data are reduced; recruitment is more streamlined and re-contact more efficient; consent can travel securely along with the data and samples when they are shared with third parties; and the approach helps educate the public, facilitates innovative research, and sustains public confidence in the research enterprise. Others argue that dynamic consent leads to a larger amount of information, making it challenging for participants to distinguish relevant from irrelevant information. Unsolicited misuse related to employment, education, and insurance may also arise.
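To make the mechanics of dynamic consent concrete, the following minimal sketch (a hypothetical illustration in Python, not a description of any existing biobank system; all names are invented) models per-project consent decisions that a participant, or a parent acting as proxy, can grant, refuse, or revoke over time:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentDecision:
    project_id: str
    granted: bool
    decided_at: datetime

@dataclass
class Participant:
    participant_id: str
    # One decision per research question/project, revisable over time.
    decisions: dict = field(default_factory=dict)

class DynamicConsentRegistry:
    """Toy model: every new research question needs a fresh, explicit decision."""

    def __init__(self):
        self._participants = {}

    def enroll(self, participant_id):
        self._participants[participant_id] = Participant(participant_id)

    def record_decision(self, participant_id, project_id, granted):
        # The participant (or a parent acting as proxy) is asked again
        # for each project, and may change the answer later.
        decision = ConsentDecision(project_id, granted, datetime.now(timezone.utc))
        self._participants[participant_id].decisions[project_id] = decision

    def may_use(self, participant_id, project_id):
        # Unlike broad consent, the default is refusal: a project may use
        # the sample only if an explicit grant is on record.
        decision = self._participants[participant_id].decisions.get(project_id)
        return decision is not None and decision.granted

    def withdraw(self, participant_id):
        # Withdrawal revokes every outstanding permission at once.
        for decision in self._participants[participant_id].decisions.values():
            decision.granted = False

registry = DynamicConsentRegistry()
registry.enroll("P001")
registry.record_decision("P001", "asthma-cohort-followup", granted=True)
print(registry.may_use("P001", "asthma-cohort-followup"))  # True
print(registry.may_use("P001", "genome-wide-study"))       # False: never asked

The design point the sketch illustrates is that, unlike broad consent, no project is authorized by default: each new research question requires an explicit, revocable decision recorded against it.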

Conclusions

In addition to the general requirements for good research on humans (necessity, scientifically sound planning with as few research subjects as possible, potential benefits overriding potential risks, and approval by an independent REC) and the vulnerability of children (only research directly benefiting children or, with minimal harm, the group they represent), the following key points must be considered when planning research on children:
1. children should always be respected as persons;
2. an abbreviated description of the research, used to obtain research participants' informed permission/assent/consent, must be written in language understandable to the age of the target group(s); in studies involving children at least two information sheets are required: one for the children and one for their parents/guardians;
3. enough time should be given for parents and children to discuss the research and consider their participation in a friendly environment;
4. assent/consent from the child and parental permission should be sought whenever possible by using appropriate tools;
5. refusal to participate by a child should be respected;
6. the presence of parents during the interventions should be ensured where practically possible, for the child's comfort and to ensure that the rights of the child are looked after;
7. special attention has to be paid to the most vulnerable children (institutionalized, homeless, impoverished, and dying children);
8. children from different countries or belonging to different ethnic, social, or religious groups should be treated with the same respect;
9. follow-up tools must be considered by researchers to monitor long-term effects in study participants, taking incidental findings into account;
10. in the evaluation of studies including children, a pediatrician should always be involved in the REC.
Research with children raises specific questions about the study protocol that are to be handled by ethics committees, preferably in consultation with relevant experts (pediatricians, lawyers, statisticians, toxicologists, psychologists, and ethicists). No best practices for research with children have been established, and at present the research process is only approved by the ethics committee, not followed up or controlled. There is a need to know more about how children and parents conceive of the way research is regulated. Informed consent is a prerequisite in all instances: given by proxy by the parents of children of young age (<6 years), and given together with the assent of school children and adolescents. There is a need to know more about the child's perception of the information given at different ages. Incentives to participate should be named; for example, the 'feel-good factor': the child provides valuable information to society. Reimbursement could be considered, comparable to that for work, which also takes up the child's time. The right to withdraw at any time during the research process is fundamental for adults but less clear in relation to children. There is a need to know more about the child's 'will.'


In clinical trials involving the treatment of disease, overruling the child's expression of withdrawal may be justified, but what about environmental health studies? The child's right to withdraw or opt out at maturity must be considered, and the extent of withdrawal must be clarified (all information collected, new information, etc.). How results will be communicated to parents and children must be agreed on before the study. There may be a difference between cross-sectional and longitudinal studies, especially where data and tissue banking are involved. In environmental studies, results should be communicated in all cases, either at the individual or the group level, depending on the sensitivity of the information (stigmatization due to increased disease risk, and consequences for employment, insurance, education, and loans). The child may have the right to be notified of future research plans, or should the decision be delegated to the ethics committee? Ethics committees within Europe are regulated nationally and show wide diversity regarding composition: in some countries no scientific or legal expertise is assured within the committees, independence varies, and in some instances the researchers' interests are not challenged against the interests of the study subjects. Recommendations by the ethics committees are not validated and compared, even within the same countries. The question of genetic exceptionalism is clearly relevant to the autonomy, data protection, etc. of children participating in studies with genetic testing. Exceptionalism may also arise when comparing environmental health/public health studies with clinical trials, where the participants may have an individual interest; this is not the case in many environmental studies, where the societal/community interests usually override.

Further Reading
Bos, W., Tromp, K., Tibboel, D., et al., 2013. Ethical aspects of clinical research with minors. European Journal of Pediatrics 172, 859.
Casteleyn, L., Dumez, B., Van Damme, K., Anwar, W.A., 2013. Ethics and data protection in human biomarker studies in environmental health. International Journal of Hygiene and Environmental Health 216 (5), 599–605.
Centers for Disease Control and Prevention (CDC), 2005. Third national report on human exposure to environmental chemicals. Department of Health and Human Services, GA, USA. http://www.cdc.gov/exposurereport/.
European Environment Agency, 2002. Children's health and environment: A review of evidence. Environmental Health Issue Report 29. EEA, Copenhagen, Denmark.
Eskenazi, B., Gladstone, E.A., Berkowitz, G.S., et al., 2005. Methodologic and logistic issues in conducting longitudinal birth cohort studies: Lessons learned from the Centers for Children's Environmental Health and Disease Prevention Research. Environmental Health Perspectives 113 (10), 1419–1429.
Gammelgård, A., Knudsen, L.E., Bisgaard, H., 2006. Why do parents enroll their healthy infants in research? A study of parents' perceptions of their children's participation in the COPSAC study. Archives of Disease in Childhood 91, 977–980.
Gee, D., 1999. Children in their environment: Vulnerable, valuable, and at risk. Background briefing on children and environmental health. In: WHO Ministerial Conference on Environment & Health. WHO Regional Office for Europe, European Environment Agency, London, 16–18 June.
Greener, L.E., 2008. Bitter medicine. New regulations aim to address the dearth of clinical safety trials for drugs used in children. EMBO Reports 9, 505–508.
Johnsson, L., Eriksson, S., 2016. Autonomy is a right, not a feat: How theoretical misconceptions have muddled the debate on dynamic consent to biobank research. Bioethics 30, 471–478.
Kaye, J., Whitley, E.A., Lund, D., Morrison, M., Teare, H., Melham, K., 2015. Dynamic consent: A patient interface for twenty-first century research networks. European Journal of Human Genetics 23 (2), 141–146.
Knudsen, L.E., Merlo, D.F., Larsen, A.D., 2008. Workshop on ethics and communication in Copenhagen, 11–13.3.2007. Environmental Health 7 (Suppl 1), S1.
Landrigan, P.J., Etzel, R.A. (Eds.), 2014. Textbook of children's environmental health. Oxford University Press, New York.
Lind, U., Knudsen, L.E., Mose, T., 2007. Participation in environmental health research by placenta donation: a perception study. Environmental Health 6, 36.
Merlo, F., Knudsen, L.E., Bargiel-Matusiewicz, K., Niebroj, L., Vähäkangas, K., 2007. Ethics in studies with children and environmental health. Journal of Medical Ethics 33, 408–413.
Modi, N., Vohra, J., Preston, J., Elliott, C., Van't Hoff, W., Coad, J., Gibson, F., Partridge, L., Brierley, J., Larcher, V., Greenough, A., 2014. Guidance on clinical research involving infants, children and young people: An update for researchers and research ethics committees. Archives of Disease in Childhood 99, 887–891.
Paulson, J.A., 2006. An exploration of ethical issues in research in children's health and the environment. Environmental Health Perspectives 114, 1603–1608.
Pedersen, M., Merlo, F., Knudsen, L.E., 2007. Ethical issues related to biomonitoring studies on children. International Journal of Hygiene and Environmental Health 210, 479–482.
Spencer, K., Sanders, C., Whitley, E.A., Lund, D., Kaye, J., Dixon, W.G., 2016. Patient perspectives on sharing anonymized personal health data using a digital system for dynamic consent and research feedback: A qualitative study. Journal of Medical Internet Research 18 (4), e66.
Steinsbekk, K.S., Kåre Myskja, B., Solberg, B., 2013. Broad consent versus dynamic consent in biobank research: Is passive participation an ethical problem? European Journal of Human Genetics 21 (9), 897–902.
Strech, D., Bein, S., Brumhard, M., Eisenmenger, W., Glinicke, C., Herbst, T., Jahns, R., von Kielmansegg, S., Schmidt, G., Taupitz, J., Tröger, H.D., 2016. A template for broad consent in biobank research: Results and explanation of an evidence and consensus-based development process. European Journal of Medical Genetics 59, 295–309.
Tamburlini, G., von Ehrenstein, O.S., Bertollini, R. (Eds.), 2002. Children's health and environment: A review of the evidence (Environmental Issue Report No. 29). A joint report from the European Environment Agency and the WHO Regional Office for Europe. European Environment Agency, Copenhagen.
WHO, 2006. Environmental Health Criteria 237: Principles for evaluating health risks in children associated with exposure to chemicals. WHO, Geneva, Switzerland.
Williams, H., Spencer, K., Sanders, C., Lund, D., Whitley, E.A., Kaye, J., Dixon, W.G., 2015. Dynamic consent: A possible solution to improve patient confidence and trust in how electronic patient records are used in medical research. JMIR Medical Informatics 3 (1), e3.

Relevant Websites
http://www.cdc.gov/exposurereport/ - Centers for Disease Control and Prevention: National report on human exposure to environmental chemicals.
http://ohsr.od.nih.gov/guidelines/nuremberg.html - OHSR: The Nuremberg Code.
http://www.privireal.org/ - European Commission, Privireal.
http://yosemite.epa.gov/ochp/ochpweb.nsf/content/homepage.htm - The Office of Children's Health Protection, US Environmental Protection Agency.
http://www.wma.net/e/policy/b3.htm - World Medical Association: WMA Declaration of Helsinki. Ethical Principles for Medical Research Involving Human Subjects, 52nd WMA General Assembly, Edinburgh, Scotland.

Environmental Health Impacts on Ascariasis Infections by Indication of Afghanistan: A Review Sayed Hussain Mosawi, Zeinab Hosseiny, and Khanali Mohammadi, Khatam Al Nabieen University, Ghazni, Afghanistan © 2019 Elsevier B.V. All rights reserved.

Introduction

Many people worldwide, especially children of growing age, are affected by nutritional and health problems, including parasitic infections, and many of them cannot cope with these problems for many reasons. According to available statistics, 20 million people worldwide are infected with intestinal worms, most of them children and adolescents. Intestinal worms impair the mental and physical growth of children and adolescents and contribute to their educational backwardness. About one quarter of the world population is said to be infected with ascariasis. In an infected person, the Ascaris worm feeds on intestinal contents, causing nutritional deficiencies, including vitamin A deficiency, and reduced seasonal growth in children in endemic areas. A large number of worms can also obstruct certain tracts, such as the intestine, pancreatic duct, and airways, cause appendicitis, and be followed by obstructive jaundice, liver abscesses, and other complications (Fig. 1). Parasitic infections are directly related to the hygiene level of a society, and despite the large efforts of the WHO, parasitic infections, especially ascariasis, are common in many countries, particularly developing countries. Ascaris is the intestinal parasitic worm that causes ascariasis. Although it is the most common worm infection in the world, little attention has been paid to how to control it, owing to differing opinions about its clinical significance and the unique epidemiological characteristics of the disease. Efforts to reduce the number of worms in human communities through public prevention methods have been beneficial to some extent. Every year, about 361,000 children under 5 years of age die in poor countries such as Nigeria owing to inappropriate hygienic conditions and a lack of hygiene information. A study conducted in Nigeria in 2002 on the effect of water and sanitation on preventing ascariasis infection among school students showed that the proportion of infections in homes using treated water and standard toilets was significantly lower; it also noted that to control the infection, in addition to improving water and sanitation facilities, a health education program should be implemented to ensure that the health facilities provided are properly used. Owing to the universal spread of ascariasis and the high resistance of its eggs, especially in soil, this disease deserves close attention, and its treatment and prevention should be carefully pursued. There are several ways to control and prevent ascariasis, most of which relate to the human environment. Environmental pollution is one of the serious problems around the world, especially in developing and underdeveloped countries, and it contributes to the decline in the health of their people. This disease alone, especially in poor countries, is responsible for 1.5 million deaths every year, and on average 763 million people worldwide are infected. Ascariasis is one of the most common parasitic infections in humans, with almost one fifth of the world population suffering from it. Ascariasis is most common in hot climates; therefore, the parasite's life cycle and modes of transmission should be carefully studied, as this will help in the prevention, treatment, and eradication of the disease. According to scientific findings, the mode of transmission is fecal-oral, in which the fingers are contaminated by contact with soil or feces.
Eggs present in soil retain their infectious properties for several months and can survive in cold weather (5–10°C) for up to 5–7 years. Another route of transmission is food, especially food eaten raw that has been contaminated by insects or fertilizers.

Fig. 1 Intestinal lumen obstruction in a 6-year-old girl, Lugar province, Afghanistan.

Despite the relative progress of health facilities in developing countries, there are still no adequate standards for water, sewage, and air pollution, and this threatens the health of society. To protect and promote the health of society, various measures have been adopted, and various factors affecting environmental health are considered, each of which has its own effects. Various studies have been conducted in different countries on the prevalence of ascariasis; among these countries, Afghanistan is one of the most important, with a high prevalence in the region. Ascariasis is a common and important parasitic infection in developing and underdeveloped countries; over 10% of the population of developing countries is infected (WHO). So far, several humanitarian measures have been taken by the WHO for Afghan children, but in many parts of the country the disease has endangered the lives of hundreds of children and adults. Until 2011, only 0.69% of the school children in eastern Mediterranean countries, including Afghanistan, are said to have received the preventive assistance provided by international organizations. Since 2001, one of the reasons for the lack of assistance and the high prevalence of ascariasis among the people of Afghanistan has been the political situation of the country and the lack of security, which have made it impossible to deliver health facilities and to develop water and sewage infrastructure. Afghanistan is a developing country facing many economic and health problems. According to the WHO, the country had a population of 34 million in 2016. Civil wars have reduced people's access to health services, and owing to the lack of security, urban infrastructure such as water and sewage systems cannot be built everywhere. For the same reason, studies conducted in the area are very scarce, while common diseases in the country threaten the lives of many people, especially children. Of these, ascariasis is of high importance for the reasons given and should be controlled and prevented. What makes this country important is the political situation that has dominated it for decades, leaving its economic growth and health system very weak, which has allowed diseases like ascariasis to spread.

Ascaris lumbricoides

Ascaris lumbricoides is a parasitic nematode that lives in the small intestine of humans. The parasite was described by Linnaeus in 1758. This soil-transmitted intestinal nematode causes one of the most common parasitic diseases in the world and is estimated to infect 1.4 billion people globally.

The Life Cycle of Ascaris lumbricoides

Ascaris lumbricoides uses the fecal-oral route to cause ascariasis in humans. Humans are infected by ingesting an egg that contains a third-stage larva. The larva hatches in the intestine, migrates to the caecum and proximal colon, enters the mucosa, and then moves to the liver through the hepatic portal system. The larvae make their way from the liver to the lungs by 5–6 days after infection, burst into the alveoli, and move to the pharynx, where they are swallowed; they are found in the intestine by 8 or 9 days after infection. After the worms return to the intestine, they molt to the mature stage as adult male and female worms. This occurs about 3–4 weeks after ingestion of the eggs, when the worms are about 1 cm long in the intestinal lumen. At the beginning of patency (about 8 or 9 weeks), the adults are about 15–20 cm long. Although adult A. lumbricoides can live as long as 20 months, the usual life span is about 1 year (Liu, 2012).

Environmental Health and Ascariasis

As a part of public health, environmental health deals with investigating and understanding the impact of the environment on people and vice versa. In the following sections, the interactions between human activities (agriculture, industry, management of water and wastewater, urbanization, and public services) and the physical (soil, water, air, and climate) and biological (reservoirs and vectors) environment, and their impact on the transmission of ascariasis, are discussed (Landon, 2006).

Water

The importance of water in the transmission of certain diseases has been proven. That is why developed countries are trying to raise the standards for purifying drinking water even further, so as to increase the level of health of society. According to the statistics, half of the people in developing countries suffer from an illness or infection caused by impure and unhealthy water. According to the WHO and the prioritization of HSW in developing countries, health promotion in these countries is the best way to intervene in and control major diseases. Accordingly, investment in constructing water and sewage infrastructure is the most effective measure, and such investment should be directed to less developed countries like Afghanistan. According to the statistics available, only 31% of Afghan households have access to pure drinking water, which indicates the severity of Ascaris infection among this population (Fig. 2).

Fig. 2 Unhealthy water, Ghazni province, Afghanistan.

Because of the high importance of pure water in preventing infectious diseases such as ascariasis, in societies with a high prevalence of this infection, including Afghanistan, water purification and sanitation facilities should be built and put into operation so that people can use healthy water, free of pathogens and parasites such as Ascaris, thereby reducing and controlling the disease. Improved water supply and sanitation facilities have shown promise in reducing ascariasis, and studies show that indoor facilities are associated with larger reductions than public facilities. In the United States, the reduction in the prevalence of infection was 37% for the group that had lavatories and indoor plumbing, 12% for the group with lavatories and a yard well, and 30% for the control group that had only lavatories. In addition, piped water supplies produced a significant reduction in ascariasis among children under 3 years of age. Hand washing is the most important and easiest way to prevent many infectious and parasitic diseases; research shows that washing hands with soap and clean water is one of the most effective ways to prevent the transmission of ascariasis. This is especially important when hands come into contact with contaminated food or with soil contaminated with stool, and before eating. Magnetic treatment systems are an efficient approach in different fields, including health, industry, wastewater treatment, soil treatment, farming, food processing, and environmental management; such systems can also alter the function of microorganisms. In China, an effect of magnetized water against ascariasis has been reported.

Sewage

Two important factors that make the disease endemic are the use of human excreta as fertilizer in agriculture (Fig. 3) and the nonsanitary disposal of wastewater (Fig. 4); transmission continues wherever the sanitation of wastes is inappropriate. In Afghanistan, only 5%–7% of people have access to standard toilets, indicating the lack of a system for collecting and disposing of wastewater; as a result, sewage is deposited near people's living places and environment. If agricultural products are irrigated with treated wastewater, it should be ensured that the water is properly treated and disinfected and that no pathogens remain in it. If the existing wastewater treatment system cannot maintain the standards and reduce pathogens, irrigation of food products, especially those eaten raw, with such water should be avoided, as should the disposal of untreated water into water resources.

Fig. 3 The use of human fertilizers in agriculture, Ghazni province, Afghanistan.

Fig. 4 Nonsanitary disposal of wastewater in the streets, Ghazni province, Afghanistan.

Food, especially food eaten unwashed, is one of the vehicles of ascariasis transmission. Once ingested, Ascaris eggs enter the gastrointestinal system, where the worms feed on intestinal contents, reducing the absorption of nutrients by the patient's intestinal cells and causing gastrointestinal symptoms. More than 50% of the people in Afghanistan suffer from chronic malnutrition, which in turn undermines the immune system and increases the propagation of parasitic infections, including ascariasis.

Soil

Soil is also a suitable medium of transmission for this infection. Ascaris eggs enter the soil through stool; there they can survive for up to 15 years and spread the disease. The role of soil in spreading the infection was proven long ago. To control the disease and reduce the number of patients, the WHO in 2001 recommended avoiding contact with soil that may be contaminated with human stool; foods such as fruits and vegetables that come into contact with soil should also be washed or heated before being eaten (Fig. 3). These recommendations are especially important in areas where the disease is endemic, areas lacking water treatment, sewage, and waste collection systems, and areas where human and animal excreta are used to fertilize agricultural land. Soil moisture and relative atmospheric humidity are key factors in the development and survival of ova and larvae: higher humidity speeds the development of ova, whereas at low humidity (<50%) the ova of A. lumbricoides do not embryonate (Kim et al., 2012).

Air Pollution

The significance of air pollution is so high that, according to the WHO, one out of every nine deaths is due to air pollution. Air pollution can have an artificial or a natural origin. Pollution generated by the fossil fuels used in vehicles and factories is artificial air pollution; natural air pollution includes dust particles (Fig. 5), pollen from plants, and smoke from natural sources such as volcanic activity. No study has been conducted to show a direct effect of air pollution on the prevalence of ascariasis, but together with other factors, ascariasis is observed to be more common in areas with high natural air pollution and unsuitable economic and health conditions. Ascaris can be transmitted by insects as well as by inhalation of dust contaminated with the parasite. Because Ascaris transmission is fecal-oral, in areas with poor hygiene services, where human wastes flow into the living environment and are not collected and disposed of, a suitable environment is established for the growth and reproduction of insects that carry the parasite. The parasite can also enter the air through sewage and human stool and thereby transmit the infection. Consequently, in these areas it is necessary to prevent the dispersal of sewage and wastes into people's living environment so that dust from these materials does not cause disease in the local population.

Animals (Zoonotic Potential of Ascariasis)

It appears that A. lumbricoides and A. suum are two separate species, but the process of speciation is not far advanced. In many studies, cross-transmission between pig and human hosts has been demonstrated in different parts of the world. Local farming practices and hygiene status are factors that determine the level of cross-transmission. This cross-transmission, and consequently the zoonotic form of the disease, could lead to greater morbidity and to the emergence of drug-resistant individuals in the parasite population. Changes in the breeding system and good management practices on swine farms can decrease the infection. Biological control strategies (e.g., the fungus Pochonia chlamydosporia) are useful against eggs of the parasite present in the environment (Nejsum et al., 2005).

Fig. 5 Dirt roads and wastewater canals (left); dust particles dispersed by vehicles passing along these dirt roads (right), Ghazni province, Afghanistan.

Insects

Arthropods are probably the most successful of all animals; they are found in every type of habitat and in all regions of the world. Vectors play an essential role in the life of parasites. Vectors can cause illness through the consumption of food containing human enteropathogens mechanically transmitted by flies or cockroaches. Domestic insects such as flies and cockroaches can contribute to the persistent spread of food-borne parasitic diseases in both developing and developed countries. Poor environmental hygiene and weak health services cause a high prevalence of soil-transmitted helminths (STH) in the slums and rural areas of many countries. While contaminated water and soil have been considered the major transmission modes, indirect transmission by vectors has been neglected. Many species of insects have been reported to be associated with unsanitary conditions and the dissemination of human pathogens in the environment. Helminth eggs are transported by flies and cockroaches on their external surfaces and feet. The risk of contact between flies and pathogen-positive fecal matter increases when infected persons defecate in open areas. Studies have shown that eggs of Ascaris lumbricoides are carried by many species of house flies and cockroaches. Personal and environmental hygiene should be emphasized through health education in high-risk areas. Foods must be protected from house flies and cockroaches where open-air defecation is common. Chemical control measures for the eradication of house flies and cockroaches are an effective way to help stop STH transmission in the community (El-Sherbini and El-Sherbini, 2011).

Geographical Information Systems (GIS) and Remote Sensing (RS)

Spatial and seasonal variability in environmental factors (e.g., temperature, rainfall, vegetation, and altitude) has a significant influence on transmission success and patterns of STH infection in a location. Environmental conditions can be derived from satellite imagery, and GIS can combine them into more interpretable data. Geographical information systems (GIS) and remote sensing (RS) are good tools for better understanding the distribution of ascariasis and its ecological correlates, and subsequently for the design, implementation, and monitoring of global control programs through the identification of areas of particular risk. Satellite-based data help explain the ecology and epidemiology of STH infection. Finally, validated statistical models (such as ARIMA) coupled with GIS and RS provide tools to predict the future prevalence of ascariasis (Lai and Mak, 2007).
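As a concrete illustration of that last point, the sketch below fits an ARIMA model to a district-level prevalence time series in Python. The data, the district, and the model order are invented for illustration; they are not taken from the cited study, and a real application would add GIS-derived covariates (rainfall, temperature, vegetation indices) as exogenous regressors.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual ascariasis prevalence (%) for one district, 2004-2018.
prevalence = np.array([38.2, 37.5, 36.9, 36.1, 35.8, 34.9, 34.2,
                       33.8, 33.1, 32.5, 31.9, 31.2, 30.8, 30.1, 29.6])

# ARIMA(1, 1, 0): one autoregressive term on the first-differenced series.
# The order is illustrative only; in practice it would be chosen by
# inspecting autocorrelations or by information criteria (AIC/BIC).
model = ARIMA(prevalence, order=(1, 1, 0))
fitted = model.fit()

# Project prevalence 5 years ahead, with 95% confidence intervals.
forecast = fitted.get_forecast(steps=5)
print(forecast.predicted_mean)   # point forecasts for 2019-2023
print(forecast.conf_int())       # lower/upper bounds per year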


Sustainable Development

Sustainable development was defined by the World Commission on Environment and Development in 1987 as "meeting the needs of the present generation without compromising the ability of future generations to meet their needs." These needs include food, work, shelter, and health care, and they must be provided in a manner that preserves the environment and its resources. The quality of the environment and economic development are directly related to health determinants in societies. Ecological sustainability must be achieved; this means ensuring that natural resources can be sustained for present and future use without being irreparably damaged or destroyed. The basic concerns of environment, health, and sustainable development should be taken seriously, because long-term progress and improvements in health and lifestyle depend on them. Today, developing countries are moving toward sustainable development, and sustainable development rests on environmental protection: natural resources such as water and soil must be sustained for the use of present and future generations without being irreparably damaged. Pollution of water and soil, which are the most important resources for production (agriculture, animal husbandry, and industry), should be avoided (Landon, 2006).

Personal and Environmental Health Education

Personal and environmental health education is one of the important factors in the prevention of parasitic infections and disease. The prevalence of ascariasis in a society is related to the level of education of its people, and one way to control ascariasis is to raise people's awareness so that they follow health principles and change the behaviors that promote the transmission of parasites. People should be aware of the common diseases in their society and of how to prevent them, so that they are able to cope with them. Health education is so important that without it the prevention of ascariasis would be impossible. Education should begin in childhood, especially at school. In societies with a high prevalence of ascariasis, in order to deal with the disease as quickly as possible, education should be delivered not only by broadcasting in the mass media but also from home to home and person to person. People need to know how to follow personal health principles and prevent the introduction of pathogens into their families and the places where they live.

Conclusions

In conclusion, the eradication and control of ascariasis are impossible unless people follow health principles and water is properly purified and sewage properly disposed of. Low awareness of sanitary problems in developing countries has left them extremely poorly placed to prevent ascariasis. The lack of health facilities, water and soil contamination, and the increasing number of patients have made ascariasis endemic in many parts of these countries. Therefore, building sanitary toilets and water purification and sewage systems, collecting urban waste, controlling vectors, managing farms, and educating people in preventive measures and sanitation can reduce the future prevalence of the disease. In addition, GIS is an applicable tool for the prevention and control of ascariasis. Finally, sustainable economic development should be taken into consideration by the political authorities.

References
El-Sherbini, G.T., El-Sherbini, E.T., 2011. The role of cockroaches and flies in mechanical transmission of medical important parasites. Journal of Entomology and Nematology 3, 98–104.
Kim, M.-K., Pyo, K.-H., Hwang, Y.-S., Park, K.H., Hwang, I.G., Chai, J.-Y., Shin, E.-H., 2012. Effect of temperature on embryonation of Ascaris suum eggs in an environmental chamber. The Korean Journal of Parasitology 50, 239.
Lai, P.C., Mak, A.S., 2007. GIS for health and the environment. Springer.
Landon, M., 2006. Environment, health and sustainable development. McGraw-Hill Education, UK.
Liu, D., 2012. Molecular detection of human parasitic pathogens. CRC Press.
Nejsum, P., Parker, E.D., Frydenberg, J., Roepstorff, A., Boes, J., Haque, R., Astrup, I., Prag, J., Sorensen, U.B.S., 2005. Ascariasis is a zoonosis in Denmark. Journal of Clinical Microbiology 43, 1142–1148.

Further Reading
Betson, M., Nejsum, P., Bendall, R.P., Deb, R.M., Stothard, J.R., 2014. Molecular epidemiology of ascariasis: A global perspective on the transmission dynamics of Ascaris in people and pigs. The Journal of Infectious Diseases 210, 932–941.
Blum, A.J., Hotez, P.J., 2018. Global "worming": Climate change and its projected general impact on human helminth infections. Public Library of Science.
Brooker, S., Clements, A.C., Bundy, D.A., 2006. Global epidemiology, ecology and control of soil-transmitted helminth infections. Advances in Parasitology 62, 221–261.
Campbell, S.J., Nery, S.V., Wardell, R., D'este, C.A., Gray, D.J., Mccarthy, J.S., Traub, R.J., Andrews, R.M., Llewellyn, S., Vallely, A.J., 2017. Water, sanitation and hygiene (WASH) and environmental risk factors for soil-transmitted helminth intensity of infection in Timor-Leste, using real-time PCR. PLoS Neglected Tropical Diseases 11, e0005393.
Crompton, D., 2001. Ascaris and ascariasis.
Engineer, U. Magnetic water technology: Progresses, promises and challenges.
Kay, B., 2006. Water resources: Health, environment and development. CRC Press.
Strunz, E.C., Addiss, D.G., Stocks, M.E., Ogden, S., Utzinger, J., Freeman, M.C., 2014. Water, sanitation, hygiene, and soil-transmitted helminth infection: A systematic review and meta-analysis. PLoS Medicine 11, e1001620.
Weaver, H.J., Hawdon, J.M., Hoberg, E.P., 2010. Soil-transmitted helminthiases: Implications of climate change and human behavior. Trends in Parasitology 26, 574–581.

Environmental Health Issues for Railroads
Y. Kanagawa, JR East Health Promotion Center, East Japan Railway Company, Tokyo, Japan
© 2019 Elsevier B.V. All rights reserved.

Abbreviations
COP3 The Third Session of the Conference of the Parties of the United Nations Framework Convention on Climate Change
HCFC Hydrochlorofluorocarbon
IC card Integrated circuit card
PV Photovoltaic
UNCED United Nations Conference on Environment and Development
UNEP United Nations Environment Programme
VVVF Variable voltage and variable frequency

Introduction

Two centuries have passed since the invention of the steam locomotive (in 1804) and the opening of the world's first commercial railroad in the United Kingdom (in 1825). Many railroads were built to contribute to industrial development in various countries and are still being used. Over time, the source of power for railroad transportation has changed from steam to electricity and diesel. In recent years, increasing awareness of environmental issues has promoted efforts to reduce the load on the global environment, such as the development of energy-saving and hybrid vehicles. This article describes the environmental impacts of railroads, including global environmental problems, and the countermeasures to be taken.

Global Environmental Problems and Railroads

The global environmental issues defined by the United Nations Environment Programme (UNEP) consist of nine items, including ozone layer depletion, global warming, acid rain, and desertification. In 1987, the World Commission on Environment and Development proposed the concept of sustainable development, and the Montreal Protocol on Substances that Deplete the Ozone Layer was adopted. In 1992, the United Nations Framework Convention on Climate Change was adopted, and the Earth Summit (United Nations Conference on Environment and Development, UNCED), in which approximately 180 countries participated, was held. At the Earth Summit, Agenda 21 was adopted, presenting specific programs on many issues concerning air quality conservation, forests, desertification, biological diversity, marine protection, and waste disposal. In December 1997, the Third Session of the Conference of the Parties of the United Nations Framework Convention on Climate Change (COP3) was held, where the Kyoto Protocol was adopted. Thus, it is well recognized that the progress of global warming may have an extensive and serious impact on the environment of humans and other living beings. Direct effects of global warming on our health may include an increase in the occurrence of heat stroke owing to higher summer temperatures and heat waves, and changes in mortality rates associated with diseases of the cardiovascular and respiratory systems. Increased occurrence of animal-borne diseases (e.g., malaria and dengue fever) associated with the expansion of the habitat areas and activities of infection vectors is an indirect effect of concern. In addition, the spread of infection vectors through water and food, an increase in the occurrence of diarrhea and other infectious diseases, and an increase in the risks of various infections due to migration and damage to social infrastructure caused by sea level rise are also of concern. Furthermore, combined with air pollution, increases in the occurrence of asthma and allergic diseases are also expected (Table 1).

Change History: September 2018. The section editor Orish Ebere Orisakwe updated the references. This is an update of Y. Kanagawa, Environmental Health Issues for Railroads, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 410–418.


Table 1 Effects of global warming on health
Thermal stress
Floods and droughts
El Niño and health
Air pollution
Allergens
Infectious diseases: dengue fever and other arboviruses; leishmaniasis; tick- (arthropod-)borne diseases; rodent-borne diseases
Water-related diseases
Malnutrition

In such a situation, an important task for railway businesses is to reduce the CO2 emissions associated with the use of electricity, as railways consume large amounts of electricity.

Impacts of Railroads on the Environment and How to Reduce Them

Noise and Vibration Nuisance

Railroad noise

Well-known railroad-related impacts on the environment include noise and vibration nuisance; the noise is called railroad noise. Noise and vibration nuisance, as well as malodor, are referred to as sensory nuisances. The effect of a sensory nuisance varies depending on individual sensitivity, and such nuisances are reported to cause property damage only infrequently. However, they frequently have strong physiological effects, mainly causing symptoms such as hearing loss, headaches, gastrointestinal disorders, and hypertension, as well as mental and psychological effects. The main sources of railroad noise are thought to be friction and shocks between wheels and rails, shocks at rail joints and points, and vibrations of track lanes, iron bridges, viaducts, etc. A characteristic of railroad noise is its intermittent occurrence; in addition, different types of railroad vehicles produce different noise levels, frequencies, and durations. Vibrations associated with transportation, including railways, are known to cause artificial ground vibrations, which lead to property damage through the vibration of buildings and also affect our daily lives. The Shinkansen Superexpress, or bullet train, launched in 1964, emits aerodynamic noise from uneven parts of its body when running at high speed; the high-frequency components of the noise are thereby enhanced, giving it a different sound quality, higher power, and a longer transmission distance than the noise from a conventional train. Owing to this, in the areas along the Tokaido Shinkansen line, noise and vibration nuisance became a major social issue, known as the Shinkansen Superexpress Railway Noise; people living in such an area in Nagoya City filed a lawsuit in the 1970s.

Efforts to reduce noise and vibration nuisance

In 1975, to reduce the impact of Shinkansen noise, preserve the living environment, and protect people's health, environmental quality standards for the Shinkansen Superexpress Railway were established. These environmental quality standards are administrative, nonbinding targets intended to comprehensively promote various measures relating to sound sources, damage prevention, and land use. The standard values depend on the classification of each area. For example, the upper noise limit is 70 dB for area category I, "areas used mainly for residential purposes." For some areas classified as category II, "other areas, including commercial and industrial areas," the upper noise limit is 75 dB, where this is permitted in order to preserve normal living conditions. The target dates for achieving the standards were specified as follows: for lines existing in 1975 (when the standards were established), areas exposed to 80 dB or more were to be brought within the standards within 3 years and areas exposed to 70–75 dB within 10 years; for lines under construction in 1975, the target date was within 5 years of the start of service; and for new lines, the standards had to be achieved immediately on the start of service. For newly constructed lines, stricter noise regulations were applied. For the vibration of the Shinkansen, a recommended vibration limit (70 dB) was established. Specific measures taken against the noise and vibration nuisance caused by the Shinkansen were:
1. use of heavier rails;
2. laying of ballast mats;
3. application of antivibration slabs;
4. replacement with continuously welded rails without joints (rails of 200 m or longer, produced by welding the joints between rails to reduce the noise emitted when a train runs over a joint);
5. rail grinding (flattening the unevenness that rails develop under the stress of running trains to improve the contact between the rails and the wheels); and
6. flat grinding of the wheels (flattening partial wear on the wheels to restore their original circular shape and improve the contact between the rails and the wheels).

Fig. 1 (A) 0 series Shinkansen Superexpress. (B) N700 series Shinkansen Superexpress. The author took this photograph at the Nagoya station.

Railroad companies also try to reduce noise by identifying the noise-emitting parts of trains, developing train bodies with low air resistance, and improving pantographs. Furthermore, measures have been taken against sound sources (e.g., construction of soundproof walls) and against potential damage (e.g., soundproofing private houses). Vibration-proofing work on private houses along the railroad has also been carried out, and relocation compensation has been paid. As a result, even in the areas along the Tokaido Shinkansen line that were embroiled in the lawsuit, the noise has decreased to 54–70 dB. The N700 series Shinkansen launched in July 2007 (Fig. 1) was designed primarily to reduce noise, for example by fitting streamlined windproof covers on the pantographs, which had been sources of noise. In addition, its low-air-resistance body decreases the energy consumed in operating the train and thereby reduces CO2 emissions.

Impacts on Global Warming

Efforts to Save Energy in the Operation of Trains

CO2 emission

In the Law Concerning the Promotion of Measures to Cope with Global Warming, established in 1998, six gases, including carbon dioxide, methane, dinitrogen monoxide, and alternative freons, are defined as greenhouse gases. Greenhouse gas emissions in Japan were 5.2% higher in 2001 than in 1990. CO2 accounts for more than 90% of all greenhouse gas emissions. In 2004, approximately 26,530 million tons of CO2 were emitted throughout the world, of which Japan accounted for approximately 5%. Approximately 40% of CO2 emissions in Japan are from the industrial sector, 20% from the transportation sector (including automobiles, aircraft, and trains), and 30% from the commercial (e.g., services and offices), residential, and other sectors. Emissions from the industrial sector have decreased since the reference year, 1990, whereas emissions from the transportation sector have increased by approximately 20% and those from the commercial and other sectors by more than 40%. This is largely because automobile passenger transportation accounts for about half of the transportation sector: approximately 90% of CO2 emissions in the transportation sector are attributable to automobiles. CO2 emissions from automobiles transporting passengers increased by approximately 50% from 1990 to 2005 because of the significant increase in the number of automobiles owned and the number of miles driven.

Fig. 2 CO2 emission levels by sector in Japan (2005): industrial sector 35%, transportation sector 20%, commercial and other sectors 18%, residential sector 14%, energy sector 6%, industrial process 4%, waste 3%, others 0%. Source: Annual Report on the Environment and the Sound Material-Cycle Society in Japan (2007).

Fig. 3 Structure of CO2 emissions from the transportation sector (2005): travelers 60.8% (automobile 50.6%, airplane 3.6%, train 2.9%, vessel 1.9%, bus 1.8%); freight 39.2% (automobile 35.2%, vessel 3.1%, airplane 0.6%, train 0.2%). Data source: Greenhouse Gas Inventory.

The rates of increase in CO2 emissions are outstanding for aircraft and automobiles compared with other modes of passenger transportation (Figs. 2 and 3). Thus, the transportation sector consumes large amounts of electricity, and the reduction of CO2 emissions associated with the use of electricity is considered an important task. To accomplish this task, the following efforts have been made:

Efforts to save energy related to vehicles 1. Reduction of the weight of the body: The material used for the body of the vehicle has been changed from steel to aluminum alloy (stainless steel) and the body work has been redesigned to reduce the weight. 2. Reduction of air resistance: To smoothen the body surface, the windows of passenger rooms are made flat without unevenness between the outside panel and the window, and surrounding diaphragms have been installed between all cars. The fender skirt has been improved to smoothen the body surface and the surface under the floor. The body having low air resistance reduces the energy consumption of the train when it is in motion, which leads to a reduction in CO2 emission. The N700 series Shinkansen introduced in July 2007 (Fig. 1) has an aero double wing, and the top of the body is shaped with excellent aerodynamic characteristics. 3. Reduction of energy to operate trains: The power regenerative brake can switch a motor to a generator at the time of braking, and convert the kinetic energy into electric energy while decelerating, then return the generated electricity to the overhead cable for later use. Thus, the brake allows energy


Fig. 4 The world’s first diesel hybrid railcar operating on the Koumi Line. Excerpts from Aiming for a Sustainable Society. JR East Group Sustainability Report 2007.

4. Development of a hybrid system: A diesel hybrid railcar (Fig. 4), which uses both electrical energy generated by a diesel engine and energy generated by the motors and stored during braking, was developed and has been in commercial operation since July 2007. Its fuel efficiency is approximately 20% better than that of a conventional diesel train, and it is quieter by about 30 dB while stopping at stations. Furthermore, the latest exhaust system has reduced toxic substances in the exhaust gas (nitrogen oxides, particulate matter, etc.) by approximately 60%.
5. Development of the fuel cell train: A fuel cell is characterized by high power-generation efficiency, and its emissions are limited to water and unused air, making it a clean power-generation technology with a low environmental load. A study of the world's first fuel cell hybrid railroad vehicle began in 2006, and test runs on a commercial railway at speeds up to approximately 100 km h−1 began in the spring of 2007.
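To convey the scale of what regenerative braking can recover, the following back-of-envelope calculation may help. It is a rough sketch rather than a figure from the railway companies: the train mass, braking speed, and recovery fraction are all illustrative assumptions.

```latex
% Kinetic energy of a braking train (assumed: 16-car trainset, m = 7*10^5 kg,
% decelerating from v = 270 km/h = 75 m/s):
E_k = \tfrac{1}{2} m v^2
    = \tfrac{1}{2} \times 7\times10^{5}\,\mathrm{kg} \times (75\,\mathrm{m\,s^{-1}})^2
    \approx 2.0\times10^{9}\,\mathrm{J} \approx 550\,\mathrm{kWh}
% If roughly half of this is returned to the overhead cable (assumed eta = 0.5):
E_{\mathrm{returned}} \approx 0.5 \times 550\,\mathrm{kWh} \approx 270\,\mathrm{kWh}\ \text{per full stop}
```

Even under these cautious assumptions, a train that makes frequent stops returns a substantial amount of energy to the overhead cable over a day of operation.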

Effects of Efforts to Save Energy

On the Tokaido Shinkansen line, efforts to reduce train body weight have cut 250 tons or more from vehicles of the 300 and later series compared with the first Shinkansen, the 0 series, introduced in 1964. Energy consumption has also improved through simpler body structures that reduce air resistance and through high-performance power-control systems such as the power regenerative brake and the VVVF inverter. For example, the power required to run from Tokyo to Shin-Osaka, expressed as a percentage of that required by the original 0 series for the same trip, is 73% for the 300 series (launched in 1992), 66% for the 700 series (1999), and 51% for the N700 series (2007). Even at a maximum speed 50 km h−1 higher, vehicles of the 300 and later series consume less electricity: 91% for the 300 series, 84% for the 700 series, and 68% for the N700 series (Fig. 5).

Railroad companies are also trying to improve the energy efficiency of conventional trains through weight reduction and the introduction of the power regenerative brake and the VVVF inverter. JR East had replaced 83% of its trains with energy-saving trains by the end of 2006; as a result, the energy consumed per unit transport volume in 2006 was 13% less than in 1990. To reduce energy consumption further, railroad companies are also reviewing train operations to reduce the number of out-of-service runs and to adjust the number of carriages in a train to match how it is used by passengers.

Reduction of Other Energy Consumption Not Related to Train Operation

Compared with the energy required to operate trains, less energy is used for automatic ticket gates, elevators, and air conditioners in stations; nevertheless, tens of billions of megajoules are consumed annually for such purposes. This consumption is currently increasing owing to improvements in facilities for the safe operation of trains, the installation of barrier-free equipment in stations, and the introduction of automatic ticket vendors and automatic ticket gates. Railroad companies are therefore trying to reduce energy consumption in stations by reviewing the performance of equipment and introducing inverter control systems when they replace deteriorated equipment.


Fig. 5 Power consumption of different types of Tokaido Shinkansen trains, relative to the 0 series (= 100). At a maximum speed of 220 km h−1: 100 series, 79; 300 series, 73; 700 series, 66; N700 series, 51. At 270 km h−1: 300 series, 91; 700 series, 84; N700 series, 68. Data source: Central Japan Railway Company Environmental Report 2007.

Reduction of CO2 Emission From Power Generation

Railroad companies have improved the efficiency of power generation by replacing facilities in their self-supply power stations and by using natural energy sources. The combined-cycle generation facility, which combines a gas turbine driven by combustion gas with a steam turbine driven by steam raised from the waste heat, is a highly efficient electric power facility. As a result of replacing conventional power facilities with combined-cycle facilities and optimizing their operation, CO2 emission per unit of electric power generated has decreased by 38% compared with the 1990 value. Thermal power stations are also trying to reduce emissions of nitrogen oxides (NOx) and sulfur oxides (SOx) in their exhaust; for this purpose, fuels with relatively low environmental loads, such as natural gas, kerosene, and low-sulfur fuel oil, are used, and denitrification and dust-collecting equipment is being installed.

Natural energy sources are also used. Hydraulic power generation, which does not emit CO2, is used, and photovoltaic (PV) panels are installed on the roofs of stations and other buildings. The PV system of approximately 800 m2 installed in 1997 on the roof of the Tokaido Shinkansen platform at Kyoto Station generates up to 100 kW of electricity, equivalent to the electrical energy used for lighting the Shinkansen platform, and reduces annual CO2 emissions by approximately 60 tons. Furthermore, trees are being planted on the roofs of station and office buildings to reduce the heat island effect, absorb CO2, and reduce the energy needed for air conditioning (by intercepting heat from sunlight).
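The reported figure for the Kyoto Station PV system can be roughly cross-checked with a back-of-envelope estimate. The capacity factor and grid emission factor below are assumptions (plausible values for fixed rooftop PV and for the Japanese grid of that period), not numbers from the source.

```latex
% Annual generation: peak output x assumed capacity factor x hours per year
E \approx 100\,\mathrm{kW} \times 0.12 \times 8760\,\mathrm{h}
  \approx 1.05\times10^{5}\,\mathrm{kWh}
% Avoided CO2, assuming a grid emission factor of about 0.55 kg CO2 per kWh
M_{\mathrm{CO_2}} \approx 1.05\times10^{5}\,\mathrm{kWh} \times 0.55\,\mathrm{kg\,kWh^{-1}}
  \approx 58\,\mathrm{t}
```

This is consistent with the approximately 60 tons of annual CO2 reduction reported above.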

Efforts to Reduce CO2 Emission From an Overall Traffic System

Carbon dioxide emission from rail freight transportation is approximately 1/45 of that from private trucks and approximately 1/8 of that from business trucks, and the basic unit of energy consumption of rail freight transportation is approximately 1/6 that of business trucks, indicating excellent characteristics with regard to reducing the environmental load (Fig. 6). In the transportation sector, however, present CO2 emissions considerably exceed the target specified in the "Outline for Promotion of Efforts to Prevent Global Warming." As measures within each mode of transportation, low-emission cars have been developed and popularized, and the convenience of railroad and marine transportation has been improved.

Fig. 6 CO2 emission factors of different types of freight transportation (2004): the amount of CO2 emitted to transport 1 ton of freight a distance of 1 km (g-CO2/t-km) is 158 for trucks, 39 for coastal shipping, and 21 for railroads. Data source: Modal Shift Promotion Campaign 2004 (Council for the Promotion of Modal Shift, Ministry of Land, Infrastructure and Transport).


As measures to improve the overall transportation system, modal shift has been promoted and load efficiency has been improved through cooperation and the upsizing of truck transportation; shippers, distributors, and consumers are expected to make further efforts within this framework. The use of railroads and coastal shipping, the two types of mass transit with low environmental loads, is recommended (modal shift). A modal shift ratio (the share of railroads and coastal shipping in long-distance freight transportation of 500 km or more) of more than 50% was set as a target to be achieved by 2010. To shorten the lead time of rail freight transportation, train operation systems have been revised; for example, more nonstop trains are run between main stations. Facilities have also been maintained and improved; for example, container platforms in freight stations have been upgraded for more efficient cargo handling. Furthermore, exceptional taxation measures support replacement investment in high-performance locomotives and freight cars, and government subsidies support the maintenance of infrastructure.

To encourage consumers to use intermodal transport (a transportation system in which various means of transportation are used in sequence from a certain point to a destination), the convenience and comfort of the railroad must be improved. For instance, parking lots have been constructed in front of stations, and services such as "park and ride" (driving to the nearest station, then taking a train) and "rail and rent-a-car" (traveling by rental car from the nearest station to the destination) have been marketed. Use of the same line by two or more railway companies, improvements in convenience, expansion of the service areas for transit with integrated circuit (IC) cards such as Suica, and barrier-free stations and trains are also under consideration. For various products commonly used by consumers, the Eco Rail Mark certification has been established to indicate environment-friendly products.
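The emission factors in Fig. 6 make the benefit of a modal shift straightforward to quantify. The short sketch below is illustrative only; the shipment size and distance are assumed values, not data from the source.

```python
# Back-of-envelope modal-shift calculator based on the CO2 emission factors
# reported in Fig. 6 (g-CO2 per tonne-kilometer, 2004 values).

EMISSION_FACTORS = {  # g-CO2 emitted per tonne-kilometer of freight moved
    "truck": 158,
    "coastal_shipping": 39,
    "railroad": 21,
}

def freight_co2_tonnes(mode: str, tonnes: float, km: float) -> float:
    """Return tonnes of CO2 emitted hauling `tonnes` of freight over `km` km."""
    return EMISSION_FACTORS[mode] * tonnes * km / 1e6  # grams -> tonnes

# Hypothetical example: shifting a 10-tonne, 600-km shipment from truck to rail.
by_truck = freight_co2_tonnes("truck", 10, 600)    # ~0.95 t CO2
by_rail = freight_co2_tonnes("railroad", 10, 600)  # ~0.13 t CO2
print(f"Truck {by_truck:.2f} t vs rail {by_rail:.2f} t: "
      f"{by_truck - by_rail:.2f} t CO2 saved (~{1 - by_rail / by_truck:.0%})")
```

With these factors, shifting freight from truck to rail cuts the associated CO2 by roughly 87% regardless of shipment size, since the saving scales linearly with tonne-kilometers.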

Efforts to Save Resources

Reduction of Trash Produced in Stations and on Trains

The quantity of trash produced in stations and on trains such as the Shinkansen is more than 100,000 tons per year (Table 2). To improve the recycling rate, trash is in many cases collected separately using three types of containers in stations and on trains: one for newspapers and magazines, one for bottles and cans, and one for other trash. Large terminal stations have recycling centers in their neighborhoods, where trash is collected, separated, and recycled. For example, magazines are recycled into coated paper, which is used for the information magazines placed in Shinkansen cars, whereas newspapers are recycled into copying paper that is used in-house. The recycling rate exceeded 40% for some companies in 2007.

Reduction of Office Trash

To reduce office trash such as used paper, office communication through computerized documents on local area networks is being promoted. In offices, information such as changes in train timetables or the organization of trains was conventionally passed on to staff via paper booklets; when these paper media were replaced with an online browsing system, paper consumption decreased by approximately 80%. The recycling of consumables such as printer toner cartridges is also increasing, made possible by the thoroughly separated collection of trash from offices.

Table 2  Environmental load of business activities of JR East (2006)

Input
  Energy
    Electricity: 5,560,000,000 kWh (56% from the company's own power stations)
    City gas: 9,340,000 m3
    Other fuels (crude oil equivalent): 72,000 kl
  Water: 11,890,000 m3
  Office paper: 1806 t (99% recycled paper)
Output (recycling and reuse rates in parentheses)
  CO2 emission: 2,130,000 t
  General waste
    Offices: 3089 t (72%)
    Stations and trains: 44,539 t (50%)
    General rolling stock centers, etc.: 1198 t (90%)
    Tickets: 641 t
    Power plants: 12 t
    Total: 49,479 t (52%)
  Industrial waste
    Construction projects: 400,685 t
    General rolling stock centers, etc.: 30,713 t
    Medical waste: 118 t
    Power plants: 424 t
    Total: 431,940 t (90%)

Source: Excerpts from Aiming for a Sustainable Society. JR East Group Sustainability Report 2007.

Reduction of Railroad Material Waste

Various wastes are produced in constructing railroad facilities and repairing trains in rolling stock centers: rails, railroad ties, ballast, and concrete from railroad facilities, and wheels and seat fillers from trains. As an example of recycling, some of the rails and ballast used on Shinkansen lines are reused, after reprocessing in recycling facilities, to meet the standards for conventional railroads. A huge amount of waste is produced when civil engineering structures such as embankments, bridges, viaducts, and tunnels are renewed. Therefore, appropriate examinations and assessments of these structures are performed to understand their condition accurately and to prolong their lifetimes, in an effort to save materials. In addition, the railroad track is appropriately maintained so that trains do not impose an uneven load on the rail and ballast, which not only prolongs the lifetime of track materials but also provides a more comfortable ride and decreases the noise and vibration nuisance for people living along the line.

Recycling of Tickets and Passes

A ticket or commuter pass is needed to ride a train, and these are ultimately disposed of as trash. The IC card tickets released in 2001, however, can be used repeatedly and contribute to considerable resource savings. IC card tickets have become popular in and around urban areas, and trains of different companies can be used with a single card, which substantially reduces the resources consumed; a deposit system discourages the cards from being thrown away. JR East introduced the IC card ticket "Suica" in November 2001 as a new ticket to replace the magnetic commuter pass. As a result, the number of magnetic commuter passes issued annually decreased by approximately 17,100,000 between 2000, the year immediately before Suica was introduced, and 2006. Meanwhile, almost 100% of the large quantities of collected used tickets and magnetic commuter passes are recycled: the iron powder on the back of these tickets is separated, and the paper is recycled into toilet paper, corrugated cardboard, and business cards.

Recycling of Uniforms

Uniforms that are no longer needed are collected and recycled; for example, the cloth is reused as interior material for automobiles and as heat-insulating material. Clothing that cannot be reused is incinerated and the heat used to generate electricity. Furthermore, some companies have adopted uniforms made of polyester fiber recycled from polyethylene terephthalate (PET) bottles.

Other Efforts

In addition to recycling waste, efforts are made to reduce the quantity of waste produced. These include remodeling trains to prolong their life, reconditioning electric points, and reducing dry-cell battery consumption by using rechargeable instruments. To reduce water consumption, water processed in factories and stations is reused, and waterless restrooms have been installed as pilot programs. Furthermore, green procurement of construction materials and office supplies is being implemented in view of environmental conservation.

Control of Chemicals

Various chemicals used to maintain trains and facilities are discussed in the following sections.

Ozone-Depleting Substances

Freons are greenhouse substances other than CO2. The use of chlorofluorocarbons and hydrochlorofluorocarbons (HCFCs) as refrigerants for air conditioners in trains and buildings is currently regulated by the Law Concerning the Protection of the Ozone Layer through the Control of Specified Substances and Other Measures; as alternatives, the use of hydrofluorocarbons and perfluorocarbons is increasing. Freons with strong ozone-depleting characteristics are thus being replaced with alternative freons. However, since the alternative freons also contribute to the greenhouse effect, conversion to new refrigerants is currently under consideration. Other gases that may cause global warming, such as sulfur hexafluoride, are monitored and strictly controlled from use to disposal to prevent their release into the atmosphere.


Specified Chemical Substances

Organic solvents used in the roadbed stabilizers that stabilize the crushed stone on the track, and substances used in the antifreeze solution of rail motor cars, must be controlled under the Pollutant Release and Transfer Register. Such substances include HCFC-141b, 2-aminoethanol, bisphenol A epoxy resin, 4,4′-methylenedianiline, o-toluidine, ethylbenzene, ethylene glycol, xylene, chromium and trivalent chromium compounds, dichloromethane, styrene, toluene, and m-tolylene diisocyanate. To reduce the use of coating materials, stainless steel trains that do not require coating are used, and materials used in structures such as bridges and electric poles have been changed to weather-resistant steel, for which coating is unnecessary. Additionally, to reduce the use of the dichloromethane included in some coating removers and adhesives, oil-based coating materials have been replaced with water-based ones.

Asbestos

Asbestos was used for some parts of trains (brake linings and electric parts) and for buildings.

Polychlorinated Biphenyl

Electric appliances containing polychlorinated biphenyl (PCB) as insulating oil, such as transformers, condensers, and fluorescent lamp ballasts, were used in trains and transformation facilities, but these have been replaced with PCB-free appliances.

Other Efforts

Railroad companies take the aforementioned measures comprehensively by establishing in-house organizations for environmental management (e.g., ecology promotion committees). To raise environmental awareness, they educate their employees and business partners and have established awards honoring excellent environmental activities. Rolling stock centers and other facilities that inspect and maintain vehicles use various toxic chemicals and produce industrial waste; ISO 14001 certification has therefore been acquired to establish environment-friendly processes. Furthermore, to conserve the natural environment along the railroads, artificial forests called railroad protection forests, planted from the Meiji era onward to protect the railroads from disasters such as drifting snow, landslides, falling rocks, and avalanches, are maintained. These efforts are publicized through environmental reports and various events (Table 3).

Table 3  Efforts by major Japanese railroad companies to protect the environment

Environmental activities along railway lines
  Implementation of noise-reduction measures along Shinkansen and conventional lines (soundproof walls, continuous welded rail, and other measures)
  Appropriate control of environmentally harmful chemicals
Activities to protect the global environment
  Reduction of the energy required to operate trains
  Introduction of energy-saving railcars; energy savings in stations and office buildings
  Promotion of intermodal transportation (park-and-ride schemes, Rail and Rent-a-Car service, etc.)
  Reduction of CO2 emissions from power generation and supply
Measures for resource circulation
  Reduction and recycling of waste collected from stations and trains
  Recycling of waste generated at general rolling stock centers and through construction projects
  Reduction and recycling of train tickets
Management activities
  Development of environmental management systems
  Acquisition and maintenance of ISO 14001 certification
  Environmental management education
  Publication of social and environmental reports and environmental advertisements
Research and development
  Development of energy-saving railcars (such as hybrid trains)
  Development of noise-reduction technology

Source: Excerpts from environmental reports from each company.


Summary

Unlike today, around 1970 the main railroad-related environmental issues were the noise and vibration nuisance associated with the introduction of the Shinkansen. In those days, the balance between the public nature of the railroad and the tolerable limits for affected residents was debated, and since the noise nuisance was considerable, the government decided to regulate it.

As alarm also grew over global environmental issues, the Declaration of the United Nations Conference on the Human Environment was adopted at the conference held in 1972. It proposed 26 principles for conserving and improving the environment, stating that growing environmental disruption in many regions of the world was of great concern to the physical, mental, and social health of humankind. Among these issues, global warming is expected to affect health in various ways, via heat waves, abnormal weather, and changes in temperature and rainfall; efforts to prevent global warming have therefore begun. The Kyoto Protocol, adopted at COP3 in 1997, established legally binding targets for reductions in greenhouse gases including CO2. On 16 February 2005 the Kyoto Protocol entered into force, and the reduction targets became binding for the countries that ratified it. For developed countries as a whole, the targeted mean reduction in greenhouse gas emissions for the period 2008-12 was set at no less than 5% below the 1990 level, with assigned reductions of 8% for the European Union, 7% for the United States, and 6% for Japan.

Against this background, the transportation sector considers that efforts at CO2 reduction should be made by the traffic system as a whole; intermodal transport (a combination of railroads and other transportation) and green procurement are therefore promoted as promising measures to prevent global warming. In addition to such energy-saving efforts, appropriate management of waste and toxic chemical substances is an important task for limiting environmental deterioration and its impact on human health, and railroad companies have undertaken these tasks as well.

Since trains run on iron wheels on iron rails, the energy loss associated with their operation is small, and trains are free from traffic jams; they therefore have particularly high energy efficiency and impose a lower load on the environment than many other modes of transportation. Today, global environmental issues require urgent action, and expectations are greater than ever for the research, development, and operation of new energy-saving systems such as hybrid trains and fuel cell trains, as well as for the use and development of materials that impose a low environmental load.

Acknowledgments

I greatly appreciate Kazuhiko Yokota, Director, Health Promotion Center, East Japan Railway Company; Wataru Miwa, Chief of the Management Planning Department, and Akio Hagiwara of the same department, who provided me with related documents; and Akio Sagawa, Senior Researcher and Laboratory Head, Noise Analysis, Environmental Engineering Division, Railway Technical Research Institute, who introduced his writings to me.

See also: Automobile Exhaust: Detrimental Effects on Pulmonary and Extrapulmonary Tissues and Offspring; Mobile Source Related Air Pollution: Effects on Health and the Environment.

Further Reading

Act on Confirmation, etc. of Release Amounts of Specific Chemical Substances in the Environment and Promotion of Improvements to the Management Thereof (Law No. 86, 13 July 1999; final revision Law No. 152, 13 December 2002).
Aiming for a Sustainable Society. JR East Group Sustainability Report 2007.
Central Japan Railway Company Environmental Report 2007.
Donaldson, D., 2018. Railroads of the Raj: Estimating the impact of transportation infrastructure. American Economic Review 108 (4-5), 899-934.
Environmental and Social Report 2007. Japan Freight Railway Company.
Matsumura, Y. (Ed.). Environmental Policy and Regulations in Japan.
Environmental Standard Concerning Shinkansen Railroad Noise (Environmental Agency Notification No. 46, 29 July 1975; revised Environmental Agency Notification No. 91, 28 October 1993).
González-Gil, A., Palacin, R., Batty, P., Powell, J.P., 2014. A systems approach to reduce urban rail energy consumption. Energy Conversion and Management 80, 509-524.
Haines, A., Patz, J., 2004. Health effects of climate change. JAMA 291 (1), 99-103.
Ishii, K. (Ed.), 2002. Japanese Environmental History in the 20th Century. Japan Environmental Management Association for Industry, Tokyo.
He, G., Mol, A.P., Zhang, L., Lu, Y., 2015. Environmental risks of high-speed railway in China: Public participation, perception and trust. Environmental Development 14, 37-52.
Law Concerning the Promotion of the Measures to Cope with Global Warming (Law No. 117, 9 October 1998; final revision Law No. 57, 7 June 2006).
Outline for Promotion of Efforts to Prevent Global Warming: Measures Against Global Warming to 2010. Global Warming Prevention Headquarters, 19 June 1998.


Pereira, P., Giménez-Morera, A., Novara, A., Keesstra, S., Jordán, A., Masto, R.E., Brevik, E., Azorin-Molina, C., Cerdà, A., 2015. The impact of road and railway embankments on runoff and soil erosion in eastern Spain. Hydrology and Earth System Sciences Discussions 12 (12), 12947-12985.
Editorial Committee of Pollution Control Technologies and Laws, 2008. Pollution Control Technologies and Laws: Noise and Vibration Nuisance. Japan Environmental Management Association for Industry, Tokyo.
Talaiekhozani, A., Ghaffarpassand, O., Talaei, M.R., Neshat, N., Eydivandi, B., 2017. Evaluation of emission inventory of air pollutants from railroad and air transportation in Isfahan metropolitan in 2016. Journal of Air Pollution and Health 2 (1), 1-18.
West Japan Railway Company Environmental Report 2006.

Relevant Websites

Hundred Year History of Japanese National Railways (Japanese National Railways, n.d.).
http://www.greenpartnership.jp/about/construction.html (Green Partnership).
http://www.mlit.go.jp/tetudo/ (Railway Bureau, Ministry of Land, Infrastructure and Transport: Rail Freight Transportation in Japan and Summary on Eco Rail Mark).

The Environmental Health of Children of Migrant Workers: An Example From China
Catherine Jan, Peking University, Beijing, China; and The University of New South Wales, Sydney, NSW, Australia
Christopher Magoon, The University of Pennsylvania, Philadelphia, PA, United States
Brendan Ross, McGill University Faculty of Medicine, Montréal, QC, Canada
© 2019 Elsevier B.V. All rights reserved.

Abbreviations and Definitions

(a) Migration: either rural-to-urban or between-urban-areas migration.
(b) Left behind children: children who are left behind in their home town (often rural villages) while their parents live and work elsewhere (often in the cities).
(c) Migrant children: children who migrated with their parents.
(d) Children of migrant workers: left behind children plus migrant children.
(e) Hukou: the household registration system in China, created in 1955 to restrict internal population movement, especially rural-to-urban migration.
(f) Cancer cluster/villages: a greater-than-expected number of cancer cases within a group of people in a geographic area over a period of time.

Introduction and Background

There are almost 1 billion migrants worldwide: 214 million international migrants and another 740 million within-country migrants. In East and South-east Asia, one child is left behind for every adult working in a remote location, and similar data have been reported elsewhere in the world.

Children are particularly sensitive to environmental exposures. Environmental factors cause over one-third of the overall disease burden in children and about a quarter of the global burden of disease across all ages, with children under 5 years old bearing over 40% of this burden. Children in developing countries are disproportionately affected: on average, they lose eight times more healthy life years per capita from environmentally caused diseases than their counterparts in developed countries. Within this population, children of migrant workers are even more vulnerable. Good environmental health in children of migrants is essential for fulfilling the Sustainable Development Goals (Table 1).

China demonstrates the significant impact of environmental change on the health of children of migrants. Economic development, migration, the environment, and people's health are closely linked. With its rapid economic rise, China has undergone the largest migration in human history, growing from 50 million rural-to-urban migrants in 1990 to 274 million in 2014. In 2010, 61 million rural children were left behind in villages, amounting to 38% of all rural children and 22% of all children in China, and 35.8 million migrant children lived in Chinese cities. In 2013 alone, 63% of all 6-15-year-old children of migrants migrated with their parents. Many migrants earn minimum wage, are uninsured, live in poor housing conditions, and are often not eligible for public education. Meanwhile, economic growth associated with industrialization, increased energy use, and industrial waste has produced widespread water and soil pollution and air quality that is among the worst in the world. All of these factors impair the physical and psychological health of migrants and their children.

To solve the wide spectrum of unique health problems that children of migrants face and to meet their particular needs, an integrated, systematic approach must be taken. In this chapter, we assess environmental health factors and health-care concerns of migrant children in China and use this example to illustrate the dynamics of this global problem.

Main Environmental Determinants of Health

Water Quality

Children, because of their developing immune systems and larger water intake relative to body size, are at the greatest risk of deleterious health effects from unsafe water. Children living in rural areas are especially prone to diseases caused by water pollution: environmental regulations are more strictly enforced in urban areas, and as a result many industries that produce industrial waste have relocated to rural locations, polluting the lakes, rivers, and farmland of those areas. The diarrheal mortality rate of children under five in rural areas is 14 times that of urban areas. Improving water quality can improve their health status; for example, data show that a single water-quality improvement program increased child height by 0.962 cm in rural China.


Table 1  Environmental health of children and the Sustainable Development Goals

SDG 3: Ensure healthy lives and promote well-being for all at all ages.
Relevance: The overall healthcare infrastructure in China still lags behind other sectors. With a rapidly aging population and rising rates of many chronic diseases, China is facing looming public health crises. These challenges extend to children, as rates of obesity and mental illness are rising in the pediatric population. Children also face severe potential financial pressure to pay the healthcare costs of their largely uninsured parents.

SDG 4: Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.
Relevance: Harmful pollutants in the environment damage children's health, which in turn impedes their education and their chance of reaching their full potential in life. Environmental exposures inside school buildings, such as asbestos, lead, harmful chemicals, mold, high moisture levels, and dirty central heating or cooling equipment, threaten children's health. Pollution of air, water, and soil due to deforestation, land degradation, and industrialization may require children (especially those living in rural villages and/or mountainous areas) to spend time outside school fetching vital resources such as clean water, which can interfere with school attendance. Children (especially girls) are at risk of dropping out of school because they must look after younger siblings who fall sick due to pollution.

SDG 10: Reduce inequality within and among countries.
Relevance: Environmental risks differentially impact children from different geographic, socioeconomic, and cultural backgrounds. Emphasis on improving the environment for the most vulnerable (such as children of migrants) will significantly reduce inequalities in health, education, and quality of life.

SDG 11: Make cities and human settlements inclusive, safe, resilient, and sustainable.
Relevance: Urbanization has opened up markets and enormous opportunity to China and many of its citizens, but it has also laid bare the stark divide between rich and poor. Nowhere is that divide clearer than among migrant workers, who move to cities, often with their children, to find work and seek economic opportunity. Living on the periphery of urban areas, migrant families often subsist in environments that lack safe drinking water or public infrastructure. With little regulation, young children often live in crowded housing with poor ventilation, and the available housing is often impermanent, making it difficult for migrants to improve their living conditions.

Note: The United Nations published the Sustainable Development Goals (SDGs) to serve as guidelines for reducing the disease burden from environmental risks for children worldwide.

Although water quality is better in urban areas than in rural areas, it is still suboptimal. China's rapidly expanding urbanization places a huge demand on safe urban water supplies, and the pace of urbanization outstrips the construction of such supplies. From 1990 to 2007, access to piped drinking water for urban populations increased from 48% to 94%, but provision of piped water has not guaranteed its safety: a nationwide assessment in 2009 showed that a quarter of the 4000 urban water plants surveyed failed to meet monitoring requirements for water quality, and nearly half of China's major cities do not comply with health-based quality controls for drinking water. While many urban families buy bottled water and use high-tech carbon filters for cooking and drinking water, migrant families, with much lower incomes and poorer living conditions, are less able to do so. Research on water pollution risks to children of migrants is scarce and still needed; however, research in adults shows that rural-to-urban migrants and ethnic minorities in China report higher exposure to water pollution than urban residents.

Long-term exposure to polluted water poses serious health consequences such as diarrhea, hepatitis A, and liver and digestive cancers. Every year, 190 million people in China become sick because of water pollution. A World Bank report showed that 70% of China's surface water is unsuitable for drinking, and because of the absence or shortage of water treatment in rural areas, many children and their caretakers rely on such water for drinking; partly as a result, cancer morbidity is higher in rural areas than in urban areas. Though all are at risk of these diseases, children of migrants are particularly vulnerable, as they also suffer poverty-related problems such as malnutrition, which contributes to weaker immune systems, and poorer social and medical protective mechanisms with which to combat disease.

Air Quality

China's large-scale migration in recent decades has shifted the pattern of environmental health problems. Compared with children left behind in villages, children who migrated with their parents from rural areas often enjoy improved drinking water supplies in urban areas, which reduces their risk of waterborne infectious diseases, but they are more likely to encounter new risks such as air pollution.

Air pollution is the world's biggest environmental health threat. The WHO estimates that 97% of cities in low- and middle-income countries do not meet WHO air quality guidelines, and China's rapid industrialization has rendered the air quality in some of its cities among the world's worst.


The release of hazardous chemical and biological agents into the air poses major health risks for children. Air pollution accounts for over 50% of the overall disease burden of pneumonia, which is among the leading causes of global child mortality. These problems are compounded for migrants because they are usually not entitled to the same health-care benefits as their non-migrant peers. In China, children of migrants live in poorer housing than children of non-migrants, and their parents often do not buy indoor air purifiers. Indoor mold and dampness, indoor allergen exposures, and ambient air pollution, such as that from vehicle combustion, put these children at increased risk of childhood asthma, whose prevalence has increased over recent decades.

Migrant workers are often exposed to highly stressful work conditions and are less educated about the harmful effects of smoking. Non-smoking pregnant women exposed to secondhand tobacco smoke have an increased risk of congenital malformations in their babies, particularly heart abnormalities, limb reduction, kidney/urinary tract defects, and cleft lip. Environmental insults in early life can also induce the development of chronic diseases later in life, such as cardiovascular disease and cancer.

Left behind children in rural villages are more likely to be exposed to traditional risks such as indoor air pollution from secondhand tobacco smoke and from the combustion of solid fuels such as wood and crop residues in household stoves, which can produce concentrations of toxic carbon monoxide more than 10 times higher than health-based standards. Worldwide, solid fuel use is a major risk factor for acute lower respiratory tract infections, which are among the largest causes of child mortality, accounting for 16.5% of deaths in children under five in 2012 and 15.5% in 2015. Research has shown that income and education level are robust determinants of household energy choices, with the poorer and less educated more likely to adopt cheaper and less healthy options.

Children who migrate with their parents to cities are increasingly exposed to health risks associated with industrialization and urbanization, such as combustion emissions from transportation, industrial and agricultural wastes, dust from construction, and emerging risk factors such as greenhouse-gas pollutants. Poverty mediates the health consequences of these factors: although they affect migrant and non-migrant children equally, children of migrant workers suffer greater health risks because poverty leaves them less able to cope (for example, through lack of air purifiers, quality masks, and health-care access). Exposure to outdoor air pollutants has been associated with lung cancer and cardiovascular disease, and pregnant women exposed to outdoor air pollution are more likely to have babies with low birth weight, which is linked to cardiovascular disease, type II diabetes, and obesity later in life.

Two-thirds of the world's emissions of inorganic mercury come from coal combustion. Within the last 25 years, China's coal production has increased steadily to nearly 1 billion metric tons per year, and depending on the area (rural or urban), 22%-55% of households in China rely on coal. Inorganic mercury emitted from industrial processes is converted by microorganisms into methyl mercury, which accumulates in the aquatic food chain, contaminating fish at the top of the chain. Exposure to methyl mercury impairs the neurological development of fetuses, infants, and children and can adversely affect the developing brain and nervous tissue. While more research is needed to establish the exact economic cost of air pollution to the health of children of migrants, the economic cost of the mortality and morbidity resulting from outdoor air pollution in a typical Chinese city is predicted to range from 8% to 16% by 2020.

Other Factors

In addition to air and water pollution, soil contamination poses one of the biggest health risks in China. Pollution of agricultural areas by industrial discharges is less strictly regulated in rural areas than in urban ones, putting left behind children at increased risk. Heavy metals such as lead, copper, and mercury are increasing dramatically in soil and can be transferred to food, an exposure linked to neurotoxic effects in children.

Both left behind children and migrant children can be vulnerable to discrimination and stigmatization in their schools and communities: left behind children because of their "parentless" status, and migrant children because of their non-urban background. Both can impose significant physical and psychological stress on children. A 2018 study found that low acculturation was associated with obesity among migrant children in Guangzhou, China.

Economic status and lifestyle are also important factors for the health of children of migrants. Lower income can lead to poor nutrition, poor housing, and reduced access to health care and education. Exposure to environmental pollution is greater for poor families, and that situation has worsened as China's income disparity has widened over the past decades. At work, migrant workers are more likely to work in the construction, chemical, and mining industries, often for long hours and sometimes without labor protection, all of which can lead to higher rates of death and injury. Outside work, they typically live in rental housing that is poorly outfitted and overcrowded. They also tend to have weak awareness of self-protection and are inclined to save and send money to their families in their home villages. A study of a group of migrant workers in Guangdong province, China, found that 88% had diseases of the skin, nervous system, respiratory system, or digestive system. Illness, injury, and death among migrant workers deprive their children of parental care, which harms the children's physical and psychological health. Adverse socio-environmental exposures associated with migration, such as parental neglect, child abuse, and school bullying, increase the risk of mental health problems in children. Children of migrants are also likely to inherit their parents' lifestyle, exposing them to similar environmental health risks. Poor people, moreover, have fewer resources with which to move away from more polluted living environments or to leave hazardous jobs.


Other environmental factors that may affect the health of children of migrants include climate change, ultraviolet radiation, ionizing radiation, nanoparticles, infrastructure and surveillance of children’s homes, schools and communities, and lack of preparedness for natural disasters.

Social and Political Environment: The Hukou System and Its Implications on Health

Inequity in health care exacerbates the environmental health problems of children of migrants. Hukou is the household registration system in China, created in 1955 to restrict internal population movement, especially rural-to-urban migration. There are two types of Hukou: rural (agricultural) Hukou and urban (non-agricultural) Hukou. Prior to 1998, children inherited their Hukou status at birth from their mothers; since 1998, it can be inherited from either parent. A person with rural Hukou residing in an urban area cannot change to urban Hukou status without legal permission, and this permission is hard to obtain because the qualifying routes, such as formal employment within state-owned enterprises or special military service, are limited.

The Hukou system is the major cause of rural-urban disparities in social and economic outcomes at the individual level, because it determines eligibility for various welfare benefits, such as health care, education, housing, social subsidies, and employment (Fig. 1). The system also ties people to their place of origin: migrants (even urban-to-urban) without local Hukou, especially in major cities, are not entitled to the full spectrum of welfare services provided by local governments. As of 2014, there were 274 million rural-to-urban migrants. Because of their rural Hukou status, these people are considered temporary employees in urban areas and are therefore not entitled to many welfare benefits available to their urban counterparts; for example, migrant children do not have access to public schools in urban areas. This and other factors, such as poor housing conditions and high urban living costs, mean that children are often left behind in villages while their parents work in cities.

Despite a boost in family income from parents working in the cities, children who are left behind still have less health-care access, poorer health status, higher disease incidence, and lower health-related quality of life than children from non-migrant families. Reasons for these poorer outcomes include less parental time for child health care and wellbeing due to lengthy commutes to and from work, lack of regular baby checkups and immunization for many rural children, limited health knowledge of grandparents, and poor food quality in boarding schools. Moreover, reduced access to prenatal care for pregnant migrant women may cause poorer birth outcomes and worse health for their children later in life.

Hukou status also determines the type of health insurance for people in China. There are three main insurance schemes, which together cover over 98% of the country's population. Different schemes have different funding pools, which lead to different financing levels and benefit packages. Insurance for urban employees (the urban employee-based basic medical insurance scheme, UEBMI) offers better health coverage, financial protection, and reimbursement than the scheme for urban migrants (the urban resident-based basic medical insurance scheme, URBMI) or for rural residents (the New Cooperative Medical Scheme, NCMS). China has done well to reach universal health insurance coverage, and increased participation in the NCMS leads to increased utilization of preventive care, but unfortunately it does not improve the health status of those insured.
Further policy initiatives, research, and implementation are required to improve all of the major health outcomes: health status, financial protection, and patient satisfaction.

Poor health status among rural people is also due to the unbalanced distribution of health-care resources coupled with the restrictions imposed by the Hukou system. Most health resources are located in urban China and are not covered under the NCMS (Fig. 1). In 2012, there were 3.9 physicians per 1000 people in urban China but only 1.4 in rural China. Village doctors and schools are the first point of contact when these children encounter health issues; a lack of adequate training for village doctors and teachers, however, continues to hinder early detection and satisfactory care. For instance, 43% of psychiatrists in China have only 3 years of technical school training or less. Local health-care implementation matters because the NCMS cannot be used outside one's hometown in most provinces, and for the same care of the same quality, prices can vary for patients with different types of health insurance. It is therefore difficult for children with rural Hukou to receive reimbursement for health care in urban areas, whether they are left behind or have migrated with their parents. These barriers to access harm the health of these children (Fig. 1).

In 2012, the mortality rate among children aged 5 years and younger in rural areas was 16.2 per 1000, nearly three times that in urban areas (5.9 per 1000). Furthermore, recent studies showed that health status was significantly lower in children with rural Hukou, regardless of whether they lived in urban or rural areas, while the children with the best health status were those with urban Hukou living in urban areas. These findings show that Hukou alone, independent of area of residence, has a negative impact on children's health outcomes in China.

Fig. 1 Health-care problems faced by children of migrants in China (LBC, left behind children; MC, migrant children).

Quality of Care for This Population

Healthcare utilization is one upstream factor that influences the care ultimately received by migrant children. Internal migrants in China often have poorer economic status, which, alongside the lack of health equity and access caused by the Hukou system, often causes migrant children to receive inadequate care, in terms of both utilization and quality. The discrepancy starts at birth: migrant women have been found to utilize prenatal examinations and postnatal visits two and six times less, respectively, than locally registered residents. Maternal mortality rates are high for rural-to-urban migrant women: for example, 48 deaths per 100,000 migrant women in Shanghai in 2005, compared with 1.6 per 100,000 among resident women. Moreover, migrants are about half as likely as local residents to use hospitalization services. While little research has examined the healthcare utilization of migrant children in China, one recent survey in Guangzhou found that 17.6% of migrant children who required outpatient services in the past 2 weeks had unmet needs, as did 46.8% of those who required inpatient care within the past year. That study also found that highly acculturated parents of migrant children (those who had adapted well to life in the city) were significantly more likely to access healthcare services, indicating that not all migrants' situations are the same (Fig. 1).

Social integration has also been shown to play a critical role in health access. A 2006 study of five Chinese cities found a strong association between a migrant's sense of social integration and their ability to access healthcare; this reflects both sustained relationships with family and friends back in the rural community and the relationships established in the city. For someone cut off from support networks, healthcare visits and hospitalizations can be much more daunting.

In recent years, the lack of healthcare access for migrant children has begun to receive notice at the national level as well. In March 2017, the National Health and Family Planning Commission (formerly the Ministry of Health) issued a notice emphasizing the health-care needs of left-behind children and setting guidelines for future funding, surveys, and public education. According to China's 13th Five-Year Plan, establishing a medical system built on a strong foundation of primary care and prevention services is a top government priority.

Cost is often a central issue: in one 2008 survey of migrants in Hangzhou, one-quarter of the cohort had returned to their village for healthcare to avoid the high cost of care in cities. With a sharp gradient determining the quality of care in China, issues of access are further exacerbated by families' distrust of the care provided in their rural hometowns: a 2016 study found that 6% of Chinese respondents distrusted care provided by hospitals, versus 26% reporting the same distrust of clinics. The quality of hospitals in China is also closely linked to location, with the best facilities usually found in the largest cities. For many migrant children, the main source of care in the village may be restricted to village community health centers, and the quality of the care those facilities provide has been questioned as well.
One cross-sectional standardized patient study of rural health centers found a discrepancy between township-level and village-level healthcare in rural Shaanxi, Sichuan, and Anhui: only 9% of village doctors and 14% of township doctors treated cases of diarrhea correctly according to national standards, indicating that the more rural the locale, the more likely doctors are to lack proper education and training. Among rural doctors there is also a trend toward an aging workforce and difficulty recruiting medical graduates willing to accept work in such conditions. For particularly acute cases, urban hospitals, often run by the military, are seen as the places patients should visit. Yet if one lacks access to those facilities or cannot afford to pay, the environmental and other health problems facing migrant children become more difficult to address.

Key Diseases

Infectious Disease

Children of migrants face additional risk factors for the contraction and spread of infectious diseases. This section highlights a few specific infectious diseases that have been tied to migration, with special attention to vaccine-preventable illnesses, sexually transmitted infections, and waterborne illnesses.

While the incidence of infectious diseases in China has decreased in recent decades, migrants are more likely than non-migrants to live in dense, overcrowded urban areas with poorer hygienic conditions, which can have deleterious effects on children. For example, one epidemiologic study of hand-foot-and-mouth disease, a contagious viral illness that primarily affects young children, found that children of migrants were more than three times more likely to contract severe forms of the disease.


Tuberculosis, which affects children as well as adults, has also been found to thrive amid China's internal migration. China has the third highest number of tuberculosis patients in the world, and experts have called migration the "primary mechanism driving local incidence." Migrants have been found to experience significant delays in receiving treatment, with significant consequences both for the individual patient and for the further spread of the illness. Air pollution, tobacco smoke, and other environmental toxins also contribute to the spread of tuberculosis: one team from the Harvard School of Public Health found that tuberculosis rates in China could be cut by as much as half if these environmental hazards were mitigated.

Children of migrants are also more likely to contract vaccine-preventable illnesses. While overall vaccination rates in China are quite high, children of migrants remain more likely to be under-vaccinated and are among the most likely to be affected by a measles outbreak, for example. Vaccines are mandatory and given to children free of charge regardless of Hukou registration status; however, only about half of children of migrants were appropriately vaccinated according to one 2010 study of migrant communities in Beijing. Another 2010 study found that 22% of female factory workers were not immune to rubella, which can cause severe birth defects and lifelong disability in their children if contracted during pregnancy. This level of immunity is lower than what the World Health Organization deems necessary for herd immunity, and indeed the incidence of congenital rubella has increased in recent years, especially in areas with many migrants.

Migrants are also more susceptible to sexually transmitted infections. An estimated 75% of female sex workers in China are migrants, and those who frequent sex workers are themselves more likely to be migrants. One sample found that 6% of all female migrants in major cities engaged in commercial sex, a rate many times that of the general public. The migrant-dominated sex industry, along with other factors discussed elsewhere, puts migrants at higher risk of sexually transmitted infections, many of which can lead to complications in the children of migrants; the rate of congenital syphilis, for example, has increased dramatically in recent decades, with a large burden on migrants.

While there are no data on infectious illness caused by unclean drinking water in children of migrants specifically, water pollution is a major problem disproportionately affecting those of lower socioeconomic status. Although drinking water is discussed in further detail elsewhere in this article, it is worth noting here that experts estimate 300 million people in China rely on hazardous drinking water, which has been linked to various infections, including diarrheal illness.

Cancers

Children left behind with grandparents or other caretakers are especially vulnerable to environmental health hazards. At a time when their bodies are developing and absorbing proportionally larger amounts of water and airborne particles, their parents are not present to monitor their health and development. One investigation in Tongliang, a peri-urban community near Chongqing, reported that fetal and childhood development was negatively impacted by a nearby coal-fired power plant. After the plant was shut down in 2004, the head circumference of newborn babies increased, indicating reduced exposure to polycyclic aromatic hydrocarbons. Migrant workers and their families, with their limited knowledge of local health risks and frequently limited access to healthcare, have been found to be particularly vulnerable to cancer in certain cases. A lack of basic education regarding health risks can leave migrants more vulnerable to carcinogen exposure. In Shijiazhuang, the capital of Hebei province in northern China, a lack of education was shown to be a significant factor associated with Helicobacter pylori infection in migrant workers (H. pylori is classified as a group 1 carcinogen). Migrant families also tend to bring unhealthy habits from their hometowns to their new environment. One study of Henan migrants displaced long-term to Caihu, Hubei, found that migrant residents were nine times as likely as host residents to die of esophageal squamous cell carcinoma, and there was a significant association between being a migrant in the community, using unsafe groundwater, and not having a dedicated kitchen or a chimney for cooking. Whether due to a lack of healthcare access or a fear of exposing one's status, high-risk migrant populations have also been found to be less likely to seek proper cancer screening. Among female sex workers in Hong Kong, non-locals and those with migrant status were less likely to have had a previous Papanicolaou test (Pap smear), and place of origin was the single most important risk factor for cervical cancer identified by the study. To date, the strongest evidence for an environmental link to cancer is outdoor air pollution's association with lung cancer. A review of multiple analyses found a risk ratio approaching 1.5 for lung cancer when comparing urban with rural environments, or areas with higher estimated levels of air pollution with lower ones. Lung cancer mortality in Hebei Province, a region with some of China's most polluted air, roughly tripled between 1973-75 and 2010-11. High rates of smoking among the Chinese population pose a major cancer risk, and the 12 cancers formally established as being caused by smoking account for 75% of all cancers combined in China. With smoking so prevalent, exposure to secondhand smoke poses a major risk to the health of children of migrants: a study of nonsmoking children in rural China found that 68.0% of the cohort was exposed to secondhand smoke. The stressful nature of work and life as a migrant in China also influences smoking; one study found that migrants with high perceived work stress were 75% more likely to be smokers. Chinese children whose parents smoked in front of them, or whose fathers smoked in front of their mothers during pregnancy, have also presented with higher rates of asthma, and they may face higher rates of lung cancer later in life.
Beyond systemic factors like air pollution, cancer villages are another phenomenon that highlights the impact of environmental pollution on cancer epidemiology and the migratory challenges faced by left-behind children and migrant children in China. The name itself indicates its meaning: the term refers to a post-reform phenomenon in which a greater-than-expected number of cancer cases occurs within a population in a geographic area over a period of time, largely due to cancer-causing chemicals, most often delivered through contaminated water. Digestive and respiratory cancers are particularly prevalent in cancer villages compared to elsewhere. To date, 400-500 cancer villages have been identified within China, and these highly polluted areas reveal the challenges to improving health outcomes for left-behind children and their families. The pollutants include heavy metals, such as mercury, lead, and chromium, as well as other carcinogens that have been banned in many other countries. Pollution tends to decimate the agricultural output of these areas, imposing a financial burden on local families and making it even more difficult for them to combat their health issues. For childhood cancers in China, research on group-specific incidence, comparisons between migrant children and urban children, and studies measuring the impact of pollution remain underdeveloped. The current data, however, reflect an increasing rural-urban divide in cancer detection. From 2000 to 2010, the incidence rate of childhood cancers in China rose significantly, increasing 2.8% annually, while the mortality rate decreased by an insignificant amount (−1.1%), which supports the idea that rapid urbanization and advancements in healthcare have led to higher rates of cancer detection among children in China. Zheng et al. also found that rates of childhood cancer in China are higher in urban areas than in rural areas (92.6 vs. 79.7 per million, respectively), which could be due to limited screening in rural areas. From the perspective of left-behind children and migrant children, both migratory patterns present a challenge: either remain left behind and risk limited detection and a lack of prevention, or move to a city and risk exposure to higher levels of air pollution and other contaminants. In China, of course, the problem is not that simple, as air pollution can be very intense in some rural areas as well, and water pollution is not limited to rural areas but can also be a problem on the outskirts of cities.
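As a quick check on the trend figures above, the sketch below compounds the reported annual rates over the 2000-2010 decade. It assumes the −1.1% mortality figure is also an annual rate, which the underlying study may state differently.

# Compounding the reported annual rates over 2000-2010 (10 years).
# The -1.1% mortality figure is assumed here to be an annual rate.
years = 10
incidence_change = 1.028 ** years - 1   # 2.8% annual rise in incidence
mortality_change = 0.989 ** years - 1   # assumed 1.1% annual fall in mortality

print(f"Cumulative incidence change: {incidence_change:+.0%}")  # about +32%
print(f"Cumulative mortality change: {mortality_change:+.0%}")  # about -10%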

Psychological and Neurological Disorders

Mental, behavioral and neurological disorders account for 3.7% of the global disease burden in children under 15 years of age. The Global Burden of Disease 2013 study reports that self-harm is the second leading cause of death among 10-24 year-olds, after road injuries. Suicide rates among rural Chinese youth are three times those of their urban counterparts, and these rural-urban differences are likely underestimated, given that self-harm among China's rural children may go unreported and undocumented. Left-behind children in China are at greater risk of insecurity, depression and anxiety, and are 60% more likely to consider suicide than children living with their parents. Children of migrant workers in China face additional barriers, including the cultural stigma attached to seeking psychiatric help. Additionally, China faces a major shortage of mental health providers, and many of the providers currently practicing lack sufficient formal training. Studies of the mental well-being of both left-behind children and children of migrants point to additional signs of mental ill health. Left-behind children in southern China were found to have higher levels of internet addiction and suicidal ideation, and more often considered running away from home, alongside other harmful behaviors such as binge drinking and smoking. A cohort of migrant adolescents living in Shanghai had significantly fewer social connections, lower self-esteem, and higher depression levels than their urban-born peers; symptoms of separation anxiety and generalized anxiety disorders were also commonly seen. The evidence points to substantial psychological risks incurred by young rural-urban migrants moving in both directions, and notably, rural children separated at an earlier age, especially from mothers or from both parents, were at an even higher risk of having symptoms of anxiety and depression. To address this problem, the Chinese government has enacted a National Mental Health Work Plan (2015-20) that aims to maintain counseling rooms in all schools and to promote awareness of mental health and well-being. More research needs to be done, however, as there is uncertainty regarding the direction and magnitude of these trends over time.

Public Opinions and Satisfaction on the Current Issues and Health Care System

In discussing the health of children of migrants, it is important to understand the current position of the Chinese healthcare system as a whole. While sectors such as education, infrastructure, and technology have developed at breakneck speed within the last few decades, improvements in healthcare have lagged. Chinese families across socioeconomic strata still face significant hurdles to securing basic healthcare, and the squeeze is felt especially by those of lower socioeconomic status, such as migrants. Facing increasingly expensive payments for low-quality care, and with little recourse through formal legal channels, the Chinese public has become increasingly vociferous in its demands for better healthcare. In recent years, citizens have taken to brazen acts such as directly attacking doctors, vandalizing hospital property, and publicly displaying deceased family members. Experts estimate that tens of thousands of such disturbances take place across the country annually, with over 1 in 10 hospital personnel reporting that they have been physically assaulted by a patient or a patient's family. While there are no data on what proportion of these disturbances are caused by migrants, one can speculate that migrants' precarious legal status and limited connections make them more likely to resort to such acts of desperation. Physicians in China also face many additional financial and administrative hurdles that affect the care of children of migrants. Broadly speaking, Chinese physicians do not enjoy the level of prestige that their counterparts do in the West. With relatively low salaries, poor training, and hectic schedules for physicians, China has struggled to find enough qualified doctors to meet the growing demand for healthcare services. The most talented and ambitious are largely shying away from underpaid, bureaucratic work in healthcare, and many Chinese doctors struggle to earn a comfortable living. This situation makes it even less likely that those with limited ability to pay for healthcare services, such as children of migrants, will be able to receive care without significant further government investment in the healthcare field.

Past Experience, Lessons and Recommendations

An integrated approach is required to address the effects of environmental hazards on the health of children of migrants, one that tackles environmental as well as socioeconomic factors. Fortunately, Chinese governmental and non-governmental agencies have taken steps on both fronts. In September 2013, the State Council issued an Action Plan for the Prevention and Control of Air Pollution, which aims to reduce air pollution by more than 10% from 2012 to 2017; the following March, Premier Li Keqiang declared "war" on pollution. In April 2015, the State Council issued the Water Pollution Prevention and Control Action Plan, which calls for coordinated effort from 12 ministries and government departments. The government promised to shut down factories that pollute water supplies and to ensure that, by 2020, more than 70% of the water in seven major river basins will meet the standard for human consumption. In May 2016, China released a nationwide Action Plan for Soil Pollution Prevention and Control with specific aims such as making 90% of polluted arable land safe for human use by 2020, rising to 95% by 2030. But many cities still do not meet the required standards. Take water pollution, for example: continued effort is required to clean water sources, to prevent and purge rivers of industrial and agricultural pollutants, and to recycle water through other methods. In the meantime, the most effective short-term solution seems to be buying bottled water and installing low-carbon water purifiers in homes for drinking water, while using lower-quality water for laundry and showers.

China's developing primary health-care system is implementing basic public health services to ensure equitable access. These services are funded by the government and available to all free of charge, regardless of Hukou status. They include health education, vaccination, maternal health care, elderly health care, and chronic disease management for conditions such as type 2 diabetes and hypertension. Moreover, in March 2017, the National Health and Family Planning Commission (formerly the Ministry of Health) issued a notice emphasizing the health-care needs of left-behind children and setting guidelines for future funding, surveys and public education.

Despite this progress, challenges remain in addressing the environmental health issues faced by this disadvantaged group of children. The implementation of these programs has been fragmented and uneven. Low incomes, poor housing and the marginalized status of migrant workers prevent many children from joining their parents. Despite improving access to primary care, the Hukou system still limits most rural children's access to urban education and health care, and thus hinders the implementation of universal health coverage in China. For example, the best-performing hospitals in China are currently all state-owned and almost exclusively located in major cities. While people with a rural Hukou may seek care at these hospitals, they cannot access the same preferential subsidies and benefits that are available to local urban residents.
This, coupled with the huge urban-rural income and resource discrepancies, means that children of migrant workers not only disproportionately suffer environmentally caused damage to their health and have poorer access to health care, but may also have impaired physiological protective mechanisms against toxicants and infectious agents due to poverty-related malnutrition. The Hukou system may also thwart approaches shown to be effective elsewhere, including in Australia, Japan and some countries in Europe. Government efforts to address the environmental health crisis are further complicated by the demand for continued economic development. Since 1989, the government has promised material wealth to the people in exchange for their support, and a careful balance is urgently needed between sustained economic development and its environmental implications. Furthermore, Chinese society holds polarized opinions on environmental health problems. First, because of the wide range of exposures individuals may encounter on a daily basis, it is extremely difficult to establish a causal relationship between the environment and health. Second, despite some concrete evidence of environmental effects on health, pollution levels differ across areas of China, and people do not feel that health effects are distributed equally across communities, or even among members of the same community. At the municipal level, urban communities with large populations of migrants can also work creatively to intervene in the health disparities among their own residents. In Shenzhen in 2005, the government enacted a program called the Cooperative Health Care System for Migrant Workers (CHCSMW) to provide health insurance for its large population of migrant residents, which made up 60% of the total population at the time. Research on Shenzhen's model found that migrants who utilized health insurance programs like the CHCSMW were more likely to access healthcare services and visit doctors than migrants who were uninsured locally. A holistic, multi-sector approach is required to address these complex issues, and it must be tailored to the specific needs of migrant families in China. It requires coordination and cooperation among government agencies such as the Ministry of Environmental Protection, National Health Commission, Ministry of Water Resources, Ministry of Finance, Ministry of Agriculture, and Ministry of Housing and Urban-Rural Development; non-governmental actors such as businesses and civil organizations; and individual members of society, including parents, teachers and community members. Health professionals and researchers need to identify knowledge gaps and pinpoint the environmental origins of childhood disease, and these findings must be communicated to the health sectors responsible for looking after migrant children's health. The government must set clear goals, and each agency must have well-defined responsibilities and accountabilities. Instead of focusing only on a downstream "curative" approach, prevention must be emphasized. Investment should be targeted at two complementary approaches. First, the government needs to act on root causes, such as reducing pollution, to ensure a healthier environment in the long term. Second, the government must provide more opportunities and safeguards to reduce the disparity between migrants and non-migrants. This may be accomplished by policies aimed at improving the living standards of migrant workers in urban areas and by reducing the discrepancies imposed by the Hukou system, so that children from rural areas gain equal access to urban housing, education, and health care. Despite the enormous challenges, China benefits from pragmatic policy-making that relies on research evidence from demonstration cities, and from the greater allocable resources controlled by a strong central government.

See also: Mexican Epidemiological Paradox: A Developing Country with a Burden of “Richness” Diseases; Mozambique: Environment and Health in One of the World’s Poorest Nations; Uruguay: Child Health; Waterborne Parasites in North Africa Environments.

Further Reading
Boffetta, P., Nyberg, F., 2003. Contribution of environmental factors to cancer risk. British Medical Bulletin 68 (1), 71-94.
de Brauw, A., Mu, R., 2011. Migration and the overweight and underweight status of children in rural China. Food Policy 36 (1), 88-100.
Hesketh, T., Jun, Y.X., Lu, L., Mei, W.H., 2008. Health status and access to health care of migrant workers in China. Public Health Reports 123 (2), 189-197.
Jahn, H.J., Ling, L., Han, L., Xia, Y., Krämer, A., 2011. Migration and health in megacities: A Chinese example from Guangzhou, China. In: Health in megacities and urban areas. Physica, Heidelberg, pp. 189-208.
Jan, C., Zhou, X., Stafford, R.S., 2017. Improving the health and well-being of children of migrant workers. Bulletin of the World Health Organization 95 (12), 850.
Qiu, J., 2011. China to spend billions cleaning up groundwater. Science 334, 745.
Song, Y., 2014. What should economists know about the current Chinese Hukou system? China Economic Review 29, 200-212.
Tu, J., 2014. The point, after all, is to change the world. Yinao: Protest and violence in China's medical sector. Berkeley Journal of Sociology. http://berkeleyjournal.org/2014/12/yinao-protest-and-violence-in-chinas-medical-sector/.
WHO, 2016. Ambient air pollution: A global assessment of exposure and burden of disease. World Health Organization, Geneva.
Xia, P., Ma, M.F., Wang, W., 2012. Status of Helicobacter pylori infection among migrant workers in Shijiazhuang, China. Asian Pacific Journal of Cancer Prevention 13 (4), 1167-1170.
Yang, C., Lu, L., Warren, J.L., Wu, J., Jiang, Q., Zuo, T., Gan, M., Liu, M., Liu, Q., DeRiemer, K., Hong, J., 2018. Internal migration and transmission dynamics of tuberculosis in Shanghai, China: An epidemiological, spatial, genomic analysis. The Lancet Infectious Diseases 18, 788-795.
Yao, T., Sung, H.Y., Mao, Z., Hu, T.W., Max, W., 2012. Secondhand smoke exposure at home in rural China. Cancer Causes and Control 23 (1), 109-115.
Zhang, J., 2012. The impact of water quality on health: Evidence from the drinking water infrastructure program in rural China. Journal of Health Economics 31 (1), 122-134.
Zhou, C., Chu, J., Geng, H., Wang, X., Xu, L., 2014. Pulmonary tuberculosis among migrants in Shandong, China: Factors associated with treatment delay. BMJ Open 4 (12), e005805.

Environmental Health, Planetary Boundaries and Limits to Growth
Colin D Butler, University of Canberra, Canberra, ACT, Australia; Australian National University, Canberra, ACT, Australia; and Flinders University, Bedford Park, SA, Australia
Kerryn Higgs, University of Tasmania, Hobart, TAS, Australia; and Club of Rome, Winterthur, Switzerland
Rosemary Anne McFarlane, University of Canberra, Canberra, ACT, Australia
© 2019 Elsevier B.V. All rights reserved.

Glossary
Anthropocene The human-dominated era in which we now live. There is debate over when this started, but the most widespread views are either the mid-18th century, when carbon dioxide levels started to rise sufficiently to alter Earth System function due to the Industrial Revolution, or the first human-created nuclear explosion, in 1945. The word combines the root "anthropo," meaning "human," with the root "-cene," the standard suffix for "epoch" in geologic time.
Ecosystem services The benefits or "services" which ecosystems (functionally linked combinations of species) provide to humans, either indirectly or directly. These are normally grouped into four: "supporting" (e.g., soil formation, nutrient recycling, and water purification), "provisioning" (e.g., food or fiber from crops or forests), "regulating" (e.g., the protection "provided" by forests, wetlands and mangroves against floods, landslides, droughts and tsunamis), and "cultural" (e.g., aesthetic, recreational and spiritual benefits that humans derive from contact with some forms of nature).
Energy return on energy investment The ratio of useful energy obtained to the energy expended in obtaining that useful energy. The concept is particularly associated with the work of systems ecologist Charles A.S. Hall.
Health A word derived from "whole," referring to a desired, robust and resilient state of function, normally of humans, animals or plants, both individually and as populations. Some also apply the term to ecosystems, or even the Earth System.
Limits to growth The name of a study, commissioned by the Club of Rome and published as a book in 1972. It summarizes modeling work that explored the complex interactions between human civilization and the physical world in which it is inextricably embedded. The book's title reflects its main conclusion: on a finite planet, despite abundant solar energy, growth of the human enterprise is not infinite.
Novel entity A term introduced and defined in 2015 as a planetary boundary, evolving from chemical pollution in previous planetary boundary publications. It was defined as "forms of existing substances, and modified life forms that have the potential for unwanted geophysical and/or biological effects."
Planetary boundaries Earth System processes, modifiable by human actions, whose boundaries, if not exceeded, constitute a "safe operating space for humanity." This term, first published in 2009, is conceptually linked to the Limits to Growth framework.

The Limits to Growth (LTG)

The Limits to Growth (1972) was commissioned by the Club of Rome and written by a research team led by Donella Meadows at the Massachusetts Institute of Technology (MIT). It was not the first work to grapple with the idea that physical limits apply to the human economic and social system, but it drew extensive attention and remains the best-selling environmental book of all time. The book summarized the outcomes of modeling work that explored the complex interactions between human civilization and the physical world in which it is inextricably embedded. The MIT team concluded that significant systemic problems were emerging from accelerating industrialization, population growth, under-nutrition, the depletion of non-renewable resources, and environmental contamination and decline. Pollution was taken as the principal indicator of environmental decline. Specific pollutants that were well measured at the time included carbon dioxide concentrations, nuclear wastes, dichlorodiphenyltrichloroethane (DDT) production, lead in the Greenland icecap and mercury consumption. Direct environmental decline was also seen in marine settings: diminishing dissolved oxygen in oceans, eutrophication of waterways and falling catches of wild fish in some regions. Land-use change (although a significant contributor to rising carbon dioxide (CO2) levels in the atmosphere, then as now) and biodiversity loss were both less conspicuous problems in 1970 than they have since become, though a crisis in available arable land was then acknowledged. But the team stressed the general ignorance at that time of where the planetary limits would lie.

The model (known as World3 and based on a version developed by the pioneering systems analyst Jay Forrester) was run with various combinations of the data, ranging from "business as usual," through numerous partial improvements, to two final scenarios in which population was stabilized, the expansion of industrial production was halted, resources were conserved, and advanced, pollution-sparing technologies were extensively applied. These last runs, where all ameliorative options were adopted, were the only ones that allowed civilization to continue without crisis. If the human economy maintained business as usual, the team found, it would collide with the physical realities of a finite planet by the second half of the 21st century, triggering social collapse.

Initially, the book received a positive response, and some of its recommendations were adopted by several countries. Canadian Prime Minister Pierre Trudeau and US President Jimmy Carter each commissioned studies of the impact of physical limits on the global future and their national prospects. Although these studies examined the outlook only as far as 2000, their conclusions to that date confirmed those of the original LTG study. Right from the start, however, economists were critical, even abusive, creating a negative impression that persists in some quarters today, despite the rapidly accumulating evidence of the basic robustness of the LTG projections and assumptions. Robert Gillette, who attended the LTG launch for the journal Science, noted that the "assumption of inevitable economic growth" represents "the very foundation" of economics; any "limit" to growth challenged this foundation. It is unsurprising that most economists attacked the ideas vigorously, an assault that illustrates the conflict between the core assumptions of economists and those of the physical sciences. Economics adopts a standard model in which production and consumption exist in a circular flow, without a natural context. It is a world of business and individual, producer and consumer, labor and goods. The physical world, which supplies resources and provides a site where wastes can be discharged, is not seen as essential and does not affect the equations, though, occasionally, the concept of "externalities" (which can be negative or positive) is mentioned. Ecological economists reject the argument that human activity is independent of nature, which they consider a conceit. Instead, nature is accepted as the indispensable foundation of human activities. Physics really matters; questions of depletion and pollution are inescapable.

Several researchers have compared the MIT projections with what has actually happened since, establishing that the correlation between the standard run and real-world trends over the intervening years is extremely close. One of these researchers, Graham Turner, compared the standard run's modeled trajectory with 40 years of historical data (see Fig. 1). He concluded that the data for 1970-2010 approximated the standard run of the LTG model, although the figure shows a slight but favorable divergence for the trajectory of non-renewable resources, such as fossil fuels, phosphate and concentrated, rich sources of ores. Systems ecologists Charles Hall and John Day also compared the standard run with actual data to 2008. Despite the common perception that the LTG work had failed, the model's performance was not invalidated, unlike models made by economists, which are rarely, if ever, accurate over such a long time span. In 2018, Jørgen Randers, part of the original LTG team, using an updated model, compared the LTG projections with real-world data up to 2017. He found that real-world outcomes have approximated the second LTG scenario, the "standard run with extra resources," or "pollution crisis."

[Fig. 1 appears here. The chart plots the LTG standard run over 1900-2100 for six series: non-renewable resources remaining, population, food per capita, services per capita, industrial output per person, and pollution. Observed data cover 1900-1970 (as used in the LTG study) and 1970-2010 (as further processed by Turner); the trend modeled by the LTG study extends from 1970 to 2100. An annotation marks "peak health?" around 2030, when population declines due to increasing cases of regional overload.]

Fig. 1 Adapted from Turner, this figure shows the standard run of the LTG model over 200 years. It is based on real data for 1900-2010, analyzed by Meadows et al. for 1900-70 and updated by Turner with data to 2010. It also introduces the concept of "peak health": the point at which human population well-being reaches its zenith, if the LTG model (of decline this century) proves reliable. Although the timing is imprecise, peak health will precede the exact moment of maximum population. Peak health and unwanted population decline are not inevitable; even today they could be postponed, perhaps indefinitely, by enlightened policies and technological breakthroughs.


Many resource analysts have identified declining "energy return on energy investment" as a key to understanding the slowing rate of improvement in living standards for most people in high-income countries. In turn, this decline reflects the dwindling of easily accessible fossil fuels. This subject is returned to below. It is important to understand that the broad trends described in the figure and used in the LTG model do not capture the entire world; no model can. These trends were chosen because they captured many aspects of the material world, both human and natural, including feedback loops and potential crises that might threaten the human economy and thus wellbeing. Indicators of these trends were chosen for a number of reasons, including the attempt to find representative indicators that would faithfully reflect the trends, the need for parsimony and, especially back in 1972, the difficulty of obtaining accurate data. The decline in population that is modeled is a logical consequence of the decline in resources per capita, whether non-renewable, or as food, industrial output and services. If these inputs decline, the modelers assume, so will the human population, which depends on them.
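The feedback logic just described, in which declining per-capita resources eventually pull population down, can be illustrated with a deliberately tiny stock-and-flow toy in Python. To be clear, this is not World3; every equation and parameter below is an arbitrary assumption chosen only to exhibit the overshoot-and-decline shape of the standard run.

# A toy stock-and-flow loop, NOT World3: output depends on a depleting
# non-renewable resource, and population growth tracks per-capita output.
# All parameter values are arbitrary assumptions for illustration.
resource, population = 1000.0, 1.0
history = []
for year in range(200):
    per_capita_output = resource / 1000.0       # output per person falls as the stock depletes
    extraction = population * per_capita_output
    resource = max(resource - extraction, 0.0)  # extraction draws down the finite stock
    # Population grows while per-capita output is ample, shrinks once it is scarce.
    rate = 0.02 if per_capita_output > 0.5 else -0.02
    population *= 1 + rate
    history.append((year, population))

peak_year, peak_pop = max(history, key=lambda h: h[1])
print(f"Population peaks at {peak_pop:.1f} in year {peak_year}, then declines.")

Run as written, population grows for more than a century, peaks as the resource stock passes its halfway point, and then declines: the same qualitative trajectory, though none of the quantitative detail, of the standard run.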

The Ecological Footprint

The concept of the ecological footprint is among the most important developments in thinking related to the LTG. Devised in the 1990s by William Rees and Mathis Wackernagel, this measure enables ecological impacts (individual, national or global) to be quantified and compared. On one hand, it estimates the ecological assets required to produce the resources consumed by any discrete population; this includes food and fiber plants, livestock and fish, timber and other forest products, space for urban infrastructure and whatever "sinks" are needed to absorb the waste produced, especially carbon dioxide emissions. The unit of measurement adopted is the area of biologically productive land and water, usually expressed in hectares. On the other hand, the ecological footprint also estimates the productivity of a country's actual ecological assets (cropland, grazing land, forest land, fishing grounds, and built-up land). Researchers using the ecological footprint methodology calculated that, while the world's biocapacity averages 1.7 ha per capita, high-income "developed" countries greatly exceed this average. Examples include the United States (8.8 ha per capita) and the United Kingdom (5.1 ha per capita). The US, which has far more productive land available than the United Kingdom, appropriates 1.27 times its own biocapacity through imports, and the United Kingdom almost three times. Many island nations and arid countries such as Saudi Arabia exceed their biocapacity by a factor of more than ten. The ecological footprint has the strengths and weaknesses of any aggregate indicator: the concepts and units are easy for policy-makers and the public to understand, but it does not encompass all aspects of human environmental impact (methane, for example, is not integrated). This metric needs to be used in conjunction with other indicators. Another related development is the framework called "planetary boundaries," devised by a large team led by Johan Rockström (see following section).
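A minimal sketch of the comparison described above, using only the per-capita figures quoted in this section. Note one simplifying assumption: each national footprint is compared against the global average biocapacity, rather than against that country's own biocapacity.

# Footprint-to-biocapacity ratios from the figures quoted in the text:
# global average biocapacity of 1.7 ha per capita; national footprints below.
GLOBAL_BIOCAPACITY_HA = 1.7

footprints_ha = {"United States": 8.8, "United Kingdom": 5.1}

for country, footprint in footprints_ha.items():
    ratio = footprint / GLOBAL_BIOCAPACITY_HA
    print(f"{country}: {footprint} ha per capita, "
          f"{ratio:.1f} times the global average biocapacity")

On these figures, the US footprint is roughly 5.2 times the global average biocapacity and the UK footprint roughly 3.0 times, which is why both countries depend on appropriating biocapacity elsewhere through imports.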

The Relevance of LTG to Environmental Health

Although the environmental health literature has long identified links between health and indicators used in the LTG model, such as food, services, and pollution, there has been little recognition among the health community, including within public health, of the possibility of a reduction in population this century. Such reductions, as mentioned above, are forecast by most LTG scenarios, including the standard run (see Fig. 1). Such a reduction, were it to occur, would have impacts on public health. There are a few exceptions to this generalization. In 1972, the human ecologist Frederick Sargent, in an article in the American Journal of Public Health, warned that human "interventions in and manipulations of the processes of the planetary life support system (ecosystem) have produced a set of complex problems" (page 629). In 1973, the visionary socialist economist Barbara Ward co-authored "Only One Earth" with the microbiologist, pioneering "Earth physician" and Pulitzer Prize winner René Dubos. This book stated in part that "the charge to the [1972 Stockholm] Conference was clearly to define what should be done to maintain the earth as a place suitable for human life not only now, but also for future generations" (emphasis added) (page xiii). Health is implicit in this statement, as is sustainable development. In 1993, McMichael, who was influenced by Dubos, echoed Sargent's term, writing in the foreword to his influential book Planetary Overload that "The most serious potential consequence of global environmental change is the erosion of Earth's life-support systems. Yet, curiously, the nature of this threat to the health and survival of the world's living species, including our own, has received little attention" (page xiii). A quarter of a century later, little has changed. Although keynote talks by McMichael and John Last at the 1993 conference of the International Epidemiological Association warned of the dangers to global public health of global environmental change, there has been barely any recognition or follow-up at the broad, integrated level. LTG receives little recognition now. A literature search for the term "limits to growth" in association with "health" reveals little other than work involving the authors of this contribution and their close collaborators. This is unfortunate. The persistence of single-issue approaches to complex problems retards our ability to act effectively; this is the case, for example, with the singular focus on climate change. The impact on human health and wellbeing is modified by many interacting factors, including population, the global demand for and availability of resources and services, and the waste and pollution we generate. Other than noting the exponential growth of carbon dioxide in the atmosphere, LTG did not address climate change as such, but it modeled these factors as part of a complex system.


The issues of resource depletion and population pressure have featured, if infrequently, in some medical curricula since at least the 1950s, preceding publication of the LTG by over a decade. Pioneering writers and speakers have included Colin Bertram, Peter Parsons, and Roger Short. John Guillebaud, the world's first clinician professor of family planning and reproductive health, has repeatedly spoken on the issues of both consumption and population to diverse audiences, including medical practitioners, nurses, medical students and others, since 1971. In 2011, a special issue of the American Journal of Public Health focused on peak oil, an important aspect of LTG. A scattering of other articles in the health literature have mentioned peak oil, but LTG is far more than peak oil. It is also far more than global warming. The reasons for the general failure of the public health literature to engage with LTG are complex, but they include an incorrect belief that LTG was discredited, over-specialization within the public health community, political suppression of the core ideas, and a lack of funders. The issue has had very few champions. This lack of engagement is not from lack of evidence.

Planetary Boundaries and Health

The term planetary boundaries (PBs) was first published in 2009. These boundaries were defined, initially, as referring to nine Earth System processes (see Table 1), each of which has been, is being, and will be modified by human actions. The first planetary boundary paper (published in 2009 in Ecology and Society, subtitled "exploring the safe operating space for humanity") explicitly acknowledges its debt to the LTG framework. A "safe operating space" implies the existence of multi-dimensional limits, just as the word "boundaries" does. Nonetheless, the links between PBs and the LTG are mostly implicit. Although the focus of the PB work is on identifying the criteria for a "safe operating space" for humanity, rather than for other living species, the concept acknowledges that humans depend on the diversity of life on Earth. The first PB argued to be outside the safe operating space is biological diversity. The large multidisciplinary team (29 co-authors) responsible for the first PB articles argued that, analogous to the nine bodily systems (renal, hepatic, neurological and so on), these Earth System processes can still provide useful services even when functioning outside their optimal range. However, pushed too far, even one aberrant bodily function can cause death, and just one extremely disturbed Earth System process might trigger catastrophic consequences for humanity. Perhaps, for example, a precipitous loss of insects could disrupt pollination, food supply, and the survival of birds and other vertebrates. In turn, the loss of birds might trigger additional crop losses, as their biological control of insects and other pests (i.e., of the pests that do survive) is lost. Another example is the reduced complexity of the microbiome of those who live in cities and other modified environments, which in turn has been hypothesized to play a causal role in the emergence of auto-immune diseases such as Type 1 diabetes. Further, just as with bodily processes, Earth System components are linked to, interact with, and are influenced by common causes. Humans may live for decades with chronic illness, but die quickly if multi-organ failure develops. So too, Earth System processes interact, but exceeding multiple planetary boundaries, either simultaneously or in close succession, risks precipitating a steep decline towards a civilization-crippling condition. Other writers have also commented on the similarities between the Earth and the human system, including James Lovelock, the originator of the Gaia hypothesis and an early user of the term "planetary medicine." Table 1 shows links between planetary boundaries, LTG and human health.

Table 1 Links between planetary boundaries, LTG indicators and health impacts

Planetary boundary (PB; direct LTG indicator link in parentheses) | Most obvious connections with other PBs | Indirect link with other key LTG indicators | Link with human health discussed here?
Aerosol loading (pollution) | Climate change, novel entities | Industrial output, food production (fossil fuels required for both) | No
Biodiversity loss, terrestrial and marine | Land use, climate change | Food per person, to nourish growing population | Yes
Biogeochemical flows (N, P) | Land use | Food per person, population size | No
Climate change (pollution effect) | Ocean acidification, land use | Food and industrial output | Yes
Freshwater use | Land use | Food, industrial output per person, population | No
Land system change | Climate change, biodiversity loss, biogeochemical flows | Food per person | Yes
Novel entities (pollution) | Chemical pollution | Industrial output, food per person | Yes
Ocean acidification (pollution effect) | Climate change | Pollution | No (effect on marine food chain)
Stratospheric ozone depletion (pollution effect) | Novel entities | Pollution | Yes


The exact extent to which we are breaching planetary boundaries is still being explored. The team’s 2015 paper argued that two problems are already extremely dangerous (red zone) and two others are well on the way (amber zone), also noting that the designated boundaries are inter-related and most have overlapping implications.

Planetary Boundaries Red Zone: Biosphere Integrity

The team considers loss of biosphere integrity the most critical problem. Rates of extinction are reckoned to be at least 100 times the background rate, possibly as much as 10,000 times. Populations of vertebrate species declined by more than half between 1970 and 2012, and the biomass of wild mammals is now only about 2% of the total, which is dominated by humans and livestock. That remaining 2% is under siege, including for substances such as rhinoceros horn and pangolin scales, which are alleged to have therapeutic benefits even though they consist chiefly of keratin, as do fingernails. The biological diversity of lower-order organisms (within soils, among pollinators, and in the species traditionally utilized for plant-origin food, resources and medicines) is similarly in decline. Also under deliberate attack are forests, especially species with valuable timber or those growing on land that can be used for crops, including oil palm. Many forests are also at unintended risk due to roads and global warming, each of which also exacerbates the risk of fire. Global warming is also likely to further hasten biodiversity loss. In some cases biodiversity decline and climate change, acting together or independently, also promote the survival of new pests. Infestations of tree borers, able to survive warmer winters in large numbers, render trees, forests and their associated animal life more vulnerable to disease and fire. Humans, a form of life, depend on the fabric of other life on earth for their survival: for food, clean air and water, and numerous other ecosystem "services" (see glossary and below), as well as for novel substances, including drugs. At some level of bio-alteration, ecosystem services will decline in a non-linear way that could cascade in a manner harmful to all civilization. Smaller-scale examples include the collapse of regional fisheries or the decimation of regional harvests by novel diseases or pests, such as in Ireland in the late 1840s. The Khapra beetle, a pest from South Asia that has evolved insecticide resistance, threatens significant post-harvest losses of rice (up to 30%) in some regions. Devastating drought in the "dry corridor" of Guatemala, El Salvador and Honduras has been blamed for contributing to the influx of migrants attempting to enter the United States between 2014 and the present (2019).

Planetary Boundaries Red Zone: Biogeochemical Cycles

For Rockström, Steffen and colleagues, the second most pressing danger is the radical disruption of the biogeochemical cycles, particularly those of nitrogen and phosphorus. In nature, most nitrogen remains inert in the atmosphere, though some is mobilized by bacteria and leguminous plants. Applied as fertilizer, nitrogen has greatly expanded food production, but it is now cascading through our rivers, groundwater and continental shelves, initiating algal blooms and dead zones. In the case of phosphorus, the other widely dispersed fertilizer, there is an added danger: phosphate rock is a resource in decline, with grim implications for future agriculture, especially where populations will lack the financial capacity to import it as prices rise.

Planetary Boundaries Amber Zone: Land-System Change

Land-system change is argued to be in the "amber zone," close to crossing the boundary into extreme danger, if it has not already crossed it. Millions of hectares of vegetation are still being cleared every year, and wetlands continue to be drained. Stocks of "blue carbon," stored in plants and trees associated with water, such as kelp and mangroves, are also under threat. Land-system changes enable more food, fiber and other financially valued products to be grown, but they amplify the harm to several PBs: biological integrity, climate and biogeochemical cycles. Oil palm plantations are displacing tropical forests in Asia, Africa and, increasingly, Latin America, where clearing already provides cattle pasture, soybean and sugar cane. Such plantations involve the death of vast numbers of individual animals and the annihilation of immense tracts of tropical forest. This boundary is underpinned by the declining remainder of tropical, temperate and boreal forests, which have a major role in land surface-climate coupling. In addition, agricultural land-system change may ultimately result in land degradation, giving rise to erosion, loss of topsoil, sedimentation of waterways and degradation of coastal zones. In dryland regions, degradation is referred to as desertification, and large areas are affected: the United Nations considers that 1 billion people are at risk of desertification globally, half of whom live in Africa, where they face major challenges to water and food security. Increasing urbanization also drives land-system change, typically in areas of high agricultural productivity. Vast urban regions alter surface energy (through the "heat island" effect), hydrological and biochemical cycles, net primary productivity and biological diversity. They are also major foci of pollutants. As humanity becomes predominantly urbanized, it is with these land systems that most of us have the most intimate contact.

Planetary Boundaries Amber Zone: Climate Change

Also in the amber zone is climate change. Remaining below the 2°C target, which is thought to provide a reasonable chance of avoiding catastrophic climate change, necessitates technologies for extracting carbon from the atmosphere which do not yet exist. Most nascent carbon-reducing technologies require considerable energy, although new forms of cement may soon be feasible and affordable on a large scale. Research since 2015 suggests that the 2°C target may need to be adjusted downwards to provide a reasonable chance of avoiding calamitous warming, in which case climate change may already belong in the red zone. Even if the commitments made at the 2015 Paris Conference of the Parties are all honored, it currently seems likely to many analysts that global temperatures will be 2°C hotter than pre-industrial times by 2050 and nearly 3°C higher by 2100. These estimates depend on a number of variables: whether nations will adopt more ambitious pledges in the near term; whether technologies will emerge that can, at a low energetic cost, draw carbon back out of the atmosphere; and whether unknown tipping points will be crossed, forcing a temperature surge. If these variables prove unfavorable, the aspirational 1.5°C maximum target may be reached by the early 2030s. There is no guarantee that the damage can be held to approximately 2°C, in particular due to the risk of amplifying feedbacks such as the release of carbon dioxide and methane from the Arctic, and/or the drying and burning of the Amazon forest. The capacity of the ocean to absorb CO2 is also declining; that will slow the rate of ocean acidification, but increase atmospheric heat trapping. Even if temperature rise and rainfall intensity can be contained, crop yields will decline and many places will become unliveable due to excessive heat and humidity or coastal inundation. Glaciers that act as a bank to store water, and whose melt in some cases supplies electricity to billions in Asia and South America, will shrink; coral reefs and many other species will disappear; and significant, even catastrophic, sea level rise will result. In Greenland and along the entire coast of West Antarctica, ice shelves are already retreating or collapsing as warm seawater intrudes underneath, grounding lines retreat, and the glaciers behind them accelerate in their march to the sea. Climate scientist James Hansen and many glaciologists warn that the disintegration of the polar ice sheets involves non-linear processes; the timing, though still unknown, may be far quicker than assumed, and may include rapid, even unstoppable collapse of ice cliffs in series in parts of Antarctica and Greenland.

The impact upon human wellbeing resulting from stress on biological diversity will be compounded by climate change and the fragmentation of society. For example, a complex economic and social fabric enables the importation of food and other resources to an increasing number of regions, some of which have been in this vulnerable situation for decades. Such mechanisms are fragile. Today, five countries are recognized as afflicted by famine: Yemen, Somalia, South Sudan, N.E. Nigeria and two regions of the Democratic Republic of the Congo (Kasai and Tanganyika). In the long run, if climate change and other aspects of adverse ecological change intensify, it is also possible that regions that are currently net food exporters will experience famine; if this evolves, then conditions in food-importing regions will inevitably deteriorate.

Pollution

Alongside these four major crises, the researchers also identify the threat from various forms of pollution. Most of these are discussed elsewhere in this encyclopedia. However, we briefly discuss novel entities.

Novel entities, novel behaviors, novel environments and health

Novel entities is a recently introduced term, first identified as a planetary boundary by the PB team in 2015, evolving from chemical pollution in the earlier PB publications. The PB team defines novel entities as "forms of existing substances, and modified life forms that have the potential for unwanted geophysical and/or biological effects." Most novel entities have been generated in the Anthropocene, the human-dominated era, defined roughly as the time since the start of the widespread combustion of fossil fuels in the 18th century. They include synthetic molecules such as chlorofluorocarbons (CFCs), DDT, dieldrin and other organochlorines used as biocides, and compounds used in industry such as polyvinyl chloride. CFCs, by harming the stratospheric ozone layer, clearly impinge on an Earth System function (and thus indirectly on human environmental health): the destruction of the stratospheric ozone layer allows more UV light to reach the earth's surface than before the widespread use of CFCs, potentially increasing the incidence of skin cancer, ocular problems and immunosuppression. Here, however, we focus mainly on the biological effects of novel entities. Novel entities are not confined to new chemical compounds, as the PB authors note. Genetically altered organisms can be conceptualized as novel entities, as can nanoparticles (such as those in sunscreens and cosmetics) and blue light from computer and phone screens. Humans are also exposed to numerous other emerging environmental hazards, especially since World War II, and to human-generated ionizing radiation (X-rays were once routinely used to help fit shoes). Possible health risks of non-ionizing radiation, such as from mobile phones, are discussed briefly below, as are novel behaviors, foods and other novel environments.

A 2017 Lancet Commission report estimates that 140,000 compounds have been synthesized since 1950, with perhaps 5000 widely disseminated in the global environment. Although some are regulated, and a few have been banned, the pace of their introduction greatly exceeds that of epidemiological investigation and legal constraint. For example, the International Agency for Research on Cancer (IARC), which is closely affiliated with the World Health Organization (WHO), has recently concluded that the widely applied herbicide glyphosate (commercially known as "Roundup") is probably carcinogenic. These findings have been resisted by some companies and their agents and supporters. Thousands of studies of novel entities have found or suggested that many are carcinogenic, while others act as endocrine disruptors or harm health in other ways. Some have been linked with massive ecosystem disruption, including colony collapse disorder (of bees) and "insectageddon." The Lancet Commission on Pollution reported that fewer than half of the most widely dispersed chemicals have undergone any testing for safety or toxicity. Interactions between such chemicals have received even less examination. The immunological and allergenic effects of most novel entities are also barely explored, and could contribute to the changing pattern of allergic diseases, auto-immune conditions and autism.

While some novel entities have been regulated (e.g., X-rays) or banned (such as the "dirty dozen," including the organochlorine dieldrin, which was, as a rare exception, strongly linked with breast cancer), hundreds or thousands of others are released onto the market annually. In both industrial and rural societies, almost the entire population has been exposed to hundreds of chemicals whose concentrations can be measured in tissue samples, while for thousands more, no test exists. There is little support from policy decision-makers around the world for precautionary approaches to many potential risks. For example, there are concerns that mobile phones can cause brain tissue to warm up if the receiver is held close to the ear, as well as concerns about the effects of non-ionizing radiation on brain tissue, with claims of an increased risk of malignant brain tumors in heavy users of mobile phones. Cardiac and neurological disorders are also plausible consequences of the rapidly increasing use of wireless devices, including smart meters. Infrasound from wind turbines is another novel entity. Such sounds disturb the sleep of many people who live close to them, and there may also be other harmful effects, including vertigo, as well as chronic diseases worsened by chronic poor sleep. Such concerns have often been dismissed as "nocebic" (i.e., arising through apprehension and negative thoughts), as high-quality evidence for health impacts is lacking. The precautionary principle would place the onus on industry to prove safety.

Novel behaviors, foods, organisms and environments are also emerging in the Anthropocene. Examples include reduced weight-bearing exercise in childhood and adolescence (leading to a higher risk of early-onset osteoporosis), increased screen watching and the partial replacement of tangible, local friends and acquaintances by virtual social networks. Novel diets include the widespread consumption of sweetened drinks, a known factor in obesity and harmful to health, while the greater variety of foods out of season, especially of fruit, is beneficial. There are also novel microbial and parasitic environments and novel microbiomes, each of which is likely to be associated with health benefits and risks. For example, human activity and livestock farming provide opportunities for the amplification and spread of genes that convey antibiotic resistance. These genes are favored wherever antibiotics are used by humans or fed to livestock to promote growth and limit infectious disease. Antibiotic resistance genes have been shown to spread to environmental microbes in soil and water systems, to wildlife, and to human and livestock pathogens. Identified mechanisms for this transfer include airborne transport of particulate matter and direct and indirect contact with waste products. The augmented "wild" population of antibiotic resistance genes is an added risk to human health and has poorly understood implications for other environmental microbial systems. Novel or increased contact with mammalian wildlife creates further potential for interspecies transfer of pathogens, particularly viruses. This is discussed below (in biodiversity and health).

Global Warming and Health

Since the 1980s, there has been increasing recognition of the ways that anthropogenic emissions of greenhouse gases (manifest in phenomena including global warming, weather wilding, jetstream oscillations, sea level rise and ocean acidification) are likely to affect human health, both positively (e.g., fewer cold waves in some areas) and negatively. There are numerous mechanisms for this. Perhaps the most obvious is an intensification of extreme weather events, including heatwaves, droughts, flooding, and major storms such as cyclones, typhoons and hurricanes. Such events can have complex and delayed effects, as from the savage 2017 hurricanes that flooded and devastated Houston, Texas and the US territory of Puerto Rico, as well as other regions. There is also speculation that the frequency, severity and locations of tornadoes may be affected. Very intense flooding events, where weather systems remain almost stationary, have generated the neologism “rainbomb.” Changes in vector-borne diseases, food security, and sea level rise have long been forecast to occur due to global warming. Global warming is already affecting migration, conflict and mental health, and these effects are likely to intensify. Over the longer timescale, of decades to centuries, adverse effects are forecast to exceed benefits, perhaps by orders of magnitude, especially if the ice sheets in Greenland and Antarctica continue to melt.

There are many ways health effects related to climate change can be categorized, such as through changes in temperature and humidity, vector ecology, water quality, water and food supply, severe weather, air pollution, allergens, and migration, conflict and related mental health implications. A simpler classification has three main classes, conceptualized as “direct” (e.g., heatwaves), “indirect” (e.g., changes in vector ecology) and a third, causally more displaced category with the potential for the largest burden of disease, through means such as large-scale conflict, migration and famine. In this classification, effects on mental health are regarded as “cross-cutting.” Dislocation from one’s home due to a storm surge or a prolonged blackout (some parts of Puerto Rico lacked power for months following 2017 Hurricane Maria) can lead to depression and even suicide. Such stress is also likely to exacerbate domestic violence, especially if associated with increased economic insecurity. Increased rates of post-traumatic stress and anxiety are also likely in survivors. Even worse than the mental trauma of a single extreme weather event are the health consequences, including to mental health, of conflict, famine and forced migration. Of course, such “tertiary” effects have multi-dimensional causes, from ancient rivalries to recent and emerging contests over scarce resources, often aggravated by “youth bulges” and brutal repression.


All writers on these “tertiary” topics publishing in the academic literature recognize the complexity of this issue, and frequently try to convey it by using the term “risk multiplier” to indicate how changes in climate can worsen (or in some cases reduce) the co-factorial causal contributors to conflict. That is, climate change is conceptualized as similar to a catalyst or enzyme. Famines, wars and migration can all occur without climate change, but in some cases climate change can make these phenomena much worse. In some cases, such as sea level rise, climate change can be conceived as by far the dominant factor. However, even for vulnerable low-lying Pacific islands, co-factors such as high population growth have contributed to vulnerability and the risk of migration, for example by depleting fresh water lenses, leading to the salinization of garden soil.

Many important diseases, including parasitic, vector-borne and zoonotic diseases, are associated with invertebrates such as ticks, mosquitoes and blackflies, or with higher-order vertebrates. Ticks transmit diseases such as Lyme disease, mosquitoes transmit many illnesses such as malaria and yellow fever, while blackflies transmit river blindness (onchocerciasis). The distribution of these vectors and animals is shaped not only by climate but by many other aspects of their ecology, and precise attribution to climate change is often elusive and possibly fruitless. Less intuitively, the epidemiology of many vector-borne diseases, including malaria, dengue fever and Zika virus, is also influenced by ambient temperature in another way: temperature determines the growth rate of the parasite or virus within the cold-blooded vector. More rapid growth of these pathogens (i.e., in slightly warmer vectors) can, in some cases, allow additional cycles of transmission, leading to explosive increases in cases. Another way to think of these organisms is that their numbers and disease potential exist within a window or “sleeve” of climate and ecological suitability. It would be wrong to think that a warmer or wetter climate will inevitably increase the burden of these infectious diseases. As temperatures rise, insect populations may rise too, but only to a point; beyond that point, vector populations may in fact decline. Similarly, excessive rain may reduce vector habitat (e.g., by flushing the population away), as may unusually prolonged droughts (by drying out the habitat). The epidemiology of vector-borne diseases is also influenced by human factors, such as insecticides (including impregnated bednets) and molluscicides, and by treatments such as vaccines (e.g., for yellow fever) and antimalarial drugs such as quinine.
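The temperature dependence of pathogen development in the vector can be made concrete with a classical degree-day model. The sketch below uses the Detinova formula for Plasmodium falciparum, in which the extrinsic incubation period is roughly DD / (T - Tmin) days, with DD of about 111 degree-days and Tmin of about 16°C; the parameter values and illustrative temperatures are assumptions drawn from that classical model, not figures given in this entry.

    import math

    def extrinsic_incubation_days(temp_c, degree_days=111.0, t_min=16.0):
        # Classical Detinova degree-day model for Plasmodium falciparum:
        # the parasite needs ~111 degree-days above a ~16 C threshold.
        if temp_c <= t_min:
            return math.inf  # too cold: sporogony never completes
        return degree_days / (temp_c - t_min)

    # A modest warming step shortens the incubation period markedly,
    # potentially allowing extra transmission cycles per mosquito lifetime.
    for temp in (18.0, 22.0, 26.0, 30.0):
        print(f"{temp:4.1f} C -> EIP ~ {extrinsic_incubation_days(temp):5.1f} days")

At 18°C the model gives an incubation period of roughly 56 days, longer than most mosquitoes live; at 26°C it falls to about 11 days, which helps explain why small temperature increases can disproportionately increase transmission.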

Biodiversity Loss and Health

The impact of biodiversity loss on human health is being realized slowly. The dimensions of biodiversity (the diversity of genes, species and ecosystems) are not experienced or understood by most individuals or policy makers, and they challenge health researchers. The impacts are dispersed across multiple scales of biodiversity and multiple dimensions of health and wellbeing, much of which is discussed elsewhere in this contribution. The PB team has examined separately the genetic diversity within and between species and ecosystems, and its functional role within the global system. They conclude that the loss of genetic diversity has exceeded a safe limit, with uncertainty remaining over how this affects the function of ecosystems. The alarming rate and extent of loss of genetic biological diversity has been discussed earlier. The loss of genetic diversity undermines the resilience of ecosystems. Under relentless ecological change, biological diversity is replaced by ecosystems dominated by fewer, highly adaptive species, inadvertently or purposefully promoted by human activities. These include domestic species, pests and wild synanthropes, humans themselves and the novel entities described above.

Genetic diversity is also the source of pharmaceutical discovery, as well as the storehouse of traditional medicines. Most nature-derived pharmaceuticals come from plants; some come from traditional medical practice, but much is the product of systematic searching, modification and trial. Nature produces an inspirational variety and complexity of molecules to further manipulate. The diverse origins of the pharmaceutical armory against HIV/AIDS include betulinic acid, derived from the bark of the tree Betula pubescens; bevirimat, extracted from the Chinese herb Syzygium claviflorum; and ganoderic acid B, isolated from the fruiting bodies and spores of the fungus Ganoderma lucidum. Such a utilitarian appreciation likewise extends to livelihoods dependent on different aspects of biodiversity. For some, particularly vulnerable groups and those in remote locations, survival depends on the ability to harvest freely (or illegally) from the natural environment.

Rich biological diversity is often helpful for the resilience of ecosystem functions, sometimes called “services.” In the early 2000s, the Millennium Ecosystem Assessment, a global collaboration of over 1000 scientists, grouped these into four kinds, which they called supporting, provisioning, regulating and cultural. Food production is classified as a provisioning service; it requires biological diversity for soil health (a supporting, or underpinning, service), for pest control and pollination (regulating services) and for the genetics of livestock and crops. Other products of provisioning services include clean water, bio-fuels and crop residues used to provide energy. Other regulating services include carbon sequestration, climate regulation and disaster risk reduction. Nutrient recycling is another example of a supporting ecosystem service. A sacred grove or an iconic species of deep significance to the beholder illustrates cultural services. Disease regulation as an ecosystem service is contested, but some diseases, such as Lyme disease, are more prevalent in diminished and simplified ecosystems.
The net effect of deforestation often favors mosquitoes that serve as vectors of human diseases, including previously obscure pathogens such as Zika and chikungunya viruses, or encourages urbanization or farm-foraging by the fruit bat hosts of the henipaviruses Nipah and Hendra. There is also a complex relationship between ecological change and malaria, which is by far the most important mosquito-transmitted disease. As discussed above, climate change also affects these vector-borne diseases. Reductions in biodiversity have increased zoonotic infectious disease risk, especially through intensive animal husbandry. Livestock now dominate global vertebrate biomass, and intensive production creates the opportunity for viral amplification and mutation, resulting in new and previously unrecognized animal diseases and zoonoses.


These include H5N1 (avian influenza, via chickens), H1N1 (swine influenza, via pigs and birds), possibly Severe Acute Respiratory Syndrome (SARS, via farmed civet cats and raccoon dogs), Nipah virus in Malaysia (via pigs) and Middle East Respiratory Syndrome (MERS, via camels). Novel or increased contact with mammalian wildlife creates further potential for interspecies transfer of pathogens, particularly viruses. Such contact is facilitated by accelerated land-use change and wildlife harvesting, and is sometimes aided by domestic animals acting as amplification hosts. Disease may transmit directly to humans or indirectly through domestic animals. Many opportunities for viruses to jump between species may be required before a significant disease emerges. Tropical areas of high biodiversity that are under human pressure are considered “hotspots” for such diseases. Novel zoonoses of wildlife origin, such as HIV/AIDS, Ebola and the SARS coronavirus, were the subject of strong interest in the late 20th and early 21st centuries; these examples resulted in epidemics or pandemics, while many other smaller viral “spillover” events have occurred with only localized impact.

Natural or wild areas also reduce stress, depression and anxiety in those who visit them. This effect appears to depend on the cultural and socioeconomic characteristics of the visitor, and it has deeper, religious dimensions for many Indigenous people. As discussed earlier, the human microbiome links us to the external world. Personal microbiodiversity is enriched by environmental and dietary diversity and, through mechanisms of immune regulation and the gut-brain axis, has a significant impact on physical and mental health. The benefits of experiencing biodiversity within natural settings appear to be physiological as well as psychological. However, the living environment of most people is one of reduced biodiversity, and many spend most of their time indoors.

It is the capacity of Earth’s natural systems, the aggregate of species and ecosystem biodiversity, to provide resilience despite changing environmental conditions that should be of the most fundamental concern to health and wellbeing. The biosphere must absorb our wastes, including carbon emissions; buffer coastlines from extreme weather events; and provide clean air, water, a moderate climate, and the renewable resources humanity seeks to consume. It is therefore of great concern that the Global Ecological Footprint Network (see “The Ecological Footprint” section) estimates that 150% of global biocapacity is consumed per annum.
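The 150% figure lends itself to a back-of-envelope “overshoot day” calculation: if humanity consumes 1.5 years’ worth of biocapacity each year, the annual regenerative budget is exhausted about two thirds of the way through the year. A minimal illustrative sketch of this arithmetic, in which the ratio is the only input taken from the text:

    # Back-of-envelope overshoot arithmetic on the ratio quoted above.
    footprint_to_biocapacity = 1.5  # 150% of global biocapacity used per year
    overshoot_day = 365 / footprint_to_biocapacity
    print(f"Annual biocapacity budget exhausted after ~{overshoot_day:.0f} days,"
          f" i.e. ~{overshoot_day / 30.4:.1f} months into the year")
    # ~243 days: the remainder of the year is run on ecological 'overdraft'.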

Declining Energy Yield From Energy Expended

Today, many determinants of health and wellbeing, including effective health services and their inputs, such as consumables and pharmaceuticals, are dependent on abundant and affordable energy. Millions of people living in poverty, especially in sub-Saharan Africa and South Asia, suffer multi-system consequences of air pollution, both indoor and outdoor, from smoke generated by their own household and by other households. In many locations, this is aggravated by the burning of fossil fuels such as coal and oil. Many people, particularly women and children, undertake laborious daily effort to obtain fuel and water. Access to electrical power, and even gas for cooking, would bring significant improvements in health to the 1.3 billion people living without electricity. Probably the least understood aspect of limits to growth is the concept and importance of declining energy return on energy investment (EROEI): the ratio of useful energy obtained to the primary energy expended in obtaining it (see Box 1).

Box 1 Energy return on energy investment

The major energy carriers in use today are fossil fuels, especially coal, oil and gas. These are used not only for transport, heating and electrical power (over 60% globally), but also as a chemical stock to manufacture plastics and to make fertilizer. But, just as our ancestors once bred, trained, housed and fed donkeys and draft horses for assistance with laborious tasks, fossil fuel must be wrested from the environment, whether drilled for, dug by hand or removed by robotic shovels. These processes themselves take energy. In addition, energy released or captured from these sources needs to be distributed, and the infrastructure to do that needs to be maintained. Coal needs to be transported and burned, with some of its energy captured through combustion. Electrical energy needs to be distributed and regulated irrespective of its source (including solar, wind, hydro and tidal). Manufacturing wind turbines or solar panels requires energy, as does the mining infrastructure described above. Life-cycle assessment allows a full quantification of the energy invested in any form of energy extracted, which is clearly significant.

In the heyday of fossil fuels, oil and coal were easy to extract, and their EROEI was high; some analysts report an average EROEI of over 100 in the late 19th century. In contrast, a review published in 2016 in Nature Communications found that, globally, up until 2017, solar panels may have yielded no energy beyond that required for their manufacture and installation. In other words, under the least optimistic scenario, solar panels, cumulatively, were a sink for energy rather than a source until very recently. More encouragingly, the EROEI for solar appears to have increased considerably in the last decade, perhaps to 30 or 35, especially in locations with high insolation, such as the tropics. The climate footprint of solar is much lower than that of coal and will continue to decline, especially as the efficiency of panels increases and the electricity they generate is used to manufacture additional ones. The EROEI for wind is widely agreed to be even higher than for solar, so these two sources have promise as major substitutes for fossil fuel energy, even though researchers still debate whether renewables will yield energy abundant enough to fuel the current consumption-oriented economy. In addition, Ugo Bardi and Sgouris Sgouridis argue that the window for a successful transition is narrow: a very large investment of available energy is required, while adequate energy must still be maintained for ongoing services. Moreover, the world’s economic system may fail to allocate the necessary resources in the necessary timeframe (by 2050 in their analysis). Bardi and Sgouridis are skeptical that market forces can effect this transition; they calculated that, as of 2017, capital investment was only about one tenth of what is required, and energy investment is also inadequate. Though not impossible, any transition to renewables will be challenging and will require a substantially greater rate of energy and capital investment than is currently allocated.
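The societal significance of a declining EROEI is easiest to see through the net energy fraction: of each unit of gross energy produced, 1/EROEI must be reinvested in obtaining energy, leaving 1 - 1/EROEI for everything else. The short sketch below tabulates this “energy cliff” for a range of illustrative EROEI values; the values are hypothetical, not estimates for particular fuels.

    # Net energy available to society as a function of EROEI.
    # Net fraction = 1 - 1/EROEI: what remains after reinvesting the
    # energy needed to obtain the energy itself.
    for eroei in (100, 30, 10, 5, 2, 1.2):
        net = 1.0 - 1.0 / eroei
        print(f"EROEI {eroei:>5} -> {net:6.1%} of gross output left for society")

The relationship is strongly nonlinear: falling from an EROEI of 100 to 10 costs society only about nine percentage points of net energy, but below an EROEI of roughly 5 the net fraction collapses rapidly, which is why analysts speak of a cliff rather than a slope.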


There is a widespread understanding that fossil fuels have been crucial to the human colonization and domination of the biosphere. The importance of energy is explicit in the work of many environmental writers, and implicit in the military actions of many great powers, which have frequently acted with violence or duplicity to acquire or maintain energy resources, from the Middle East to the Timor Sea. Without fossil fuel, modern civilization could not have evolved in the way it did, whether to create highways, intensive agriculture, skyscrapers or the space age. Although, in the Middle Ages, the harnessing of water power for work from milling grain to sawing wood (“sawmilling”) was widespread, such industry was necessarily confined to suitable riversides. Ancient mariners crossed straits and sometimes oceans, powered by oars and blown by the wind, but the scale of maritime trade was minuscule compared with that made possible by steam, oil and nuclear-powered vessels. While this dependence on energy is well known, though rarely highlighted in economic histories, the fact that EROEI is steadily declining is rarely mentioned in mainstream media or outside specialist journals; it is claimed that fracking and shale oil now negate peak oil, though insufficient attention is paid to the instability of an industry reliant on the proliferation of drilling sites and on rising costs. The decline of EROEI may be disconcerting to a public ill-prepared for the future austerity that such a decline implies. The consequences would affect not only health services but also the myriad other processes necessary for health which rely on affordable energy, including agriculture.

Conclusion

Concern about the impact of global ecological change on health is growing. So too is an understanding of the need for multi-sector collaborations. Few groups are yet addressing the deeper issues of PBs. However, many once disparate groups are converging as they seek to improve equity in health with a focus on the global problems of biodiversity decline, environmental degradation and climate change. “Planetary Health,” promoted by the prestigious medical journal The Lancet, is currently prominent. Others include EcoHealth, One Health and the inVivo Planetary Health network. Predicting future human health (or survival) under the status quo is difficult. Ecological systems typically demonstrate nonlinear responses to perturbations, and it is likely that current consumption patterns will precipitate dramatic shifts in biodiversity, ecological function and health-supporting services. Impacts of environmental change are disproportionately experienced by poor and rural communities. Advocacy and action to prevent these health risks are essential roles for those concerned with public health.

This entry has reviewed the issue of Limits to Growth, its more modern formulation as Planetary Boundaries and the relevance of both concepts to global population health. It has used these frameworks to classify and extend some environmental health risks, including novel entities and behaviors, and global risks such as climate change, biodiversity loss, land-system change and disrupted biogeochemical cycles. These risks are escalating, and we recognize the shortfall in assessing the health risk of new pollutants, entities and behaviors. When considered together with economic and population growth, and with ever-increasing resource and energy use, these environmental health risks may foreshadow significant population decline. The LTG and PB frameworks allow society to frame preventive measures at the scale required. Without urgent change, future global population health, and survival, is imperiled.

Further Reading

Bardi, U., Sgouridis, S., 2017. In support of a physics-based energy transition planning: Sowing our future energy needs. Biophysical Economics and Resource Quality 2, 14.
Boulding, K.E., 1966. The economics of the coming spaceship earth. In: Jarrett, H. (Ed.), Environmental quality in a growing economy. Resources for the Future/Johns Hopkins University Press, Baltimore, MD, pp. 3–14.
Butler, C.D., Higgs, K., 2018. Health, population, limits and the decline of nature. In: Marsden, T. (Ed.), The Sage handbook of nature. Sage, London, pp. 1122–1149.
Butler, C.D., McFarlane, R.A., 2018. Climate change, food security and population health in the Anthropocene. In: DellaSala, D.A. (Ed.), Encyclopedia of the Anthropocene, vol. 2. Elsevier, pp. 453–459.
Crist, E., Mora, C., Engelman, R., 2017. The interaction of human population, food production, and biodiversity protection. Science 356 (6335), 260–264.
Daly, H., 2013. A further critique of growth economics. Ecological Economics 88, 20–24.
Francis, P., 2015. Encyclical letter Laudato Si' of the Holy Father Francis on care for our common home. Available at: https://www.catholic.org.au/commission-documents/bishopscommission-for-justice-ecology-and-development/laudato-si/1710-laudato-si-full-text/file.
Higgs, K., 2014. Collision course: Endless growth on a finite planet. MIT Press, Cambridge, MA, 416 pp.
Louwen, A., van Sark, W.G.J.H.M., Faaij, A.P.C., Schropp, R.E.I., 2016. Re-assessment of net energy production and greenhouse gas emissions avoidance after 40 years of photovoltaics development. Nature Communications 7, 13728.
Meadows, D., Meadows, D., Randers, J., Behrens III, W., 1972. The limits to growth. Universe Books, New York, 205 pp.
Raskin, P., Gallopin, G., Gutman, P., Hammond, A., Kates, R., Swart, R., 2002. Great transition: The promise and lure of the times ahead. Stockholm Environment Institute, Boston, 99 pp.
Steffen, W., Richardson, K., Rockström, J., Cornell, S.E., Fetzer, I., Bennett, E.M., et al., 2015. Planetary boundaries: Guiding human development on a changing planet. Science 347, 736–746.
Turner, G., 2014. Is global collapse imminent? MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, University of Melbourne. Available at: http://sustainable.unimelb.edu.au/sites/default/files/docs/MSSI-ResearchPaper-4_Turner_2014.pdf.
Union of Concerned Scientists, 1992. World scientists' warning to humanity. Union of Concerned Scientists, Cambridge, MA.
von Weizsäcker, E.U., Wijkman, A., 2017. Come on! Capitalism, short-termism, population and the destruction of the planet. Springer, New York, 220 pp.


Relevant Websites

https://350.org/ - 350.org.
http://www.climatecodered.org/ - Climate Code Red.
http://www.earth-policy.org/ - Earth Policy Institute.
http://www.npr.org/sections/thesalt/2012/09/20/161501075/high-food-prices-forcast-more-global-riots-ahead-researchers-say - High food prices forecast more global riots ahead, researchers say.
https://www.ipcc.ch/ - IPCC.
https://www.invivoplanet.com/ - The inVivo Planetary Health Network.
http://oasisinitiative.berkeley.edu/ - The OASIS Initiative.
http://www.un.org/sustainabledevelopment/sustainable-development-goals/ - United Nations Sustainable Development Goals.
http://www.who.int/mediacentre/factsheets/fs266/en/ - World Health Organization: Climate Change and Health.
http://www.who.int/globalchange/environment/en/ - World Health Organization: Global Environmental Change [and health].
http://www.worldwatch.org/ - Worldwatch Institute.

Multimedia

http://www.stockholmresilience.org/research/research-videos/2016-05-22-understanding-social-ecological-systems.html - Carl Folke: Understanding social-ecological systems.
http://www.stockholmresilience.org/research/research-videos/2016-05-31-john-schellnhuber-the-non-linearity-of-the-climate-challenge.html - John Schellnhuber: The non-linearity of the climate challenge.
https://dotearth.blogs.nytimes.com/2009/03/13/scientist-warming-could-cut-population-to-1-billion/ - Scientist: Warming could cut population to 1 billion.
http://www.stockholmresilience.org/research/research-videos/2016-11-01-the-anthropocene-where-on-earth-are-we-going.html - Will Steffen: The Anthropocene: Where on Earth are we going?
http://www.stockholmresilience.org/research/research-videos/2016-10-19-a-critical-look-at-food-security-strategies.html - Joern Fischer: A critical look at food security strategies.
https://grist.org/briefly/officials-underreported-hurricane-harveys-toxic-fallout/ - Officials underreported Hurricane Harvey's toxic fallout.
http://alert-conservation.org/ - Alert Conservation.
https://www.carbonbrief.org/ - Carbon Brief.
https://climateandsecurity.org/ - The Center for Climate and Security.
https://www.clubofrome.org/ - The Club of Rome.
https://www.dea.org.au/ - Doctors for the Environment, Australia.
https://www.footprintnetwork.org/our-work/ecological-footprint/ - Global Ecological Footprint Network.
http://www.greattransition.org/ - Great Transition Initiative.
https://health-earth.weebly.com/ - Health Earth.
http://tropical.atmos.colostate.edu/Realtime/index.php?loc=northatlantic - Hurricane energy.
https://www.youtube.com/watch?v=szTzj9k_gUI&feature=share - Melting ice in Greenland.
http://www.ippnw.org/ - International Physicians for the Prevention of Nuclear War.
http://www.nafeezahmed.com/ - Nafeez Ahmed.
http://chm.pops.int/TheConvention/ThePOPs/The12InitialPOPs/tabid/296/Default.aspx - Novel entities (the 12 initial POPs).
http://oehni.in/ - Occupational and Environmental Health Network of India.
https://www.rockefellerfoundation.org/our-work/initiatives/planetary-health/ - Planetary Health.
https://press.anu.edu.au/publications/health-people-places-and-planet - Health of People, Places and Planet.
http://www.resilience.org/resilience-author/ugo-bardi/ - Ugo Bardi at resilience.org.
https://www.youtube.com/watch?v=Rtg5QJlb484&index=3&t=11s&list=LLsyMawK7HdOFBKz95O2ZnfQ
https://cassandralegacy.blogspot.com/2018/10/why-economists-cant-understand-complex.html - Why economists can't understand complex systems.

Environmental Health Tracking*
A.D. Kyle, University of California, Berkeley, CA, United States
© 2019 Elsevier B.V. All rights reserved.

Glossary

Environmental factor: This term is broadly applied to any aspect of the physical environment that can affect health.
Epidemiology: The study of patterns of disease in populations, places, and time periods and the causes of the patterns.
Intervention: An action, intended to improve public health, taken by a public health agency or organization that applies to a population or group.
PCBs (polychlorinated biphenyls): A class of persistent chlorinated chemicals widely used in electrical transformers and other products; new uses have been banned in the United States and other countries, but the chemicals are still routinely detected in fish and other biota.

Abbreviations

CDC: Centers for Disease Control and Prevention (US)
CEC: Commission for Environmental Cooperation (North America)
MEME: Multiple exposures and multiple effects
NHANES: National Health and Nutrition Examination Survey (US)
PBT: Persistent, bioaccumulative, toxic pollutants
PM: Particulate matter
US EPA: US Environmental Protection Agency
VOC: Volatile organic compound
WHO: World Health Organization

Introduction

Environmental health tracking is an emerging topic that reflects widespread interest in learning more about how the environment affects health and what can be done about it. Its overall goals are to improve understanding of how the environment affects people’s health by making available more information about the environmental factors that affect health and the diseases and disorders to which these environmental factors contribute. This is to be done in ways that are useful to policy makers, the public, and government agencies. Environmental health tracking combines concepts from public health surveillance, environmental monitoring, community health, and environmental protection. It might be seen as a step toward creating cohesive and functional information systems for modern environmental public health.

There are two principal reasons that environmental health tracking has emerged, over the past decade, in several parts of the world. The first is the increasing understanding that environmental factors contribute to chronic diseases that affect many people. Many different kinds of environmental factors can affect health; Table 1 provides examples of factors that can affect health in negative and positive ways. Research over the past few decades suggests that environmental factors contribute to mortality, endocrine disruption, reproductive effects, birth defects, neurodevelopmental effects, neurological effects, heritable epigenetic changes, diabetes, obesity, autoimmune disease, and so on. Table 2 lists some of the chronic diseases and disorders that have been associated with environmental factors. The second reason is the lack of information about the geographic distribution of environmental factors important to health. Developing better information to understand and document such relationships is an important public concern and a key goal.

Environmental health tracking focuses primarily on extrinsic environmental factors that affect people’s health. Because it focuses on factors that are external to the individual, it is different from surveillance strategies that focus on the behaviors and actions of individuals.

*Change History: October 2018. The section editor updated the references. This is an update of A.D. Kyle, Environmental Health Tracking, In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 424–432.

Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11647-1

Table 1 Environmental factors important to health

Traditional environmental factors
- Air pollutants: soot or particulate matter, ozone, lead, SO2, CO, and NOx
- Toxic air pollutants: benzene, acrolein, formaldehyde, carbon tetrachloride, diesel exhaust, etc.
- Drinking water pollutants: lead, pesticides, nitrogen compounds, radon, etc.
- Contaminants in food: methyl mercury, pesticides, dioxins, PCBs, etc.

Emerging areas
- Contaminants in house dust: lead, pesticides, flame retardants, etc.
- Indoor factors: ventilation, control of moisture, etc.
- Elements of the built environment: access to quality food stores, walkability, etc.
- Effects of climate change: temperature stress, dislocation, fire, etc.

Table 2 Diseases and disorders of interest for environmental health

- Asthma
- Autoimmune effects
- Birth defects
- Cancers
- Chronic obstructive pulmonary disease
- Diabetes
- Elevated levels of lead in blood
- Heritable epigenetic effects
- Mortality
- Neurodevelopmental effects (e.g., loss of cognitive or motor function)
- Neurological effects
- Other respiratory effects
- Parkinson’s disease
- Reproductive effects (e.g., low birth weight and reduced fertility)

These include actions such as smoking or excessive food intake, which are not completely independent of the environment, since people’s habits are affected by their environments. Environmental health tracking pulls together ideas, data, and tools from several disciplines.

Environmental Monitoring

Perhaps the most significant building block of environmental health tracking is environmental monitoring, whose purpose is to measure contaminants and other environmental factors in the environment. The focus of environmental monitoring has been on contaminants that are chemical agents, but biological and physical agents are sometimes included. Other kinds of environmental factors, such as elements of the built environment, have not historically been included in environmental monitoring programs, but this may change as the spectrum of environmental factors recognized to affect health broadens.

Choices have to be made about where to measure environmental agents. Agents that are emitted or discharged can be measured as they are released, for example as part of the gases and particles released through a stack or as part of the wastewater discharged from an industrial plant. Contaminants commonly measured in emissions to air include carbon monoxide (CO), nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM), and volatile organic compounds (VOCs). Contaminants commonly measured in wastewater include nitrogen, biological oxygen demand, and various toxics. When selecting where to monitor, the purpose for which the information is needed is important to consider.


For example, information about emissions or discharges is most useful for determining how individual facilities or classes of facilities contribute to pollutant loadings, for making decisions about compliance and enforcement actions, and for targeting needs for additional controls. These are policy-relevant actions that can contribute to a cleaner environment and reduce the environmental burden of disease. However, information about releases is less useful for epidemiological studies or for estimating health effects, because it does not reflect the degree to which people are exposed to the contaminants released.

Contaminants can also be measured once they are out in the environment. Such data can help to characterize the overall quality of the environment and to determine whether conditions are improving or worsening. They can sometimes help to estimate human exposures on a large scale in ways that are informative for environmental health policy or epidemiological studies. Fig. 1 shows pathways that environmental factors can take toward reaching humans and affecting health.

Perhaps the most extensive monitoring in developed countries is for air pollutants, particularly particulate matter (PM) and ozone. (The concern is for ozone at ground level, where people live and breathe; ground-level ozone is responsible for a significant death toll and for other adverse health effects. By contrast, ozone high in the atmosphere protects the earth from ultraviolet radiation and is beneficial.) Both ozone and PM are associated with a significant burden of disease and have been studied extensively. Children, people with existing disease, and the elderly are most susceptible. The United States and many countries around the world set limits for the maximum levels of these pollutants allowed in outdoor air. Many other common air pollutants are monitored less frequently and are less studied. These include benzene, acrolein, formaldehyde, carbon tetrachloride, and many others, often called toxic or hazardous air pollutants.

Pollutants measured in lakes, rivers, and streams commonly include turbidity, a measure of how clear the water is that reflects the presence of suspended particles, and nitrogen. Chemical contaminants generally receive less consistent monitoring. Drinking water is usually more carefully monitored than surface water bodies, at least for public water systems in developed countries.

Fig. 1 Model of the influence and interaction of environmental factors and health outcomes. [Figure: a flow from driving forces (technological, economic, social, industrial, climate) and sources of environmental agents (vehicles, industry, energy, products), through agents and conditions in ambient environments and in exposure media, the built environment, community factors and ecosystem services, to people (susceptibility, vulnerability, activities, socioeconomic status), pollutants in people (biomonitoring), perturbations (endocrine and signaling, oxidative stress, epigenetic effects), preclinical effects, and diseases and disorders.]

In the United States, public water systems are responsible for monitoring contaminants for which drinking water standards exist, as well as a few others, and for making information about violations of the standards available to the public and government agencies. Drinking water from sources that people develop for themselves, without a public system, for the most part does not require monitoring.

Soils and house dust can serve as sources of exposure to contaminants but are not usually included in routine environmental monitoring, though emerging research suggests that house dust may be an important source for some contaminants. Monitoring is sometimes conducted during investigations of hazardous waste sites or particular releases. Food is monitored to some degree for contaminants, though approaches vary widely among countries and by type of food. In the United States, authorities are divided among several agencies, and coverage is incomplete, especially for imported food. For example, fish is known to be the greatest source of exposure to methyl mercury in the United States, but no monitoring allows consumers to distinguish fish with high mercury concentrations from fish without, except in a few instances where states have done this. Pesticide applications to crops may be monitored in some areas.

Although monitoring environmental factors and their hazards is crucial to environmental health tracking, the approaches used are usually designed to collect data that help environmental management and enforcement agencies make decisions about compliance with environmental statutes and with permit conditions setting limits on releases of contaminants or on levels of contaminants in the environment. Data useful for such purposes may not be as useful for assessing health impacts. Environmental monitoring may be conducted in areas where facilities with discharges are located rather than in areas where people live or work. Monitoring may also be conducted at time intervals that are not ideal for assessing the potential for health impacts. For example, ozone is often measured for only a few months per year; this is useful for assessing the highest concentrations and their impact on health but does not allow assessment of long-term exposure.

One of the contributions of environmental health tracking can be to promote collaboration between those with knowledge of environmental monitoring and those with knowledge of public health assessment, to analyze and interpret monitoring data with regard to its broader public health significance. This can add significant value to the data. In some cases, environmental monitoring activities may be adjusted to provide data that retain their value for compliance but are more useful for health assessment. Interest from environmental health tracking groups may also result in greater availability of data not previously accessible to the public.

Biomonitoring

Biomonitoring is an emerging area that falls between environmental monitoring and public health surveillance. Biomonitoring refers to the collection of samples of human biospecimens such as blood, saliva, or urine. Such specimens are analyzed for chemical contaminants, such as lead, phthalates, dioxins, or mercury. Specimens can also be analyzed for biological compounds that are formed as a result of exposure to such contaminants; these are often called “biomarkers,” though this term has other meanings as well.

Biomonitoring began on a wide scale in the United States in the early 1990s, when the National Health and Nutrition Examination Survey (NHANES), a long-established nationally representative health survey conducted by the Centers for Disease Control and Prevention (CDC), added biomonitoring to its other components. The first national report on human exposure, issued in 1996, reported concentrations of more than 100 compounds in the people of the nation. The CDC has found widespread contamination of the human population by environmental chemicals. Some of the results have been surprising, such as those showing nearly universal exposure to phthalates. The initiative required the development of new methods to measure contaminants in human biospecimens, including blood and urine; the lack of methods has been a barrier to the expansion of biomonitoring. Currently, the CDC measures more than 300 contaminants in residents of the United States. The CDC provides results that reflect the nation as a whole but not individual states. They include children aged six and above but not younger children. Some states have started to develop biomonitoring programs to provide data about their residents and may consider including younger children or samples that reflect prenatal exposures, such as umbilical cord blood, which can be collected in volumes large enough to allow assessment of the cumulative burden of contamination.

Biomonitoring has been more widely used in European countries, where much of the important work tracking the human body burden of persistent, bioaccumulative, toxic (PBT) pollutants was carried out. Occupational health surveillance has also incorporated biomonitoring as part of medical monitoring for individuals, but such data are seldom available to the public.

One important current issue concerns whether to communicate biomonitoring results to individuals who are tested. Some in the public health field believe that results should be provided to individuals only for substances for which the results can be interpreted with regard to their clinical significance. This is based on a principle of medical ethics that advises that, above all else, a physician should do no harm. Providing information about concentrations of contaminants in the body without a definitive interpretation is viewed as potentially harmful because it may cause worry without offering a potential for benefit, since no treatment is available. Under this view, results would be provided only for agents for which medicine has identified a level where treatment is indicated or at which effects are expected to occur; this is the case for only a few chemical agents. For most others, current scientific knowledge allows for interpretation of values only in terms of how they compare to the group studied or, where a national sample has been tested, to the population as a whole. The CDC uses this approach.
By contrast, those who follow the ethics of environmental protection generally believe that people have a right to know their results and that the ethical principle of individual autonomy governs this situation.


The ethical principle of autonomy posits that individuals should be permitted to make decisions for themselves and that it is not appropriate for the government to take this from them. Under this ethical principle, people would be asked whether they wanted to see their individual results and, if they so desired, would be provided with the results along with whatever interpretations exist. This approach was adopted by the State of California in the statute that governs its biomonitoring program. In any case, measurement of contaminants and biological changes in people is becoming increasingly important to the field of environmental health as a way to determine definitively what kinds of agents are reaching people’s bodies.
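Where no clinical benchmark exists, a measured concentration can still be reported relative to a reference population, which is essentially how the CDC presents national results. A minimal sketch of such percentile reporting, using a hypothetical analyte and made-up numbers:

    def percentile_rank(value, reference):
        # Percent of reference-population results at or below `value`.
        below = sum(1 for r in reference if r <= value)
        return 100.0 * below / len(reference)

    # Hypothetical reference distribution of urinary concentrations (ug/L)
    # for some analyte, e.g. from a national survey sample.
    reference_results = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.5, 4.0]
    individual = 1.6
    rank = percentile_rank(individual, reference_results)
    print(f"{individual} ug/L sits near the {rank:.0f}th percentile "
          "of the reference population")

Such a comparison says only where a person falls within the tested group; it carries no clinical interpretation, which is precisely the limitation debated above.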

Public Health Surveillance

Public health surveillance is part of environmental health tracking; indeed, some would say that the two terms are synonymous. Public health surveillance is about determining rates of disease for groups of people, in particular places and for defined time periods. To decide when to act and what to do, public health focuses on the detection and measurement of excessive numbers of cases or of elevated rates of disease. When the diseases of interest are infectious, elevated numbers of cases are often called disease outbreaks, and public health authorities investigate to find and eliminate the sources of the infectious agents that cause them. News media commonly report investigations of disease outbreaks related to Salmonella or Escherichia coli in food or microbes in drinking water, for example. This builds on the early focus in public health on infectious diseases, some of which have been controlled as a result of interventions related to environmental factors. For example, waterborne diseases are not common today in developed countries, though they remain an enormous concern in less developed countries. This difference is largely due to nearly universal investment in infrastructure to provide safe water and sanitation services in more developed countries and the severe lack of such investment in many less developed countries.

When diseases of interest are not infectious, elevated numbers of cases may sometimes be called disease clusters, but the methods to determine what counts as elevated are more complex and contentious. Surveillance of noninfectious diseases is more recent and also less common than that of common infectious diseases. Public health surveillance has expanded to address some chronic noninfectious diseases, but the approaches and coverage vary enormously by country and even within individual countries. In countries with universal or nearly universal health coverage and organized medical care, medical records are often used as a source of data about disease rates because the records are available for most of the population. Use of medical records can still require active abstracting to turn data collected for individual treatment and billing purposes into data that can be used for surveillance, and differences in coding and diagnostic standards can impede reporting of consistent data. In countries like the United States, without universal health coverage or an organized system of medical care, it is difficult or impossible to obtain medical records that reflect the experience of the population as a whole. Systems have evolved over time to generate information about rates of noninfectious diseases, using several different methods, including in-person or telephone surveys, health examinations, mandatory reporting of laboratory test results, establishment of registries, and abstracting of medical records. Ongoing surveillance is available for some diseases only at the national level.

Environmental health tracking can lead toward improved cooperation and integration among the different entities with responsibilities for medical records and surveillance. Ideally it would also lead to improved surveillance, but such changes are limited to date due to cost.

Community Environments

The third element of environmental health tracking is recognition of the importance of the environment as people experience it in their communities, homes, and workplaces. The many environmental factors that affect health combine in infinite patterns, and each community has a unique combination. Some communities face industrial pollution, others face freight transport and traffic impacts, others deal with refineries, and others face widespread pesticide applications. Most communities have multiple issues to address. The importance of the community in addressing environmental health is a key tenet in the conceptualization of environmental health tracking. Although communities have different environments, they also have strengths, resources, and assets that can be brought to bear on issues and problems; the social capital of communities can be important in propelling policy attention. In addition, communities can have their own vulnerabilities, and these can interact with environmental factors to affect health. These can include lower socioeconomic status or greater relative deprivation, poor health status, lack of access to medical care, or lingering effects of segregation.

A challenge to the implementation of environmental health tracking is that the interests and concerns of community stakeholders may differ from those of public health and environmental protection agencies, but they still need to be considered in program development and implementation. Communities may have a more holistic view of the problems they face and may seek changes more fundamental than those currently addressed in major environmental and public health programs. This can give rise to political demand for action in new areas. One example is the impact of the worldwide shipment of freight that has resulted from increased emphasis on world trade: communities that serve as distribution hubs for freight, either as ports for oceangoing vessels or as distribution centers for land-based carriers, are seeking greater controls on the emissions and other impacts that result from the concentration of facilities.


Two related issues are the cumulative impacts of multiple environmental factors and the scale at which environmental problems are best addressed and solutions best devised. These are discussed in subsequent sections.

Data Availability and Use of Indicators

The core mission of environmental health tracking is to present and interpret data about environmental factors and related health outcomes in ways that are useful to policy makers and stakeholders in taking action to improve environmental health. Doing so requires several steps. Perhaps the most important is to make data available to these audiences in ways that are useful. Experience in the United States has shown that whereas certain federally mandated data sources are readily available to the public, many other potentially relevant data sources are not; many of these are collected and maintained at the subnational level, which in the United States means states and counties. Experience in Europe suggests that it can also be difficult to generate compatible and comparable data among nations.

There are several reasons why data may not be available. For environmental data, the most important reason appears to be lack of funding and information technology resources to support public access. Many data collection systems predate modern information technologies that rely on commonly adopted standards and on web-based services that allow sharing and access without compromising the original data source. Such information technologies allow disparate and decentralized data sources to be shared through common protocols and portals and through agreements about security and authentication of authorized users. Investment in the human resources needed to cultivate interest in sustained sharing of data resources can contribute to the availability of such data, and investment in information technology training and conversion is also important. Environmental protection agencies are generally not highly resistant to sharing data if resources and support are available, though of course there are always exceptions.

For public health surveillance, barriers to data sharing may be more fundamental. The public health sector tends to consider its data as belonging to the experts and agencies rather than to the public. This is partly because of the importance of safeguarding individual privacy related to medical information, but it also applies to data that are stripped of identifiers. There are legal limitations on data disclosure to protect individual privacy, and these vary significantly among countries. The question of the geographic resolution of data is important: for data about diseases and disorders, the finer the geographic resolution (into, say, a postal code rather than a county), the greater the concern that individual privacy might be compromised by the release of data about cases of disease, because finer resolution makes it more likely that the identity of an individual with the disease in question could be determined. In the United States, states generally have broad authority to obtain data needed to investigate or address public health concerns, but use of medical data in research is increasingly restricted. Development of methods to serve the public interest in accurate and geographically resolved data while maintaining privacy is under way, but much more remains to be done.

Once data are available, the next challenge is to present information in ways that are understandable and relevant for policy and stakeholder audiences. Data about environmental factors, biomonitoring, and diseases are diverse in form and interpretation and can be voluminous.
Indicators offer a way to represent such data. Indicators are constructed from data but provide a distilled interpretation. They can represent trends over time, differences between groups or areas, or status with respect to regulatory or health-related targets or benchmarks. Using indicators allows policy and lay audiences to draw on data sources while making them more understandable.

Many entities have developed environmental health indicators at different spatial and temporal scales and for different audiences and purposes. Some of the more well-developed projects focus on children. In 2001, the US Environmental Protection Agency (US EPA) produced an integrated assessment for children’s environmental health that included indicators for environmental contaminants, body burdens, and diseases and disorders relevant to children. This project conducted extensive technical analysis but also consulted with stakeholders to develop indicators that were scientifically based and used the best available data sources while responding to the needs and interests of its audiences. The World Health Organization (WHO) sponsored an international assessment to support the development of children’s environmental health indicators. It emphasized indicators relevant to less developed countries and included a model to account for the multiple relationships between environmental factors and health outcomes, known as the MEME (multiple exposures and multiple effects) model. The Commission for Environmental Cooperation (CEC) of the United States, Canada, and Mexico incorporated elements of the US EPA approach and the WHO analysis into a set of proposed indicators for children’s environmental health, produced in 2006. Like the US EPA assessment, it examined environmental contaminants, body burdens, and diseases and disorders of importance to children; like the WHO assessment, it used the MEME model and examined conditions relevant to less developed countries. The European Union and the WHO in Europe collaborated to produce a compendium of evidence on the impact of environmental factors on children’s health, serving as background to support a proposal for children’s health indicators in Europe. The WHO has also supported pilot projects to develop children’s environmental health indicators in countries in South America and Africa. In the United States, the CDC is sponsoring an environmental public health tracking program that is developing indicators relevant to the population as a whole, rather than to children.


These efforts have similarities and differences. Needs identified in common include how to select the appropriate spatial scale for indicators, how to further develop approaches that account for the cumulative impact of multiple exposures, and how best to meet the needs of policy and stakeholder audiences in order to achieve changes in environmental health policy.
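As a concrete illustration of distilling monitoring data into an indicator, the sketch below converts hypothetical daily ozone values into a single annual figure, the percent of monitored days above a benchmark, which can then be tracked over time or compared across areas. The benchmark and the data are assumptions for illustration only:

    def exceedance_indicator(daily_values, benchmark):
        # Indicator: percent of monitored days above a benchmark;
        # None entries mark unmonitored days and are skipped.
        monitored = [v for v in daily_values if v is not None]
        if not monitored:
            return None
        exceed = sum(1 for v in monitored if v > benchmark)
        return 100.0 * exceed / len(monitored)

    # Hypothetical daily 8-h ozone readings (ppm) for two years.
    ozone_by_year = {
        2016: [0.061, 0.074, 0.082, None, 0.068, 0.079, 0.071],
        2017: [0.058, 0.066, 0.064, 0.073, None, 0.060, 0.069],
    }
    for year, values in sorted(ozone_by_year.items()):
        pct = exceedance_indicator(values, benchmark=0.070)
        print(f"{year}: {pct:.0f}% of monitored days above 0.070 ppm")

The same underlying measurements could support other distillations (an annual mean, or a count of exceedance days); the design choice is which form best serves the intended policy audience.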

Cumulative Impacts

Assessment of the cumulative impacts of the multiple environmental factors that may affect a community is also an emerging concern. As previously noted, a variety of environmental factors affect communities and individuals. Yet, for the most part, the methods used to assess the public health significance of environmental factors, and the policy approaches adopted to address them, consider only one factor at a time. They fail to consider that the net impact of many factors may be negative even if each factor, assessed individually, might be acceptable. In addition, increased attention is being paid to the environmental health impacts generated by activities in other sectors, including energy, transportation, housing, education, and agriculture. Actions and decisions in all of these sectors affect the environment and health of communities, and there is increasing interest in developing assessment and measurement approaches for these areas. For example, decisions by highway agencies about freeway siting or expansion have long-term and significant impacts on communities and their environmental health. How such decisions can be factored into the tracking and management of environmental health impacts is an active area of research and discussion.

The assessment of cumulative impacts can be approached using quantitative risk assessment methods expanded to allow for the aggregation of multiple risks; the US EPA has indicated that it is considering such an approach and has produced a framework for cumulative risk assessment. The state of California is developing an approach to assessing cumulative impacts that allows for consideration of impacts that may not be amenable to assessment by quantitative methods. One approach to cumulative impacts would be to identify particular areas or communities that are highly impacted and then mandate a higher degree of protection from additional impacts than would apply elsewhere. A second approach would be to mandate consideration of cumulative impacts for all major proposals, perhaps through an augmented environmental impact review or through health impact assessment. It is not yet possible to predict whether either of these approaches, or perhaps others, might be adopted. The issue is important for environmental health tracking, since its purpose is to address environmental factors that affect health, but how the assessment of cumulative impacts would be routinely integrated into environmental health tracking is at only a very early stage of discussion.
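One established quantitative building block for such aggregation is the hazard index used in cumulative risk assessment: each agent’s exposure is divided by its health-based reference level to give a hazard quotient, and the quotients are summed, so that a combination can be flagged even when every individual agent is below its own limit. A minimal sketch with hypothetical agents and values (the additivity assumption itself is a simplification that real assessments must justify):

    def hazard_index(exposures, reference_levels):
        # Sum of hazard quotients (exposure / reference level) per agent.
        return sum(exposures[a] / reference_levels[a] for a in exposures)

    # Hypothetical exposures and reference levels (consistent units per agent).
    exposures = {"agent_A": 0.6, "agent_B": 0.3, "agent_C": 0.9}
    references = {"agent_A": 1.0, "agent_B": 0.5, "agent_C": 2.0}

    hi = hazard_index(exposures, references)
    print(f"Hazard index = {hi:.2f}")  # 0.60 + 0.60 + 0.45 = 1.65
    # Every agent is below its own reference (each quotient < 1),
    # yet the cumulative index exceeds 1, flagging the combination.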

Health Disparities and Environmental Justice

Significant disparities in health status have been widely documented and are also reflected in a variety of diseases and disorders that have environmental causes, as well as in certain environmental exposures. Such disparities can be seen when populations are considered by race/ethnicity or by social and economic status or relative deprivation indicators. The reasons for disparities are not fully elucidated but likely stem, at least in part, from fundamental social forces such as the distribution of economic resources and social status. While environmental health tracking has not made health disparities a primary focus, these differences are considered important. The initiatives are making some effort to include the demographic data needed to analyze patterns of exposure and disease to determine whether disparities exist and what may cause them. It is probably fair to say that considerably more work will be needed to fully address health disparities and the broader issues related to environmental justice within environmental health tracking.

Small Areas and the Question of Scale

Disease clusters are a significant concern to many advocates of environmental health tracking. The question of how to address disease clusters reported by community groups has long been a troubling one for public health departments, and environmental health tracking has been seen as a way to institute more systematic approaches to dealing with these issues. Whether this will prove to be the case is uncertain at present. In the United States, public health agencies generally do not conduct surveillance activities to try to detect disease clusters. Rather, for the most part, these agencies wait until a community reports a disease cluster and then respond to that report. By contrast, some European countries have developed surveillance approaches to detect potential disease clusters. One of the critical issues in responding to disease clusters is determining what sort of response is appropriate. Public health agencies generally perceive the key question to be whether the disease cluster may have occurred by chance: that is, whether, from a statistical point of view, the cluster is part of the random distribution of disease cases in space and time or whether it represents a statistically significant deviation from that distribution. This can be quite contentious because it is impossible to determine whether a reported set of disease cases represents a statistically significant elevation by looking only at the reported cases.

From a statistical point of view, these cases have to be compared with the occurrence of cases in other areas. The methods for doing this vary, and the results can vary with the method chosen. In the United States, health departments tend to use widely available methods that are relatively insensitive, meaning that they are not likely to detect clusters. Understandably, public health departments may not be eager to confirm the presence of clusters that require further investigation and response when they lack the resources to investigate and respond. In the United States, a cluster may be more likely to be fully investigated if people in the community that reported it go through political channels to gain resources for investigation and response. From the point of view of a community that has identified a series of cases of disease, the focus on statistical testing and the apparent lack of concern for people who are suffering is inexplicable and unacceptable. Consequently, community reports of clusters of disease cases often lead to contention between health departments and communities. Environmental health tracking could contribute to better approaches to community-reported disease clusters in several ways. One would be to make more information about patterns of disease available at a sufficiently resolved spatial scale to provide a better basis for looking at possible clusters. This could be done either on an ad hoc basis, as occurs now after a reported cluster, or on an ongoing and proactive basis, and could provide a better-informed foundation for discussions about the patterns of disease and the meaning of reported anomalies. A second way would be to make environmental data obtained through tracking more readily available to determine possible causes of clusters that could be remediated. A third way would be to make cluster identification, investigation, and response part of the program itself, creating ongoing capacity to identify and address small-scale phenomena suspected of contributing to elevated disease rates in localized areas.
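
The kind of comparison described here can be made concrete with a minimal sketch: a one-sided Poisson test of whether the cases observed in a small area exceed the number expected from reference rates. The counts and rate below are hypothetical, and real cluster investigations typically rely on more sophisticated spatial scan statistics rather than this simple screen.

```python
# Minimal sketch: is an area's case count consistent with reference rates?
# Hypothetical numbers; a simple screen, not a full cluster investigation.

from scipy.stats import poisson

observed_cases = 12        # cases reported in the community (hypothetical)
population = 8_000         # population of the small area (hypothetical)
reference_rate = 9.0e-4    # rate observed in comparison areas (hypothetical)

expected_cases = population * reference_rate  # 7.2 cases expected

# One-sided p-value: probability of seeing at least this many cases by
# chance if the area's true rate matched the reference rate.
p_value = poisson.sf(observed_cases - 1, expected_cases)

print(f"Expected {expected_cases:.1f} cases, observed {observed_cases}")
print(f"One-sided Poisson p-value: {p_value:.3f}")
```

Note that the conclusion depends on the comparison rate chosen, which is exactly why, as noted above, results can vary with the method.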

The Fuzzy Line Between Research and Surveillance

Whether environmental health tracking is a surveillance and monitoring program that draws on epidemiological research, or a program that itself conducts epidemiological research, is a continuing debate. Some proponents of environmental health tracking initiatives believe the purpose of data integration and overlay is to find new relationships between environmental factors and disease; they view such research as a core function of the program. Other proponents view the purpose of data integration and overlay as better understanding conditions where policy actions are needed and developing hypotheses for research that can be conducted using more sophisticated designs. One way this debate is manifested is in discussions about what indicators should represent. Some believe that indicators should represent the relationships between environmental factors or exposures and the diseases or disorders of interest. Indicators like this would be developed through data linkages between these different kinds of data. The advantages of doing so would be to gain large group sizes for inclusion in the metrics and to allow the exploration of relationships between environmental factors or exposures in areas where they have not previously been studied. This could also yield estimates of effect size that are relevant to particular areas. Such approaches were advanced by the WHO in the 1990s and by the US CDC in the following decade. However, few such indicators have been developed in the projects completed to date. There are several reasons why it can be difficult to develop indicators based on relationships between environmental factors or exposures and diseases or disorders. The most important is that the common chronic diseases of concern for environmental health tracking seldom or never have single causes that can be easily identified. The causes usually involve an interplay of multiple environmental factors as well as individual factors including behavior (e.g., smoking) and biology (e.g., individual genetic susceptibility to disease or to environmental exposures). This means that study designs that address the other elements that can affect the relationships between exposures to environmental agents and the occurrence of disease (e.g., confounders) are more likely to find relationships if they exist. This cannot always be done by simply linking or overlaying data. Approaches that rely on simplistic data linkage may therefore fail to detect relationships between environmental exposures and disease that do in fact exist, producing misleading results. (Conversely, they could also suggest relationships that do not exist.) The line between research and surveillance is never completely clear, and some blending will always occur. There are methodological approaches to appropriately using data linkage to explore relationships between environmental factors and disease outcomes, and methods to produce indicators that reflect this work may emerge. Much remains to be done to sort out what the best approaches and practices will be.
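
The confounding pitfall described above can be illustrated with a small simulation: exposure and disease are both more common where a confounder (say, area-level smoking prevalence) is present, so a naive linkage shows an elevated crude association even though disease does not depend on exposure at all. All probabilities below are invented for illustration.

```python
# Small simulation of confounding in naively linked data. Disease does NOT
# depend on exposure here, yet the crude association appears elevated.
# All probabilities are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical confounder, e.g., living in a high-smoking-prevalence area.
confounder = rng.random(n) < 0.3

# Exposure and disease are each more common where the confounder is present.
exposure = rng.random(n) < np.where(confounder, 0.6, 0.2)
disease = rng.random(n) < np.where(confounder, 0.10, 0.02)

def odds_ratio(exp, dis):
    a = np.sum(exp & dis)    # exposed, diseased
    b = np.sum(exp & ~dis)   # exposed, healthy
    c = np.sum(~exp & dis)   # unexposed, diseased
    d = np.sum(~exp & ~dis)  # unexposed, healthy
    return (a * d) / (b * c)

# Crude (simply linked) association: roughly 2, suggesting a spurious link.
print(f"Crude OR: {odds_ratio(exposure, disease):.2f}")

# Stratifying on the confounder (a simple stand-in for the more rigorous
# designs mentioned above) recovers the true null association (about 1.0).
for level in (False, True):
    mask = confounder == level
    print(f"OR | confounder={level}: "
          f"{odds_ratio(exposure[mask], disease[mask]):.2f}")
```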

Future Directions

Future directions will likely parallel key challenges in the field of environmental health. The first will be to integrate the expanding definition of how the environment affects health and the greater importance of factors such as indoor environments, the built environment, and climate change. New data sources and methods of monitoring will be needed to address all of the environmental factors that are important. The second will be to develop ways to more fundamentally integrate activities initiated in other sectors into environmental health. Challenges from the energy, transportation, housing, education, and agriculture sectors cannot be addressed by the kinds of pollution control and mitigation strategies that have been the mainstay of environmental health over the past three decades; they will require new approaches.

The final challenge for environmental health tracking will be to integrate its approaches and findings with the needs of those who are in a position to take actions to improve public health by addressing environmental factors. The impetus for environmental health tracking will depend on its success in providing tools useful to key constituents. The ultimate institutionalization of tracking programs will depend on whether constituents in communities, agencies, and policy audiences find the tools important for advancing an environmental health agenda and environmental health policy.

See also: Environmental Chemicals in Breast Milk; Environmental Health and Bioterrorism; Environmental Justice: An Overview; Environmental Specimen Bank for Human Tissues; Evolving Concepts of Environmental Health.

Further Reading

Briggs, D., 2003. Making a difference: Indicators to improve children's environmental health. World Health Organization, Geneva.
California Policy Research Center, 2004. Strategies for establishing an environmental health surveillance system in California. University of California Policy Research Center, Berkeley.
Choi, B.C.K., Corber, S.J., McQueen, D.V., et al., 2005. Enhancing regional capacity in chronic disease surveillance in the Americas. Revista Panamericana de Salud Pública 17, 130–141. http://www.journal.paho.org/?a_ID=241 (accessed January 2010).
Commission for Environmental Cooperation, 2006. Children's Health and the Environment in North America: A First View of Available Measures. http://www.cec.org/index.cfm?varlan=english&ID=1917 (accessed January 2010).
Corvalan, C., Briggs, D., Zielhuis, G., 2000. Decision-making in environmental health: From evidence to action. World Health Organization, London.
Furgal, C., Gosselin, P., 2002. Challenges and directions for environmental public health indicators and surveillance. Canadian Journal of Public Health 93 (supplement 1), S5–S8.
Hofrichter, R. (Ed.), 2006. Tackling health inequities through public health practice: A handbook for action. The National Association of County and City Health Officials, Washington, DC.
Kyle, A.D., Balmes, J.R., Buffler, P.A., et al., 2006. Integrating research, surveillance, and practice in environmental public health tracking. Environmental Health Perspectives 114, 980–984.
Kyle, A.D., Woodruff, T.J., Axelrad, D.A., 2006. Integrated assessment of environment and health: America's children and the environment. Environmental Health Perspectives 114, 447–452.
Pew Environmental Health Commission, 2000. America's environmental health gap: Why the country needs a nationwide health tracking network. Johns Hopkins School of Hygiene and Public Health, Baltimore.
Pond, K., Kim, R., Carroquino, M.J., et al., 2007. Workgroup report: Developing environmental health indicators for European children: World Health Organization Working Group. Environmental Health Perspectives 115, 1376–1382. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=17805431 (accessed January 2010).
Prüss-Ustün, A., Bonjour, S., Corvalán, C., 2008. The impact of the environment on health by country: A meta-synthesis. Environmental Health 7, 7.
Teutsch, S.M., Churchill, R.E. (Eds.), 2000. Principles of Public Health Surveillance, 2nd edn. Oxford University Press, London.
US Environmental Protection Agency, 2003. Framework for cumulative risk assessment. US EPA, Washington, DC.
von Schirnding, Y., 2002. Health and sustainable development: Can we rise to the challenge? Lancet 360, 632–637.

Relevant Websites

http://www.epa.gov/envirohealth/children - America's Children and the Environment.
http://www.hc-sc.gc.ca/ewh-semt/contaminants/health-measures-sante-eng.php - Biomonitoring of Environmental Chemicals in the Canadian Health Measures Survey.
http://www.who.int/ceh/en/ - Children's Environmental Health.
http://www.who.int/ceh/indicators/en/ - Children's Environmental Health Indicators.
http://www.euro.who.int/childhealthenv - Children's Environmental Health Indicators Project for Europe.
http://www.cec.org/ - Commission for Environmental Cooperation.
http://www.epa.gov/roe/ - Electronic Report on the Environment.
http://www.enhis.org/object_class/enhis_home_tab.html - European Environment and Health Information System.
http://www.cdc.gov/biomonitoring/ - National Biomonitoring Program.
http://www.cdc.gov/nchs/ - National Center for Health Statistics.
http://www.cdc.gov/nceh/tracking/ - National Environmental Public Health Tracking Program.
http://www.cdc.gov/exposurereport/ - National Report on Human Exposure to Environmental Chemicals.
http://www.paho.org/ - Pan American Health Organization.
http://healthyamericans.org/ - Trust for America's Health.
http://www.epa.gov/ICC/ - US-Mexico Border 2012 Environmental Health Working Group.

Environmental Justice: An International Perspective

L London, University of Cape Town, Cape Town, South Africa
TK Joshi, Lok Nayak Hospital, New Delhi, India
E Cairncross, Cape Peninsula University of Technology, Cape Town, South Africa
J Gilmore and L Claudio, Mount Sinai School of Medicine, New York, NY, United States

© 2019 Elsevier B.V. All rights reserved.

The Importance of Environmental Justice

In its more limited conception, environmental justice has been defined as adequate or equitable protection from the consequences of environmental toxicants for everyone, without discrimination. However, such a notion of environmental justice does not fully recognize the kinds of inequalities that lead to a disproportionate burden of ill-health, disability and the consequences of environmental degradation being borne by the most vulnerable in society. For example, despite increasing technological innovation and advances in scientific knowledge, the health status of the world's people remains threatened by a combination of old and new problems: high morbidity and mortality from long-existent infectious diseases such as malaria and tuberculosis, alongside newer, recently established epidemics such as avian flu, Zika virus and HIV, as well as a seemingly relentless growth in non-communicable diseases related to trauma, chronic diseases and cancer, amongst others.

These health impacts are differentially distributed within and between countries, reflecting severe global health inequalities. For example, although inequalities have been decreasing overall in the past two decades, life expectancy at birth in 2015 was still more than 25% higher in OECD countries than in Sub-Saharan Africa, an absolute difference of about 21 years. Per capita health expenditure in Belgium in 2012 exceeded that of Angola more than 20-fold (US$4320 in Belgium versus US$212 in Angola). Such inequalities are persistent, even if not rising. In large part, the causes of such inequalities lie in disparities in people's living and environmental conditions, which themselves arise from social systems created by humans.

The concept of environmental justice therefore speaks to the redress of inequalities in health, which are rooted in social systems that replicate unequal power relations. The view of environmental justice as intimately bound up with questions of development, human rights and democratic accountability offers a much broader framework for understanding the role of environmental justice and social movements in not only providing protection from toxicants but also conferring agency on vulnerable groups. This agency is as important as the responsibility of northern governments to address environmental justice. Agency does not make 'victims' responsible for solving the problem but rather empowers them to hold 'perpetrators' accountable in ways that change the conditions of their vulnerability in a context of global interconnectedness.

An important example of global injustice is the health burden from ambient air pollution and its effect on climate change. Global governance has failed to adequately address climate change as an issue of environmental inequity. While the Paris Agreement formally recognizes inequalities in the historical causes, burdens and effects of climate change, funding and practical support for developing countries and vulnerable communities continue to fall far short of needs. A conception of environmental justice linked to development is more appropriate to the challenges of health inequalities in less developed countries and more robust in melding theory and practice. This view of environmental justice is more closely linked to questions of social transformation, with a view to changing the conditions of vulnerability that lead to environmental injustice.
Far from being an ethical minefield, addressing questions of power and politics is the only ethically appropriate approach to understanding health, justice and the environment.

How Is Environmental Justice Manifested?

The idea of environmental justice usually arises from evidence of its corollary, environmental injustice: that is, the unequal distribution of environmental hazards and susceptibility to hazards, resulting in differential attainment of health for different groups in the population. Examples include the siting of toxic waste sites or polluting industries in poor neighborhoods populated by communities who are often ethnic or racial minorities and politically powerless; the use of hazardous building materials in the construction of housing for vulnerable populations; and the de facto transfer of fishing rights from poor artisanal fisherfolk to large corporations, including foreign corporations, on the pretext of avoiding depletion of aquatic resources. Another example is when subsistence farmers are forcibly removed from ancestral lands to extend the boundaries of nature reserves. The purported motivation to promote conservation and preserve the biosphere may mask the real purpose: the expropriation of the powerless to promote up-market tourism.

Change History: April 2019. Leslie London made changes to the text and references. This is an update of L. London, T.K. Joshi, E. Cairncross, L. Claudio, Environmental Justice: An International Perspective, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 441–448.

A further example is the large-scale destruction of rainforests, which not only constitute an irreplaceable global carbon sink and source of biodiversity but are also the source of livelihood for local inhabitants, many of whom are indigenous peoples. These rich forests are replaced with monocultures of rubber, palm oil and other commodities for the profit of shareholders in (mainly) transnational corporations, whose products serve the consumptive behaviors of mainly northern consumers, leaving behind a toxic legacy of pesticide-saturated lands and rivers, exhausted soils, swaths of permanently deforested areas and populations burdened with the health impacts caused by farming practices based on the indiscriminate application of toxic chemicals. The body burdens of persistent organic pollutants (POPs) in people and animals in the transpolar regions, remote from the sources of these chemicals, have been noted to be among the highest worldwide.

These concerns lie at the core of public anxieties that have spurred the emergence of citizens' movements asserting a greater say in the environmental conditions to which people are subject. In the US, the environmental justice movement has been traced to the response by African-American community activists to egregious examples of environmental racism, such as the PCB dumping in Warren County in 1982, resistance which drew strongly on the civil rights movement. This case illustrated the close relationship between environmental injustice and racism. Such considerations also lie at the heart of environmental struggles in many developing countries, where domestic manifestations of racism have led to minority communities bearing the brunt of polluting industries. However, the axes of discrimination associated with environmental injustice include not only race but many other dimensions, including wealth, class, caste and gender. For example, gender considerations were neglected in state-driven development policy relating to the Sardar Sarovar Dam project in India and overlooked in civil society mobilization against the dam development. It has been suggested that the focus on racism at the root of environmental injustices is specific to local historical contexts, and that the US experience is not necessarily shared or applicable in other contexts.

Moreover, for the developing world, inequalities between countries may be as significant as, if not more significant than, inequalities within countries. Despite the existence of international conventions geared to protecting developing countries from exports of hazardous production and chemicals [such as the Basel, Rotterdam and Stockholm conventions (United Nations Environment Programme: Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal, http://www.basel.int/convention; Stockholm Convention on Persistent Organic Pollutants, www.pops.int/documents/convtext/convtext_en.pdf; Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade, http://www.pic.int/en/ConventionText/ONU-GB.pdf)], instances continue to occur where forms of industrial production or hazards that have been restricted or banned in developed countries find their way into developing countries at great cost to human health and the environment.

Occupational diseases that used to be prevalent in developed countries are now largely confined to low- and middle-income countries. For example, silicosis, once a disease of working-class miners in the UK and Europe, is now found predominantly amongst workers in countries with the least capacity to treat, rehabilitate or prevent future disease. It is estimated that about 10 million workers in India are at risk of silicosis but face non-recognition of the disease because of both deliberate and inadvertent misdiagnosis as tuberculosis, denying them access to compensation. As a result of interventions by the National Human Rights Commission and the Supreme Court in India, states have started establishing pneumoconiosis boards and workers have an opportunity to claim relief, though prevention and protection are yet to be enforced. In Southern Africa, the massive burden of uncompensated silicosis created by the migrant labor system underpinning South Africa's lucrative gold mining industry has left swathes of rural workers disabled or dead, yet only now is civil litigation becoming a realistic possibility for workers and their dependents to claim meaningful compensation for an epidemic that could have been prevented with adequate hygiene measures. Thus, while the concept of environmental justice has many common threads in both developed and developing countries, there are considerations in a global context that are particularly cogent for developing countries.

Why Is Environmental Justice Particularly Important for Developing Countries?

While the 'traditional' trade-off between jobs and safety is a universal challenge for development, it is particularly acute in developing countries, where high rates of unemployment or marginal employment create a coercive environment in which short-cuts in workplace or environmental safety are easily tolerated or may even be welcomed by communities desperate for development. For example, a Board of Inquiry into the siting of a steel plant within a community an hour outside Cape Town in South Africa saw local community leaders oppose environmentalists' concerns in favor of the prospect of jobs and development for poor people. Such conflicts also allow opportunities for vested interests to exert hidden and sometimes explicit influence, illustrated in the conflict between artisanal and commercial fishing interests over the sustainability of fishing resources in South Africa. South Africa manages its fishing resources by defining a Total Allowable Catch based on 'scientific' considerations of sustainability. However, this seemingly unbiased framework does not address the criteria for relative allocations between artisanal fishers and commercial interests, so the political and economic influence of commercial fishing interests is easily inserted into the allocation process. As a result, fisherfolk dependent on the sea for their livelihood face increasing impoverishment while commercial interests, including new entrants to the fishing industry who are beneficiaries of policies of racial redress, gain at the expense of artisanal fishers.

It has been argued that the sustainable development agenda has been overly driven by environmental concerns in isolation from broader social and health considerations, which may explain why it has become susceptible to contradictions of the sort described above. If the original purpose of sustainable development was to help "the poor live better, healthier, and fairer lives on their own terms," this can only be achieved by integrating economic development, in particular poverty alleviation, with environmental protection and social justice. This is perhaps best illustrated in the slogan of the Brazilian Landless People's Movement relating to the rainforests of the Amazon, where there is a confluence between the interests of indigenous peoples and the sustainability of the forests: 'for the people (of the rainforests) to live, the forest must live; for the forest to live, the people must live' (as the protectors of the forests). In practice, however, commercial logging, cash cropping and cattle ranching tend to prevail, in conflict with both sustainability and the interests of local people.

Secondly, in the context of globalization, global trade rules and agreements have differential impacts on developing countries, particularly the poor in these countries. Unequal terms of trade, created by the retention of subsidies to European farmers while tariffs are dismantled within developing countries, have frequently decimated domestic agriculture and related industries, swelling the ranks of un- and under-employed workers in developing countries and increasing dependence on imports. Even when re-established as 'niche' market sources for consumers in developed countries, producers in developing countries remain at a systematic disadvantage: they are vulnerable to international market price fluctuations for their crops, face intense competition from equally displaced farmers in other countries and often lack the technical support needed to manage the increasingly complex technical demands of northern markets. For example, phytosanitary regulations, including those pertaining to pesticide residues, while ostensibly protecting the health of consumers in the north, require increasingly complex technical support for small farmers in the South, who may risk their own health and that of their families to ensure they maintain access to these markets. International systems such as EUREPGAP, intended to reduce hazards from pesticide residues, are standards that, in practice, can be met with little reference to the protection of workers' health or the environment in developing countries. Findings from a study in a South American country of children attending a school near a flower farm accredited through international 'ethical' trade systems demonstrated that apparent adherence to international standards can go hand-in-hand with high levels of environmental exposure affecting children's neurodevelopment. Empowering local farmers with effective risk communication strategies can be an important tool to help address this disempowerment.

At the international level, trade agreements can disempower national governments from regulating to protect citizens from environmental hazards. For example, a US company successfully brought litigation against a Mexican state, claiming compensation under the North American Free Trade Agreement (NAFTA) because the local authority had denied the company a permit, on grounds of health and environmental hazards, to operate a contaminated landfill. Moreover, NAFTA tribunals are secretive and do not take into account the interests of local communities. Similarly, there is considerable evidence that export processing zones (EPZs) in the Latin American region have been used to conceal the complete deregulation of health and safety, with adverse impacts on the health of working populations in these countries. EPZs are widely promoted in other developing countries as a result of the neoliberal policies dominant in many regions.

Thirdly, while the precautionary principle has import in all settings, both developed and developing, the consequences of failing to implement precaution in the face of scientific uncertainty have wider implications in the context of the high levels of comorbidity experienced by communities in developing countries. Typically, risk assessments for the hazards of chemicals are based upon data provided by companies based in the north, often drawn from research conducted with healthy white males in the company's parent country. Developing country workers and non-occupational populations may be subject to a range of other environmental and biological stressors that are absent not only from the risk assessment dossier but also from the interpretation of the typical circumstances under which chemicals are used in developing countries. Thus, populations suffering high levels of immunosuppression associated with HIV or malaria infection, high levels of under-nutrition and climatic exposures not anticipated in the north may be at substantially greater risk than detailed in ostensibly objective risk assessments provided to support chemical registration. Climate change may exacerbate such impacts, particularly given the unequal distribution of its effects on poorer countries. Moreover, use conditions are far more variable and much more poorly controlled, given the lack of regulatory capacity in developing countries. The lack of attention to these factors means that risk uncertainty in developing country settings is much greater; yet it is rare to find the precautionary principle given substantive credence in any risk assessment or risk management process in developing countries.

Case Studies

Agricultural Production, Pesticides and Food Security

It is widely noted that agricultural populations in developing countries are particularly vulnerable to the hazards posed by pesticides, whether as employees of commercial farmers or as self-employed producers, often laboring under conditions of production that expose not only themselves but also family members to significant acute and long-term risks, including carcinogenic, reproductive, developmental and neurotoxic disease. Such risks are aggravated in the presence of the high levels of comorbidity typical of many developing countries. Indeed, high levels of adult and childhood malnutrition have been noted amongst farm workers and their families in many developing countries, despite their work involving food production. Unable to choose the crops cultivated when employed by farm owners with market obligations, farm workers are forced to purchase food for themselves and their families out of their often meager incomes, and often at prices inflated by market forces for rural populations. This contradiction illustrates the extent to which farm workers in developing countries are alienated from control over the products of their labor, while also risking exposure to chemicals under poor health and safety systems, with both acute and long-term health consequences.

The production of genetically modified foods, which first began entering the market about 20 years ago, highlights this problem of the alienation of agricultural producers from their product. Firstly, there has been a contested debate over the potential risks that GMOs (genetically modified organisms) pose to humans and the environment. Moreover, research suggests that, contrary to its promise, the use of GM crops in agriculture has not accelerated increases in crop yields or led to an overall reduction in the use of chemical pesticides. What GM technology in agriculture has done is to concentrate control over the entire system of food production in a few large
multinationals, threatening rather than enhancing national food security. Corporations are noted to have used strong-arm tactics, bribery and other means to stifle opposition by those genuinely concerned about preserving traditional agriculture.

However, it is particularly small-scale farmers competing in commercial markets who face the contradictions of economic systems geared towards maximizing production at the expense of protecting human health and the environment. Here, the pressure to take short cuts in safety is effectively internalized as a result of economic pressures created by inequitable economic systems. For example, small farmers who are beneficiaries of land redistribution policies in South Africa, under a process of post-apartheid redress, are increasingly pressured to adopt high-input agricultural technologies such as pesticides and genetically modified crops without adequate extension support to protect them and their families from health and environmental risks. These pressures are both external, arising from over-zealous promotion of chemical products by company personnel, and internal, as small farmers become locked into cycles of ever more intense production through loans and conditionalities imposed by lending agencies that require the use of pesticides. Such pressures have been linked to high rates of suicide amongst farmers in South Asia and Brazil. Further accumulating evidence suggests that some of the burden of suicide amongst small farmers in developing countries may also be attributable to exposure to neurotoxic organophosphate pesticides, which are not only commonly used as agents of suicide but may also contribute to the causal pathway by increasing vulnerability to depression and/or impulsivity.

Small farmers therefore sit at the cusp of a complex interaction between toxic chemical exposures, inequitable economic policies and social marginalization that aggravates the problems of environmental injustice. Moreover, they do so at the far reaches of a global economic order in which they are subject to the consequences of policy decisions made in boardrooms and government offices in northern countries, and under circumstances where their interests and well-being have little traction with those deciding policy. This anomaly has been further aggravated by global shifts in food production policy, not only away from food crops to export cash crops, but also in the switch to biofuel crops to counter the growing energy crisis, which has partly contributed to high food prices globally. These price rises primarily affect the world's poor, reportedly increasing the numbers who are chronically hungry by 600 million people globally, pushing tens of millions of people into poverty and sparking widespread social unrest.

Asbestos

Asbestos is one of the most important carcinogens worldwide, causing about half of all occupational cancer deaths. According to global estimates, some 236,000 persons die each year from asbestos-related lung cancer, mesothelioma and asbestosis resulting from occupational exposures. Because of the sound evidence base for its hazards, asbestos, in all its forms, has been banned in more than 55 countries, including all member states of the European Union. Despite this trend, global production of asbestos in the period 2011–15 remained relatively stable at 2,000,000 metric tons per annum. This figure hides increases in use and, consequently, in human exposure in many countries with little capacity to control these hazards. For example, asbestos use is increasing by 9% per year in India, where new asbestos plants are being built, and is also increasing in other developing countries. Of the 125 million workers estimated by the World Health Organization (WHO) to be exposed to asbestos worldwide, the largest proportion are found in the developing countries of South East Asia. This means that most of the world's population lives in countries where asbestos products continue to be used under poorly controlled conditions, exposing not only workers but also consumers and residents through environmental routes of exposure. The accompanying costs for health care, lost productivity, human suffering, and the management of asbestos hazards in buildings and waste disposal are enormous. Such burdens are still largely preventable for countries that have not used significant quantities of asbestos in the past. In fact, while asbestos-related deaths are likely to begin decreasing in developed countries due to restrictions imposed over the past few decades, asbestos-related deaths are likely to rise in developing countries. This inequity poses a major challenge to the international scientific community, particularly considering the estimate that the global asbestos-related epidemic will claim up to 10 million lives before asbestos exposures are ended.

Efforts by vested interests to present chrysotile asbestos as less hazardous than crocidolite asbestos have coincided with sustained or increased production of chrysotile in some countries. To justify the ongoing use of chrysotile, interventions have been attempted at national and international levels to influence policy on chrysotile asbestos and to support research geared to casting doubt on its carcinogenicity. In 2017, the Conference of the Parties of the Rotterdam Convention again failed to list chrysotile asbestos under the schedule of hazardous substances, even though chrysotile asbestos meets all the criteria required for listing. Chrysotile-exporting countries lobbied hard to block the listing, and a proposal from 10 African countries, which would have ended their veto by providing for a 75% majority vote to list a chemical, was not adopted. As a result, the double standard continues whereby populations of the Global South are provided lesser protection from a known hazard and denied their rights to a safe environment. Ironically, some trade unions, particularly in developed countries, who ostensibly represent the interests of workers at risk from asbestos exposures, have been at the forefront of efforts to defend the safety of chrysotile because of concerns over members' job losses.

Such trade-offs take place in the context of increasing government regulation in developed countries, which pushes hazardous production to developing countries lacking the capacity to regulate for health and safety. In the case of asbestos, it is not primarily multinational corporations holding such control, but rather national enterprises, operating in close relationships with developing country governments and supported by sympathetic media ownership, that ensure the ongoing profitability of asbestos production and use, largely through externalization of the costs of prevention and compensation. However, compensation for asbestos-related morbidity and mortality remains beyond the reach of most workers, and it is the rare exception for victims of asbestos exposure to win redress through court action. Parties responsible for the toll in human
suffering as a result of asbestos exposure escape liability, particularly in countries with thriving asbestos industries operating under a climate of impunity. The description of the asbestos catastrophe as resulting in part from human failure to anticipate its scale remains as true today as it was two decades ago:

The asbestos disaster did not result from superficial miscalculations. Rather, it resulted from very careful calculations, many of which were wrong. They were wrong in their predictions and are now liable for huge sums of money. These are troubling reflections, particularly when we remember that “statistics are human beings with the tears wiped away”. (Selikoff, 1994)

Critics of the asbestos industry who have brought to global attention the hazards of all forms of asbestos have been subjected to personal attacks and attempts to discredit their scientific competence. This phenomenon is illustrative of the difficulties of challenging strong vested interests, common in the environmental justice setting.

Air Pollution From Waste Incineration

In South Africa, in common with many developing countries, regulatory control of hazardous waste incinerators is woefully inadequate. While the current guidelines for the licensing of incinerators are comparable to corresponding European Union regulations, South African regulators are not compelled to impose the guidelines on new or existing plants. In addition, the infrastructure for monitoring compliance with emission standards, particularly with respect to dioxins and furans, does not exist.

On the outskirts of Cape Town, a state-owned munitions factory producing small-caliber ammunition, 40 mm baton and CS riot control rounds, as well as high explosive and pyrotechnic products, had been operating for 60 years. It is located across the road from two densely populated low-income suburbs. For many years, the company disposed of waste materials through open burning on a daily basis. The process involved mixing explosives-contaminated plastic and cardboard packing materials and unsaleable mixtures of explosives and CS gas ingredients with sawdust, and igniting this toxic cocktail to burn on open ground. The result was a daily plume of black smoke laden with the products of incomplete combustion: polycyclic aromatic hydrocarbons, dioxins and furans, elemental carbon and heavy metals embedded in highly visible particulate matter emitted by this crude, uncontrolled combustion process. Depending on the wind direction and atmospheric dispersion conditions (winter temperature inversions, with attendant poor dispersion, are common in the area), one or another section of the surrounding communities was exposed to these pollutants. Residents reasonably ascribed their high levels of chronic respiratory and other ailments to this air pollution, although neither air sampling nor epidemiological studies were conducted to demonstrate the association.

In 1998 the company finally began to explore alternative waste disposal methods by proposing a custom-designed incinerator. Details of the proposed incinerator were made public through an Environmental Impact Assessment (EIA) process in 1999. The public response led to the formation of the Anti-Incinerator Alliance (AIA) of community groups and non-governmental organizations (NGOs), united against both the continued open burning of explosives waste and the proposal to replace it with an incinerator of dubious design. The AIA, with the support of other legal and environmental NGOs, intervened to force a review of the Draft Scoping Report by an independent scientist chosen by the AIA. The independent review concluded that the Scoping Report had failed to consider the probable generation and emission of highly toxic dioxins, furans and polycyclic aromatic hydrocarbons; that the basic design concept favored the de novo formation and emission of these substances; that the proposed emission control systems were inadequate; and consequently that the health risks associated with the proposed incinerator had been underestimated. In addition, the AIA questioned the independence of the EIA consultants, the regulator's lack of capacity to monitor and enforce compliance with any emission limits imposed, and the location of the plant. In response, the independent review report was withheld for several months and the independent reviewer was threatened with litigation.

However, the combination of the legal challenge to a palpably flawed EIA process and the public outcry initiated by the AIA caused the company to put the incinerator project on hold. In 2002 the company briefly attempted to relaunch the EIA process with a redesigned incinerator and a reduced waste stream. The adverse publicity occasioned by the first attempt to obtain approval for the project resulted in a more cautious approach in promoting the new incinerator design. Nonetheless, the new design included the same major faults as before, including the lack of monitoring and enforcement capacity. The provincial regulatory authority, while adhering strictly to the required EIA public participation process, did not attempt to independently assess the technical merits of the proposed incinerator design. However, due to the cumulative impacts of environmental (air, soil and water pollution) and commercial (declining sales) problems, the company earmarked the site for closure in 2008. The incinerator was not constructed, and the site was closed and rehabilitated between 2010 and 2012.

Elsewhere in South Africa, however, as occurs commonly worldwide, regulatory authorities have approved the construction and operation of poorly designed and operated small-scale incinerators, despite formal objections from nearby communities. These facilities are usually located in poor working-class areas where communities lack the financial, organizational and technical resources to mount an extended campaign to stop their construction, compounding the environmental injustices suffered by these communities.

E-Waste

Hazardous waste has long been recognized as an important environmental health issue globally. More recently, as consumer demand for and planned obsolescence of electrical and electronic equipment (EEE) have increased, e-waste has become one of the fastest growing waste streams worldwide. E-waste consists of large household appliances, information technology and telecommunication equipment, and consumer devices that have reached the end of their useful life. It contains valuable materials such as gold, silver, copper, iron, palladium and aluminum that can be recycled for economic gain, as well as hazardous materials such as lead, mercury, chromium, chemicals in plastics, and flame retardants that are known to be associated with a myriad of negative health effects. In 2014, global generation of e-waste was estimated at approximately 41.8 million metric tons and was expected to rise to 50 million metric tons by 2018. The US is the leading producer of e-waste, and countries within Europe generate the most per person. It is estimated that 75% to 80% of the e-waste generated annually is shipped to low- and middle-income countries in Asia and Africa for recycling and disposal. Factors such as stringent environmental regulations in high-income countries to protect public health, together with high labor costs, promote the export of e-waste.

The majority of recycling in Asia and Africa occurs in the informal sector, where the technology to safely recycle this material is nonexistent. To extract the valuable components from e-waste, informal recyclers often depend on rudimentary manual techniques, such as burning cables and plastics and using acid baths, which directly expose recyclers to hazardous chemicals while also polluting the air, water and soil of surrounding communities. Although the toxicity of many of the substances found in e-waste is well documented, the short- and long-term effects of e-waste are not fully understood, particularly when exposures occur in combination with other chemical mixtures. The varying routes and sources of exposure, along with possible inhibitory, synergistic or additive effects of the pollutants, make risk assessment of e-waste highly complex. Potential adverse health effects from these chemical exposures can affect the function of numerous biological systems, including respiratory function, reproductive outcomes, neurodevelopment and endocrine function. Workers may not know that recycling practices can be harmful to their health, or they may be forced to put work ahead of their health where few alternative economic opportunities are available.

E-waste, although a significant source of income for these families, affects the health of the most vulnerable people in the world, resulting in a gross environmental injustice. Most e-waste recyclers are poorer and less educated than the average population, and a large portion are women and children. There is stigma associated with doing waste work. Within India's caste system, Dalit women and girls are often assigned a lower social status than men and boys; this results in a disproportionate number of females doing the lowest-tier jobs such as e-waste recycling. Many of the hazardous chemicals found in e-waste affect women's reproductive and endocrine functions and may be linked to reduced fertility. Exposure to chemicals such as lead and mercury may negatively affect fetal development in pregnant women.

Children are often involved in e-waste work because their small, dexterous hands are well suited to dismantling electronic components. Children are more vulnerable to risks from toxic exposure from e-waste for a number of reasons: they take in more food, air and water (the sources of exposure) in proportion to their size than adults; their metabolic pathways are still developing, so they are less able to detoxify toxic substances; and they have more future years of life in which to develop chronic diseases due to chemical exposures earlier in life. The cross-boundary movement of waste and its disproportionate effect on vulnerable populations within low- and middle-income countries has been recognized. Legislation such as the Basel Convention, ratified by 181 countries, prohibits the export of e-waste. However, loopholes in this legislation allow waste intended for "reuse" to be exported; in reality, items designated for "reuse" are often defunct or nearing the end of their life cycle and end up as e-waste. For example, in 2009, 70% of electronics imports into Ghana were sent under the pretext of secondhand equipment, but a substantial proportion was effectively e-waste because the items had no value for reuse.

What Underlies Environmental Injustice?

These cases illustrate the fact that wealthier developed countries bear responsibility for many factors driving environmental injustice. It has been argued that the inherent impetus of capitalist production is towards a never-ending cycle of consumption that is primarily oriented to generating profits for private owners or shareholders. Technological innovation is blind to the consequences for health and the environment because those consequences are not borne by those who benefit. Thus, in the absence of internalization of these costs, the risks from these technologies migrate to the weakest points in society: geographically (location), socially (strata) and in terms of political power (disenfranchised or politically marginalized groups). This is the social production of environmental injustice.

Distinct notions of development underlie different approaches to environmental justice. The neoliberal free-market approach promoted in many developing countries by international lending agencies is perhaps best characterized by Larry Summers' comment while at the World Bank that "just between you and me, shouldn't the World Bank be encouraging MORE migration of the dirty industries to the LDCs [Less Developed Countries]? ... the economic logic behind dumping a load of toxic waste in the lowest-wage country is impeccable and we should face up to that." While the explicitness of such unbridled market logic is no longer politically fashionable, neoliberal discourses continue to result in the privatization of water, electricity and basic services in many countries across the globe, reducing access to the basic essentials needed for good health and development.

In contrast, the mantra of sustainable development, coined by the World Commission on Environment and Development in 1987, frames the challenge in terms of "development that meets the needs of the present without compromising the ability of future generations to meet their own needs." Under this framework, the costs of environmental degradation are internalized in the hope of a more sustainable development trajectory. But reliance on taxation and the trading of pollution rights has failed to arrest global greenhouse gas emissions, when a clearer emphasis on prevention might have been more effective. It is doubtful that 'sustainable development' strategies, such as excluding natural capital from income calculations, taxing resource throughput in preference to labor and income, investing in increasing the supply of natural capital in the long term and moving away from exclusive reliance on global integration, are feasible under current terms of globalization.

An alternative framework based on a human rights discourse locates ecological problems in a socio-political context and enables science to inform development in a way that engages with prevailing power dynamics. For example, when scientific evidence is used to support technological progress through the provision of supposedly value-free risk assessment, it essentially serves dominant interests and contributes to the social production of environmental injustice. In contrast, others have argued that the role of the scientist is to mediate in the decision-making process, which is inevitably political and contingent on both facts and values, by bringing the "scientific consensus" of "respected scientists" to bear where there is a lack of scientific certainty in risk assessment. However, as in the developed world, scientists in developing countries who have sought to highlight the health hazards posed by new technologies have been victimized and subjected to industry-driven ad hominem attacks. For example, the manufacturers of endosulfan, a WHO Class II insecticide, attempted to sue a toxicologist in the Philippines in 1993 for public statements he made attributing health hazards to endosulfan. The case was dismissed for lack of evidence, and endosulfan was subsequently banned by the Philippines government.

It may therefore be questionable to think that a scientist's role could remain impartial when facing environmental injustice. Here we distinguish providing dispassionate, unbiased and rigorous scientific analysis for decision-making, which we believe to be a professional responsibility, from remaining neutral or value-free when there is patent, preventable unfairness in the distribution of environmental exposures, susceptibility and health impacts. In the latter case, we believe that scientists have a responsibility to speak out to alert stakeholders to the potential for harm. Being on the side of environmental justice does not mean misinterpreting science for sectoral interests.

An environmental justice framework also raises questions about the agenda of funding institutions, both as enabling conditions that generate environmental injustice and in shaping the contours of the environmental justice movement. Not only should funding agencies be accountable for ensuring that their policies do not contribute to environmental injustice, but there should be a clearer mandate to prioritize support for research that integrates inquiry into both social and environmental conditions and addresses both short- and long-term impacts.

The Human Right to a Safe Environment

Environmental justice incorporates environmental concerns into institutional frameworks for human rights and democratic accountability. International human rights law recognizes the right to a healthy and safe environment in various formulations, including both the narrow idea of protection from hazards and the preservation of the environment for future generations. Indeed, positive obligations are placed on governments to take active steps for the treatment and control of epidemic, endemic, occupational and other diseases (article 12 of the International Covenant on Economic, Social and Cultural Rights). Far from merely refraining from violating people’s rights to a safe environment (the obligation to ‘respect’), governments carry a range of positive obligations to fulfill the right (through legislative, budgetary and programmatic action) and to protect it (from third-party violations). For many developing countries, global trade and economic forces threaten to disempower nation states from exercising national sovereignty and acting independently to meet their obligations towards their peoples’ human rights. Nevertheless, states may also masquerade behind a claim of disempowerment in order to render themselves unaccountable both to their citizens and to international institutions. Either way, protecting the environment and people’s rights to a safe environment is central to an environmental justice approach.

Three principles are central to a human rights analysis: first, rights are essentially about respect for human dignity and are interdependent; second, rights require prioritizing those who are most vulnerable; and, third, rights are worthless unless they empower the most vulnerable to take action to change the conditions that create their vulnerability. In the context of environmental justice, therefore, a human rights discourse places greater emphasis on the needs and views of poor, migrant and vulnerable communities and groups facing environmental threats than on, for example, the interests of lobby groups representing powerful economic interests. This normative framework lends strong moral and legal support to the environmental justice model and is particularly important in developing countries, where civil society may be weak and where economic interests may militate against environmental protections. It also holds developed countries and, indirectly, transnational companies accountable for the consequences of their policies and actions.

Conclusion

The concept of environmental justice, as a principle that recognizes the need to share the burden of pollution fairly amongst populations regardless of socioeconomic status, race, ethnicity and gender, must now move beyond a national issue to the international stage. Rather than focusing decision-making on narrow cost-benefit analyses, environmental justice challenges us to think about a plurality of values that, when integrated into environmental decision-making, may be better able to realize human potential for all rather than for the few, and with future generations firmly in mind. Achieving environmental justice internationally requires multifaceted and broad approaches, including, for example: (a) broad networks of community-based organizations that address the different issues affecting the disenfranchised and that come together on matters related to the environment; (b) workers who organize a particular labor sector to improve workers’ health; and (c) community-based entities that unite internationally in order to enhance their efforts. It also requires global action to hold powerful elites responsible. Environmental justice organizations around the world come together because of commonalities that go beyond the general notion of environmental justice. Because of globalization, populations affected by industrial pollution in different countries may find themselves engaging with the same transnational companies. Organizing internationally towards a common goal is therefore necessary to build a global environmental justice movement.

See also: Climate Change, Environmental Health, and Human Rights; Environmental Justice: An Overview; Environmental Justice and Interventions to Prevent Environmental Injustice in the United States.

Further Reading

Brown, G., 2005. Protecting workers’ health and safety in the globalizing economy through international trade treaties. International Journal of Occupational and Environmental Health 11, 207–209.
Chakraborty, J., Collins, T.W., Grineski, S.E., 2016. Environmental justice research: Contemporary issues and emerging topics. International Journal of Environmental Research and Public Health 13 (11), pii: E1072.
Chatty, D., Colchester, M. (Eds.), 2002. Conservation and mobile indigenous peoples: Displacement, forced settlement, and sustainable development. Berghahn Books, New York.
Claudio, L., 2007. Standing on principle: The global push for environmental justice. Environmental Health Perspectives 115, 500–503.
Cushing, L., Morello-Frosch, R., Wander, M., Pastor, M., 2015. The haves, the have-nots, and the health of everyone: The relationship between social inequality and environmental quality. Annual Review of Public Health 36, 193–209.
Heacock, M., Kelly, C.B., Asante, K.A., Birnbaum, L.S., Bergman, Å.L., Bruné, M.N., Buka, I., Carpenter, D.O., Chen, A., Huo, X., Kamel, M., Landrigan, P.J., Magalini, F., Diaz-Barriga, F., Neira, M., Omar, M., Pascale, A., Ruchirawat, M., Sly, L., Sly, P.D., Van den Berg, M., Suk, W.A., 2016. E-waste and harm to vulnerable populations: A growing global problem. Environmental Health Perspectives 124 (5), 550–555. https://doi.org/10.1289/ehp.1509699.
Marsili, D., Comba, P., 2013. Asbestos case and its current implications for global health. Annali dell’Istituto Superiore di Sanità 49 (3), 249–251.
Perkins, D.N., Brune Drisse, M., Nxele, T., Sly, P.D., 2014. E-waste: A global hazard. Annals of Global Health 80 (4), 286–295. https://doi.org/10.1016/j.aogh.2014.10.001.
Quijano, R.F., 2000. Risk assessment in a third-world reality: An endosulfan case history. International Journal of Occupational and Environmental Health 6, 312–317.
Randeria, S., 2003. Globalization of law: Environmental justice, World Bank, NGOs and the cunning state in India. Current Sociology 51, 305–328.
Sass, R., 2000. Agricultural “killing fields”: The poisoning of Costa Rican banana workers. International Journal of Health Services 30, 491–514.
Selikoff, I.J., 1991. Asbestos disease – 1990–2020: The risks of asbestos risk assessment. Toxicology and Industrial Health 7 (5–6), 117–127.
Stayner, L., Welch, L.S., Lemen, R., 2013. The worldwide pandemic of asbestos-related diseases. Annual Review of Public Health 34, 205–216.
Tempels, T.H., Van den Belt, H., 2016. Once the rockets are up, who should care where they come down? The problem of responsibility ascription for the negative consequences of biofuel innovations. Springerplus 5, 135. https://doi.org/10.1186/s40064-016-1758-8.

Environmental Justice and Interventions to Prevent Environmental Injustice in the United States

Leandra Smollin, State University of New York at Potsdam, Potsdam, NY, United States
Amy Lubitow, Portland State University, Portland, OR, United States
© 2019 Elsevier B.V. All rights reserved.

Change History: April 2019. Leandra Smollin made changes to the text and references. This is an update of A. Lubitow, D. Faber, Environmental Justice and Interventions to Prevent Environmental Injustice in the United States, in: Nriagu, J.O. (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 433–440.

Abbreviations

ALA American Lung Association
CDC Centers for Disease Control
DAPL Dakota Access Pipeline
EO Executive Order
EPA Environmental Protection Agency
FEMA Federal Emergency Management Agency
HUD USA Department of Housing and Urban Development
IWG Interagency Working Group on Environmental Justice
NPL National Priorities List
PCB Polychlorinated biphenyl
PFOA Perfluorooctanoic acid
PFOS Perfluorooctanesulfonic acid or perfluorooctane sulfonate
RECA Radiation Exposure Compensation Act
SARA Superfund Amendments and Reauthorization Act
USDA United States Department of Agriculture
WHO World Health Organization

Introduction

People of color, low-income populations and indigenous people in the United States have long been familiar with the adverse consequences of exposure to environmental toxins, and a significant body of scholarship documents the disproportionate burden of hazards these groups experience. Any intervention aimed at remedying the multitude of environmental injustices in the USA today must draw on a variety of solutions and methods. Environmental justice is a concept that must be considered in relation to the social, economic and political factors that influence the experiences of affected communities: economic inequality, cultural stressors (e.g. racism, classism, and sexism) and the relative lack of political and social power of affected communities all shape the process of creating change. This article discusses some of the key issues and health problems related to environmental justice and points to a number of interventions or solutions that may help to minimize negative health effects in communities directly affected by ecological degradation.

Environmental Justice

The environmental justice movement is often said to have begun in 1982, in reaction to a proposed landfill in Warren County, North Carolina, USA. The landfill was intended to contain nearly 60,000 tons of soil contaminated with highly toxic polychlorinated biphenyls (PCBs). The majority of residents in communities close to the landfill were nonwhite, and about 20% lived below the United States Federal Poverty Level. Residents pointed to the fact that in this and similar cases, minority and low-income populations have borne a disproportionate share of potential adverse health and environmental effects. Organizing against the state government, Warren County residents engaged in 6 weeks of protests and acts of civil disobedience. Ultimately, the people of Warren County lost the battle to prevent the dumping of toxic wastes in their community, but their actions publicized what many poor and minority communities already knew: all individuals do not equally share the burden of pollution. Following the activities in Warren County and in other, similarly affected communities, small-scale environmental justice actions began to occur with increased frequency.



Environmental justice organizing on a large scale occurred for the first time in 1991, with the First National People of Color Environmental Justice Summit held in Washington, D.C., USA, and has continued since. Broadly speaking, actors in the environmental justice movement represent a convergence of seven independent social movements in the USA:
(1) The civil rights movement, focusing on issues of environmental racism, including the disproportionate impact of pollution on communities of color, racial biases in government regulatory practices, and a glaring absence of affirmative action and sensitivity to racial issues in established environmental advocacy organizations.
(2) The occupational health and safety movement, working for the labor rights of non-union immigrants and undocumented workers.
(3) The indigenous lands movement, originating in the struggles of Native Americans, Chicanos, African-Americans, and other marginalized indigenous communities to retain and protect their traditional lands.
(4) The environmental health movement, emerging from the larger environmental movement, and from the antitoxics movement in particular.
(5) Community-based movements for social and economic justice that have expanded their political horizons to incorporate issues such as lead poisoning, abandoned toxic waste dumps, the lack of parks and green spaces, and poor air quality into their agendas for community empowerment.
(6) Human rights, peace, and solidarity movements, particularly the campaigns that first emerged in the 1980s around apartheid in South Africa and USA intervention in Nicaragua and Central America.
(7) Immigrant rights movements, which expand the struggle for citizenship to include basic rights of citizenship, among them the right to clean air and water.

Although definitions of environmental justice vary, Robert Bullard, an environmental sociologist and leading environmental justice scholar, offers that environmental justice is concerned with environmental health, where “environment” refers to “where we live, work, play, worship, and go to school,” and where “health” refers to the World Health Organization’s (WHO) conception of “a state of complete physical, mental, and social well-being, not just the absence of disease or infirmity.” The Environmental Protection Agency (EPA) of the USA defines environmental justice as “the fair treatment and meaningful involvement of all people, regardless of race, color, national origin, culture, education, or income, with respect to the development, implementation, and enforcement of environmental laws, regulations, and policies.” In general, the term “environmental justice” refers to efforts to improve environmental quality in affected communities and to reduce the unequal burdens experienced by communities of color, low-income populations, and indigenous populations.

Health Impacts

Asthma and Air Pollution

According to a 2013 report of the Centers for Disease Control (CDC) of the USA, approximately 39.5 million people in the USA, including 10.5 million children, have been diagnosed with asthma in their lifetime. In 2017, the American Lung Association identified asthma as the most common serious chronic disease of childhood. Including medical expenses, loss of productivity from missed school or work, and premature death, the annual economic cost of asthma is approximately 56 billion dollars. Numerous studies have indicated significantly higher rates of asthma among racial and ethnic minority groups in the USA. A 2016 CDC study revealed that between 2001 and 2010, Blacks were more likely to have asthma than Whites or Hispanics; yet disparities for Puerto Ricans were the highest, with rates approximately two times higher than non-Hispanic Whites and 1.5 times higher than non-Hispanic Blacks. Racial and genetic predisposition have repeatedly been ruled out as a major cause of asthma, suggesting that differences in environmental exposure may underlie these disparities. Asthma can be aggravated by exposure to pollutants such as tobacco smoke and molds, and to allergens such as cockroaches, animal dander, and dust mites. These pollutants may be more common inside homes with indoor air quality problems resulting from a lack of ventilation, accumulation of allergens, and mold and mildew issues. Disproportionate numbers of people of color or low socioeconomic status tend to live in urban areas with high outdoor air pollution and may be exposed to more environmental pollutants that exacerbate asthmatic conditions. Poor air quality is also a common feature of deteriorating housing units; when combined with outdoor air pollution, this creates an environment in which asthma is both more common and more severe.

Lead Poisoning

Lead is a highly toxic substance known to cause a range of health effects, from behavioral problems and learning disabilities to seizures and death. Individuals can be exposed to lead through drinking water, deteriorating paint, food, dust, and a number of consumer products, yet lead poisoning is entirely preventable. While accurate exposure estimates are difficult to obtain, in 2010 the World Health Organization asserted that lead poisoning accounts for about 0.6% of the global burden of disease. While no level of lead is safe for children, CDC figures for the USA estimate that approximately half a million children under the age of six have blood lead levels higher than 5 micrograms per deciliter, the reference level at which the CDC recommends public health interventions be initiated. As of 2018, the CDC’s Childhood Lead Poisoning Prevention Program has identified two goals for public health action: (1) eliminating blood lead levels of 10 micrograms per deciliter or higher, and (2) mitigating disparities in average risk associated with race and social class.

Over the past several decades, blood lead levels have steadily decreased for the broader USA population owing to the banning of leaded gasoline and the phasing out of lead paint. However, disparities in lead exposure remain. Deteriorating lead paint is one of the most common routes of exposure, particularly for children. A 2016 report of the USA Department of Housing and Urban Development (HUD) estimates that approximately 37.1 million homes contain some lead paint, comprising 34.9% of all USA housing. Low-income households had a higher prevalence of lead-based paint hazards (29%) than higher-income households (18%). Further, blood lead data for children under 3 years old reveal that black children have higher blood lead levels than children of other racial/ethnic groups.

Superfund Sites

“Superfund” is the term commonly used to refer to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, 42 U.S.C. §§ 9601–9675). CERCLA was enacted by the Congress of the United States in 1980 in response to the Love Canal disaster. It authorizes the EPA to respond to releases, or threatened releases, of hazardous substances that pose potential threats to public health or the environment. As of 2018, over 1300 sites were listed on the Superfund National Priorities List (NPL), with new sites proposed each year. Superfund law provides two types of response actions: short-term “removals,” where actions may be taken to address releases or threatened releases requiring prompt response, and long-term “remedial actions” that permanently reduce the dangers associated with hazardous substances that are serious but not immediately life threatening. Remedial actions can be conducted only at sites listed on the NPL. CERCLA makes use of the “polluter-pays” principle, whereby the parties responsible for the pollution are required to clean it up, or the EPA manages the cleanup and sues the responsible company for the costs.

The hazardous chemicals associated with Superfund sites tend to contaminate groundwater and soil most readily. Of the 30 hazardous substances found most often at these sites, more than half are known or suspected human carcinogens, and nearly all are associated with negative health effects, including toxicity to the liver, kidney, or reproductive system. Starting in the mid-1970s, numerous governmental and nongovernmental studies have revealed a disturbing pattern of elevated health problems, including heart disease, spontaneous abortions, and genital malformations, as well as elevated death rates. Infants and children suffer higher incidences of cardiac abnormalities, leukemia, kidney-urinary tract infections, seizures, learning disabilities, hyperactivity, skin disorders, reduced weight, central nervous system damage, and Hodgkin’s disease. Nearly half the population of the USA lives within 10 miles of a federal Superfund site. Despite the fact that minority and poor populations are more likely to live near a waste site, they are less likely to benefit from the EPA Superfund program: sites located in low-income areas and communities of color are less likely to be listed on the NPL, and when they are listed, it takes significantly longer for hazards to be removed than in wealthier, predominantly white communities. Superfund sites diminish the viability and health of communities, and the disparities in governmental responses demonstrate a clear example of environmental injustice.

Brownfields

The EPA defines a “brownfield site” as “real property, the expansion, redevelopment, or reuse of which may be complicated by the presence or potential presence of a hazardous substance, pollutant, or contaminant.” A “brownfield” generally refers to a parcel of land that was previously used for industrial purposes and is contaminated by low concentrations of hazardous chemicals. A brownfield differs from a Superfund site in that it is less severely contaminated, and thus less likely to be cleaned up with federal funds. Estimates of the number of brownfield sites range from 400,000 to over one million. The vast majority of brownfields are found in urban industrial areas and tend to be disproportionately located in working-class communities and/or communities of color.

Brownfields are widely recognized as an environmental justice issue due to the range of potential hazards associated with the land. When sites remain vacant, toxins present on the land continue to affect air and water quality, and in turn, human health. In addition to the chemicals that remain after an industry or business vacates the land, what is often a vacant lot can become a site for the illegal dumping of hazardous wastes. This deters economic development, decreases property values, and harms the aesthetic value of a community. Social scientists have suggested that brownfields disrupt the social fabric of a community and negatively impact the maintenance of social networks and community ties. While some state and federal funds may be allotted to brownfield remediation, local and community groups are often left to deal with hazardous sites on their own. Redevelopment is difficult owing to a variety of issues related to the hazardous chemicals present on the land: property owners are reluctant to sell for fear of what an environmental assessment may uncover; banks may decline to foreclose on potentially contaminated properties when an owner defaults, for fear of facing liability for cleanup costs; banks may refuse financing to purchase or develop potentially contaminated property; and insurance companies may refuse to insure properties they fear may be contaminated.


Uranium Mining

Uranium is used in the production of nuclear power and is a highly toxic material; depending on its form, uranium is carcinogenic and has been linked to kidney and lung disease. Economic exploitation and exposure to toxins are often the consequences of the mining and extraction of uranium resources. Approximately 60% of known uranium reserves in the USA are on Native American/Indigenous lands in the southwestern United States. The largest single source of uranium in the USA is the Colorado Plateau, located in the “Four Corners” area of Colorado, Utah, New Mexico and Arizona. In the middle of the 20th century, the federal government created economic incentives for uranium that encouraged mining activity in this area. It is estimated that 3000 Navajo worked in these mines at some point between the 1950s and 1970s. Miners were rarely informed of the dangers that uranium mining posed to both human health and the health of the land. Along with higher levels of uranium exposure resulting from the concentration of mining on indigenous lands, Native American communities were further undermined by economic exploitation: the Navajo people were paid only 3% of the market value of uranium by the government of the USA (the sole legal purchaser). With little economic stability and few other job opportunities, many Navajo worked in the mines out of necessity, and communities continue to suffer the effects of uranium mining today. Physical health issues such as lung cancer and respiratory problems related to mine dust have been observed, and heavy radon exposure has been linked to hundreds of cancer deaths in the Navajo community. Further, the majority of closed mines have yet to be cleaned up, and radioactive waste in soil and water continues to differentially impact Native lands.

In 1990, the United States Congress passed the Radiation Exposure Compensation Act (RECA). This legislation compensates those who can prove they are sick because of their work in the uranium mines and mills between 1947 and 1971. Workers who can prove they have lung cancer or pulmonary fibrosis are eligible to receive a payment of $150,000 from the Department of Justice. While compensation from the federal government has helped to ease the sense of injustice felt by the Navajo people, many workers eligible for compensation died before RECA was passed.

Pesticides

According to a 2017 report of the CDC, about 1.1 billion pounds of pesticides are used annually in the USA. Workers in a variety of occupations, including agricultural workers, groundskeepers, pet groomers, and fumigators, risk exposure to fungicides, herbicides, insecticides, rodenticides, fumigants and sanitizers. Pesticide exposure has been associated with increased risk of malignant lymphoma and leukemia, as well as liver, stomach, pancreatic, lung, and brain cancer. A number of pesticides have been linked to sterility in men and women, stillbirth or abortion in women, and a variety of birth defects.

Hired farmworkers routinely work with, or in close proximity to, a variety of pesticides, many of which are highly toxic to the human body. Many farmworkers in the USA are immigrant or migrant workers, and the USDA estimates that 27% of all farmworkers have less than a ninth-grade education. While 64% of farmworkers are citizens of the USA, a large percentage do not speak English as their first language, creating language and literacy barriers. Coupled with employers who may not provide bilingual explanations of workplace health hazards, and product labels that do not convey the health effects of pesticides in languages other than English, many workers lack access to information about the hazards and risks associated with routine contact with pesticides. For some farmworkers, the legal risks of working in the USA as an immigrant without a work permit deter them from reporting adverse reactions or asking questions about the chemicals they work with. Like uranium miners in the American southwest, many farmworkers are likely not provided adequate clothing or safety equipment to protect them from exposure.

Water Contamination

In 1974, the USA enacted the Safe Drinking Water Act (SDWA), setting limits on the amount of contaminants permitted in public water systems. These standards do not apply to water in private wells, which is significant because many people in the USA draw drinking water from private sources. The EPA works with states, localities and other water suppliers responsible for carrying out these standards, which aim to protect those served by public water systems. However, evaluations, standards, and regulations do not cover many potentially harmful contaminants, and not all advisory levels set by the EPA are legally enforceable limits (e.g. the limits for the chemicals PFOA and PFOS, set to protect children during a critical early developmental window). Although the USA has one of the safest water systems in the world, threats posed even by heavily regulated contaminants, such as lead, persist as public health concerns.

Contaminated Water in Flint, MI, USA

The city of Flint is the seventh largest city in Michigan, with more than half of the population consisting of racial and ethnic minority residents. In 2014, the city switched its water source from the Detroit River and Lake Huron to the Flint River as a cost-cutting measure. Despite visible signs of contamination, residents of Flint were repeatedly assured by the state that the water was safe to drink. Later that year, the General Motors plant in Flint, one of the largest companies in the city, stopped using city water due to concerns about concentrated levels of chemicals, while residents’ concerns continued to be dismissed or ignored. In the months that followed the switch, various stakeholders, including the EPA, the Michigan Department of Environmental Quality, and other Michigan state officials, downplayed any risk to public health, despite resident outcries, calls from the American Civil Liberties Union (ACLU), doctors’ reports of increased numbers of children presenting with unexplained rashes and other symptoms, and at least one external study indicating the water might be unsafe.


It was only in October of 2015, after the Michigan Department of Health and Human Services verified the conclusions of a study showing that the number of children with elevated blood lead levels had doubled since the city switched its water source, that the state tested its drinking water and decided to switch back to the original water source. However, substantial damage had been done to the public water supply infrastructure through the corrosion of transport pipes, so switching back to the city’s original water source did not address the continuing problem of lead being released into the public water supply. By January of 2016, the President of the USA had declared a state of emergency in the city, the National Guard was mobilized to distribute bottled water to residents, and the Federal Emergency Management Agency (FEMA) was called in to provide additional support. In 2017, the Michigan Civil Rights Commission cited the “cumulative and compounding effects” of deeply rooted discrimination and environmental racism as key factors in the trajectory of the Flint, MI water crisis. As of January of 2019, the replacement of all water service lines that could contribute to lead contamination was still not complete.

The Dakota Access Oil Pipeline

Since the early 2000s, struggles over oil pipelines have emerged as an environmental justice issue in the USA. The most public battle has been over the Dakota Access Pipeline (DAPL), a 1172-mile-long underground oil pipeline crossing North Dakota, South Dakota, Iowa, and Illinois. The multibillion-dollar project has been contested by Native American tribes due to concerns that the development would disrupt sacred sites and compromise water quality. Along with broader concerns about the disruption of ecosystems and wildlife habitat, leakage from oil pipelines raises serious concerns about the potential contamination of drinking and irrigation water. Sunoco Logistics, the corporation responsible for operating the pipeline, must report spills to the Pipeline and Hazardous Materials Safety Administration, part of the USA’s Department of Transportation. Since 2010, Sunoco Logistics has spilled 143,100 gallons of crude oil from its onshore pipelines, more than any other pipeline operator in the country. Despite strong opposition from multiple local, state, and national organizations, as well as weeks of popular protests, a presidential memorandum to advance the construction of the pipeline was issued by Donald Trump in January of 2017. Environmental justice activists take issue not only with the construction of such pipelines but also with the threat of increasing restrictions on the people’s right to protest.

Regulatory Roadblocks to Environmental Justice

Many of the health problems detailed above are not simple matters of exposure to a particular toxicant and a subsequent illness. Asthma, lead poisoning, hazardous waste sites, and unsafe mining practices are health issues mediated by social, economic and political forces. Systemic poverty, institutionalized racism, and a lack of regulatory enforcement of environmental laws create a social environment in which it is more profitable to expose certain groups of people to hazards than to invest in safer production, distribution and disposal practices. Although understanding the etiology of particular diseases such as asthma and lead poisoning is vital to remedying the health effects of environmental injustice, recognizing the social, economic, and political factors behind these issues is critical to addressing their root causes. Possibilities for intervention must recognize the social and cultural barriers to environmental equality.

Regulatory Inaction: Executive Order 12898 of 1994

Numerous federal policies in the USA have attempted to remedy known and recognized forms of environmental injustice. Most notable is Executive Order (EO) 12898 of 1994, which directs federal agencies to develop strategies to identify and address the effects of their programs and policies on minority and low-income populations. The order was also intended to provide affected communities with increased access to public information on health and environmental issues. Environmental justice advocates have attempted to use this nondiscrimination provision to remedy environmental injustices by suing in federal court or by filing administrative actions with the EPA. EO 12898 was an important step in bringing environmental justice issues to the forefront of the political agenda in the USA. However, many of the current regulations intended to protect environmental justice communities have not yet succeeded in alleviating the burdens associated with exposure to environmental toxins. Recent regulatory reforms aimed at preserving a free market economy have resulted in the rollback of numerous environmental policies, worker health and safety laws, consumer protections, and other state regulatory measures. Severe cuts to the budgets and staffs of the federal agencies that enforce and prosecute environmental laws have become common practice, seriously compromising the health of all Americans, particularly those living in working-class neighborhoods and communities of color.

Regulatory Rollbacks: Superfund Sites

A lack of federal enforcement has also become problematic in relation to Superfund laws. Originally financed by a tax levied on the petroleum and chemical industries, Superfund created a pool of money used to pay for the cleanup of sites whose polluters were unknown or unable to finance the work. Superfund was reauthorized, and its funds increased, when the Superfund Amendments and Reauthorization Act (SARA) was passed by Congress in 1986. However, in 1995 this “polluter pays” tax was allowed to expire. Consequently, the financial reserves in the Superfund trust declined from a surplus of $3.8 billion in 1996 to levels that approach or reach zero at the end of each fiscal year, forcing ordinary American taxpayers to shoulder more of the cost of toxic waste cleanups. Since the depletion of the trust, Superfund has relied on annual appropriations of $1.3 billion or less in tax dollars and on money recovered by the EPA from companies linked to the sites.


Owing to regulatory changes, the cost to taxpayers of cleaning up toxic waste sites increased by more than 400% from 2004 to 2006. Program funding shortfalls like these slow or stop site cleanups and hinder the EPA’s ability to address the backlog of contaminated sites. Cleanups have fallen dramatically since 2001, and fewer than one in five sites have been cleaned up thoroughly enough to be removed from the list since the program’s inception in 1980. By the EPA’s own accounting, Superfund has cleaned up fewer than 400 sites to the point where they can be deleted from the list. Budget cuts to the EPA proposed in 2017 make it likely that cleanups will continue to lag behind earlier rates.

Regulatory Failure: Pollution Trading Rights

For almost three decades, the USA has relied on the regulation of industry to address environmental problems. Under this type of regulatory structure, the federal government establishes uniform national pollution limits that the federal or state governments impose on individual polluters through a system of permits or other regulatory devices. Although the established standards are often weak and inadequate when it comes to protecting public health, this approach has reduced many of the most prolific sources of pollution in the USA. These regulations (such as the Clean Air and Clean Water Acts) raised construction and operating costs for industries and prompted a push for other types of environmental controls. In this context, pollution-trading programs, in which corporations sell “pollution rights” to other companies unwilling to buy expensive pollution control equipment, have become popular. Policy initiatives such as these tend to displace the social and ecological costs of production onto poor communities of color and working-class neighborhoods.

Among the most pervasive and problematic approaches of this type is emissions trading, in which the state gives a corporation the “right” to discharge a set amount of pollution. These pollution rights can then be bought and sold by other companies. In theory, this approach gives companies an incentive to reduce their pollution discharges below the levels allowed by law in order to sell their unused pollution “credits” to other corporations. Companies unwilling to reduce profits by making significant investments in pollution-abatement technology may instead exceed federal emission standards by purchasing ‘excess’ pollution rights from another, less-polluting company.

A number of environmental injustices have arisen with this “open-market” system of environmental regulation. These problems include a general inability on the part of the EPA to verify and enforce the equal trading of emissions credits. It is also much more difficult for residents of poorer communities to find the time, money, expertise, and political access needed to adequately monitor pollution rates and serve as a watchdog in support of EPA enforcement. Furthermore, trading programs do not require industry to compensate residents living near the offending facilities for harms caused by the trades, nor do they allow residents to bargain with the trading partners to prevent the harm from being created in the first place. This compounds the substantial risk of environmental justice “hot spots” in poor communities of color and working-class neighborhoods: when several industrial facilities purchase pollution credits in one geographic area and use these credits to maintain or increase releases of the most dangerous chemical pollutants, toxic hot spots result. The negative environmental and health consequences of industry are thus displaced onto marginalized communities inside and outside the USA. Regulatory structures intended to protect vulnerable communities have been ineffective in significantly reducing the unequal distribution of hazards. Strategies for intervention and for the reduction of environmental health injustices would do well to focus on preventing pollution in the first place, by building a clean, sustainable economy and a democratic society committed to principles of social justice for all Americans.

Possibilities for Intervention

For environmental justice communities, the link between human health and the environment has never been clearer. A variety of health problems have become an unfortunate way of life for many people of color and those of low socioeconomic status.

Grassroots Democracy and Inclusiveness

Environmental justice communities have consistently employed grassroots organizing principles. Through public protests, lobbying, media relations, electoral work, and other direct-action tactics, including mass-based civil disobedience, the environmental justice movement has won a number of important victories in recent years. Organizing efforts have helped to prevent 80% of all planned municipal incinerators; protected the natural resources and unique wilderness areas of many communities; stopped ocean dumping of radioactive wastes and sewage sludge; facilitated the cleanup of toxic waste sites in poor communities of color; and created government policies and programs for addressing environmental injustices at the local, state, and federal levels. Rather than existing as a collection of organizations and networks fighting defensive “not-in-my-backyard” battles (as important as they may be), the environmental justice movement must continue to evolve into a political force capable of challenging the systemic causes of social and ecological injustices wherever they exist.

While the health and economic impacts of the Flint, MI water crisis have not been fully mitigated by organizing efforts, these efforts have contributed to recognition of the problem, some acceptance of responsibility, and some material outcomes. Criminal charges have been levied against more than a dozen current or former state officials in Michigan. In February of 2017, the Michigan Civil Rights Commission issued an apology for ignoring residents’ concerns in a report titled The Flint Water Crisis: Systemic Racism Through the Lens of Flint, saying “the people of Flint have been subjected to unprecedented harm and hardship, much of it caused by structural and systemic discrimination and racism that have corroded your city, your institutions, and your water pipes, for generations.” While the EPA issued a report concluding only that “to avoid future public health harm through drinking water contamination, the EPA needs to clarify for its employees how its emergency authority can and should be used to intervene in a public health threat,” it has awarded a $100 million grant to Michigan for upgrades to infrastructure in Flint. As of 2017, Flint’s water supply had been switched from the corrosive Flint River back to Detroit’s (safer) water supply, but the task of replacing tens of thousands of lead pipes means that for many residents, the water is still not safe to drink; it is expected to take until at least 2020 to fully restore safe drinking water. Coupled with past deception on the part of government officials, some residents question whether they can believe what officials say about the safety of water in their communities.

Dakota Access Pipeline Protests

Despite some promising gains, the environmental justice movement faces considerable challenges in the coming years. The DAPL protests are a telling example of how larger political and economic forces in the USA limit the capacity of grassroots environmental justice movements to challenge development. The DAPL protests were locally led, with strong leadership from indigenous peoples; were international in scope, with protestors from around the world traveling to participate; and leveraged digital media to raise awareness and gain support. Rooted in place by the Sacred Stone camp on Sioux tribal lands, the DAPL protests exemplified how grassroots movements can be dynamic, place-based and led by people of color. However, the limits of such mobilizations become clear when the scale of the opposition is considered: corporate economic development and the corporate nature of the political environment in the USA suggest that even the most well-planned movements will continue to face challenges from state and corporate entities. Since 2016, over half of the states in the USA have proposed legislation that would restrict environmental justice-related protests, including increased penalties for criminal and noncriminal offenses relating to protests and demonstrations. Unless movements for environmental justice can address the political-economic dynamics of capitalism that force communities to make such tradeoffs, the conception of environmental justice as greater participatory democracy and an end to racial discrimination remains limited. Further, while increased participatory democracy by popular forces in governmental decision-making and community planning is desirable (if not essential), alone it is insufficient for achieving true environmental justice.

Sustainability, Clean Production, and Precaution

Sustainable development is a process that could assure human health and environmental safety and reduce human impact on the environment. This type of development requires a transition to cleaner modes of production within industry, increased pollution prevention measures to reduce toxic output, development of technologies to reduce and manage environmental health hazards, and the implementation of laws, policies and regulations committed to promoting equality in environmental health outcomes. Sustainable practices would include adopting pollution prevention measures, which eliminate the use of dangerous chemicals and production processes, rather than relying on costly and ineffective pollution control measures, which aim to contain and distribute environmental hazards once they are produced.

A transition to clean production and the use of the precautionary principle are key components of a more “productive” discourse on environmental health. Clean production is a proactive approach to managing the environmental impacts of production that includes changes in technology, processes, resources, or practices to reduce waste and environmental and health risks; more efficient use of energy and resources; and minimizing damage to the environment. The precautionary principle posits that if there is a strong possibility of harm to human health or the environment from a substance or activity (rather than scientifically proven certainty of harm), precautionary measures should be taken. Standard environmental policy approaches in the United States use risk assessments to determine “acceptable” levels of public exposure to industrial pollutants, applied as a general standard across industry. From an environmental justice perspective, however, there are significant flaws in this approach. For instance, policy makers often assume that “dilution is the solution,” whereby the wide dispersion of environmental pollution from various sources produces what are considered safe levels of public exposure. Unfortunately, where pollution is highly concentrated in certain communities, this approach can be grossly inadequate. For many environmental justice communities, dilution is not an option, and pollution control measures may not be sufficient to reduce risk to the “acceptable” level of exposure.

Additional research should continue to examine how various chemical exposures affect human health. Scientific studies that make concrete connections between health outcomes or particular diseases and environmental toxins are vital to the passage of legislation that protects the health and safety of all individuals. Continued research into cleaner methods of production, as well as safer alternatives to hazardous chemicals, is necessary to help reduce community exposure to toxins until more comprehensive social and political changes can be made.

Summary

The struggle for environmental justice and the protection of human health is a complex social, political and economic issue. Given the historical legacy of injustices affecting the health of poor, minority and indigenous communities, efforts to remedy such violations must draw on a variety of solutions. Grassroots organizing tactics, which have long been a focal point of the environmental justice movement, must continue as a daily reminder of the ongoing negative human health consequences affecting communities. Efforts toward greater social and economic justice must also be considered as a strategy for intervention. Poverty, racism and unequal access to resources cannot be ignored as underlying causal factors in the fight for a healthier and more just society. Along with this, production and public policy must begin to incorporate an understanding of sustainability and an awareness of the relationship between human health and the environment.

See also: Climate Change, Environmental Health, and Human Rights; Environmental Justice: An Overview.

Further Reading

Agyeman, J., Bullard, R., Evans, B. (Eds.), 2003. Just sustainabilities: Development in an unequal world. Transaction Books, London, England.
Brown, P., 2007. Toxic exposures: Contested illnesses and the environmental health movement. Columbia University Press, New York.
Bullard, R., 2005. The quest for environmental justice: Human rights and the politics of pollution. Sierra Club Books.
Bullard, R. (Ed.), 1994. Unequal protection: Environmental justice and communities of color. Sierra Club Books, San Francisco.
Faber, D., 2008. Capitalizing on environmental injustice: The polluter-industrial complex in the age of globalization. Rowman & Littlefield, Lanham, MD.
Faber, D. (Ed.), 1998. The struggle for ecological democracy: Environmental justice movements in the United States. Guilford Press, New York.
Faber, D., Harden, M., 2002. The fight for healthy and safe communities: Uncovering EPA’s anti-civil rights agenda. A report by the National Black Environmental Justice Network.
Lanphear, B., Weitzman, M., Eberly, S., 1996. Racial differences in urban children’s environmental exposures to lead. American Journal of Public Health 86 (10), 1460–1463.
O’Brien, M., 2000. Making better environmental decisions: An alternative to risk assessment. MIT Press, Cambridge, MA.
O’Neil, S.G., 2007. Superfund: Evaluating the impact of Executive Order 12898. Environmental Health Perspectives 115 (7).
Paehlke, R.C., 1989. Environmentalism and the future of progressive politics. Yale University Press, New Haven.
Tickner, J., 1999. Protecting public health and the environment: Implementing the precautionary principle. Island Press, Washington, D.C.

Relevant Websites

https://www.epa.gov/ – United States Environmental Protection Agency (EPA).
https://www.cdc.gov/nceh/publications.htm – Centers for Disease Control (CDC), National Center for Environmental Health.
https://portal.hud.gov/hudportal/HUD – United States Department of Housing and Urban Development (HUD). The Healthy Homes Program.

Environmental Justice: An Overview

Gordon Mitchell, The University of Leeds, Leeds, United Kingdom
© 2019 Elsevier B.V. All rights reserved.

Change History: April 2018. Gordon Mitchell was involved in the update. All sections have been amended, but substantial additions are limited to sections 2 and 5. Figure 4 has been added. This is an update of G. Mitchell, Environmental Justice: An Overview, in: Nriagu, J.O. (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 449–458.

Introduction and Scope

Environmental Justice (EJ) is concerned with the fair distribution of environmental costs and benefits, and has emerged as a major theme in the sustainable development paradigm, bridging the key goals of environmental protection and social justice. EJ is conceptually very broad, addressing fair distributions of environmental risks and impacts, as well as of environmental capital (environmentally derived goods and services), between generations, within current generations, and between people and the natural world. At the international level, many environmental impacts can be interpreted within the EJ framework. For example, several low-lying Pacific island states are threatened with total disappearance due to climate change induced sea level rise, and others may become uninhabitable due to increased coastal storm damage and saltwater contamination of groundwater. In comparison to industrialized nations, these small island states have made a negligible contribution to the anthropogenic greenhouse gas emissions driving climate change, yet they will bear a disproportionate share of the impacts. Overall, roughly four-fifths of the world’s natural resources are now consumed by one-fifth of the global population, representing an unequal appropriation of natural capital and an unequal distribution of environmental impacts such as pollution of soil, air and water, fisheries collapse, land degradation, deforestation and loss of biodiversity.

Resource and carrying capacity analysis, using tools such as ecological footprinting, shows that the aspiration of nations to achieve western lifestyles and associated per capita consumption can only be met by running down environmental resources and degrading the ecosystems that provide essential services. This cannot be sustained in the long term, as such development is fuelled by living off environmental stock rather than the environmental income derived from that stock. Thus it is increasingly recognized that, on a global scale, there are insufficient environmental resources to enable developing nations to reach the level of consumption observed in developed nations in a long-term, or sustainable, manner. However, despite falls in global inequality since the 1990s, largely due to economic development in India and China, the latest figures (2013) show that 11% of the world’s population still live on less than $1.90 a day, and they cannot justifiably be denied their right to develop so as to achieve a better quality of life. There are thus enormously important issues concerning the just distribution of environmental goods, services and impacts, which become starkly manifest at the international scale.

However, EJ issues similarly occur at national to neighborhood scales, where historically a narrower focus on the intra-generational equity aspect of the broader EJ concept, particularly that related to exposure to environmental hazard, has been common. In this context, Cutter (1995) defined EJ as “equal access to a clean environment and equal protection from possible environmental harm irrespective of race, income, class, or any other differentiating feature of socio-economic status.” This narrower conception of EJ, addressed in the remainder of this article, has its roots in the work of Freeman, who proposed that environmental risk could be integrated within a theory of individual choice and welfare.
Freeman (1972) found a relationship between pollution and income for US cities, and argued that, as the distribution of environmental quality was theoretically produced by its interaction with income and market forces, improving the distribution of wealth would lead to an improved distribution of environmental quality. Such research, however, did not gain significant momentum until the 1980s, when US civil rights activists became concerned that landfills and hazardous industries were invariably sited within predominantly black communities or on indigenous peoples’ reservations. Numerous studies, many conducted by the activists themselves, reported that minority populations were exposed to a disproportionate burden of environmental hazards and associated adverse health effects. Such exposure was often considered unjust, as the exposed minority populations gained a disproportionately small share of the benefits of the polluting industries. The EJ movement has roots in Asia, Africa, Europe and the Americas, but because of the sustained nature of its grassroots activism, the United States is widely viewed as the principal origin of the EJ movement. Concerns there over environmental inequality led to the establishment in 1992 of the US EPA Office of Environmental Justice, and 2 years later to the passing of President Clinton’s 1994 Executive Order 12898, which requires Federal agencies “to address environmental justice as part of their overall mission, and to identify and address disproportionately high, adverse human health or environmental impacts of policies, programs and activities on minority and low income populations.” EJ is now an important part of environmental and public health policy in the United States, with all Federal agencies obliged to address EJ issues. The EPA continues to be the lead Federal agency on EJ, addressing EJ issues in planning and decision-making, and defining fair treatment as that where no group of people bears a disproportionate share of the environmental and adverse health impacts of development.


In the United States, EJ policy developed in response to directly expressed concerns of civil society, but elsewhere EJ policy has emerged more recently, largely in response to intergovernmental agreements on human rights, increasingly seen as a mechanism for achieving environmental sustainability. These rights include the right to a clean and safe environment, the right to act to protect the environment, and the right to environmental information and to participation in decisions affecting the environment. These rights were defined in principle in the 1992 Rio Declaration on Environment and Development, and are being implemented through subsequent instruments, including the United Kingdom’s 1998 Human Rights Act and, in Europe, the Aarhus Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters, whose objective is to “contribute to the protection of the right of every person of present and future generations to live in an environment adequate to his or her health and well-being” (UNECE, 1999). The Aarhus Convention obligations are implemented in the EU via legislation that gives citizens greater access to environmental information (Directive 2003/4/EC) and enhanced participation in decisions affecting the environment (Directive 2003/35/EC). An environmental justice directive was also proposed, to give citizens the right to initiate administrative or judicial procedures against acts or omissions by private persons or public authorities that do not comply with environmental law. No agreement on this has been possible, owing to concerns from public authorities and business over the costs and delays of potential environmental justice court cases; hence in 2017 the EU drew on all prior environmental justice legal cases to provide clear guidance to national courts on the interpretation of existing EU environment law within a justice context. The Aarhus Convention thus provides individuals and groups with enhanced avenues to address procedural failings and breaches of environmental laws that contribute to environmental inequality and injustice.

Evidence for Environmental Inequality

Studies investigating the social distribution of environmental pollution began appearing in the United States during the 1970s, some commissioned by the EPA. The principal geographic focus was major urban centers, and the environmental issues addressed included air and water quality, solid waste disposal, and noise. Two main conclusions were drawn from these early studies. First, the extent and quality of the environmental data available at the time was considered poor, constraining robust and spatially consistent analysis. Second, it was concluded, albeit tentatively, that social gradients in environmental burdens did occur. However, the observed patterns were not consistent between cities, and environmental burdens were not always greatest in the more marginal communities. For example, in some cities the environmental burdens were greatest in high density residential areas, which variously included those predominantly occupied by high, middle, and low income households, depending on the city studied. Overall, however, these early studies suggested that poor households, and particularly poor black households, tended to predominate in those areas where the most serious environmental quality problems occurred.

The US EJ movement gained its major impetus in the early 1980s following celebrated cases at Warren County, North Carolina, and Love Canal, New York, where black and low income communities were fighting against problems brought by major disposals of PCBs and other toxic wastes in their communities. Hundreds of studies subsequently emerged assessing the relationships in the United States between waste disposal sites, or other environmental hazards, and the socio-demographic characteristics of the areas in which the hazards were located. Highly influential studies leading up to Executive Order 12898 included that of the US General Accounting Office, which found that hazardous waste facilities in one EPA region were disproportionately located in black communities, and the United Church of Christ Commission for Racial Justice study ("Toxic Waste and Race"), probably the first nationwide EJ study, which concluded that race was the determining factor in the distribution of chemical hazard exposure in the United States, and hence coined the phrase "environmental racism."

Despite the abundance of EJ studies, the evidence base for environmental inequality in the United States, where the bulk of the empirical studies has been conducted, was much weaker than perceived at the time. This was due to a range of methodological problems that occur repeatedly in EJ studies (see next section), and because many of the studies, including those that received most attention, were conducted by policy advocates whose primary goal was not scientific rigor but the advancement of other interests. William Bowen and colleagues at Cleveland State University reviewed over 200 EJ studies published between 1980 and 1998, and found that only about 20% could be considered empirical studies with appropriate research designs, and that only 17 studies in total met an acceptable standard of scientific scrutiny. They therefore concluded that the body of EJ research demonstrating environmental inequality could not be considered "immense," a common claim, and that the evidence base in the United States was relatively small, heterogeneous, and at best indicated, but did not demonstrate, that in specific areas some identifiable groups live closer to selected environmental hazards.
This relative lack of empirical rigor is one reason why many EJ class actions brought against civil authorities in the United States were unsuccessful. Many of the US studies were subsequently discounted on the grounds of inappropriate research design, particularly the use of geographically confined case studies, which cannot logically be used to infer a more general pattern. Nevertheless, such a case study approach also characterized the initial environmental equity studies conducted outside the United States, including those in the United Kingdom and New Zealand, where analysts sought to characterize the social gradients of environmental pollution through analysis of major cities. Amongst these studies, air quality was the most widely studied environmental issue, with a dozen UK studies published from 1998 to 2002. However, despite the common focus of this research, robust evidence of environmental inequality remained elusive, as these case studies addressed different cities, spatial units of analysis, air quality parameters, and social variables (deprivation and income were the key focus, but ethnic composition and age were also investigated). These studies found no association between air quality and age, but did find associations with ethnic status and with deprivation. However, in the case of deprivation, the social variable most widely studied, results were inconsistent, with some studies finding the poorest air quality in deprived neighborhoods, and others in the most affluent. Consequently, advice offered to government officials responsible for air quality policy in the United Kingdom was contradictory, with analysts variously arguing for or against targeting remedial instruments, available under the National Air Quality Management Strategy, at deprived neighborhoods.

A nationwide small-area approach can do much to overcome problems of representative sampling and the selection of appropriate comparison areas. In the United Kingdom this approach was adopted in several studies, including some that associated air quality, reported on a 1 km grid basis for the entire nation, with socio-demographic data available for all 10,500 population census wards, containing roughly 60 million people. Such analysis reveals that whilst wards with high levels of deprivation do indeed experience poorer air quality than those of average means, the least deprived also experience poorer air quality (Fig. 1; the decile construction used there is sketched below). Such an approach helps to make sense of contradictory results from sample-area studies, such as individual cities, whose geographies may differ widely. Here, the higher exposure of the least affluent is assumed to be a product of an implicit process of trading off the environmental disbenefits for other benefits offered by their chosen residential location, so as to maximize household welfare.

These analyses also reveal how sensitive evidence of inequality can be to the analytical method. Figs. 1 and 2 both characterize air quality in 2001 using mean annual concentration data for small census areas (wards). However, whilst Fig. 1 addresses all census wards in England, containing some 50 million people, Fig. 2 includes only those wards where the mean annual concentration is above the air quality standard prescribed by EU law (40 µg NO2 per m3) and intended to protect public health. Fig. 2 reveals a very much more unequal distribution, with over half of the 2.5 million people in England resident in wards where air quality is in breach of the prevailing standard living in the most deprived 20% of wards. Such analyses have also been conducted for a range of other environmental parameters, including flood risk, the location of regulated hazardous industrial facilities, and a range of other air quality parameters, and reveal that in the United Kingdom there is strong evidence for unequal social distributions of certain environmental burdens at the national level (and also that some areas experience cumulative impacts from multiple environmental inequalities).

Internationally, interest in environmental justice has grown substantially since the early 2000s, and many countries have sought to develop their own evidence base on environmental inequalities. Interest has been particularly strong in Europe, but studies have been developed worldwide, including across the Far and Middle East, Australasia, and South America. Collectively, these analyses cover a wide array of methods, social metrics, and environmental issues, now including not just pathogenic issues such as air quality, with clear implications for physical health, but also salutogenic environments (e.g., greenspace access and quality, quiet) considered important under wider conceptions of well-being.
EJ studies are also developing new cross-disciplinary areas. For example, major attention has been devoted over the last decade to mapping and valuing the flows of services (such as the regulation of pollution or flood risk) from ecosystems, and work is now seeking to understand how these ecosystem services are distributed socially as well as spatially. Similarly, research on health inequalities and on environmental inequalities has to date largely been conducted in disciplinary silos, but more explicit attention is now given to understanding the role of environmental gradients in health inequality. This extends to the inclusion of socially disadvantaged groups in epidemiological studies. The weight of evidence from environmental justice research shows that socially disadvantaged groups are more exposed to environmental pollution, but these groups are under-represented in biomedical studies, biasing our understanding of the health outcomes of environmental inequalities.

[Figure: bar chart of the mean of census ward mean NO2 concentrations (µg/m3, annual mean) by deprivation decile.]
Fig. 1 Relationship of deprivation to mean annual air quality, England, 2001. (a) All 8414 English census wards, accounting for 49.5 million people in 2001. (b) Ward deprivation measured using the Index of Multiple Deprivation; wards placed in deprivation rank order, then equal population count deciles created. Decile 1 contains the most deprived wards, decile 10 the least deprived. (c) Bars denote 95% confidence intervals.
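The equal-population decile construction described in the Fig. 1 caption can be sketched in a few lines of Python with pandas. This is a minimal illustration, not the original study's code; the column names (imd_score, population, no2_mean) are hypothetical placeholders.

```python
# Sketch: rank wards by deprivation, cut into ten equal-population deciles,
# and return the mean of ward mean NO2 concentrations per decile.
# Column names are hypothetical, not those of the original study.
import pandas as pd

def no2_by_deprivation_decile(wards: pd.DataFrame) -> pd.Series:
    """Decile 1 = the most deprived tenth of the population (not of the wards)."""
    df = wards.sort_values("imd_score", ascending=False).copy()  # most deprived first
    share = df["population"].cumsum() / df["population"].sum()   # cumulative population share
    df["decile"] = (share * 10).clip(upper=9.999).astype(int) + 1
    return df.groupby("decile")["no2_mean"].mean()

# Usage (toy data):
# wards = pd.DataFrame({"imd_score": [40, 10, 25],
#                       "population": [5000, 7000, 6000],
#                       "no2_mean": [38.0, 22.0, 30.0]})
# print(no2_by_deprivation_decile(wards))
```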

[Figure: bar chart of population (thousands) in census wards where NO2 exceeds 40 µg/m3 as an annual mean, by deprivation decile.]
Fig. 2 Relationship of deprivation to exceedance of the annual mean air quality standard, England, 2001. (a) All 8414 English census wards, accounting for 49.5 million people in 2001, of which 2.5 million (shown here) are resident in wards with NO2 concentrations above 40 µg/m3 as an annual mean, the standard prescribed by EU legislation designed to protect public health. (b) Deprivation deciles as in Fig. 1.

Analysis Issues

There are no prescribed methods in EJ analysis, but a range of methodological issues requiring the consideration of any analyst wishing to characterize inequalities has emerged from critical analysis of past EJ research. These issues include the selection of the community of concern and of appropriate environmental parameters, and issues of data quality, spatial analysis, and statistical methods.

The target community of concern, the social or demographic group most relevant to the problem, must be identified in any EJ analysis. Target groups are often proposed by community advocates, but for results to have significance for public authorities, target groups need to reflect wider political concerns. For example, the US EJ Executive Order requires that minority and low income communities are not discriminated against, with the Civil Rights Act defining target groups according to race, color, national origin, sex, age, and disability. Clearly this leaves considerable scope, and hence it is also important to consider the potential impact pathways of the environmental attribute of interest and the inherent differential susceptibilities of various populations. For example, children may be more susceptible than adults to some environmental pollutants, and hence merit special attention. Evidence that children experience higher exposure to environmental pollution is indicated by past studies, including the national small-area studies of air quality conducted in the United Kingdom. Similarly, indigenous populations may be at greater risk due to cultural practices not found in the wider population. Examples include Alaskan native villagers, whose high dietary intake of fish places them at greater risk from persistent pollutants that bioaccumulate in the environment, and certain tribes in Northern California who suffer above-average health risks from pesticides due to their basket-weaving culture.

The environmental focus of an EJ study may be health, safety, amenity, or other measures of wider economic welfare. Proximity analysis is widely used, and can be well suited to scoping possible effects, or to investigating issues such as equity in compliance with environmental legislation, the resource investment of regulatory bodies, or economic impacts on property values. However, whilst proximity analysis is simple and economical to apply, it is a blunt instrument when addressing health inequalities, the most common cause of concern in EJ analysis. Table 1 presents a range of alternative approaches to characterizing the environmental health risk parameter that place progressively less reliance on the assumption, implicit in much of the early EJ literature, that proximity to a hazard is equivalent to a poor health outcome. The more direct approaches are increasingly demanding and resource intensive to apply; hence, in selecting the environmental metric, EJ analysts must consider the limitations of each approach with respect to the program objectives.

The environmental data used in EJ analyses have usually been collected for some other purpose, such as the regulation of environmental laws, which may further limit their ability to act as a surrogate for health risk in EJ assessment. For example, many past EJ studies have relied on chemical release inventories maintained by regulators.
In such inventories it is often the case that small scale but numerous facilities are exempt, that emissions are not monitored but are estimated indirectly using nonstandardized techniques, and that self-reporting of emissions without third party verification is common. Inventories also usually address only annual emissions; hence temporal effects, including acute health impacts associated with peak discharges, are ignored. Furthermore, the health impacts of many chemicals addressed in such inventories are not well known, particularly when chemicals act additively or synergistically, or with respect to specific minority populations that may have differential susceptibility.

Results of environmental equity analyses are also sensitive to the spatial design of the study. One well known geographical phenomenon, the Modifiable Areal Unit Problem, where the results of an analysis are scale dependent, is evident in equity analysis. For example, the landmark United Church of Christ study (see above) concluded that toxic waste facilities in the United States were disproportionately sited in minority and low income communities, but a subsequent replicate analysis using smaller spatial units found that the association was very much weaker.

Table 1 Relative merits of environmental equity assessment approaches

Approach | Strengths | Weaknesses
Proximity analysis | Easiest to apply; economical; able to capture nonhealth impacts | Poorest approximation to actual health risk
Emission monitoring or modeling | Easy to apply; economical | Very poor approximation to health risk
Concentration monitoring or modeling | Widely available data, often supportive of longitudinal analysis; good spatial coverage; may have publicly agreed standards to address | Poor substitute for human exposure and health risk
Concentration monitoring or modeling of micro-scale environments | Good estimates of exposure and health risk | Lack of data; establishment of time-activity patterns of target populations difficult and costly
Internal dose assessment using personal monitors or biological markers | Best estimate of health risk | Difficult and costly; problems of small samples
Epidemiological assessment of received dose | Most accurate measure of health outcome | Difficult and costly; limited knowledge of dose-response

Based on Liu, F. (2001). Environmental justice analysis: Theories, methods and practice. Boca Raton, FL: CRC Press.

Assuming that results from a large scale apply at a smaller scale is known as the ecological fallacy. The converse, the individual fallacy, is also a danger, and occurs when results from a small study area are assumed to apply to a larger one. This problem of inappropriate extrapolation is evident in the early UK air quality analyses, where city scale results were incorrectly assumed to apply at the national scale, but did not, as Fig. 1 reveals. A related problem where case study work is undertaken is the selection of a suitable comparison area. For example, if an urban area is analyzed and then compared to the wider region, any conclusion of unequal distribution in environmental risk may be a result of confounding factors that operate differentially in urban and rural areas.

Proximity studies also suffer from problems with the boundary and shape of the geographic study unit. The boundary problem occurs where the spatial analysis introduces error into the geographical association of environmental hazard and target population. Thus houses and a point source emission may both be located in the same spatial zone but in practice be physically quite distant, or they may be located in different but adjacent zones, where the risk of exposure is higher in practice than represented in the spatial analysis. Proximity studies often address this problem by drawing a buffer around an environmental hazard, such as an emission source, with the buffer then used to "capture" at-risk populations for subsequent analysis. Population census zones to be associated with the hazard may be those whose whole area falls within the buffer, or just their zone centroid. These so-called polygon and centroid containment methods lead to over- or underestimation of at-risk populations, respectively, especially for irregularly shaped zones; hence capturing only the population within the hazard buffer is preferable (the capture variants are sketched below). However, buffer containment usually assumes that populations are evenly distributed within zones, which is rarely the case. Results can also be sensitive to buffer size, which should be selected to best represent the spatial extent of the facility impact, but which in practice is rarely known. Furthermore, buffers have a uniform shape, usually circular for point hazards, and poorly reflect pollutant flow paths that are affected by wind or water movement.

A further spatial problem is that of boundary instability, where zone boundaries change over time. Many census units, for example, are designed to contain roughly the same number of people, and hence boundaries are periodically revised. This makes temporally consistent analyses, which are important in the interpretation of results (see below), difficult. A lack of temporally consistent data is a common problem in EJ analysis, affecting both population and environmental variables, and is one of the reasons that relatively few longitudinal analyses have been conducted. Whilst the use of a GIS facilitates more sophisticated analyses than were previously possible, it is evident that there is a series of spatial analysis issues that the EJ analyst must be aware of in study design. Ignoring these issues means that results may emerge which are statistically significant but meaningless as a basis for further decision making.
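The buffer capture variants just described can be sketched with geopandas. This is a minimal sketch under stated assumptions, not a prescribed EJ method: the layer name (zones.gpkg), the population column, the site coordinates, and the 1 km buffer are all hypothetical, and a projected coordinate reference system in metres is assumed.

```python
# Sketch of buffer "capture" methods for proximity analysis (geopandas).
import geopandas as gpd
from shapely.geometry import Point

zones = gpd.read_file("zones.gpkg")   # hypothetical census zones with a population column
site = Point(430000, 290000)          # point source of hazard (illustrative coordinates)
impact = site.buffer(1000)            # buffer size is rarely known in practice

# Polygon containment variant: count every zone the buffer touches
# (tends to over-capture the at-risk population)
poly_pop = zones.loc[zones.intersects(impact), "population"].sum()

# Centroid containment: count zones whose centroid lies in the buffer
# (tends to under-capture, especially for large or irregular zones)
cent_pop = zones.loc[zones.geometry.centroid.within(impact), "population"].sum()

# Buffer containment: apportion population by the share of each zone's area
# inside the buffer; this assumes population is spread evenly within zones,
# which, as noted above, is rarely true
share = zones.geometry.intersection(impact).area / zones.geometry.area
buff_pop = (zones["population"] * share).sum()

print(poly_pop, cent_pop, buff_pop)
```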
Generally, increasing the size of the study area and reducing the spatial unit of analysis is desirable, as this better addresses the problems identified above by placing less reliance on sample populations, and so provides a better characterization of the environmental risk experienced by those populations.

A further research design issue in environmental equity analyses is how best to characterize the association between the environmental and social variables. Visual comparison of mapped data is occasionally used, but objective statistical tests are preferred. Associations identified with bivariate statistics (difference tests, correlations, two-variable regression) have often subsequently been proven false due to the effect of confounding variables. For example, many past studies failed to account for a correlation between ethnicity and income, and so wrongly concluded that environmental inequality occurs with respect to race rather than income. Multivariate analysis is desirable to control for confounding variables and to determine their relative importance in explaining the distribution of environmental risk. In equity analysis, linear regression has been a very popular means of doing this (with Probit or Logit tests for discrete choice data, such as the presence or absence of an environmental hazard). However, until recently, few studies adequately reported diagnostic tests of the underlying assumptions (such as nonlinearity, multicollinearity, and heteroskedasticity), and model misspecification was common (e.g., relevant variables omitted, linear regression used to address nonlinear relationships between exposure and distance). Gini indices are proving useful for characterizing environmental inequality within a population (although no significance tests are possible), whilst the move away from small population case studies towards small-area national studies allows a more powerful description of the social distribution of environmental hazard using simple univariate methods.
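The confounding problem described above can be made concrete with a short simulation. The sketch below uses synthetic data in which exposure is driven by income alone, while a minority share variable is merely correlated with income; a bivariate regression then falsely attributes the gradient to the minority share, whereas the multivariate model does not. All variable names and data are illustrative, not from any cited study.

```python
# Sketch: bivariate vs. multivariate regression under confounding (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(size=n)
minority_share = -0.6 * income + rng.normal(scale=0.8, size=n)  # correlated with income
exposure = -1.0 * income + rng.normal(scale=0.5, size=n)        # driven by income only

# Bivariate: minority_share appears strongly associated, purely via confounding
print(sm.OLS(exposure, sm.add_constant(minority_share)).fit().params)

# Multivariate: controlling for income shows minority_share adds almost nothing
X = sm.add_constant(np.column_stack([income, minority_share]))
print(sm.OLS(exposure, X).fit().params)
```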
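As an illustration of the Gini approach mentioned above, the following sketch computes a population-weighted Gini coefficient of an exposure variable from the Lorenz curve. It is a generic construction under the stated assumptions, not code from the EJ literature, and, as the text notes, no significance test accompanies the index.

```python
# Sketch: population-weighted Gini index of exposure (e.g., ward mean NO2).
import numpy as np

def exposure_gini(exposure, population):
    """0 = all residents share the same concentration; values near 1 indicate
    that a small share of the population bears most of the exposure."""
    x = np.asarray(exposure, dtype=float)
    w = np.asarray(population, dtype=float)
    order = np.argsort(x)
    x, w = x[order], w[order]
    p = np.concatenate(([0.0], np.cumsum(w) / w.sum()))            # cumulative population share
    L = np.concatenate(([0.0], np.cumsum(x * w) / (x * w).sum()))  # cumulative exposure share
    area = np.sum(np.diff(p) * (L[1:] + L[:-1]) / 2.0)             # area under Lorenz curve
    return 1.0 - 2.0 * area

# e.g., exposure_gini(wards["no2_mean"], wards["population"])
```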

Unequal or Unfair?

Despite the abundance of environmental equity research, class actions brought against US civil authorities on the grounds of unjust planning decisions have proved largely unsuccessful, in part due to poor empirical analyses that failed to provide convincing evidence of environmental inequality. However, even where environmental inequality clearly does occur, it must not automatically be considered unjust, as the "fairness" of the observed patterns may depend on how they arose, and on the justice theory that is applied.

The first issue, causality, is considered irrelevant by some EJ advocates, who see the observed pattern as the important issue, not how it was produced. This perspective is readily seen in Fig. 2, which many would argue is unjust regardless of the processes producing it, as the inequality concerns compliance with a legally binding environmental standard primarily set to protect public health. With other environmental parameters the interpretation is less clear, and when judging the "fairness" of a distribution, and how to respond to it, it is preferable to understand what processes gave rise to it. To prove cause and effect, the following must be demonstrated: (i) covariation (the variables are correlated); (ii) that relationships are not spurious (the correlation cannot be explained away by a third variable); and (iii) the time order of occurrences (cause precedes effect). Most EJ studies to date have been cross sectional, investigating the social distribution of environmental hazard at one point in time and addressing the first two causal criteria; few have conducted longitudinal studies in which temporal changes in environmental equity, and the time order of occurrences, are investigated. Fig. 3 illustrates this with three simple cases, and indicates that determining which came first, hazard or community, is important: if the minority community arrived after the siting of, say, a noxious facility, then logically, malicious intent in the siting decision is not possible, and the observed inequality has arisen in another way. Some studies of this type have concluded that policy to reduce environmental inequality should focus not on siting decisions but on housing or employment policy.

In addition to outright discrimination, "economic theory" is one causal process producing environmental inequality that is usually considered unjust. Here, developers of hazardous facilities deliberately site them in minority communities where they believe collective action against them, or compensation for damages, is likely to be minimized (more affluent communities are widely perceived as more effective in deflecting unwanted developments). However, a variety of other processes have been theorized which may also produce such patterns (these have not been robustly tested, usually due to the lack of adequate time series data). Four main theories can be identified, relating to locational choice, risk perception, neighborhood transition, and planning practice.

[Figure: timelines of community settlement and facility siting for three cases.]
Case A: a hazardous facility is sited in an existing community (potentially unjust/discriminatory).
Case B: people move into an area with a facility thought to be safe, but later proven hazardous (unlikely to be unjust/discriminatory).
Case C: people move to an area known to be hazardous, attracted by job opportunities or housing (unlikely to be unjust/discriminatory).
Fig. 3 Some scenarios of the evolution of environmental inequity in hazardous facility siting.

Location theory argues that households locate to meet a package of needs and will move so as to maximize utility (a theory also known as the Tiebout hypothesis). As the valuation of environmental quality and safety is not uniform amongst households of different income (the affluent are deemed to give these attributes a higher value), an environmental inequality will result. A similar utility maximizing process may also operate with respect to firms, which seek to minimize costs and so may locate in low income areas, where land is cheaper and labor available, and which as a consequence may be more likely to house hazardous facilities.

Risk theory argues that, in a similar way to the Tiebout model, people's perception of risk varies according to personal and social group characteristics, thus affecting household location. For example, those who value the environment less than average may also perceive environmental risk as lower than average, and so will locate closer to the risk than average. An individual's response to risk may also vary according to the cultural group to which they belong.

Neighborhood transition theory argues that minority groups are brought closer to certain hazards through a variety of interactions between households. For example, invasion-succession sees minorities arrive in a neighborhood (with hazardous facilities), do well, and make it more attractive for other minorities, who locate there, creating a concentration of minority households in proximity to the hazard. Alternatively, the location of a hazardous facility in an area may cause affluent householders to move away, with their properties then reoccupied by low income groups attracted by the improved quality of housing available, again producing a concentration of minority households in close proximity to the hazard.

Finally, the land use planning system usually acts to protect high quality environments by directing threats to environmental quality towards areas that are already degraded. Thus it is theorized that risks and hazards can be concentrated by the housing market: people with the means to live in higher quality protected environments do so, leaving lower income groups to locate in unprotected areas where hazards are more common. Additionally, housing developers seek to maximize returns by developing properties for high income households in areas of high environmental quality, with social housing more likely to be directed to areas of lower environmental quality.

Where environmental inequalities are observed, the interpretation of injustice is also sensitive to the justice theory adopted, which provides guidance on how risks and benefits should be distributed to make a society "fair." That is, should distributions be made according to merit, need, or entitlement? These ideas are formalized in theories of utilitarianism (maximize net benefit to society), egalitarianism (distribute costs and benefits equally to all), contractarianism (improve the conditions of the least well off), and libertarianism (maximize freedom of choice and action). Depending upon the justice theory applied, a policy or development causing a shift in the social distribution of environmental quality may be seen as more or less just. If the goal is to improve the condition of the most disadvantaged, for example, then measures that deliver a higher net benefit may not be appropriate.

Remedy and Response

For some, developing an understanding of how environmental inequalities arise is an important factor in judging whether those distributions are also unjust. However, the processes giving rise to inequalities are complex, defy simple analysis, and have consequently received relatively little attention in EJ research. It is argued, however, that this lack of evidence on process should not automatically be a barrier to mitigating environmental inequalities where they occur, a view strongly held by those EJ advocates who consider the observed distribution to be the most important factor in determining whether unequal is unfair. Thus, for both practical and theoretical reasons, whilst desirable, a deep understanding of causality is not currently considered a prerequisite to developing measures to mitigate unacceptable inequalities.

How, then, are environmental inequalities tackled? One approach is intervention to reduce the proximity of minority communities to environmental hazards. This may take the form of redirecting the hazard, using the planning system, for example, to ensure that particular communities do not become "hazard havens" where the presence of one hazard makes it easier to gain development consent for others. Difficulties with this approach include the issue of whether distributing hazards more widely will place more people in total at risk, the NIMBY (not in my backyard) syndrome, and whether it is right to deny communities the wider benefits, such as jobs, that development projects may bring to an area. Conversely, the proximity of hazard to minority community may be reduced by policies that reduce the factors restricting where minority households are able to live, and which encourage social mixing. Housing development policy, for example, can be used to ensure that new developments contain a significant share of social housing, a policy adopted in a number of Nordic countries.

An alternative approach is to raise environmental quality, reducing the risks posed to households by the environment. This environmental quality enhancement may be achieved by targeted intervention in selected areas, usually those deemed of unacceptably low environmental quality with a high representation of the minority group of concern (so-called "pollution-poverty" hotspots). A less proactive approach is to assume that "a rising tide lifts all boats," that is, to rely on geographically widespread environmental quality improvement and assume this will benefit minority populations and reduce inequalities. The relative lack of longitudinal EJ analyses means that firm conclusions as to whether this practice is effective are not possible. The best evidence to date comes from analysis of a decade of air quality change across the United Kingdom (Fig. 4, an update of the study shown in Figs. 1 and 2). This revealed that where air quality improved (NO2), it did so more slowly for people resident in deprived communities, whilst deterioration (fine particulates) was faster in the more deprived areas. This may be a consequence of the more polluted initial conditions experienced by deprived communities (i.e., more air quality improvement is needed to achieve "good" air quality than in less polluted, more affluent areas). In health terms, the improvement in air quality does imply a significant reduction in the overall respiratory disease burden, including in deprived areas, but an increase in health inequality. This evidence supports the assertion of political scientists who have argued, from a theoretical perspective, that environmental protection and social justice are not always compatible goals, but further studies are needed before drawing a wider conclusion.

[Figure: line chart (log scale) of GB exceedance population and mean concentration by deprivation quintile, 2001–2011.]
Fig. 4 The changing social distribution of air quality in Britain, 2001–2011. GB population in lower super output areas (LSOAs) where NO2 exceeds the 40 µg/m3 annual average legal limit value. Q1 is the least deprived fifth, Q5 the most deprived fifth. Concentration values are the mean of annual average concentrations for LSOAs where the NO2 concentration exceeds 40 µg/m3. NB: log scale (Mitchell et al., 2015).

A third major approach to tackling inequalities is to make no attempt to change them but, where they are widely judged to be unacceptable (e.g., a breach of publicly agreed environmental quality standards), to provide compensatory benefits. Compensation can be appropriate where the affected community bears an environmental burden on behalf of a wider community that enjoys the benefits the hazard-producing process brings without itself bearing a proportionate environmental risk. Such compensatory benefits may take the form of enhanced community health or education services, improved housing, and, more rarely, direct financial compensation.

Procedural Justice

Such approaches offer substantive opportunities to intervene to redress environmental inequalities, but such interventions are not commonplace. This is due to uncertainty over the quality of the evidence for inequality, over the interpretation of the extent to which unequal is unfair, and over issues of causality, all of which prevent confident assessment of the long term success of such interventions. The response to these problems is a further approach, directed at ensuring procedural justice rather than focusing primarily on outcomes (unequal distributions).

The evidence base for environmental inequality extends to studies that address procedural equity. These have included studies tracing how hazardous facility siting decisions were made; the extent to which facility inspection, regulation, and management is unfairly influenced by local lobby groups; and justice in the allocation of government funding for environmental cleanup work. The importance of addressing procedural justice grew at the same time as the conception of EJ changed from its initial focus on hazardous facilities to encompass a wider range of environmental issues, now including physical needs (clean air and water, food, shelter, warmth), economic needs (transport infrastructure, access to work and services), and aesthetic, mental, and spiritual needs (quiet, access to the countryside). This wider conception of EJ makes assessment of inequality more challenging, particularly with respect to understanding cumulative impacts and the likely impact of interventions on future social distributions of environmental quality, an area where more research is needed.

A focus on procedural issues is now widely regarded as the most likely way to achieve socially just distributions of environmental quality. Thus a focus on process, including active engagement with community stakeholders, is often evident in legislation and policy relevant to decision making that affects the environment. Active stakeholder engagement allows local communities to articulate their concerns, to identify where they perceive injustice to lie, and to indicate which mitigation responses are preferred. These may take the form of one or other of the mitigation responses outlined above, but could also be a process based measure, such as a Good Neighbor Agreement, an enforceable contract between industry and the local community which details the commitments the firm must make in order to demonstrate its accountability to the community. These agreements, which may be made a condition of planning consent, build trust between community and firm, and typically include specific commitments on the part of the firm (e.g., on discharges to the environment, risk appraisal) and procedures for oversight, such as the right of the community to appoint independent environmental and safety auditors paid for by the firm.

The importance of process is reflected in later definitions of EJ, such as that of the US EPA, which stresses the importance of the meaningful involvement of all peoples with respect to the development and enforcement of environmental laws, regulations, and policies. Here, meaningful involvement is where: "(1) people have an opportunity to participate in decisions about activities that may affect their environment and/or health; (2) the public's contribution can influence the regulatory agency's decision; (3) their concerns will be considered in the decision-making process; and (4) the decision makers seek out and facilitate the involvement of those potentially affected." Similarly, the UNECE Aarhus Convention and, in Europe, the associated environmental law on participatory decision making, access to information, and access to judicial process are key elements of this process based approach to achieving environmental justice, influencing, for example, project and strategic level appraisal (EIA, SEA). EU environmental assessment legislation also makes provision for the assessment of impacts that occur in one EU country in response to developments in another, but such international equity appraisal is currently a marginal activity within formal impact assessment, despite the growing international interest in EJ.

See also: Climate Change, Environmental Health, and Human Rights; Environmental Justice and Interventions to Prevent Environmental Injustice in the United States.

References

Cutter, S., 1995. Race, class and environmental justice. Progress in Human Geography 19 (1), 111–122.
Freeman, A.M., III, 1972. Distribution of environmental quality. In: Kneese, A.V., Bower, B.T. (Eds.), Environmental quality analysis: Theory and method in the social sciences. Johns Hopkins Press, Baltimore, pp. 243–278.
Mitchell, G., Norman, P., Mullin, K., 2015. Who benefits from environmental policy? An environmental justice analysis of air quality change in Britain, 2001–2011. Environmental Research Letters 10 (10), 105009.
UNECE, 1999. Convention on access to information, public participation in decision making and access to justice in environmental matters. United Nations Economic Commission for Europe, Geneva.

Further Reading

Agyeman, J., Bullard, R., Evans, B. (Eds.), 2003. Just sustainabilities: Development in an unequal world. Earthscan, London.
Bowen, W.M., Wells, M.V., 2002. The politics and reality of environmental justice research: A history and considerations for public administrators and policy makers. Public Administration Review 62 (6), 688–698.
Braubach, M., Fairburn, J., 2010. Social inequities in environmental risks associated with housing and residential location: A review of evidence. European Journal of Public Health 20 (1), 36–42.
Clark, L.P., Millet, D.B., Marshall, J.D., 2014. National patterns in environmental injustice and inequality: Outdoor NO2 air pollution in the United States. PLoS One 9 (4), e94431.
Dobson, A., 1998. Justice and the environment: Conceptions of environmental sustainability and theories of distributive justice. Clarendon Press, Oxford.
Holifield, R., Chakraborty, J., Walker, G. (Eds.), 2017. The Routledge handbook of environmental justice. Routledge, Abingdon, Oxon.
Liu, F., 2001. Environmental justice analysis: Theories, methods and practice. CRC Press, Boca Raton, Florida.
Lucas, K., Walker, G., Eames, M., Fay, H., Poustie, M., 2004. Environment and social justice: Rapid research and evidence review. Policy Studies Institute, London. http://www.sdresearch.org.uk/researchreviews/documents/ESJ_final_report.pdf.
Mitchell, G., Dorling, D., 2003. An environmental justice analysis of British air quality. Environment and Planning A 35, 909–929.
Mitchell, G., Walker, G., 2007. Methodological issues in the assessment of environmental equity and environmental justice. In: Deakin, M., Mitchell, G., Vreeker, R., Nijkamp, P. (Eds.), Sustainable urban development volume 2: The environmental assessment methods. Routledge, Abingdon, pp. 447–472.
Walker, G., 2012. Environmental justice: Concepts, evidence and politics. Routledge, Abingdon, Oxon.

Relevant Websites

Environmental Justice Organization, Liabilities and Trade: http://www.ejolt.org/ (including the Environmental Justice Atlas: http://ejatlas.org/).
US Environmental Protection Agency: http://www.epa.gov/environmentaljustice.

Change History: April 2018. Gordon Mitchell was involved in the update. All sections have been amended, but substantial additions are limited to Sections 2 and 5. Fig. 4 has been added. This is an update of G. Mitchell, Environmental Justice: An Overview, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 449–458.

Environmental Liver Toxins
Luqi Duan, Jephte Y Akakpo, Anup Ramachandran, and Hartmut Jaeschke, University of Kansas Medical Center, Kansas City, MO, United States
© 2019 Elsevier B.V. All rights reserved.

Change History: May 2019. All sections have been updated: Luqi Duan (Cyanobacteria, Mushroom Hepatotoxins); Jephte Akakpo (Fungal Hepatotoxins, Hepatotoxic Metals); Anup Ramachandran (Plant Hepatotoxins, Organic Environmental Pollutants); Hartmut Jaeschke (worked on all sections). This is an update of C.D. Williams, H. Jaeschke, Liver Toxicology, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 509–514.

Introduction

Acute and chronic liver disease represents a major health problem worldwide. Viral hepatitis, alcohol abuse, and nonalcoholic fatty liver disease due to obesity are the dominant causes of liver injury, inflammation, fibrosis, and hepatocellular carcinoma. In addition, drug-induced hepatotoxicity represents a substantial challenge for drug therapy and drug development, ranging from idiosyncratic to predictable hepatotoxins. Mechanisms of drug-induced liver injury have been extensively reviewed in recent years. The current overview focuses on liver toxicity induced by various chemicals present in the environment. Major hepatotoxins relevant for wildlife, livestock, and humans are generated by plants, mushrooms, cyanobacteria (blue-green algae), and fungi (Table 1), but can also be environmental pollutants (Table 2).

Cyanobacteria (Blue-Green Algae)

Depending on climate conditions, blooms of cyanobacteria are observed in many lakes and rivers throughout the world. During such blooms, cyanobacteria can produce potent toxins, including hepatotoxins such as microcystins, nodularins, and cylindrospermopsins. As gram-negative bacteria, blue-green algae can also produce lipopolysaccharides. Severe liver injury and death from excessive hemorrhage and liver failure after consumption of cyanobacteria-contaminated water have been reported worldwide for wildlife, cattle, sheep, pigs, horses, and dogs. In addition, human exposure to cyanotoxins causes acute and chronic adverse health effects. There are a number of examples where cyanotoxins were linked to acute liver injury and even fatalities. The most prominent example is the exposure of dialysis patients to microcystin-contaminated reservoir water in Caruaru, Brazil, in 1996, which resulted in acute liver failure in more than 100 patients, with 50% mortality. In addition to acute effects, chronic exposure to cyanotoxins is considered one of the underlying causes of the high incidence of liver cancer in China. Blue-green algae do not release relevant amounts of cyanotoxins into the surrounding water. However, during aging of the bloom, when many bacteria die, most of the high intracellular levels of the toxins are released during cell lysis. A similar effect is observed when blooms of algae are treated with copper sulfate or other algicides: lysis of the algae by these chemicals leads to the complete release of cyanotoxins into the water. A problem for human consumption is the fact that soluble cyanotoxins are not removed by conventional water treatment processes.

Microcystins

Microcystins are cyclic heptapeptides with variable molecular weights. The most potent hepatotoxin is microcystin-LR (MC-LR). Currently, more than 60 different congeners of microcystin have been identified; however, not all of them are hepatotoxic. Microcystins were first isolated from the cyanobacterium Microcystis aeruginosa but are produced by a number of different strains. Microcystins are found mainly in freshwater, which can be a direct source of the toxin. However, microcystins can also bioaccumulate in aquatic organisms and in crops irrigated with contaminated water, leading to human exposure. Recent observations suggest that even low level chronic exposure to microcystins through consumption of contaminated seafood can lead to low grade liver injury.

Microcystins are potent hepatotoxins due to uptake by the liver-specific transporter organic anion transporting polypeptide 1b2 (rodents) or the human orthologs OATP1B1/1B3. Acute microcystin toxicity results in centrilobular cell swelling and necrosis, apoptosis, and massive hemorrhage in the liver, which causes hypovolemic shock and death in animals. Similar histological findings were observed in humans. In hepatocytes, MC-LR and other congeners are potent inhibitors of protein serine/threonine phosphatases 1 and 2A (PP1 and PP2A), binding covalently to these enzymes. The inhibition of protein phosphatases causes hyperphosphorylation of cytoskeletal proteins, especially intermediate filaments, which leads to membrane blebbing and loss of cell structure. Lower doses of microcystin, which only partially inhibit PPs, lead to phosphorylation of dynein, a mechanochemical protein responsible for intracellular vesicle movement along microtubules. Consequently, low levels of microcystin inhibited dynein ATPase activity and prevented receptor-mediated endocytosis.

Several studies have shown that MC-LR can affect glucose and fat metabolism in the liver. MC-LR can interfere with the actions of insulin receptor substrate 1 and glycogen synthase in insulin signaling and has a toxic effect on glucose metabolism in the liver. Oral MC-LR exposure can induce hepatic lipid metabolism disorders mediated by the biosynthesis of unsaturated fatty acids, peroxisome proliferator-activated receptor activation, and a gut microbial community shift, which may play an important role in the metabolic disturbance.


Table 1 Biological hepatotoxicants

Toxin | Acute liver injury | Liver fibrosis or cirrhosis | Liver carcinogen | Comments
Aflatoxin | + | + | ++ | Toxin produced by mold that contaminates food supplies. Chronic exposure can cause fibrosis and it is highly carcinogenic in man. High, acute doses can cause liver injury and increase cancer risk
Cyanobacterial products | | | |
Cylindrospermopsins | + | (?) | (?) | Cyanobacteria-produced polycyclic uracil derivative. Acute or subchronic doses are hepatotoxic. At this time, insufficient evidence exists regarding its potential for long-term effects
Microcystins | ++ | +(?) | +(?) | Cyclic peptides produced by freshwater cyanobacteria. Acute or subchronic doses are highly hepatotoxic. Described as a risk factor for cirrhosis and potential carcinogen
Nodularin | ++ | +(?) | +(?) | Acute or subchronic doses are highly hepatotoxic; however, exposure is limited because it is only produced in brackish waters and is not normally consumed by livestock or man. Structural similarities to microcystins implicate it as a risk factor for cirrhosis and potential carcinogen if exposure is sufficient
Mushroom toxins | | | |
Amanitins | ++ | - | - | Major hepatotoxicant in poisonous mushrooms. No long-term effects have been confirmed
Phallotoxins (i.e., phalloidin) | ++ | - | - | Hepatotoxicant in poisonous mushrooms. Induces liver injury in laboratory animals, but is not orally bioavailable and therefore not a major risk factor for hepatotoxicity in man
Pyrrolizidine alkaloids (i.e., monocrotaline) | + | + | +(?) | Plant alkaloids responsible for veno-occlusive disease and acute injury. Exposure can result in fibrosis and a potential increased cancer risk

(?) means that the evidence to support this effect is limited.

Table 2 Chemical hepatotoxicants

Chemical | Acute liver injury | Liver fibrosis or cirrhosis | Liver carcinogen | Comments
Arsenic | + | + | ++ | Heavy metal that can contaminate drinking water. Acute toxicity causes liver injury, but neurological and cardiovascular effects cause significant morbidity and mortality. Chronic exposure can cause cirrhosis or portal hypertension and is a known carcinogen
Cadmium | ++ | - | -/+ | High dose of this heavy metal can cause acute liver injury in laboratory animals. No confirmed human exposure resulting in liver injury. Potent carcinogen in man but no evidence of liver-specific tumors
Phosphorus | -/+ | - | - | Causes liver injury in marine animals but no evidence of human liver injury
Carbon tetrachloride | ++ | ++ | + | Once used as a solvent and refrigerant; use today is highly restricted, making environmental exposure unlikely. Toxicity is dependent on dose and exposure
Vinyl chloride | + | + | ++ | Once used as a solvent and propellant; use today is highly restricted, making environmental exposure unlikely. Responsible for angiosarcoma in the liver
Tetrachloroethane | ++ | + | + | Once used as a solvent; use today is highly restricted, making environmental exposure unlikely. Toxicity is dependent on dose and exposure
Paraquat and diquat | + | - | - | Cationic contact herbicides which are relatively safe unless taken orally. If accidentally ingested in large doses, severe liver injury can occur


Microcystin-LR can also change the mRNA and protein expression of endoplasmic reticulum stress signaling molecules and cause hepatic lipid metabolism abnormalities. Chronic exposure to low-level MC-LR can induce nonalcoholic steatohepatitis in mice. Long-term exposure to low dose MC-LR might also be closely associated with hepatocarcinoma. MC-LR may activate Akt and p38/ERK/JNK cascades and promote cell proliferation. MC-LR can reduce cellular adhesion in human liver cells, where the ERK1/2/phospho-paxillin (Ser83)/E-cadherin axis is involved. MC-LR induces oxidant stress and inflammation as well as epigenetic regulation, which is caused by characteristic gene alterations regulated by DNA methylation; miRNAs also play important roles in MC-LR-induced hepatic carcinogenesis.

Studies with isolated hepatocytes demonstrated that MC-LR can induce oxidant stress, which was suggested to be responsible for mitochondrial membrane permeability transition (MPT) pore opening and cell killing by apoptosis. In cultured hepatocytes, the oxidant stress was involved in cytoskeletal dysfunction, which preceded cell death. In addition, the reactive oxygen-mediated MPT triggers the release of mitochondrial calcium, which causes the activation of calpains. Glutathione (GSH) plays a crucial role in antioxidant defense and the metabolic detoxification of MC-LR. The detoxification process of MC-LR has been studied by quantitatively analyzing MC-LR and its GSH pathway metabolites (MC-LR-GSH and MC-LR-Cys) in the livers of Sprague-Dawley (SD) rats: the content of MC-LR-GSH was relatively low throughout the experiment; however, the MC-LR-Cys to MC-LR ratio was as high as 6.65, suggesting that MC-LR-GSH is efficiently converted to MC-LR-Cys.

Chronic exposure to MC-LR may affect hepatocyte mitochondrial DNA replication, as long-term and persistent exposure to MC-LR increased the 8-hydroxy-2′-deoxyguanosine (8-OHdG) levels of DNA in liver cells, damaged the integrity of mtDNA and nuclear DNA (nDNA), and altered the mtDNA content. Notably, MC-LR exposure can change the expression of mitochondrial and nuclear genes that are critical for regulating mtDNA replication and repairing oxidized DNA, further impairing the function of mitochondria and liver cells.

Since no caspase activation was observed, it appears that calpains rather than caspases are involved in microcystin-induced apoptosis in cultured hepatocytes. However, the relevance of these mechanisms for the acute in vivo cytotoxicity of microcystin remains unclear. High doses of microcystin cause acute hemorrhage and liver failure in rodents within hours, with no evidence of apoptosis. Morphological evidence of apoptosis can be found after treatment with repeated sublethal doses over several days or weeks. Interestingly, the pathological findings in humans after acute microcystin exposure indicate panlobular hepatocellular necrosis and inflammation but no evidence of apoptosis. Recent studies show that apoptosis does not contribute to MC-LR-induced cell death in the in vivo mouse model or in primary human hepatocytes in vitro. Thus, targeting necrotic cell death mechanisms will be critical for preventing microcystin-induced liver injury. Because of the rapid onset of cell death after microcystin exposure, clinically useful therapeutic approaches may be difficult to establish.
There is evidence in rodents that various antioxidants are partially beneficial if given prophylactically. However, the most effective way to prevent liver damage is to limit the exposure to microcystin.

Nodularins

Nodularins are monocyclic pentapeptides produced by Nodularia spumigena. This cyanobacterium is found in the brackish water of coastal lagoons, where blooms of N. spumigena occur frequently. The occurrence of a bloom depends on water temperature, light intensity, and nutrients, especially nitrogen and phosphorus. Nodularins accumulate in marine mussels, clams, and fish. Human exposure to nodularins can occur through consumption of contaminated seafood and contact with contaminated water. Nodularins are hepatotoxins in mammals, with a potency similar to that of microcystins. Acute intoxication in rodents causes hemorrhagic necrosis, liver failure, and hypovolemic shock. Features of the liver damage include cytoskeletal disorganization, lipid peroxidation, loss of membrane integrity, DNA fragmentation and strand breaks, cell blebbing, apoptosis, cellular disruption, necrosis, and intrahepatic bleeding, which may lead to the death of the organism via hemorrhagic shock. Apart from hepatotoxicity, nodularins also have tumor-initiating and tumor-promoting activity, making them a potential carcinogenic threat. However, the details of their hepatotoxicity and carcinogenicity have not been fully elucidated. Poisoning with nodularins is most common in wildlife and domestic livestock. Nodularin-induced liver injury after human exposure has not yet been observed.

Cylindrospermopsins

Cylindrospermopsins are tricyclic alkaloids generated by Cylindrospermopsis raciborskii and other freshwater cyanobacteria. These compounds are known to cause damage to the liver and other organs in humans and other mammals. Various species of cylindrospermopsin-producing bacteria are present in many lakes and drinking water reservoirs around the globe. The toxins can be released from the bacteria even without cell lysis, which can lead to the accumulation of substantial concentrations in the water during C. raciborskii blooms. An additional concern is that C. raciborskii does not form surface scums but is present in dense bands several meters below the water surface, which makes it more difficult to detect water contamination with this cyanobacterium. Similar to microcystins, cylindrospermopsins are not removed by regular water purification procedures. Thus, contamination of drinking water supplies by cylindrospermopsins represents a significant risk to human and animal health worldwide.

Experimental studies in rodents showed that cylindrospermopsins can cause severe and dose-dependent liver injury, manifested as centrilobular necrosis, within 12 h after exposure. However, cell death is not limited to the liver but affects other organs, including the kidney, spleen, and the vascular bed in the heart, as well as causing ulceration in the stomach and intestine (after oral exposure). In subchronic oral toxicity studies, the liver and kidney were most affected. Cylindrospermopsins were also shown to be genotoxic in various in vitro assays. Consistent with these observations, DNA fragmentation was found in the livers of cylindrospermopsin-treated mice. These data suggest that cylindrospermopsins are potential carcinogens.

The detailed mechanisms of cylindrospermopsin-induced cell injury remain unclear. In cultured hepatocytes, rapid toxicity was observed, which might be caused by a reactive metabolite generated by the CYP450 system. Cylindrospermopsin also induced irreversible protein synthesis inhibition in primary mouse hepatocytes. Inhibition of CYP450 activity diminished the toxicity of cylindrospermopsin, but not its effects on protein synthesis. This suggests that the parent compound and the metabolites formed could exert toxicity by different mechanisms, depending also on cylindrospermopsin concentrations. Genotoxicity is also prevented by CYP450 inhibitors, suggesting the involvement of an active oxidation product in this process. The role of reactive oxygen species (ROS) is controversial. Enzymes that counteract oxidant stress, such as superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), glutathione S-transferase (GST), and glucose-6-phosphate dehydrogenase (G6PDH), were not altered in Hoplias malabaricus hepatocytes even when the cells were exposed to up to 100 µg/L CYN for 72 h, suggesting that ROS may not play a role. However, different results have also been obtained: GST activity was found to be increased in the HepG2 cell line after exposure, and high levels of Nrf2 (a transcription factor that regulates the expression of antioxidant enzymes) were observed in toxin-treated rat primary hepatocytes, providing indirect evidence for an oxidant stress. A more delayed injury mechanism appears to be caused by the inhibition of protein synthesis. Cylindrospermopsin also causes toxicity and dysfunction in lymphocytes and neutrophils. Thus, more research is necessary to establish the relevant cell injury mechanisms of cylindrospermopsins in hepatocytes and other cell types, and their impact on liver injury in vivo.

Mushroom Hepatotoxins

Consumption of mushrooms from the genera Amanita, Lepiota, and Galerina can cause severe gastrointestinal distress, organ failure, or even death. The primary toxicants in these fungi are amatoxins and phallotoxins. Amatoxin is a general classification of nine known amanitins. The four primary amanitins are α-, β-, γ-, and ε-amanitin, with α-amanitin being the best characterized and the primary toxic constituent of most poisonous mushrooms responsible for human morbidity and mortality. Each of these amanitins is composed of eight amino acids in a bicyclic structure. Phallotoxins have a similar bicyclic structure but are formed from seven amino acids. The most commonly studied phallotoxin is phalloidin, although at least six other phallotoxins have been identified, including phallisin, phalloin, phallacin, phallacidin, phallisacin, and prophalloin. It has been estimated that up to 90% of fatal mushroom poisonings are the result of inadvertent consumption of Amanita phalloides, also known as the death cap. A. phalloides resembles, and tastes similar to, edible straw mushrooms. The primary toxic constituent of A. phalloides is α-amanitin, which is structurally stable even after cooking or storage for extended periods of time. Many toxic mushroom species, including A. phalloides, contain both amatoxins and phallotoxins; however, the most severe toxic effects are mediated by amatoxins. The clinical manifestation of mushroom poisoning occurs in four phases. The first phase involves severe gastrointestinal distress, including vomiting, abdominal pain, and diarrhea, resulting in major electrolyte imbalance. These symptoms generally occur 6–24 h after mushroom ingestion. Most likely these early clinical symptoms result from phallotoxin rather than amatoxin toxicity, because phallotoxins are poorly absorbed through the gastrointestinal tract and thus act locally on the intestinal mucosa. The second phase of toxicity occurs 24–48 h after ingestion. At this stage symptoms appear to improve; however, renal and hepatic functions become compromised. The third stage generally occurs 3–5 days after ingestion, when renal and hepatic failure can occur. The fourth and final stage can result in death: mortality is observed 4–9 days after ingestion and occurs in approximately 20–30% of patients treated for amatoxin poisoning in the United States. Mechanistically, amanitins and phallotoxins inhibit hepatic function in distinct ways. Amanitins are very potent inhibitors of RNA polymerase II (RNAP II) in eukaryotic cells. Amanitin has a high affinity for the polymerase at its site of interaction with the DNA backbone, thereby greatly reducing transcriptional efficiency. This essentially results in transcriptional arrest and eventual cell death. Phallotoxins bind to actin, which disrupts the cellular architecture and thereby inhibits cell function. Several studies indicated that hepatocellular apoptosis may be responsible for α-amanitin-induced liver injury; α-amanitin-induced apoptosis in human hepatocyte cultures is p53- and caspase-3-dependent. The role of oxidative stress in the development of severe hepatotoxicity has also been highlighted in several studies; α-amanitin is able to form phenoxyl free radicals that might be involved in ROS generation. Due to their low intestinal absorption, phallotoxins are not hepatotoxic in laboratory animals unless administered parenterally. Therefore, human hepatotoxicity due to consumption is considered unlikely.
Both amanitins and phallotoxins are selectively transported into hepatocytes by the hepatocyte-specific transporter OATP1B3, resulting in selective liver toxicity. Amanitin poisoning is characterized by elevations of liver enzymes in plasma and alterations in serum creatinine levels and the prothrombin index. Clinically, several treatments are used to limit amanitin toxicity. Amanitin has been shown to undergo enterohepatic circulation, resulting in reabsorption and prolonged liver exposure. To reduce the amount of amanitin returned to the liver, activated charcoal can be given to adsorb the toxin in the small intestine. Because hepatic glutathione levels are depleted during amanitin poisoning, administration of N-acetylcysteine has proved protective. Additionally, administration of penicillin G or silibinin has shown benefit, most likely due to inhibition of amanitin uptake into hepatocytes. Polymyxin B has proved to be a new, effective antidote for Amanita phalloides poisoning, potentially by binding to RNAP II at the same interface as α-amanitin and thereby preventing the toxin from binding to RNAP II. If these clinical interventions are not effective, the patient may require emergency liver transplantation to prevent death.


Fungal Hepatotoxins

Aflatoxins are a group of mycotoxins produced by fungi of the genus Aspergillus and are potent hepatotoxins and liver carcinogens. Structurally, all aflatoxins contain a coumarin ring and an unsaturated lactone moiety. Aflatoxins can be found in foodstuffs contaminated with aflatoxin-producing Aspergillus or in dairy milk from animals fed contaminated feed. The best characterized aflatoxin is aflatoxin B1, together with its metabolite aflatoxin M1, which was first identified in milk. Aflatoxin G1 is also highly toxic and carcinogenic in certain animal models. Aflatoxins B2, G2, and M2 are potent hepatotoxicants but have not been demonstrated to be carcinogenic. Human exposure high enough to cause acute toxicity, also known as aflatoxicosis, is rare in developed countries but more prevalent in African and Asian countries. Symptoms of aflatoxicosis include hemorrhagic necrosis of the liver, edema, and lethargy. Aflatoxin contamination of grain, seeds, spices, and edible nuts is most prevalent in warm, humid regions of the world where conditions are favorable for the growth of mold. Storage of these food commodities under inappropriate conditions also facilitates mold growth and aflatoxin production. For this reason, the monitoring of aflatoxin levels and strict regulation of grain storage are mandated in the United States and certain European countries. Aflatoxin B1 (AFB1) is oxidized via CYP1A2 and CYP3A4 to metabolites including aflatoxin M1 and a highly reactive epoxide; the epoxide readily reacts with guanine bases, making AFB1 highly mutagenic. Binding of aflatoxin to DNA is generally associated with carcinogenicity, while protein adducts correlate with toxicity. Aflatoxin B1 exposure is a risk factor significantly associated with liver carcinogenesis, and more than 60% of AFB1-related hepatocellular carcinomas (HCC) carry p53 codon 249 mutations. Studies have shown that AFB1 can induce the p53 R249S mutation; the mutant protein may bind to the transactivation domain of the hepatitis B viral protein HBx and accelerate hepatocarcinogenesis. The p53 R249S mutation may also be related to the adverse effects of aldehydes generated by AFB1 metabolism-induced lipid peroxidation in hepatocytes, including the induction of DNA damage and mutations at codon 249 of the p53 gene. Besides these genetic effects, AFB1 exposure can induce persistent epigenomic effects in primary human hepatocytes that are associated with hepatocellular carcinoma. Detoxification of aflatoxin involves one of two known pathways: conjugation by glutathione S-transferase A5 (GSTA5) or reduction by aldo-keto reductase 7A1 (AKR7A1). The glutathione-conjugated and dialcohol forms of aflatoxin were believed to be less toxic than the parent compound; however, these metabolites also show toxicity. Aflatoxin exposure is monitored clinically by detection of guanine- or albumin-aflatoxin adducts. Guanine adducts can be measured in urine within 24 h after aflatoxin exposure. Albumin adducts serve as a serum biomarker for longer-term aflatoxin exposure. Ingestion of large doses of aflatoxin over a short period of time is associated with aflatoxicosis and an increased incidence of cirrhosis. Sublethal, chronic exposure impairs immune function and nutritional absorption. However, both acute and chronic exposure to aflatoxin are associated with hepatocellular carcinoma.

Hepatotoxic Metals

Arsenic

Arsenic is a heavy metal that can be found in ground and drinking water and has been implicated in liver cirrhosis and portal hypertension. Drinking water can become contaminated with arsenic through mining and smelting runoff or geothermal activity. Additionally, exposure can occur from industrial processes such as the manufacturing of agricultural products, pressure-treated lumber, and some consumer electronics. Arsenic is absorbed very efficiently through the gut, from where it is transported to the liver. Evaluation of the distribution profile of arsenic in the mouse indicated that the liver is not a major site of arsenic accumulation; the primary target organ is the kidney, owing to the renal excretion of arsenic. Despite this low accumulation, the liver is a target organ of long-term, low-level arsenic exposure. Typical liver abnormalities observed with chronic exposure include jaundice, abdominal pain, and hepatomegaly. Eventually this injury can progress to cirrhosis, ascites, and portal hypertension, or even hepatocellular carcinoma. Arsenic can be metabolized into many forms, including the trivalent intermediate metabolites methylarsonous acid (MMA(III)) and dimethylarsinous acid (DMA(III)) and the pentavalent forms monomethylarsonic acid (MMA(V)) and dimethylarsinic acid (DMA(V)). The mechanisms of arsenic toxicity in the liver appear to be mediated by the high reactivity of arsenic with thiols and its ability to produce oxidants. In hepatocytes, the trivalent form can inhibit the Krebs cycle and oxidative phosphorylation, leading to blockage of ATP production. Arsenic is not considered a mutagen, but reactive oxygen byproducts may damage DNA and inhibit repair mechanisms. The disruption of the DNA repair machinery and altered hepatocyte function can result in aberrant cellular turnover in the liver. In a rat model of chronic arsenic exposure through drinking water, hepatocytes with unusual cellular morphology and excessive postmitotic apoptosis have been observed. It has also been demonstrated that NADPH oxidase, which is used by phagocytic cells such as macrophages and neutrophils to produce reactive oxygen, is critical in the remodeling of sinusoidal endothelial cells: capillarization and protein nitration were prevented in mice in which the p47 subunit of NADPH oxidase was knocked out. This demonstrates that inflammation and excessive reactive oxygen production triggered by arsenic exacerbate the injury.


Cadmium

Cadmium is a heavy metal that has been shown to produce hepatic injury in laboratory animals and is regarded as one of the most toxic metals. In general, cadmium (as Cd²⁺) and cadmium compounds are fairly water soluble. Contaminated water and food ingestion, as well as cigarette smoke, are the largest sources of acute and chronic human exposure to cadmium. However, exposure can also occur from industrial and consumer products such as batteries, pigments, coatings and platings, stabilizers for plastics, nonferrous alloys, photovoltaic devices, and fluorescence microscopes. Current regulations place very strict limitations on the handling of cadmium. The Environmental Protection Agency (EPA) has determined that cadmium concentrations must remain below 0.04 mg/L in drinking water and 0.005 mg/L in consumer products. In the workplace, the Occupational Safety and Health Administration (OSHA) legal limit for cadmium is 5 µg/m³ in air, averaged over an 8-h workday. Despite the high toxicity of cadmium, there is no evidence that environmental exposure to cadmium is sufficient to induce liver injury in humans. Unlike arsenic, cadmium shows low gastrointestinal absorption. Cd²⁺ is taken up by the divalent metal transporter 1 (DMT1) and exported by the metal transporter protein 1 (MTP1), which are located at the apical and basolateral membranes of enterocytes, respectively. Retention of cadmium by metallothionein (MT) is thought to reduce cadmium absorption into the systemic circulation, but the absorption rate can increase when calcium and iron levels in the body are low, owing to increased expression of transporters that promiscuously transport cadmium. Once in the body, the excretion of cadmium is very limited. Detoxification of cadmium involves intracellular chelation by MT and coupling to glutathione for biliary excretion. Binding of cadmium by MT in the hepatocyte is thought to limit its toxic effects. This has been demonstrated in vivo in mouse models by both overexpressing and knocking out MT, which decreased or enhanced toxicity, respectively. However, the acute toxicity of cadmium depends on which specific MT family protein binds it: MT-I and MT-II greatly inhibit hepatotoxicity, while MT-III enhances it. Although a wide variety of adverse effects can be caused by cadmium exposure in humans, chronic toxicity results mainly in renal dysfunction, whereas acute toxicity primarily results in cadmium accumulation in hepatocytes and extensive liver damage. Cadmium-induced liver damage disrupts the cellular architecture by interfering with the actin cytoskeleton and modifying cellular interactions through gap junctions, in a manner that is both time and concentration dependent. These effects ultimately result in hepatocyte proliferation or necrosis. Cadmium has been implicated in hepatocyte injury through direct binding to sulfhydryl groups and inactivation of mitochondrial proteins and nonprotein thiols, resulting in mitochondrial oxidant stress. MT and glutathione (GSH), both rich in thiol groups, provide protection against cadmium-induced liver injury. Cadmium has also been implicated in the inactivation of antioxidant enzymes such as superoxide dismutases, catalase, glutathione reductase, and glutathione peroxidases. Cadmium-mediated, mitochondria-derived reactive oxygen production, GSH depletion, and antioxidant enzyme inactivation disrupt the hepatocellular redox balance and lead to lipid peroxidation, DNA damage, and inflammation.
Also, the binding of cadmium to MT may increase the concentration of unbound free Fe²⁺, which promotes the Fenton reaction and lipid peroxidation. A further source of reactive oxygen after acute cadmium exposure and liver damage is the infiltration of neutrophils. Along with Kupffer cells, neutrophils play a major part in hepatotoxicity through the release of inflammatory mediators and reactive oxygen species, in particular hypochlorous acid, which promote necrosis. Antioxidants and other interventions such as vitamin A, zinc, and selenium have shown some efficacy in reducing cadmium-induced liver injury and in limiting carcinogenesis.

Plant Hepatotoxins

Plant toxins are naturally occurring phytochemicals or secondary metabolites formed by plants to protect themselves against various threats such as bacteria, fungi, insects, and predators. Toxins can be present in commonly consumed human foods such as fruits and vegetables. Human exposure can also come from plant products used in industry, in cosmetics, or in phytomedicine. However, the most prevalent toxicity occurs in grazing wildlife and livestock. Consumption of these toxins can be poisonous and seriously harm the liver. The adverse effects on the liver depend on the phytoconstituents or the metabolites formed in the liver. Plant toxins can be classified into many different genera or chemical categories. For instance, Senecio, Echium, Cynoglossum, Heliotropium, Crotalaria, and Symphytum contain hepatotoxic pyrrolizidine alkaloids. Other genera contain glycosides, proteinaceous compounds, organic acids, alcohols, photosensitizing and contact-sensitizing substances (as in poison ivy), and volatile oils. Pyrrolizidine alkaloids induce hepatotoxicity that is dependent on CYP3A-mediated metabolism, which results in the formation of the toxic pyrrole moiety in the liver. This reactive metabolite has been shown to interact with nucleic acids and proteins and leads to depletion of hepatic glutathione. One target of the reactive pyrrolic metabolites of pyrrolizidine alkaloids is the ATP synthase subunit beta (ATP5B), a critical subunit of mitochondrial ATP synthase, which forms pyrrole-ATP5B adducts, resulting in impairment of mitochondrial function. Pyrrolizidine toxicity in the liver causes the sinusoidal obstruction syndrome, which can be used as an animal model of endothelial damage in the liver. Pyrrolizidine alkaloids in Gynura root also induce hepatic veno-occlusive disease. The endothelial damage induced by pyrrolizidines occurs in sinusoids as well as central venules. In rodent models this correlates with centrilobular injury, the centrilobular region being the primary site of cell damage and necrosis. This localization of injury most likely relates to the bioactivation by, and unequal distribution of, the CYP3A family in the liver. However, there are conflicting opinions on whether this mimics human liver injury. Additionally, in a rodent monocrotaline model, treatment with caspase inhibitors has been shown to attenuate liver injury by approximately 50%, indicating that cell death in this model occurs via both oncosis and apoptosis. However, the primary and initiating injury caused by pyrrolizidine alkaloids appears


to be endothelial damage, with additional tissue injury being caused by disruption of sinusoidal blood flow (ischemic necrosis) and portal hypertension. Chronic persistence of pyrrolizidine alkaloid-derived DNA adducts can lead to liver cancer. Conversely, some plant phytotoxins can be used as selective anticancer drugs.

Organic Environmental Pollutants

Various halogenated compounds have been implicated in hepatic dysfunction and liver carcinogenesis in laboratory animals, but exposure levels in humans are generally too low for these compounds to act as effective hepatotoxicants. However, some hypothetical risk of exposure to certain compounds can come from traffic pollutants or can occur in an occupational setting, which generally involves laboratory work or the production of plastics and solvents. Vinyl chloride is critical for the manufacturing of PVC plastics. It has been implicated in a very rare form of liver cancer called angiosarcoma and also induces liver fibrosis. The mechanism of action of vinyl chloride requires metabolic activation to a reactive epoxide intermediate that is highly reactive with DNA. Carbon tetrachloride (CCl4) was a commonly used solvent in dry cleaning and was also used as a refrigerant. Today CCl4 is generally no longer used in consumer products because it has been shown to be a strong hepatotoxic agent and an inducer of liver fibrosis. CCl4 is partially metabolized by CYP2E1, resulting in the formation of the trichloromethyl radical (CCl3•), which is capable of interacting with lipids, proteins, and nucleic acids. The CCl3• radical preferentially reacts with unsaturated lipids, initiating a peroxidation chain reaction that can lead to extensive cellular damage and oxidant stress. In vitro, CCl4 treatment of hepatocytes can increase caspase-9 activity and decrease both the mitochondrial membrane potential and cell viability. Repeated exposure can disrupt the cellular architecture, and for this reason CCl4 is commonly used as a laboratory model of liver fibrosis. Many environmental pollutants, including CCl4, can disrupt oxidative phosphorylation and cause hepatic steatosis. The potential mechanisms found to contribute to steatosis for these pollutants are mitochondrial impairment with reduced fatty acid metabolism, insulin resistance, impaired hepatic lipid secretion, and enhanced cytokine production.

See also: Fluoride in Drinking Water: Effect on Liver and Kidney Function; Metal-Induced Toxicologic Pathology: Human Exposure and Risk Assessment.

Further Reading

Ellis, E., 2009. Protection against aflatoxin B1 in rat: A new look at the link between toxicity, carcinogenicity and metabolism. Toxicological Sciences 109, 1–3.
Falconer, I.R., Humpage, A.R., 2006. Cyanobacterial (blue-green algal) toxins in water supplies: Cylindrospermopsins. Environmental Toxicology 21, 299–304.
Gehringer, M.M., 2004. Microcystin-LR and okadaic acid-induced cellular effects: A dualistic response. FEBS Letters 557, 1–8.
Graeme, K.A., 2014. Mycetism: A review of the recent literature. Journal of Medical Toxicology 10, 173–189.
Ibelings, B.W., Backer, L.C., Kardinaal, W.E.A., Chorus, I., 2015. Current approaches to cyanotoxin risk assessment and risk management around the globe. Harmful Algae 49, 63–74.
Liu, J., Qu, W., Kadiiska, M., 2009. Role of oxidative stress in cadmium toxicity and carcinogenesis. Toxicology and Applied Pharmacology 238, 209–214.
Martins, J.C., Vasconcelos, V.M., 2009. Microcystin dynamics in aquatic organisms. Journal of Toxicology and Environmental Health, Part B, Critical Reviews 12, 65–82.
Moreira, R., Pereira, D.M., Valentão, P., Andrade, P.B., 2018. Pyrrolizidine alkaloids: Chemistry, pharmacology, toxicology and food safety. International Journal of Molecular Sciences 19, 1668.
Naujokas, M.F., Anderson, B., Ahsan, H., Aposhian, H.V., Graziano, J.H., Thompson, C., Suk, W.A., 2013. The broad scope of health effects from chronic arsenic exposure: Update on a worldwide public health problem. Environmental Health Perspectives 121, 295–302.
Pouria, S., de Andrade, A., Barbosa, J., Cavalcanti, R.L., Barreto, V.T., Ward, C.J., Preiser, W., Poon, G.K., Neild, G.H., Codd, G.A., 1998. Fatal microcystin intoxication in haemodialysis unit in Caruaru, Brazil. Lancet 352, 21–26.
Ramachandran, A., Duan, L., Akakpo, J.Y., Jaeschke, H., 2018. Mitochondrial dysfunction as a mechanism of drug-induced hepatotoxicity: Current understanding and future perspectives. Journal of Clinical and Translational Research 4, 75–100.
Roth, R.A., Jaeschke, H., Luyendyk, J.P., 2019. Toxic responses of the liver. In: Klaassen, C.D. (Ed.), Casarett and Doull's Toxicology, 9th edn. McGraw Hill Publishing, pp. 719–766.
van Apeldoorn, M.E., van Egmond, H.P., Speijers, G.J., Bakker, G.J., 2007. Toxins of cyanobacteria. Molecular Nutrition & Food Research 51, 7–60.
Williams, J., Phillips, T., Jolly, P., Stiles, J., Jolly, C., Aggarwal, D., 2004. Human aflatoxicosis in developing countries: A review of toxicology, exposure, potential health consequences, and interventions. American Journal of Clinical Nutrition 80, 1106–1122.
Woolbright, B.L., Williams, C.D., Ni, H., Kumer, S.C., Schmitt, T., Kane, B., Jaeschke, H., 2017. Microcystin-LR induced liver injury in mice and in primary human hepatocytes is caused by oncotic necrosis. Toxicon 125, 99–109.

Environmental Noise
Paul De Vos, SATIS, Weesp, The Netherlands
Annemarie van Beek, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands
© 2019 Elsevier B.V. All rights reserved.

Glossary
dB The standard scale for environmental noise levels.
dB(A) A-weighted decibel, taking account of the frequency-dependent sensitivity of the human ear.
Lden Day–evening–night level. The long-term average noise level, with a penalty of 5 dB for the 4-h evening time and 10 dB for the 8-h night time.
Ldn Yearly average level with a penalty of 10 dB during the 8-h night time.
Leq The imaginary continuous level of sound that, over a given time interval T, contains the same amount of energy as the measured sound level that varies over time.
Lnight Yearly average equivalent level during the 8-h night-time period.
L10 Noise level that is exceeded during 10% of the time of a particular measurement interval.
SEL Sound exposure level: equals the equivalent level Leq for a given noise event, compressed into a standard time interval of 1 s.

Abbreviations
EC European Commission
EU European Union
SEL Sound exposure level

The Origin of Noise Exposure Data

Noise exposure data have been reported on many different levels of aggregation: from city quarters, cities, and agglomerations to regions, nations, continents, and the world. Two recent examples are presented here:

"Around 20% of the Union's population or close on 80 million people suffer from noise levels that scientists and health experts consider to be unacceptable, where most people become annoyed, where sleep is disturbed and where adverse health effects are to be feared. An additional 170 million citizens are living in so-called 'gray areas' where the noise levels are such to cause serious annoyance during the daytime." (Green Paper on Future Noise Policy (COM(96) 540), adopted and published by the Commission in November 1996.)

"Traffic noise has emerged in recent years as an ever present but often underestimated pollutant in our lives. In Europe, the population exposed to levels above 65 dB(A) increased from 15% in the 1980s to 26% in the early 1990s. For comparison, speech can be understood fairly well with background noise levels up to 55 dB(A)." (World Health Organization)

Often the information refers to the overall exposure to environmental noise, including all possible sources that affect everyday life. In other cases, the data refer to a specific group of sources, such as "transport noise," or to specific sources, such as air traffic noise or railway noise. As in the statements quoted above, noise exposure is often referred to in a very general way, without reference to the actual exposure levels. Frequently, only the extent of one or more effects is stated; for instance, "regularly, 250,000 citizens are disturbed in their sleep." Apparently, the assessment can be started from either side: either the extent of the actual effect is assessed directly, for instance through field enquiries, or the exposure itself is assessed and the effect is then determined using generally accepted dose–response relationships. As the relation between exposure and effects is discussed in other articles, the discussion in this article is limited to the assessment of the exposure itself.

Change History: May 2018. Toshihiro Kawamoto made minor changes to the text. This is an update of P. De Vos, A. Van Beek, Environmental Noise, in: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 476–488.

Encyclopedia of Environmental Health, 2nd edition, Volume 2. https://doi.org/10.1016/B978-0-12-409548-9.11288-6


Noise exposure assessments have been made for limited areas, for instance cities. These assessments often build on locally distributed, automated measurement stations. Whether the measurement results are representative of the whole city obviously depends on the density of the measurement stations, the sampling frequency, and the duration of the total assessment. Long-term average noise levels have been demonstrated to be the best predictors of effects such as annoyance and health effects, long term meaning several years. For most health-related effects, average noise levels are to be preferred over maximum noise levels or event-related exposure levels such as the sound exposure level (SEL).

Measured noise levels are subject to parameters that have major impacts on the observed exposure, such as weather conditions. Also, measured levels include the cumulative contributions of all possible sources and all possible noisy events. Only recently have attempts been made to automatically distinguish the contributions of distinct sources, for example an aircraft passing by, from the contributions of all other sources. To overcome these limitations, computed noise exposure is often used to replace or complement measured data. In computations, the number of chosen receiver points can be almost infinite, and the source causing the exposure is clearly identified. However, by nature, computations apply only to sources with a known behavior. Unpredictable events, such as thunder strikes or emergency sirens on ambulances, are excluded from the assessment.

It is difficult, if not impossible, to aggregate the results of local noise exposure assessments into national or even supranational conclusions. Local assessments come up with different noise exposure indicators. Moreover, the assessment methods, be they measurements or computations, may differ greatly and therefore lead to incomparable results.

The European Environmental Noise Directive

Historically, the most important and most recent attempt to assess the exposure to environmental noise is the strategic noise mapping operation in Europe, which must be carried out by the competent authorities of the member states every 5 years, starting in 2007. This obligation emerges from the European Directive on the Assessment and Management of Environmental Noise (2002/49/EC), indicated henceforth as the Environmental Noise Directive (END). The first round of noise mapping was to be concluded in 2007, and the results of that mapping operation have recently become available. In this first round, the mapping includes 161 urban agglomerations, each with more than 250,000 inhabitants, representing 121 million citizens. This represents almost a quarter of the total population of the EU-27 and roughly 2% of the world's population. In addition, 82,000 km of main roads, 12,000 km of main railway lines, and 76 important airports, all within the European Union (EU), were included in the mapping, so that the exposure of the residents living close to these infrastructures was included in the assessment. The European Commission requested that noise exposure be expressed using the harmonized indicator, the day–evening–night level Lden. The assessment was to include the contributions from roads, railway lines, airports, and significant industrial sites, using 2006 operational data. For each of the four sources, interim standard computation methods were to be applied. Thus, a high and unique level of harmonization and standardization was to be achieved, allowing the Commission to aggregate all the data into a Europe-wide assessment of the overall noise exposure. As the mapping operation is to be repeated every 5 years, the Commission will eventually be able to assess trends in overall noise exposure, and thus review the efficiency of its policy on the noise production of road and rail vehicles and outdoor machinery. In spite of these high ambitions, the data currently available (July 2008) are far from complete; many member states have not yet submitted the information, and the operation in this first round has ignored smaller agglomerations. Moreover, it appears that, contrary to the requirements, many member states have used computation methods other than the interim standard methods, leading to incomparable results. Nevertheless, the information currently available is likely to be the most complete and reliable basis ever for an attempt to estimate the global exposure to environmental noise. Therefore, it was chosen as the basis for the further analysis.

Environmental Noise in Perspective

The noise in a person's environment, generally meaning the area around his or her house, originates from many different sources. The natural environment, appreciated as it may be as an important source of calm and tranquility, is accompanied by sounds causing impressive exposure levels. Heavy rain on a thin sheet is likely to cause sound pressure levels in the order of 90 dB(A) or more for a person standing close to the sheet. A roaring waterfall may cause similar noise levels. A thunder strike might exceed 105 dB(A), a level that is considered potentially dangerous, as it may give rise to hearing impairment in a person exposed to such levels for a long period. Nevertheless, such sounds are generally ignored in the current understanding of environmental noise. In the course of history, there have always been individuals expressing their concern over the fact that they experienced annoying, disturbing, or even frightening levels of noise caused by their fellow men. However, only in the past century or so has a general concern emerged about the ever-growing exposure to noise, the general lack of tranquility, and the stressing effects of urban life. The first organized concern about the growing noise in the environment dates from the early 20th century in Western Europe. In spite of political and legal efforts to put a stop to this growth, the concern, and probably the exposure, have continued to increase over the years.


Urbanization, Mechanization, Industrialization, Mobility, and Globalization

Four important historic developments have contributed tremendously to the exposure of humans to environmental noise. The first is urbanization: in a confined space, people are more likely to be exposed to unwanted sound caused by their fellows. The second important development has been mechanization and the industrialization that followed it. The rapid growth of mechanized processes caused a dramatic increase in the number of people exposed to serious noise levels, not only inside factories (occupational noise) but also in the adjacent residences. The energy consumed in these processes is partly converted into radiated noise. Although the efficiency of this conversion has decreased, thanks to improved noise control and better design, the overall consumption of energy by human societies has grown exponentially since the early days of mechanization. The third, and probably dominant, development has been the growing demand for mobility. The introduction of private cars and public transport allowed people to choose to live further from the site of their occupation and from their relatives, that is, from the place they were born. Current urban planning is dominated by the assumption that citizens are prepared to travel, by car, for shopping, leisure, social visits, and work. Even more dramatic is the growth of freight transport across the globe, following the globalization of the world's economies, the fourth important development. Nowadays, transport consumes most of the world's energy and, evidently, is by far the most significant cause of human exposure to environmental noise. This includes road transport as well as air traffic, rail transport, and shipping, with road traffic clearly being the dominant transport mode.

Indicators for Environmental Noise

The current article intends to estimate the world population's exposure to environmental noise. In doing so, appropriate indicators for environmental noise exposure have to be selected. It should be emphasized that the choice of an appropriate indicator depends on the effects that it intends to describe or predict. A range of different indicators have been introduced and discussed in this encyclopedia. The assessment in this article adheres to the family of indicators describing effects that become evident only after a long period of time. This applies to noise-induced hearing loss, general noise annoyance, and long-term sleep disturbance, and thus, in a more general way, to health effects. Indicators for instantaneous or maximum noise levels, which are more likely to predict effects such as complaints and awakenings, have not been applied. The most commonly used indicator for long-term exposure is the equivalent sound level Leq: the imaginary continuous sound level that, over a given time interval T, contains the same amount of sound energy as the measured sound level that varies over time. The equation reads

  L_{\mathrm{eq}} = 10 \log_{10}\left( \frac{1}{T} \int_0^T \frac{p^2(t)}{p_0^2}\, \mathrm{d}t \right)    (1)

where p0 = 2 × 10⁻⁵ N m⁻².
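To make Eq. (1) concrete, the following minimal Python sketch (ours, not part of the original article) computes Leq from discretely sampled sound pressures; the sampling step and sample values are illustrative assumptions.

```python
import math

P0 = 2e-5  # reference sound pressure p0 in N/m^2

def leq(samples, dt):
    """Equivalent sound level per Eq. (1), from pressure samples p(t)
    taken every dt seconds over the interval T = len(samples) * dt."""
    T = len(samples) * dt
    integral = sum(p * p for p in samples) * dt  # approximates the integral of p^2(t) dt
    return 10 * math.log10(integral / (T * P0 ** 2))

# A constant pressure of 0.02 N/m^2 corresponds to 60 dB:
print(round(leq([0.02] * 1000, 0.001), 1))  # 60.0
```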

Leq is the basic component of the common European indicator Lden, which is supposed to be used in the framework of the Environmental Noise Directive. This indicator is the time-weighted average of the equivalent noise levels over three different periods of the diurnal day, viz.:

• the day period (length 12 h, exact period to be chosen by the member state, typically between 0700 and 1900 h);
• the evening period (length 4 h, exact period to be chosen by the member state, typically between 1900 and 2300 h);
• the nighttime period (length 8 h, exact period to be chosen by the member state, typically between 2300 and 0700 h).

The exposure during the evening period shall be penalized by 5 dB, which is supposed to reflect the fact that the majority of people use this period for relaxation, as opposed to the day period, which they spend at work or other activities; in the evening, they are therefore more likely to be annoyed. The exposure during the nighttime period shall be penalized by 10 dB, which is supposed to reflect the fact that the majority of people use this period for sleep; in the night, they are therefore more likely to be disturbed in their sleep. The equivalent level over the 8-h nighttime period is also separately indicated as Lnight. It should be stressed that in these definitions, although the integration time is 12, 8, or 4 h, the intention is to describe the average exposure over a very long series of consecutive days, evenings, and nights. This is to take account of the influence of weather conditions, which is generally very significant. For example, at a distance of 100 m and more from a constant sound source in the open, for instance a busy main road, the exposure under downwind conditions (wind blowing from the source to the receiver) is substantially higher (typically up to 20 dB) than the exposure under adverse wind conditions. The indicated equivalent levels are intended to average energetically over all these conditions, where the frequency of occurrence of these conditions is determined by the local climate. The averaging is intended to be representative typically for a whole year. Combining these arguments, the definition of the common European indicator Lden reads

  L_{\mathrm{den}} = 10 \log_{10}\left[ \frac{1}{24} \left( 12 \cdot 10^{L_{\mathrm{day}}/10} + 4 \cdot 10^{(L_{\mathrm{evening}}+5)/10} + 8 \cdot 10^{(L_{\mathrm{night}}+10)/10} \right) \right]    (2)
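As a worked illustration of Eq. (2) (a sketch of ours, with invented period levels): when the evening is 5 dB and the night 10 dB quieter than the day, the penalties exactly offset the differences and Lden equals Lday.

```python
import math

def lden(l_day, l_evening, l_night):
    """Day-evening-night level per Eq. (2): a 24-h energetic average
    with a 5 dB evening penalty and a 10 dB night penalty."""
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

print(round(lden(65, 60, 55), 1))  # 65.0 dB: penalties offset the quieter periods
```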


In a position paper, issued by the European Working Group on Health and Socio-Economic Aspects of Noise under the European Commission’s Noise Steering Committee, it was argued that Lden is currently the preferred indicator to predict general annoyance. The current article adheres to this recommendation and will use Lden as the indicator for noise exposure.

Sources of Environmental Noise and Their Relevance

The objective of this article is to estimate the global exposure to noise, and this limits the scope of the estimate to its essentials. The first limitation lies in the distinction between the urban and the rural environment (Fig. 1). It is obvious that in the urban environment the population density is high, and so is the density of motorized traffic and industrial activities. In rural areas, the population density is generally much lower, and so is the density of the traffic. This distinction is supported by the results of the noise mapping, for example in the Netherlands. The Netherlands is known to be a densely populated country, even in the rural parts outside of agglomerations. In addition, both the national road and railway networks are widely spread and carry huge amounts of traffic. Nevertheless, according to the 2007 mapping results, the number of people exposed to noise from roads and railways outside the six agglomerations is far smaller than within these agglomerations (Table 1). For the distinction between major and minor main roads, the lower limit of 6 million vehicles per year was ignored in this assessment; instead, the complete national road network was included in the mapping. The small contribution of road traffic noise in rural areas is therefore, if anything, an upper estimate. The data in the table lead to the following conclusions:

• The overall exposure to railway noise is far less than the exposure to road traffic noise (20% vs. 80% of the total). In addition, dose–response relationships show that, at similar Lden levels, railway noise is less annoying than road traffic noise, at least for levels above 55 dB Lden. So, for annoyance-related exposure, road traffic is by far the dominant noise type.
• The exposure to road traffic noise in rural areas may be ignored in comparison to the exposure to road traffic noise in cities, as it is less than 10% of the total. This applies to a densely populated country with a dense road network, such as the Netherlands, so it will certainly apply to most other countries.

Fig. 1 The two main environments of the world: urban and rural. The rural environment combines low population density with low traffic density; the urban environment combines high population density with high traffic density; transport links the two.

Table 1 Noise exposure data from the Netherlands, showing the results in 5 dB exposure classes for six agglomerations with more than 250,000 inhabitants each

                           55–60 dB   60–65 dB   65–70 dB   70–75 dB   >75 dB     Total
Roads
  In agglomerations         827,900    673,300    344,100     44,500    1,000   1,890,800
  Outside agglomerations    128,600     43,600     13,200      1,900      100     187,400
Railways
  In agglomerations         118,600     60,700     25,000      8,800    1,000     214,100
  Outside agglomerations    134,000     76,500     38,100     12,500    3,000     264,100


• This difference between exposure in rural and urban areas does not show for railway noise. Obviously, the ratio depends on the type of country and the density of its rail network, both inside cities (light rail and tramways) and outside cities. In the Netherlands, the railway network outside cities is quite dense, but inside agglomerations it is not. This is reflected in the figures: for the Netherlands, roughly 10% of the total number of exposed persons are exposed to railway noise in urban environments, and 10% are exposed to railway noise in a rural environment. This may be different in other countries.
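These shares follow directly from the totals in Table 1; the small Python sketch below (illustrative only) reproduces the percentages quoted in the conclusions above.

```python
# Totals from Table 1 (exposed persons at Lden >= 55 dB, the Netherlands)
roads_in, roads_out = 1_890_800, 187_400
rail_in, rail_out = 214_100, 264_100

total = roads_in + roads_out + rail_in + rail_out    # 2,556,400
print(f"roads, total:    {(roads_in + roads_out) / total:.0%}")  # ~81%
print(f"railways, total: {(rail_in + rail_out) / total:.0%}")    # ~19%
print(f"roads, rural:    {roads_out / total:.0%}")               # ~7%, i.e. < 10%
print(f"rail, urban:     {rail_in / total:.0%}")                 # ~8%
print(f"rail, rural:     {rail_out / total:.0%}")                # ~10%
```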

To account for the situation in other countries, some basic data from the Netherlands have been compared to those of other countries. The results are shown in Table 2. This table clearly shows that the Netherlands has not only by far the highest population density but also the highest road network density, expressed in kilometers of road network per square kilometer of area. Still, even in this country, road traffic outside agglomerations does not contribute significantly, as urban road traffic is far more important. If this holds for the Netherlands, then it certainly holds for other European Union countries as well. Within the urban environment, a wide range of noise sources is present. Some of these are known to cause complaints, for instance mopeds and scooters, music from bars, lawn mowers, and leaf blowers. But complaints do not directly relate to exposure, and when these sources are looked into more closely, the tendency is that they occur only incidentally and locally. Therefore, if Lden is taken to be the indicator, other sources are far more relevant. These relevant sources are the ones addressed in the Environmental Noise Directive, that is, road traffic, rail traffic, air traffic, and industry. But within this group of four sources, road traffic is by far the dominant source in any motorized urban environment. Both the infrastructure and the vehicle density are much denser than for rail traffic. Industrial activities are more and more located in dedicated areas, and air traffic is directed more and more into dedicated corridors, avoiding densely populated areas. These observations are confirmed by the results of the noise mapping activity carried out in the Netherlands under the requirements of the European Noise Directive. The results for approximately 60 cities in 6 agglomerations with more than 250,000 inhabitants are shown in Fig. 2. The data analyzed in this article are results from this mapping exercise, which focused on the domestic environment of people: the exposure assessed is the one people experience in and around their homes.

Table 2 Basic data for three European countries

Country           A = population density   Total area   Main road      B = network density   A × B
                  (persons/km²)            (km²)        network (km)   (km/km²)
The Netherlands   396                       41,528       3,000         0.072                 28.6
France            117                      547,030      10,908         0.020                  2.3
Germany           231                      352,022      12,749         0.036                  8.4

The Netherlands is by far the most densely populated country, with the densest road network.

Fig. 2 Numbers of exposed persons per noise exposure class (Lden) and noise source (road traffic, rail traffic, air traffic, industry) in 60 cities in the Netherlands: total number of citizens exposed in 2006, for classes from 55–60 dB to >75 dB.


Neighbor noise is excluded, since it depends on social behavior and building quality. These two main parameters may differ strongly from one country to another, or even from one residential area to another. Neighbor noise is currently a serious problem, but it is beyond the scope of this article. Obviously, there are other environments where people can be exposed to noise, such as their working environment (that noise is usually indicated as occupational noise), during leisure and recreation, and during education. Occupational noise is excluded from this assessment, as the exposure depends highly on the type of industry. Also, the long-term effect of occupational noise may be hearing impairment (for exposure levels over 80 dB(A)) or annoyance, the latter tending to affect productivity. Both hearing impairment and productivity are the responsibility of the employer and the employee in dialogue. As this setting is quite different from that of environmental noise, occupational noise will not be discussed further. For leisure noise, the exposure to excessive noise, as in open-air live concerts and discotheques, is considered to be quite harmful but nevertheless voluntary, and it is not the subject of the current article. Unwanted exposure to noise during leisure is at the other end of the spectrum, in cases where people seek quiet surroundings to relax and are then disturbed by noises. For the assessment of noise in quiet areas, it is the character of the sound, and the extent to which this sound is foreign to the surroundings, that is of concern, rather than its sound pressure level. This noise is also excluded from the current assessment. Finally, noise in schools is an issue of serious concern, as learning abilities in children have been demonstrated to be affected by excessive noise. It is fair to assume that noise exposure levels for educational buildings in urban areas do not differ, on average, from exposure levels for dwellings. For lack of information, it would be impossible to assess with accuracy the number of students, be they children or adults, exposed to certain noise levels. Although the Dutch implementation of the European Environmental Noise Directive required agglomerations to assess the number of schools within classes of noise exposure, the data produced appear not to be reliable at all. For this reason, educational noise is also excluded from the assessment. Summarizing, the current article refers to the exposure to noise from roads, railways, air traffic, and industries, as observed in the domestic environment. Within the four mentioned sources, road traffic is by far the dominant source in any motorized urban environment. Both the infrastructure and the vehicle density are much denser than for rail traffic. Industrial activities are more and more located in dedicated areas, and mapping results clearly show that they are a noise source of relatively small importance. For air traffic, the situation may be more complex; the END data reflect the impact of major airports only. Expressed in numbers of exposed persons, aircraft noise is of lesser relevance than road traffic noise, although dose–response relationships show that aircraft noise is more annoying, at comparable noise levels, than road traffic noise. As this article is confined to noise exposure, the assessment is limited to road traffic noise. In the following sections, an effort is made to derive a global estimate for the exposure of the world population to noise, on the basis of these conclusions.

Basis for Extrapolation

As has been seen in the previous sections, road traffic in the urban environment is the dominant noise source. The exposure to noise is dominant in urban areas for two reasons:

• The population density is much higher than in a rural environment, so for a given area, more people are likely to be exposed.
• The traffic density is much higher than in a rural environment.

So these two parameters, viz. population density and traffic density, could be used as a basis for extrapolation. In a simplified theoretical approach, assuming an equal distribution of both people and traffic over a given area, and ignoring screening by building blocks, it can be demonstrated that the noise exposure depends solely on the population density, the sound production of a vehicle in operation, and the number of vehicles operating within the area. The latter can be related to the total traffic performance, expressed as the area-based annual number of passenger kilometers. It turns out, in a first-order approach, that these two parameters are interrelated. This interrelation was investigated in practice for seven large European agglomerations, grouped under the heading "Millennium cities", ranging from Vienna (395 km², 1,600,000 inhabitants) to Manchester (1,270 km², 2,600,000 inhabitants). The results are shown in Fig. 3. The correlation is very good. This simple and straightforward analysis shows that, at least for road traffic, the population density is a good descriptor of the traffic density and therefore an appropriate basis for the extrapolation of noise exposure data. Interestingly, the exact same conclusion was drawn, on an entirely empirical basis, in a 1974 report by W. J. Galloway, K. McK. Eldred, and M. A. Simpson of Bolt, Beranek and Newman, a consulting company, for the US Environmental Protection Agency.

Fig. 3 Road traffic performance (person-kilometers per km²) as a function of population density (persons per km²) in seven large European cities. Regression: y = 3707.2x + 167,094, R² = 0.9654.

Galloway's Findings

Galloway and his colleagues selected 100 sites in total, in 14 different cities in the United States, choosing sites for as large a variety of traffic and population density as possible. At each of these 100 sites, the day–night level Ldn was assessed by continuous measurement over 24 h. In addition to this level, the values of a number of other noise indicators were assessed. The sites were selected so that urban road traffic noise was the predominant source of environmental noise, but no site was selected within 300 m of a major road. The resulting levels were then related to the population density in the area where the site had been chosen. The

resulting graph is shown in Fig. 4. By correlating the 24-h Ldn to the logarithm of the population density, the authors found a correlation coefficient of 0.722 (R² = 0.52), which they considered satisfactory, certainly when compared to the correlations found with other indicators such as Lmax or statistical levels such as L10 or L50. In addition, they compared older studies with their own work and found that the slope of their curve had become somewhat steeper, viz.:

Previous curve: Ldn = 9 log r + 26
Current curve: Ldn = 10 log r + 22

Note that the population density here is in heads per square mile.
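Expressed in code (a sketch of ours, not Galloway's), the two regressions read as follows; note that they intersect at 10,000 persons per square mile, the newer curve being steeper above that density.

```python
import math

def ldn_previous(density):  # earlier studies: Ldn = 9 log r + 26
    return 9 * math.log10(density) + 26

def ldn_current(density):   # Galloway et al. (1974): Ldn = 10 log r + 22
    return 10 * math.log10(density) + 22

for r in (1_000, 10_000, 50_000):  # persons per square mile
    print(r, round(ldn_previous(r), 1), round(ldn_current(r), 1))
# 1,000: 53.0 vs 52.0 | 10,000: 62.0 vs 62.0 | 50,000: 68.3 vs 69.0
```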

Validation of the Approach

Since the publication of Galloway's report, other authors have validated the approach, applying it to more recent data. Twenty-five years after Galloway, Catherine Stewart et al. monitored 49 different sites in Baltimore County and Baltimore City. The sites ranged from 2,900 to 40,000 in population density (heads per square mile). They found a similar correlation between Ldn and r, although they suggested a slightly steeper regression line:

Stewart's curve: Ldn = 11 log r + 14.45

They recommended that a distinction be made between summer and winter situations; in summer they found up to 4 dB higher exposure levels (at low absolute levels) than in winter. A very recent (2008) report by Gjestland (Sintef, Norway) once more applied Galloway's approach, relating background noise levels throughout Europe to population densities. In this study, background noise is anything other than aircraft noise, so it coincides mainly with road traffic noise. One of Gjestland's conclusions is that the population density is a good predictor of the noise levels in urban areas; Galloway's equation is valid for locations in urban areas not directly exposed to a major noise source. Gjestland did not attempt to derive an updated regression line. He found other publications applying the same regression to areas with population densities as low as 50 per square kilometer. He proposes to apply the approach to the whole of Europe, distinguishing population densities in grids of 1,000 km². Both studies support the idea that population density is a good predictor of noise exposure, in both urban and rural areas, probably with the exception of areas within 300 m of major noise sources.
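For comparison, a quick evaluation (our illustrative sketch) of Stewart's curve against Galloway's over Stewart's observed density range:

```python
import math

def ldn_galloway(r):  # Galloway et al., 1974 (r per square mile)
    return 10 * math.log10(r) + 22

def ldn_stewart(r):   # Stewart et al., Baltimore data
    return 11 * math.log10(r) + 14.45

for r in (2_900, 40_000):  # the density range of Stewart's 49 sites
    print(r, round(ldn_galloway(r), 1), round(ldn_stewart(r), 1))
# 2,900: 56.6 vs 52.5 dB | 40,000: 68.0 vs 65.1 dB
```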

Other Approaches

Clearly, since 1974, there have been other attempts to assess the exposure to noise for a specific population. For instance, a paper by J. Hooghwerff, G. J. van Blokland, and M. Roovers, published in 1998, related the percentage of exposed citizens to the size of the city (expressed in total number of inhabitants). Extrapolating on that basis, they estimated that 32% of the population of the EU-15 was exposed to road traffic noise levels of more than 55 dB(A).

Fig. 4 Correlation between population density (thousands of people per square mile, logarithmic scale) and day–night sound level (dB), showing the current-study regression (10 log p + 22 dB) and that of previous studies (9 log p + 26 dB). From Galloway, W., Eldred, K., and Simpson, M. (1974). Population distribution of the United States as a function of outdoor noise. US Environmental Protection Agency Report No. 550/9-74-009. Washington, DC: US EPA.

For the current article, however, it is preferred to base the estimate on the END mapping data and to extrapolate these on the basis of population density, similar to Galloway's approach. The data for urban road traffic noise will be used, and the figures extrapolated using the population density as the only parameter. Noise sources other than road traffic, and rural areas, are of far lower significance and are left out of consideration for the time being.

Updated Approach

In the updated approach presented in this article, data have been collected from the noise mapping activity carried out in the European Union member states under the European Noise Directive. Unfortunately, there were enormous delays and also some misunderstandings in many member states. The data available and considered reliable at the moment this article was edited comprise only 22 agglomerations in the United Kingdom, 6 in the Netherlands (where the data for the 60 different cities constituting these agglomerations were analyzed), 3 in Sweden, 1 in Finland, 6 in Germany, and 2 in France (where the data for 20 Paris urban sections, the so-called arrondissements, were available separately). For all of these, the population density was acquired from public sources (mostly the Internet) and related to the noise exposure data from the noise mapping. Generally, the noise exposure data are available as numbers of exposed citizens in 5 dB incremental classes of Lden, from 55 dB up to over 75 dB. The classification required by the END posed one problem: it is fair to assume that a number of citizens would be exposed to noise levels between 50 and 55 dB. However, there is good reason to believe that, in a city with motorized traffic, hardly any citizen would have a noise exposure lower than 50 dB. This assumption was confirmed by a paper by Tor Kihlman and Wolfgang Kropp (Limits to the noise limits?). Combining these assumptions, the data for the noise classes from 55 dB upward were taken from the mapping data set, whereas it was assumed that the total remaining population of the city would fall in the 50–55 dB exposure class. This then allows, for a given city or agglomeration, estimation of the "average" exposure of its citizens, by weighting the (energetic) mean value of the noise exposure in each exposure class with the number of citizens included in that class. This was carried out for the 114 data sets described previously. The average noise exposure was then related to the population density, as shown by the graph in Fig. 5.
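The weighting procedure can be sketched as follows (Python, with invented class counts for a hypothetical city; the mapping data themselves are not reproduced here):

```python
import math

# Hypothetical city: exposed residents per Lden class (dB); the
# 50-55 dB class holds the remaining population, per the assumption above.
classes = {(50, 55): 120_000, (55, 60): 80_000, (60, 65): 50_000,
           (65, 70): 20_000, (70, 75): 5_000}

total = sum(classes.values())
# Energetic mean: weight each class midpoint (as sound energy) by head count.
energy = sum(n * 10 ** (((lo + hi) / 2) / 10) for (lo, hi), n in classes.items())
lden_mean = 10 * math.log10(energy / total)
print(round(lden_mean, 1))  # ~61.1 dB average exposure for this city
```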


[Figure: average exposure level Lden (45–75 dB) plotted against population density (as 10 log density, roughly 25–50) for Galloway et al. and for the 114 EU agglomerations; the fit to the EU data is y = 0.5248x + 39.683 with R² = 0.5404.]

Fig. 5 Comparison between Galloway's regression curve and the noise mapping data for 114 European agglomerations.

The graph indicates that there is a reasonable correlation between population density and average exposure (R² = 0.54, comparable to what Galloway et al. found) and that the regression curves can be stated as follows:

Galloway et al., 1974: Lden,mean = 10 log r + 26 dB (r = population density per square kilometer)

END analysis, 2008: Lden,mean = 5 log r + 40 dB

It might be coincidental, but it is striking that the more recent data lead to a curve that, for noise exposures above 55 dB, is somewhat less steep. This could be explained as an effect of the noise legislation that has been deployed in most European Union member states since the 1980s. Legislative limit values are often around 55 dB, which would have caused levels above that limit to occur more rarely, and exposure levels below that limit to occur more often.
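To see how the two curves differ, the short sketch below (illustrative only; the sample densities are arbitrary) evaluates both regressions. Setting 10 log r + 26 = 5 log r + 40 gives log r = 2.8, i.e., the curves intersect near 630 persons per square kilometer at roughly 54 dB, so the 2008 fit predicts lower average exposures than Galloway's curve in denser cities and slightly higher ones in sparse settlements, consistent with the flattening noted above.

import math

def lden_galloway(density):
    """Galloway et al. (1974): Lden,mean = 10 log r + 26 dB."""
    return 10 * math.log10(density) + 26

def lden_end(density):
    """END analysis (2008): Lden,mean = 5 log r + 40 dB."""
    return 5 * math.log10(density) + 40

for rho in (100, 630, 2500, 5500, 20000):
    print(f"{rho:6d}/km2: Galloway {lden_galloway(rho):4.1f} dB, END {lden_end(rho):4.1f} dB")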

Analysis per Exposure Class

Referring to the theoretical approach, the ratio of exposed residents in consecutive classes of 5 dB increments depends solely on the population density and the amount of screening typical for the city of concern. As this amount of screening depends on the density, orientation, and height of the housing blocks, it is likely to differ somewhat from one noise class to the other. It would also differ from city centers to suburban areas. In the extrapolation, no attempt has been made to assess this influence; instead, all the available data have simply been averaged to assess the percentages of exposed residents in the different noise exposure classes. For this analysis, the same data set of 114 cities and agglomerations in Europe was used. The results are presented in Fig. 6. Note that the vertical axis differs between panels. Obviously, the correlation coefficient is much lower than that for the average exposure level. Nevertheless, the relations offer a chance to extrapolate the data from the 114 cases purely on the basis of population density.
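The per-class relations can be applied as simple linear functions of 10 log (density). The sketch below encodes the fitted lines read off Fig. 6; note that the assignment of the two middle equations to the 60–64 and 65–69 dB classes is inferred from the panel layout and from the expected decrease of the exposed share with level, so it should be treated as an assumption rather than a confirmed mapping.

import math

# Per-class linear regressions from Fig. 6: share = a * (10 log10 density) + b.
# Coefficients are read off the figure; their assignment to classes is an
# inference (see lead-in), chosen so the shares decrease with level.
CLASS_FITS = {
    "55-59 dB": (0.0175, -0.3824),
    "60-64 dB": (0.0133, -0.2603),
    "65-69 dB": (0.0134, -0.3846),
    "70-74 dB": (0.0031, -0.0856),
    ">75 dB":   (0.0013, -0.0409),
}

def exposed_shares(density):
    """Estimated share of residents per Lden class for a density in persons/km2."""
    x = 10 * math.log10(density)
    return {cls: max(0.0, a * x + b) for cls, (a, b) in CLASS_FITS.items()}

for cls, share in exposed_shares(5500).items():   # e.g., an Istanbul-like density
    print(f"{cls}: {share:.1%}")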

Extrapolation to World Level

An analysis was carried out on the population densities in the largest agglomerations in the world. Data for 294 of the largest agglomerations were collected. In total, these agglomerations include 1 billion inhabitants, or approximately 12% of the world's population. The average population density for these 294 agglomerations is 5500 persons per square kilometer. Examples of agglomerations with such a population density are Istanbul, Tel Aviv, and Osaka. For each of the 294 cities, the numbers of exposed citizens were calculated using the above-mentioned regressions and the population density of the agglomeration of concern. Thus, the noise exposure for the 1 billion people living in the 294 agglomerations was assessed, based on real population density data. As a basis for further extrapolation, it was assumed that 50% of the world's population lives in urban agglomerations. This assumption is supported by current demographic data. Another important assumption is that modern cities throughout the world show the same level of car mobility and spatial planning.


[Figure: five scatter plots, one per Lden exposure class, of the percentage of exposed persons against 10 log (density). The fitted lines are: 55–59 dB: y = 0.0175x − 0.3824 (R² = 0.4522); 60–64 dB: y = 0.0133x − 0.2603 (R² = 0.1329); 65–69 dB: y = 0.0134x − 0.3846 (R² = 0.5253); 70–74 dB: y = 0.0031x − 0.0856 (R² = 0.263); >75 dB: y = 0.0013x − 0.0409 (R² = 0.3565).]

Fig. 6 Regressions per noise exposure class: percentage of exposed persons relative to population density.

This may not be entirely true for major cities in developing countries, but recent developments in China, for example, suggest that the differences will disappear rapidly. The data for the 294 world agglomerations were then extrapolated to the world level. This results in a total of 2 billion people in the world being exposed to urban road traffic noise with Lden levels of more than 55 dB. According to the latest insights into the health effects of noise, these 2 billion world citizens are at risk of suffering from health effects due to environmental noise. The distribution of these numbers over the 5 dB exposure classes is presented in Fig. 7. For railway noise, an upper limit of the estimate is approximately 25% of this number. So, as a very rough estimate, the total number of people in the world exposed to railway noise above 55 dB Lden is almost certainly less than 500 million. For aircraft noise and industrial noise, the figures are still much lower.
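As a rough cross-check, the sketch below is a simplified reconstruction, not the published calculation: the article applied the regressions city by city to real densities, whereas this sketch uses only the average density of 5500 persons/km², and the world population figure of about 6.7 billion in 2008 is an assumption.

import math

CLASS_FITS = {  # per-class fits read off Fig. 6 (see the earlier sketch)
    "55-59 dB": (0.0175, -0.3824),
    "60-64 dB": (0.0133, -0.2603),
    "65-69 dB": (0.0134, -0.3846),
    "70-74 dB": (0.0031, -0.0856),
    ">75 dB":   (0.0013, -0.0409),
}

WORLD_POP = 6.7e9          # approximate 2008 world population (assumption)
URBAN_FRACTION = 0.50      # 50% of the world's population lives in urban areas
MEAN_DENSITY = 5500        # persons/km2, average of the 294 agglomerations

x = 10 * math.log10(MEAN_DENSITY)
urban_pop = WORLD_POP * URBAN_FRACTION
total_exposed = 0.0
for cls, (a, b) in CLASS_FITS.items():
    exposed = max(0.0, a * x + b) * urban_pop
    total_exposed += exposed
    print(f"{cls}: {exposed / 1e9:.2f} billion")
print(f"Total above 55 dB: {total_exposed / 1e9:.1f} billion")

This shortcut yields roughly 2.2 billion people above 55 dB, with about 0.9 billion in the lowest class, in reasonable agreement with the approximately 2 billion quoted above and with the distribution shown in Fig. 7.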

Extrapolation to EU-27 Level

A similar extrapolation was carried out for the 27 European Union member states. The basic analysis included 38 of the largest agglomerations in the EU-27, with a total of 100 million inhabitants. The average population density is 2500 persons per square kilometer. Typical examples of agglomerations with such a population density are Milan, Budapest, and Düsseldorf/Essen. Seventy-five percent of the EU-27 population of 490 million citizens lives in an urban environment.


[Figure: bar chart "World's exposure to urban road traffic noise" showing exposed citizens (thousands, up to about 900 000) per exposure class Lden: 55–60, 60–65, 65–70, 70–75, and >75 dB.]

Fig. 7 Numbers of world citizens exposed to urban road traffic noise, by exposure class.

[Figure: bar chart "Noise exposure to urban road traffic in EU 27" showing the number of citizens (thousands, up to about 60 000) per noise exposure class Lden: 55–60, 60–65, 65–70, 70–75, and >75 dB.]

Fig. 8 Numbers of European (27 member states) citizens exposed to urban road traffic noise, by exposure class.

Based on these figures, it was derived that a total of 125 million citizens in the 27 European Union member states, or slightly over 25% of the population, are exposed to Lden noise levels of more than 55 dB due to urban road traffic. The distribution of the population over noise exposure classes is shown in Fig. 8. For railway noise, a very rough estimate is that no more than 30 million European citizens are exposed to railway noise with exposure levels over 55 dB Lden. Again, for aircraft and industrial noise, the figures are much lower.

The Quality and Reliability of the Estimate

The quality and reliability of the above-mentioned estimate depend on the quality of the basic data set and the reliability of the extrapolation. In general, the quality of noise exposure data depends strongly on the methods chosen to assess them. For the Environmental Noise Directive, all assessment methods, that is, those for each of the four sources, are based on computational prediction methods. Again, the reliability and accuracy of the results achieved with such methods depend on the quality of the input data (traffic data, geographical data, meteorological data, etc.) and on the skills and expertise of the user. Unavoidably, reported data on noise exposure contain errors and uncertainties. The main sources are introduced briefly in the following text, and an estimate of their relative importance is presented.

Indicators

Different reports use different indicators, which cannot be compared directly. This problem has been solved for the European Union noise mapping data set, as the application of the common indicator Lden was obligatory. However, it may not be solved for many other data sets.

Prediction or Measurement

As was argued previously, computed data are generally preferred over measured data, as the former take better account of long-term changes in weather conditions, avoid the problem of disturbing noise from other sources, and show better reproducibility. Again, the data sets applied for the current estimate all stem from computed noise exposure data.


Systematic Uncertainties

The approach required or chosen for the production of noise exposure maps under the European Union directive is likely to include some systematic errors:

• The data refer to a standard receiver height of 4 m above local ground. For high-storey dwellings, the exposure is likely to be underestimated, as there is less effective screening at greater heights.
• The noise mapping data include only the most important roads, even in agglomerations, because small city streets are generally ignored. This is likely to lead to an underestimation of the exposure.
• The traffic speeds are usually supposed to be within the local speed limits. As this is not always the case, the exposure is likely to be underestimated.
• The mapping data usually refer to the most exposed façade of every dwelling, thus overestimating the exposure for dwellings that are at the quiet side of the same housing block.

The biggest source of uncertainty in noise predictions is the weather. The general approach in most models is to base calculations of noise propagation and excess attenuation on favorable weather conditions (downwind, temperature inversion) and to correct the resulting noise levels downward to account for the periods of time with unfavorable conditions. This approach presumes a known probability distribution of the occurrence of favorable and nonfavorable conditions. The approach is typically based on close-to-sea climate conditions. In mountainous regions and densely built-up areas in towns, the approach may lead to nonrepresentative results. The errors in predicted long-term average noise levels may range from 1 to probably more than 5 dB, leading to an offset of approximately 30% in the numbers of exposed persons. Fortunately, the errors are of a statistical nature, and standard deviations would be much lower. The above-mentioned weather-induced deviations occur particularly in the free field. In an urban environment, these deviations are likely to be much lower.
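The distinction between a systematic bias and random scatter can be illustrated with a small Monte Carlo sketch. This is purely illustrative: the 57 dB mean and 5 dB spread of the "true" levels, and the 3 dB error magnitudes, are assumptions, not values from the studies cited here. It shows that a consistent bias of a few dB shifts the count above 55 dB by tens of percent, whereas zero-mean random errors of the same size largely cancel out.

import random

random.seed(1)

# Assumed "true" Lden values for urban dwellings: mean 57 dB, spread 5 dB.
true_levels = [random.gauss(57, 5) for _ in range(100_000)]
baseline = sum(l > 55 for l in true_levels)

def exposed_count(bias, sigma):
    """Count dwellings predicted above 55 dB given prediction bias and scatter."""
    return sum(l + random.gauss(bias, sigma) > 55 for l in true_levels)

for bias, sigma in [(3, 0), (-3, 0), (0, 3)]:
    n = exposed_count(bias, sigma)
    print(f"bias={bias:+} dB, sigma={sigma} dB: "
          f"{(n - baseline) / baseline:+.0%} change in exposed count")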

Synthesis

The applied method of extrapolating noise exposure data on the basis of population density appears to be a straightforward approach for estimating, with an acceptable level of accuracy, the exposure to noise from road traffic in urban situations. This is by far the main source of environmental noise. The noise mapping data collected under the European Noise Directive represent a relatively reliable basis for this approach, and although they contain only the data of European agglomerations, the approach is globally applicable, since life in cities is becoming increasingly comparable throughout the world. Following this approach, it has been estimated that 2.5 billion people all over the world and 150 million European citizens are exposed to road and rail traffic noise levels that are considered potentially harmful. These alarming numbers should give rise to a renewed and strong policy of noise control.

See also: Combined Exposures to Noise and Chemicals at Work; Combined Transportation Noise Exposure in Residential Areas; Expressing the Significance of Environmental Exposures in Disability-Adjusted Life-Years (DALYs): The Right Answer to Wrong Questions?; Measuring Noise for Health Impact Assessment; Mental Health Effects of Noise; Monetary Valuation of Health Impacts From Noise; Noise Management: International Regulations; Noise Management: Soundscape Approach; Noise and Cognition in Children; Noise and Health: Annoyance and Interference; Sleep Disturbance in Adults by Noise.

Further Reading

Bijsterveld, K., 2008. Mechanical sound: Technology, culture and public problems of noise in the twentieth century. The MIT Press, Cambridge, MA.
Brown, A.L., 1994. Exposure of the Australian population to road traffic noise. Applied Acoustics 43, 169–176.
Galloway, W., Eldred, K., Simpson, M., 1974. Population distribution of the United States as a function of outdoor noise. US Environmental Protection Agency Report No. 550/9-74-009. US EPA, Washington, DC.
Gjestland, T., 2008. Background noise levels in Europe. Sintef ICT Report No. A6631 (commissioned by the European Aviation Safety Agency). Trondheim, Norway.
Hooghwerff, J., van Blokland, G.J., Roovers, M., 1998. Road traffic noise in EU-15. Paper summarizing the results of a study for the European Environment Agency under the title "Present state and future trends on transport noise in Europe."
Jamrah, A., Al-Omari, A., Sharabi, R., 2006. Evaluation of traffic noise pollution in Amman, Jordan. Environmental Monitoring and Assessment 120, 499–525.
Kihlman, T., 2005. Sustainable development in an urbanizing world: The noise issue. In: Proceedings of Inter-Noise 2005, Rio de Janeiro, Brazil, August 2005.
Kihlman, T., Kropp, W., 1998. A limit to the noise limits? In: Proceedings of the 16th International Congress on Acoustics, Seattle, USA, June 1998.
Korfali, S.I., Massoud, M., 2003. Assessment of community noise problem in greater Beirut area, Lebanon. Environmental Monitoring and Assessment 84, 203–218.
Ma, G., Tian, Y., Ju, T., Ren, Z., 2006. Assessment of traffic noise pollution from 1989 to 2003 in Lanzhou city. Environmental Monitoring and Assessment 123, 413–430.
Nolle, A., Pollehn, W., 1989. Geräuschbelastung der Bevölkerung durch Straßenverkehr. Zeitschrift für Lärmbekämpfung 36, 95–104.
Onuu, M.U., 2002. Road traffic noise in Nigeria: Measurements, analysis and evaluation of nuisance. Journal of Sound and Vibration 233 (3), 391–405.
Ortscheid, J., Wende, H., 2002. Lärmbelästigung in Deutschland. Zeitschrift für Lärmbekämpfung 49, 41–45.
Pandya, G.H., 2001. Urban noise: A need for acoustic planning. Environmental Monitoring and Assessment 67, 379–388.
Skinner, C., Grimwood, C., 2002. The National Noise Incidence Study 2000/2001 (United Kingdom). BRE Environment Report No. 206344f. BRE, Watford.


Staatsen, B.A.M., Nijland, H.A., van Kempen, E.M.M., de Hollander, A.E.M., Franssen, A.E.M., van Kamp, I., 2004. Assessment of health impacts and policy options in relation to transport-related noise exposures: Topic paper noise. RIVM Rapport 815120002.
Stewart, C.M., Russell, W.A., Luz, G.A., 1999. Can population density be used to determine ambient noise levels? The Journal of the Acoustical Society of America 105 (2), 942.
Tang, S.K., Chan, W.Y., 2003. Predictability of noise indices in a high-rise residential environment. The Journal of the Acoustical Society of America 114 (3), 1222–1226.
United States Environmental Protection Agency, 1978. Noise: A health problem. Office of Noise Abatement and Control, United States Environmental Protection Agency, Washington, DC.
Wende, H., Malow, M., 1996. Entwicklung der Geräuschbelastung der Bevölkerung in Deutschland. In: Portele, T., Hess, W. (Eds.), Fortschritte der Akustik, DAGA. DEGA e.V., Oldenburg, pp. 244–245.
Working Group on Health and Socio-Economic Effects of the European Commission, 2000. Position paper on EU noise indicators. European Commission, Environment Directorate-General, Brussels. ISBN 92-828-8953-X.

Relevant Websites

http://circa.europa.eu/Public/irc/env/d_2002_49/library?l=/strategic_december&vm=detailed&sb=Title - Circa: the site that publishes all the mapping information from the member states.
http://www.umweltbundesamt.de/verkehr/laerm/strassen-und-schienen-verkehr.htm - Data on noise exposure in the Federal Republic of Germany.
http://www.demographia.com - Demographia, demographic data.

Environmental Pollution and Human Health in Ancient Times
J Nriagu, University of Michigan, Ann Arbor, MI, United States
© 2019 Elsevier B.V. All rights reserved.

Introduction

Until fairly recently, the general health status of an individual was primarily determined by environmental and genetic factors. There were disagreements mostly with regard to (1) the semantic interpretations of environment and environmental health and (2) the relative contributions of genetic and environmental factors to the etiology and propagation of disease. It is only in the past decades that unfavorable social and lifestyle factors have gradually become the most significant causes of avoidable health loss.

Throughout human history, there have been a handful of times when major changes/transitions in human relationships with the environment occurred, and these thresholds have had major implications for health and cultural development. The seven epoch-making transitions and the approximate times when they began were as follows: the use of fire by hunter-gatherers (600,000 years before the present (BP)), farming (10,000 years BP), creation of cities (6000 years BP), exponential growth in the human population (1000 years BP), European colonialism (500 years BP), industrial revolution (250 years BP), and globalization (50 years BP). This article concentrates on what happened to human health at some of these transformative periods.

Most histories of environmental health begin with the Industrial Revolution toward the end of the 18th century and quickly move to the landmark publication of The Report from the Poor Law Commissioners on an Inquiry into the Sanitary Conditions of the Laboring Population of Great Britain by Edwin Chadwick in 1842. The Chadwick Report outlined in detail the wretched social and environmental conditions within the world's first industrial society and highlighted a number of phenomena concerning economic development, urbanization, and health within industrial settlements. This article views the changes ushered in by the Industrial Revolution as just another threshold in a gradually accelerating process: a continuum of human interference with the natural cycles that tends toward greater levels of ecological disequilibrium, which may threaten human survival on this earth.

The Hunter-Gatherers of Neolithic Time

Human antiquity varies markedly among the regions of the world, as does the cumulative impact on the environment. The hominid (human) line is generally believed to have evolved in Africa some 4–6 million years BP. The ape man (Australopithecus) was ill-equipped for extreme climatic conditions and was probably confined to Africa or, if they had independent origins in Asia, to tropical and subtropical areas. Its successors (Homo erectus) were much more versatile and were able to occupy large geographic regions of Africa, Asia, and Europe starting from approximately 1 million years ago. Since the techniques essential for survival in cold climates (especially fire, clothing, and footwear) had not been invented, their habitat was probably restricted to the Old World outside the frigid zones. The "wise man" (Homo sapiens) expanded to nearly all other parts of the habitable earth. The ability to live under arctic conditions enlarged the human habitat and made it possible for humans to reach North America approximately 12,000–20,000 years BP.

During the Pleistocene Epoch, Homo sapiens sapiens lived primarily as nomadic bands, moving about a loosely defined territory in small familial groups in search of food. They were gatherers, scavengers, and opportunistic hunters. Presumably they had no settled abode for much of the year, but under harsher climatic conditions used caves and simple dwellings. Under natural conditions, human beings were at the mercy of the environment, over which they had little control. Very early in human history, the conditions of human life were not different from those of wild animals. The expectation of life was short, and the birthrate and survival of children were curbed by scarcity of food and the hazards of their lives. Neither the external environment nor the reproduction rate was controlled, and the hunter-gatherers survived because the deaths were offset by an excess number of births.

Most textbooks maintain that traditional hunter-gatherers had very little impact on their environment, because of their low population density, high mobility, limited technology, subsistence economy, minimal needs and wants, intimate environmental knowledge and monitoring, and animistic worldview, attitudes, and values that are nature oriented. Evidence that the hunter-gatherers were in perfect harmony with the environment is equivocal. It is a truism that human life requires destruction of the living and consumption of the dead. Human beings must kill other life forms for their protein requirements and destroy their habitat to create livable space. In return, Nature is said to be red in tooth and claw, with only those equipped to meet the threats of predator, parasite, climate, and food shortages being able to survive and reproduce. Human impact in ancient environments was inevitable because, ultimately, the local ecosystem was the primary or raw natural resource that all societies rely on for their survival, adaptation, and welfare. The "wilderness" where the early hominids lived (tropical grasslands, semiarid deserts, and

Change History: December 2018. Jerome Nriagu, author of the article. This is an update of J. Nriagu, Environmental Pollution and Human Health in Ancient Times. In: Nriagu, J.O. (Ed.), Encyclopedia of Environmental Health. Elsevier, 2011, pp. 489–506.



arctic regions) was ecologically fragile and vulnerable. The condition of an ecosystem at a given time is the cumulative product of previous conditions, usually including human impacts that were culture-dependent. Because human beings and ecosystems are interrelated and interdependent, the impact on one species or food chain can influence others as well, much like a chain reaction. As humans colonized new regions, they arrived as a new top carnivore in the food web rather than emerging through the usual gradual process of predator–prey coevolution, and thus had a great advantage over prey that had no previous experience with them. Their weapons (particularly projectiles like the spear) allowed them to surprise and kill prey from a safe distance. It has been hypothesized that the dispersal of humans into new areas caused the massive faunal extinctions of the past 50,000 years, the timing depending on the region of the world. The controversial "blitzkrieg hypothesis" was prompted by the apparent coincidence between massive megafaunal extinctions and the times of human arrival, especially in Australia, the Americas, Madagascar, New Zealand, and Oceania. The more important question therefore is not whether the hunter-gatherers had any environmental impact, but its particular forms and magnitude and the extent to which it was reversible and allowed natural regeneration within a normal time period.

The discovery of fire approximately 1 million years ago gave human beings an unprecedented instrument to instigate environmental changes; it was humans' first means of manipulating ecosystems, and it allowed them to colonize cooler regions. At some point in their history, humans learned how to use fires deliberately and systematically to flush out game animals and to modify the vegetation. The resulting suppression of woody plants and the fertilizing effect of ash benefited the growth of herbaceous plants and improved their nutritional quality. These changes encouraged foraging species and raised the carrying capacity for game animals while facilitating travel and hunting by humans. The fires affected not only the vegetation but also the soils. Repeated clearing of woodlands and grasslands by burning led to soil erosion and landslides, which washed the topsoils into rivers and estuaries. Thus began an escalating cycle of ecological degradation where each modification of the environment was followed by additional human responses, which in turn further modified the environment. In Australia, for instance, there is substantive evidence to suggest that modification of the landscape by human foragers equipped with fire had burned down most of the indigenous forests in the millennia following the arrival of the first Aborigines. Areas on earth that were not receptive to fire, such as rain forests, deserts, and the polar regions, proved hard for humans to colonize.

Environmentally Attributable Diseases Among the Hunter-Gatherers

There is no direct evidence of human morbidity or mortality during prehistoric times, and educated guesses on disease prevalence have been based on three evidential sources: archeological remains, observations of primates in their natural habitats, and analogies with the health of hunter-gatherers who still exist. Evidence from wildlife biologists suggests that the major causes of death among the hunter-gatherers were in all likelihood related to trauma associated with food gathering (bites, falls, heat stroke, drowning, etc.), predation, poisonings, and infections by parasitic agents, including viruses, bacteria, protozoa, helminths, and arthropods. Examination of archeological material shows traumatic lesions to be common, with the site of the injury varying as one would expect with the nature of the habitat. Archeological remains also show weapon wounds and other evidence of interpersonal violence, suggesting temporal patterns of violence that were probably linked to inter-band rivalry and competition for resources. Records in eastern North America, for instance, show that violence occurred early in prehistory, long before the adoption of agriculture. The hunter-gatherers are believed to have suffered frequently from osteoarthritis, and evidence for osteomyelitis has been reported. The question of food sufficiency and nutrition among the hunter-gatherers remains controversial; some authorities claim that starvation occurred only infrequently, whereas others have concluded that there was a good deal of hunger, which often led to aggressive behavior.

The body of a hunter-gatherer served as host to biodiversity in the form of a multitude of internal and external parasites and other microscopic to small organisms that inhabited and visited it, such as bacteria, fungi, mites, lice, mosquitoes, ticks, and leeches. A number of "heirloom" parasites that were shared with other hominids would have included head and body lice, pinworms, and protozoa found in modern humans, as well as bacteria such as salmonella, typhus, and staphylococcus; these heirloom pathogens probably caused diseases under conditions of stress, such as during periods of starvation. Others acquired by direct and indirect contact with other animals became infective, leading to human reservoirs for diseases such as brucellosis, tuberculosis, relapsing fever, rickettsiosis, herpes, hepatitis, yellow fever, and malaria. Although infection might have been frequent, it probably was not the most common cause of death.

Unlike their primate cousins, who remained primarily vegetarian, humans diversified their diet to include the flesh of whichever animals they could find or catch, as well as a variety of plant products. It has been claimed that it was meat that made humans human, and the meat-rich diet preferred by humans has been implicated in the evolution of the brain and complex social behaviors. More important, from the environmental perspective, is the fact that meat provided a rich package of nutrients that reduced the time spent in foraging. Chimps spend approximately 75% of the waking day in search of plant foods, which are mainly low-energy yielding. The evolution to hunting and meat-eating provided energy-rich meals that took less time to digest and reduced the need for all members of the group to spend considerable portions of their lives in endless foraging.
The huge increase in consumption of carrion exposed the hunter-gatherers to serious risks: increased exposure to infections and toxins in the prey. The spatial range for foraging of the hunter-gatherers must have been limited, and some of the cultures would have been subjected to localized diseases causally linked to the characteristics of the geographic or geological environment where they lived. Large areas of the world are known today where the soils and water are characterized by significant deficits of certain essential microelements or by increased concentrations of toxic elements, which have led to local ecosystems having unfavorable effects on human health. The study of the effects of the natural environment on human health has been growing under the rubric of "medical geology," a sub-discipline that should rightly belong under the umbrella of environmental health. Microelementosis (or biogeochemical endemias) was recently coined to describe a class of diseases in which the etiology is primarily due to a deficiency or excess of microelements in the human organism or to their imbalance in the sense of anomalous ratios of the micro- and macroelements. Manifestations of human pathologies linked with microelements have been well documented in modern-day communities, but there is no reason to suggest that localized risk factors did not extend to prehistoric cultures as well. Examples of hypo- and hyper-elementosis that have been reported in areas foraged by the hunter-gatherers include goiter (iodine deficiency), endemic cardiomyopathy (or Keshan disease, caused by selenium deficiency), iron deficiency, zinc deficiency, and fluorosis. Epidemiological investigations have demonstrated the existence of pronounced negative correlations between consumption of food deficient in selenium and mortality from cancer of the large and small intestines, mammary gland cancer, cancer of the esophagus, and lung cancer. The fact that microelementosis is a risk factor for cancer has not been considered in previous discussions of the diseases of prehistoric people. Endemic cretinism, manifested in mental retardation, deaf-muteness, blindness, cross-eye, and retarded growth, has been associated with iodine deficiency during pregnancy. One would imagine that these diseases existed in some communities in ancient times.

At the other end, the geographic position of the cavemen sometimes placed them at risk of exposure to toxic metals. Investigations have revealed that near Broken Hill, in present-day Zambia, hominids who lived approximately 200,000 years ago suffered from lead poisoning. The reason for this illness was that lead oozed from the neighboring ore body into the spring near their cave.

Air pollution began with the lighting of fire by H. sapiens. Wood smoke is a complex mixture of a large number of gaseous and particulate air pollutants that pose risks to human health. Toxic components of wood smoke include volatile organic compounds, polycyclic aromatic hydrocarbons, fine particulate matter (PM), carbon monoxide, and sulfur oxides. The cavemen burned wood for heating and cooking as well as for driving away pests and unwanted animals. Walls of caves inhabited many thousands of years ago have been discovered covered by thick layers of soot. Lungs of mummified bodies from the Paleolithic Era are frequently found to be black, and the extent of the deposits suggests pulmonary impairments (anthracosis). With no ventilation, breathing inside many caves would have been difficult because of the thick smoke. It has long been known that wood-smoke exposure increases the burden of respiratory disease, which has been described under different names in different parts of the world, such as the so-called hut lung. The burden of wood-smoke disease would have varied significantly depending on cave characteristics, type of wood, and burning operation.
Recent research on skeletal remains has highlighted changes in the maxillary sinuses (sinusitis, a common upper respiratory tract disease) and in ribs, which may be used to assess the impact of environmental pollution on the respiratory system. In a bioarchaeological study of the past link between air quality and respiratory health, Charlotte A. Roberts examined skeletal samples from North America, England, and Nubia selected to represent different geographic locations, environments, and subsistence economies. She found that the frequency rates for maxillary sinusitis varied from 17.2% to 51.5% in the samples examined. The frequency range for males was 17%–37%, but the female frequency ranged more widely (18%–76%), and at most sites female rates exceeded those for males. A remarkable finding in Roberts' study was the high prevalence of respiratory disease in the hunter-gatherer populations: 18% for Indian Knoll, Kentucky (4570–3500 BP) and 39% for Bluff Mounds, Illinois (AD 800–1100). Although foragers probably spent more time outdoors than indoors, they nevertheless spent some time in temporary enclosed shelters (indoors) during the evenings, where wood or dry grass was burned for heating and lighting; the women probably cooked over open fires during the day or smoked the food for preservation and storage, and the men would have exposed themselves to smoke as they roasted their kills. The outdoor environment would have been contaminated with pollen and molds or aerosols from sources such as volcanoes and wild or deliberately set fires. Regardless of the actual routes of exposure, recent bioarchaeological evidence suggests that respiratory diseases with environmental etiology were common in Neolithic times.

Early Holocene and Transition to Agriculture

Even if the hunter-gatherers of the Paleolithic time were in relatively good harmony with the natural environment, where food and resources were plentiful, an irrevocable split between humans and Nature occurred with the introduction of agriculture approximately 10,000 years ago. Humans no longer regarded themselves as part of Nature, but believed that Nature was designed and created for their benefit. Where land and ecological services were not suitable, humans had the ability to alter them and make them useful. The idea that Nature was created by God to be used and dominated by humankind was born at about this time. People became aware of their independence from Nature as well as their distinctiveness from Nature. The beginning of this process in different regions has been dated from approximately 10,000 years ago in Melanesia to approximately 4500 BP in sub-Saharan Africa, with the developments in the Fertile Crescent of the Middle East between 11,000 and 9000 BP generally considered to be the most important. The onset of agriculture thus marked the beginning of natural degeneration, which has been expanding ever since. In the birthplace of agriculture in the Near East, people increasingly became adept at and aggressive in their endeavors to humanize the landscape.

A range of causal factors have been proposed to explain the origin of agriculture, including population pressure (the basic idea being that population growth forced foragers to adopt agriculture because wild resources became so scarce that eventually farming became a necessity); cultural progress (predicated on the assumption that agricultural life is inherently superior to foraging); environmental change (the end of the Pleistocene Epoch was marked by rapid environmental changes, extinction of many game species, rise of sea level, rapidly warming climate, increase in CO2 levels, etc.); and coevolution (the mutual evolutionary interaction of human culture and a plant or animal that was beginning to be subjected to the domestication process, a protodomesticate). What was incontrovertible was that for much of their history, human beings made little or no direct use of grasses for food. By 15,000 BP, however, hunter-gatherers of the Near East were harvesting wild grasses intensively and processing them by stone grinding. These grasses were C3 plants (wheat and barley), which saw increased productivity with time because of the increase in the CO2 content of the atmosphere from 200 to 270 ppm between 15,000 and 12,000 BP (see Ruddiman's hypothesis in the next paragraph), which could have improved the productivity of C3 plants by 25%–50%. By 10,000 BP, wheat and barley began to be domesticated in the Levant. Dogs and millets were first domesticated in northern China before 10,000 BP, pigs in western China before 9000 BP, rice in south China between 8000 and 10,000 BP, and sheep and wheat in northern China around 5500 BP. It was previously thought that the cradle of Chinese civilization was in the region around the middle Yellow River, but modern archeological studies increasingly support the idea that the origins of Chinese civilization are scattered all over the present-day country, and most centers sprouted in river plains where elites with a rich court culture ruled over masses of farmers who raised pigs and grew millet and wheat (in the cold northern regions) or rice (in the warmer southern regions). The domestication of sorghum and several other millets in Africa and of maize in Mexico also occurred at almost the same time.

William F. Ruddiman has advanced the argument that the Anthropocene (during which industrial-era human activities have altered greenhouse gas concentrations in the atmosphere enough to affect the earth's climate) began thousands of years ago as a result of the discovery of agriculture and subsequent technological innovations in the practice of farming. His hypothesis was based on two lines of evidence: (1) the orbitally controlled variations in CO2 and CH4 concentrations that had previously prevailed for several hundred thousand years cannot explain the anomalous increases in levels of these gases that developed in the middle and late Holocene, and (2) the initiation and intensification of human alterations of Eurasian landscapes began during the early to mid-Holocene and could be associated with the divergence of the ice-core CO2 and CH4 concentrations from the natural trends predicted by earth-orbital changes. Although Ruddiman's hypothesis still needs validation, it points to a critical link between agriculture and climate change, which has shaped much of the cultural development and human health in ancient times.

The ability to raise crops and livestock involved attachment to land, which encouraged the growth of permanent settlements that evolved into larger and more complex communities. The food and physical security provided the stimulus for population growth, which necessitated further expansion and intensification of agriculture. With the interdependent pattern of development that emerged, the transition from the nomadic hunter-gatherer mode of life to the settled farming culture in effect became irreversible.
A number of social scientists have argued that the appearance of complex societies at the beginning of the Holocene heralded human resilience to external forcing (such as climate change), buffered by an assured food supply, especially during times of stable climate. However, the results of a large number of studies point to the fact that the development of complex societies increased their vulnerability to environmental drivers, especially at times when climatic conditions were more variable. In other words, early complex societies were more severely impacted than simpler societies by a confluence of environmental modifications and climate changes. Climate has been implicated as the ultimate cause of the collapse of prehistoric societies in many parts of the world, with ample evidence for a variety of proximate causes, many of which are environmental.

Environmental Impacts of Agriculture in Ancient Times

Until the dawn of the Industrial Revolution, agriculture was one of the most destructive environmental pursuits invented by human beings and the one that was perpetrated for the longest time. The earliest forms of agriculture left a strong imprint on the land in many regions. The fauna and flora and physical environments were radically altered. The clearing and burning of brush (swiddening) led to air pollution. The consequences of tillage, fallowing, terracing, irrigation, and drainage included soil erosion, changes in surface and groundwater quality, aggradation of valleys, and the creation of deltaic deposits that could affect the fish habitat and seafood resources of local populations. Archeological records in many parts of the world are replete with examples of once-thriving farming regions that were reduced to desolation by interrelated environmental degradation and conflict.

Nowhere is the agriculture–environment conflict and discordance more apparent than in the Mediterranean region, which has borne the brunt of human activity more intensively and for a longer period than any other place on earth. The exploitation of this fragile environment for millennia by generations of farmers, forest cutters and burners, grazers, and irrigators has left scars of environmental abuse in the hills and valleys of Turkey, Iraq, Iran, Israel, Lebanon, Greece, Cyprus, Crete, Sicily, Tunisia, and the southeastern parts of Spain. The ancients in these countries practiced rainfed farming and animal husbandry for centuries without effective or consistent soil conservation and erosion control. Inevitably, the land was stripped of its natural vegetative cover, and the original layers of fertile soil were washed down the valleys and transported toward the sea. Successive empires (the Phoenicians, Greeks, Carthaginians, and Romans) were therefore compelled to venture further and further away from their own countries in pursuit of new productive land. Aerial and satellite photographs of southern Mesopotamia, especially the Iraqi part, still display wide stretches of barren, salt-encrusted terrain, which long ago were fabled fruitful fields and orchards. It has been claimed in a number of publications that the end of some ancient empires in the region came when they had become so dependent on distant and climate-destabilized sources of food supply that they could not maintain central control or ward off growing competition from other land-hungry nations. A classic example is the extreme wet and warm summers that occurred during periods of Roman and medieval prosperity; increased climate variability from approximately AD 250 to 600 coincided with the demise of the western Roman Empire and the turmoil of the Migration Period.

The impact of deforestation for agriculture and fuel wood on human health and welfare in ancient times was a global phenomenon. An eyewitness, the great inheritor of the Confucian tradition in China, Mencius, who lived in the early 3rd century BC, wrote that in his time:

The Bull Mountain was once covered with lively trees. But it is near the capital of a great State. People came with their axes and choppers; they cut the woods down, and the mountain has lost its beauty. Yet even so, the day air and the night air came to it, rain and dew moistened it till here and there fresh sprouts began to grow. But soon cattle and sheep came along and browsed on them, and in the end the mountain became gaunt and bare, as it is now. And seeing it thus gaunt and bare, people imagine that it was woodless from the start.

Another famous historical example of the cultural impacts of deforestation associated with ancient agriculture is the desertification of Easter Island. After it was settled around 1500 BP, the slashing and burning of forest by the expanding population, for farmland and for obtaining logs to transport large stone statues to the coast, depleted the soils of nutrients and led to internecine warfare among the starving and decimated population of the island. Another famous example was in the Southwestern United States, where deforestation might have contributed to the dispersal and "disappearance" of the Anasazi Indians.

The Sumerian culture is often used as an example of how unsustainable agricultural practices could lead to the demise of ancient civilizations. The Sumerians arose in the area known as the "Fertile Crescent," sandwiched between the Tigris and Euphrates rivers, around the 6th millennium BC. One of the important pieces of information from that time is the Epic of Gilgamesh, an ancient poem about the mythological hero-king Gilgamesh and his search for immortality, which mentions a number of environmental problems. The Sumerians developed a highly effective way of irrigating the farms with water in ditches fed by the rivers. This was so successful that, for the first time in history, there was a food surplus to support the burgeoning urban population. The irrigation practices maximized agricultural production and at the same time made the system more vulnerable. The loss of water from poorly drained irrigated soils under the hot sun of the region led to groundwater rise, salinization, and waterlogging, which subsequently resulted in the soil initially losing its high productivity and eventually becoming completely unproductive. Sumerian texts noted that "the soil surface became white," the result of salinization. High population growth, environmental stress, land degradation, and desertification created the conditions for conflicts, which fed on political, tribal, or ethnic differences. The vulnerability to environmental shocks and the susceptibility of the region to drought, overpopulation, and famine brought about cataclysmic events that contributed to failed states and civilizations in this region. More importantly, the vulnerability to environmental shocks and susceptibility to drought fomented large numbers of environmental refugees, the forgotten members of the ancient population.

The history of agriculture in ancient times was not always a linear trajectory toward degradation, as some have claimed, but a complex alternation of change with intervals of stability, interrupted by episodes of ecological decline and recovery. Climatic inputs and prejudicial land use were interwoven, and although different histories of soil erosion and stream destabilization have been well documented in many regions, the situation was not universal. Unlike the environmental abuse in Sumer, there were some societies that were able to develop sustainable soil and water management that enabled them to thrive in the long run. Examples include the civilization of Egypt, which depended on the natural flood cycle of the Nile, the terrace-building farmers of eastern Asia and the Near East, as well as the wetlands-based societies of Meso-America and South America. Remarkably, successful wetland management systems developed in ancient times have survived in some parts of China and other areas of southeast Asia.

Environmentally Attributable Diseases Associated With Agriculture in the Early Holocene

The transition to sedentism with increased aggregation of people provided conditions that promoted the spread and maintenance of infectious and parasitic diseases and an increase in pathogen load in humans. More crowded living conditions facilitated greater physical contact between members of a settlement, and permanent occupation in all likelihood led to decreased sanitation and hygiene. The livestock revolution brought an unprecedented number of animals in proximity to human living areas. The co-sharing of living space with domesticated animals created a cluster of zoonotic disease vectors. Of the 1415 species of infectious organisms known to be pathogenic to humans, over 60% are zoonotic, or transmissible from animals to humans, and the animal–human crossover occurred in ancient times for most of the diseases. Many parasites associated with domesticated goats, sheep, cattle, pigs, and fowl would have infected the early farmers. Environmental disturbances during the clearing and cultivating of land increased human contact with arthropod vectors that prefer human habitats and that carry yellow fever, filariasis, and other diseases. Slash-and-burn agriculture exposed populations to mosquito-borne and other pathogens, whereas cultivation exposed workers to insect bites and diseases such as scrub typhus. Irrigation agriculture increased contact with nonvector parasites. The milk, hair, and skin of domesticates, as well as animal dust, transmitted anthrax, Q fever, brucellosis, and tuberculosis, whereas peridomestic animals such as rodents and sparrows, which were drawn to human habitats, were also sources of zoonotic diseases. Wastes from settled communities released biological contamination into rivers, which could infect human hosts; examples might have included hookworm, diarrhea, and Escherichia coli.

The domestication of plants and animals ushered in new ways of living and new foods, which had a profound influence on human health and welfare. The shift from foraging to farming resulted in the consumption of less meat and a less varied diet and reduced access to key micronutrients such as iron and zinc. The transition would have resulted in the consumption of less seafood in coastal communities as well. Micronutrient deficiencies increased the susceptibility to infectious diseases, whereas food storage increased the potential for food poisoning. The weight of available scientific evidence tends to suggest that the quality of life generally decreased with the introduction of agriculture in the early Holocene. From a detailed review of the literature on the burdens of infectious diseases, Clark Spencer Larsen has concluded that the more densely settled agricultural societies were more prone to infection than the nomadic groups of Neolithic times. He noted that sedentism could not fully explain the general pattern of very high infection rates observed in some prehistoric agricultural societies and emphasized the importance of synergy between infection and other stressors such as poor nutrition, warfare, anemia, or social disruption in determining the remarkably poor health in some late prehistoric agricultural groups.

One of the most prevalent diseases identified from studies of archeological remains is dental caries (tooth decay). Studies in many parts of the world point to a consistent trend toward an increase in caries of the teeth with the transition to maize agriculture. Maize contains high levels of sugar (sucrose), which is very readily metabolized by bacteria in the mouth and hence is highly cariogenic. Besides this increase in dietary sugar content, a decline in wear on the teeth has been noted, which allows food to accumulate between the cusps of the molars and premolars. One area with no discernible increase in caries incidence was southeast Asia, where rice (a noncariogenic crop) was the primary cultivar. In Britain also, caries did not increase until the Roman period, when imports of exotic sugary foodstuffs led to an increased frequency of caries. A high frequency of porotic hyperostosis (increased porosity of the flat bones of the skull) has been reported in agrarian communities in many parts of the ancient world and has been related to dietary iron deficiency and intestinal infections of various kinds. One study that examined health at the agricultural transition compared the skeletons of hunter-gatherer (50 BC–AD 200) and agricultural (AD 1050–1250) groups in West Central Illinois by analyzing the cross-sectional shape of the femur and humerus. The observation that females had stronger bones in the agricultural group was attributed to the fact that they were instrumental in the growing and processing of cereal crops. Other skeletal indicators of impaired quality of health following the agricultural transition included reduced growth rates, as determined by the lengths of children's bones relative to age, and reduced adult height at a number of places. Although significant declines in adult heights have been reported in a number of regions during the early Holocene, no change or an increase in stature has been reported in other settings, which presumably reflected the degree of nutritional insufficiency in each region.
Comparisons of the prevalence of osteoarthritis in Holocene populations show that, in general, foragers had more osteoarthritis than farmers, suggesting that there was some decline during the transition in the workload and other activities that result in articular degeneration. Reported reductions in the size and robustness of the human skeleton in the Holocene, compared to Pleistocene populations and over the course of the Holocene, are consistent with the decline in osteoarthritis. A dramatic reduction in the size of the face and jaws appears to be a characteristic feature wherever humans have made the transition from foraging to farming. The voices of the ancient people, heard through the remnants of their bones, speak to the severe adverse impact of the agricultural transition on their health.

The bioarchaeological study by Charlotte Roberts included the effects of the agrarian and rural environment on the occurrence of maxillary sinusitis, one of the most common upper respiratory tract infections associated with exposure to air pollution. This study found the average prevalence rate for rural agricultural sites to be 45%, which was higher than that for hunter-gatherers. The frequencies for females were also found to be higher than those for males. The increased burden of sinusitis following the transition to agriculture likely reflected poorer air quality in the indoor environment (crowded, smoky living structures with no exits for smoke) and the outdoor environment (increased exposure to dust during slash-and-burn clearing and the tilling of soils, and to pollen). The high burden of respiratory diseases in ancient agrarian societies was a global problem that has not been considered in previous reviews dealing with the health of ancient populations (see The Origins of Human Disease by Thomas McKeown, for instance).

Palaeoparasitology has provided interesting insights into the links between agriculture and the emergence and presence of infections and disease conditions among populations in ancient times. During the Neolithic period in Europe (4000–2800 BC), the intestinal parasites that have been identified were beef or pork tapeworm (Taenia asiatica, Taenia saginata, or Taenia solium), bile duct fluke (Opisthorchis sp.), Capillaria sp., Entamoeba histolytica dysentery, Fasciola genus liver fluke (Fasciola hepatica), fish tapeworm (Diphyllobothrium sp.), giant kidney worm (Dioctophymidae), hookworm (Ancylostomids), lancet liver fluke (Dicrocoelium sp.), roundworm, and whipworm. The group consisted of a mixture of zoonotic parasites from eating raw or undercooked wild animals (beef/pork tapeworm, bile duct fluke, capillariosis, fish tapeworm, giant kidney worm, lancet liver fluke) and parasites transmitted by fecal contamination in villages (Entamoeba dysentery, hookworm, roundworm, whipworm). By the Bronze and Iron Ages in Europe (2800–100 BC), the common parasites were E. histolytica, roundworm, whipworm, lancet liver fluke, and a few instances of fish tapeworm. Apparently, the transition from the Neolithic to the Bronze and Iron Ages in Europe brought about a shift away from zoonotic parasites contracted by eating raw or undercooked animals to parasites transmitted by poor personal hygiene (dysentery, roundworm, whipworm). Whether the decrease in the reported number of species is due to increased reliance on domesticated animals for food or to better cooking of wild animals is not yet clear. By the time of the Roman Empire, the most widespread species were roundworm and whipworm.
The Romans introduced a culture of personal cleanliness, with regular washing in heated public baths and hygienic latrines across the empire, which could have led to a decline in the prevalence of fecal-oral parasites in some parts of Europe.


Ancient intestinal parasites found in the Middle East before Roman times include Fasciola genus liver fluke, fish tapeworm, lancet liver fluke, pinworm, roundworm, schistosomiasis (Schistosoma haematobium and Schistosoma mansoni), Taenia genus tapeworm, threadworm (Strongyloides stercoralis), and whipworm. Examination of mummies in Egypt led to the identification of endoparasites not preserved in the human remains of northern Europe, including species that cause dracunculiasis, filariasis, leishmaniasis, malaria, toxoplasmosis, and trichinosis. Piers Mitchell (of Cambridge University) attributed the greater diversity to the fact that farming, herding, and the formation of early towns and city-states began in this region well before they did in Europe. The major civilizations that had developed in Mesopotamia and Egypt were already experiencing the effects of higher population density, the challenges of sanitation, and long-distance trade, which brought an increased risk of the spread of infectious diseases.

Seven species of intestinal parasites have been identified in the remains of ancient Chinese populations dating back to 2300–2100 BP, namely, roundworm (Ascaris lumbricoides), whipworm (Trichuris trichiura), tapeworm (Taenia sp.), oriental schistosomiasis (Schistosoma japonicum), pinworm (Enterobius vermicularis), the intestinal fluke (Fasciolopsis buski), and the Chinese liver fluke (Clonorchis sinensis). The most common among these parasites were the whipworm, roundworm, and Chinese liver fluke, which were seen in 50%–75% of the Chinese mummies so far studied. Since whipworm and roundworm are spread by fecal contamination of food, their widespread occurrence in ancient Chinese populations suggests that personal hygiene was poor (e.g., hands not washed after using the toilet), that drinking water was regularly contaminated by human feces, or that human feces were used as crop fertilizer and vegetables were not properly washed or adequately cooked. The prevalence of the Chinese liver fluke can be attributed to the consumption of raw freshwater fish. Findings in Japan are related mainly to parasites acquired through the consumption of raw fish, a cultural tradition that dates to prehistoric times and was popularized with the emergence of sushi in the 4th century AD. Clonorchis sinensis, Paragonimus sp., Metagonimus yokogawai, and Diphyllobothrium sp. eggs have been found in fecal material from Japanese archeological sites dated from 2300 BP to the 12th century AD.

Mid-Holocene to Classical Times: Transition to Urbanization and Manufactories

The increasing size of fields needed to feed the growing population in settled communities could not be managed by manual labor alone and consequently led to the harnessing of the first important extrasomatic source of mechanical energy, namely, the domestication of draft animals throughout the Old World (only pack animals were domesticated in pre-Columbian America). Draft animals sped up plowing, transportation, and crop-processing tasks and boosted agricultural productivity, which encouraged further communal growth. In agroecosystems where grazing land was limited (such as the rice regions of Asia) and only a small number of draft animals could be supported, the limited unit power of muscles was overcome by deploying massed labor. Massing people and draft animals then made it possible to transport and erect (with the help of simple devices) megaliths and to build impressive stone structures on all continents (with the exception of Australia) solely with human labor. The occupational hazards can only be imagined. The growth of agrarian villages stimulated important new utilitarian industries: the making of pottery and the mining and smelting of metals, which began in the Near East around 8000 BC. Clay was shaped and baked to form hardened vessels used for storing grain, for liquid storage and conveyance, and for cooking. Early settled societies also developed the art of firing mud to produce bricks and of fashioning bronze into artifacts around 8000 BP. The first complex, highly organized, state-level societies emerged in the Afro-Asiatic monsoon belt and northern South America during the 6th and early 5th millennia BP. This was a period of profound climatic and environmental changes in these regions and globally, characterized by a weakening of the global monsoon system and widespread aridification in regions that today contain the bulk of the world's warm deserts. The six key regions in which complex societies emerged during the Middle Holocene were the central Sahara, Egypt, Mesopotamia, South Asia (the Indus–Sarasvati region), northern China, and coastal Peru. The emergence of the earliest civilizations in the Middle Holocene thus coincided with periods of large climatic and environmental changes, in particular with increased aridity. Nick Brooks has argued that global climate change in the Middle Holocene was the key driver for societies to become more complex and organized as they responded to its local and regional manifestations. The rapid increase in complexity of agglomerated communities in the late 6th and early 5th millennia BP has been interpreted as precipitated by hardship rather than by abundance. Irrespective of how they originated, the first historical type of city, which evolved in Mesopotamia, Egypt, ancient India, and China, consisted of walled towns with surrounding territories. Officials of the ruler, the clergy, and wealthy merchants lived in those cities, usually located near palaces or temples and surrounded by the quarters of subordinate poor tribes. These cities flourished on long-distance trade and had no citizens as such. The second type of city appeared in the ancient Greek and Roman world and consisted of city-states that were typically formed by landowning warriors settling in militarily defensible places, or acropolises. Rome, for instance, was founded on seven hills and, through conquering the neighboring communities, morphed into the Roman Empire that ruled the Mediterranean world.
Colonial Roman cities were likewise founded at strategic points in various provinces using a similar city plan.

Sanitation in Ancient Cities

Compared to present-day megalopolises, cities in ancient times were small in terms of population, but their inhabitants were densely packed in closely spaced tenements and living quarters. There were no land-use demarcations, and small-scale factories and cottage industries were juxtaposed with residential buildings. As the cities became more populated, the disposal of human wastes became quite difficult.


Ancient cities evolved various ways of dealing with human waste and were no doubt aware that improper treatment of human waste could lead to foul odors and a generally unpleasant environment. Hardy-Smith and Edwards have studied the Natufian stone dwellings in the Jordan Valley, settled from around 13,000 BP. The Natufian culture showed progressive separation of garbage from the living and cooking space into designated middens. By 8000 BP, the interior living spaces were found to be very clean, evidently the result of sweeping and plastering. Other important findings on human wastes in ancient cities came from excavations at Çatalhöyük, a major town on the Anatolian Plateau inhabited from 9500 to 8000 BP, where hundreds of contiguous apartments with clearly defined clean and dirty areas were discovered. The practice of separating garbage and feces from the living spaces implies some understanding that diseases were linked to hygiene. Despite the separation of clean and dirty spaces in the early Levantine towns, the diffusion of hygienic concepts was slow and irregular, and many cities of the ancient world were bedeviled by foul odors and bad sewers. Ancient Mesopotamian housewives prided themselves on keeping their houses swept clean. That this practice was understood to serve a public health purpose can be adduced from the use of sweeping in apotropaic rites. The house refuse was dumped on the street, however. Feral dogs, pigs, goats, and other scavengers served as impromptu garbage disposal. Bathrooms were distinctive and readily recognizable features of ancient Akkadian and Babylonian households. In addition to basins for washing (hence they were called lavatories), the housing units contained emplacements for urination and defecation. Bathrooms were connected by means of an elaborate drainage system to a central sewer, which also carried off excess rainwater. Archeological excavations have found sumps, which together with the drains and sewer suggest an advanced level of sanitary practices. Most of the wastes, however, were discharged into the river, which provided drinking water to the inhabitants. By 4500 BP, the Egyptians were solving the waste disposal dilemma by constructing bathrooms with latrines that were flushed by hand with buckets of water. The latrines emptied into earthenware pipes, many of which are still functional today. Rome also had a public sewage system, the Cloaca Maxima. It was constructed to prevent the streets from filling up with rainwater and human waste. Public latrines were erected over channels of water. The latrines had stone seats with a hole in the center, much like the modern toilet seat in use today. Much of this forward-thinking technology does not appear to have diffused to Europe until medieval times. Sanitation in ancient Rome has been investigated by historians and archeologists for centuries. Rome had a complex sanitation system, much like those in modern societies, but the system itself and knowledge about it were largely lost during the Dark Ages. (Dark Age or Dark Ages is a term in historiography referring to a period of cultural decline or societal collapse that took place in Western Europe between the decline of the Roman Empire and the eventual recovery of learning.) A system of 11 aqueducts provided the citizens of Rome with water of varying quality, the best being reserved for potable supplies. Lower-quality water was used by everyone in the public baths and latrines.
A latrine is a structure for defecation and urination. Latrines allow for safer and more hygienic disposal of human waste than open defecation. Craig Taylor has made a comprehensive assessment of human waste disposal practices in ancient Rome and reached the following conclusions: “The city provided facilities to dispose of human waste as well as services to remove this waste with as little inconvenience as possible. This was accomplished with the many [public] latrines, as well as the large [major open sewer] Cloaca Maxima, used to carry human waste into the Tiber and away from the city. For those who had private latrines connected to city sewers, their human waste was also straight away transported to the Tiber. Those who did not use latrines could give their waste to the stercorarii who would then transfer it outside of the city to farmers. Building owners were perhaps responsible for [removing human waste from] the front of their shops, while the aediles were responsible for cleaning the rest of the city streets. Underground sewers were cleaned and maintained so that waste would not build up on the sides and no obnoxious odor would arise. Laws were also passed to curb the practice of citizens dumping their human waste onto the streets.”

The picture painted by Craig Taylor is overly optimistic; the Romans were clearly not as sanitary as the system he described might lead casual observers to believe. Roman rubbish was often left to rot in alleys between buildings in the poorer districts of the city. It sometimes became so thick that stepping-stones were needed. As a consequence, the street level in the city rose as new buildings were constructed on top of rubble and rubbish. To avoid going out at night, tenants sometimes threw the contents of their chamber pots out of the window onto the street, and there are numerous literary accounts of men relieving themselves in alleys, behind statues and bushes, and in public fountains. Human waste generated from these sources could not have been easily cleaned: the liquids would seep into the soil, whereas the residual solids were left to decompose or else were eaten by dogs, cats, insects, and other animals. Cesspits were generally situated next to the kitchen in private homes and would also have represented a hazard to public health, since they would have attracted insects and other creatures that could come into contact with food or drinking water. Human waste that was removed and subsequently used as fertilizer was a potential source of harmful pathogens in crops unless properly processed. Raw sewage from Rome emptied into the River Tiber, where the city’s inhabitants habitually bathed, swam, and fished. Even though the city of Rome was able to remove a large portion of the huge quantity of human waste produced daily, the hazard did not exactly go away. The problem was so serious that practical regulations for food safety, water supply, and dead bodies were enforced, and before that, Plato (427–347 BC) had suggested that civil health inspectors should be appointed for streets and roads, dwellings, and the water supply. Urine was collected in barrels in the street for the so-called “fullones,” the owners of the laundry businesses in ancient Rome. The inhabitants disliked the odorous practice, and the “fullones” were forced to ply their trade at remote places outside the city. The “fullones” made so much profit that the Roman Emperor Vespasianus (AD 69–79) started to levy taxes on urine, and when his son Titus cried shame on this, he showed him a golden coin and asked him if it was


smelly. Titus had to admit that it did not smell, and Vespasianus was reputed to have exclaimed “atqui e lotio est” (and yet it is of urine). Later authors changed the emperor’s sarcastic remark to “pecunia non olet” (money does not stink). Vespasianus was later immortalized by the French when they named their public conveniences vespasiennes. Paleo-archeological research presents a contrarian view of sanitation in ancient Rome. So far, 12 genera of endoparasites and 5 species of ectoparasites have been identified in archeological sites of the Roman period. The most common intestinal parasites were human whipworm (Trichuris trichiura) and roundworm (Ascaris lumbricoides), which are spread by fecal contamination of food through poor personal hygiene and the use of human feces as crop manure. Other common parasites of the Roman world were fish tapeworms (from eating raw, smoked or pickled fish); human and dog tapeworms (from eating raw or undercooked beef, pork or similar large farm animals); lancet liver flukes (Dicrocoelium genus) from undercooked cow liver; Entamoeba histolytica, spread by the drinking of water contaminated by human feces and capable of causing outbreaks of dysentery, with bloody diarrhea, abdominal pain and fevers; hydatid disease (Echinococcus granulosus) from inadvertent exposure to infected dogs; pinworm, spread by fecal contamination of hands or by the airborne route; Fasciola genus liver fluke, contracted by eating unwashed plants grown in locations where the droppings of farm animals such as sheep and cattle fester; and capillariosis (Capillaria hepatica) from uncooked animal organs. Other pathogens identified from preserved DNA in an Egyptian mummy of the Roman period included malaria (Plasmodium falciparum) from mosquito bites and toxoplasmosis (Toxoplasma gondii) from infected cats. These data from paleo-archeological studies point to a high prevalence of parasites that caused dysentery in the Roman population in spite of their piped drinking water from aqueducts, extensive public latrines with washing facilities, sewer systems, sanitation legislation, and drinking fountains. Fish tapeworm was unusually widespread, and the Roman fondness for garum (an uncooked, fermented fish sauce) has been suggested as a pathway for the spread of this helminth. In addition to the endoparasites, a variety of ectoparasites preserved in combs, textiles, tombs and waterlogged soils have been recovered at many settlements of the Roman period. These include head lice (Pediculus humanus capitis), body lice (Pediculus humanus corporis), pubic lice (Phthirus pubis), fleas (Pulex irritans) and bed bugs (Cimex lectularius). Such ectoparasites can also spread bacteria-borne diseases among humans. For instance, the body louse is known to transmit louse-borne relapsing fever (Borrelia recurrentis), epidemic typhus (Rickettsia prowazekii) and trench fever (Bartonella quintana). It has been suggested that some epidemics of the Roman period could have been the result of bacterial diseases spread by ectoparasites. The common human parasites and/or their associated diseases were known and are mentioned by many ancient Greek and Roman authors. Some authors, like Celsus and Pliny, referred to only two kinds of intestinal worms of man, namely, round worms and flat worms. The majority of the ancient writers, however, described three types of worms.
Aristotle states, for instance, that “there are three kinds of helminths: those which one calls large and flat (tapeworms), those which are cylindrical (Ascaris lumbricoides) and thin ones, the ascarides (Enterobius vermicularis).” Galen recognized the three kinds of human helminths, which he labeled lumbrici lati, teretes and ascarides, and knew in which parts of the intestine they occur. Theophrastus listed populations with frequent taenia infection. According to Plutarch (ca. AD 46–120), the guinea-worm, Dracunculus medinensis, was mentioned by the geographer Agatharchides of Cnidus (2nd century BC) as being common among the population on the shores of the Red Sea. Many Greek and Roman writers, including Pliny, Galen, Soranus of Ephesus, Paulus Aegineta, Aetius of Amida and Joannes Actuarius, knew of Dracunculus medinensis in their times. The paleo-archeological records and the writings of ancient authors give a good impression that the public sanitation measures of ancient Rome were either not being followed or were inadequate to protect the population from parasites spread by fecal contamination of their food and water. The possibility that the warm communal waters of the bathhouses, which were sometimes changed infrequently, helped to spread the parasites has also been mentioned.

Water Pollution in Ancient Cities

Most ancient civilizations grew along the banks of rivers, which provided water for irrigation and also served as primary receptacles for human wastes. Bathing and washing further contributed to pollution, as did animal feces, storm water, and any industrial operations. The general belief then was that the solution to pollution was dilution. In most cases, the wastes discharged to nearby rivers were carried to downstream communities, and for small communities the natural self-purification processes were able to cleanse the organic pollutants in a reasonable amount of time. With increasing population and population density, the carrying capacities of many urban rivers were exceeded, and waste disposal became a major nuisance and a public health problem. Where the rivers were no longer able to effectively dilute or biodegrade an ever-increasing waste load, the surrounding communities were likely to be enveloped in an atmosphere of stench and toxic gases. The accompanying depletion of oxygen in the rivers would have reduced the local supply of fish protein. Although the problems of water pollution were well known, historical records are awfully silent with regard to the creation of technologies and methodologies designed to treat liquid and solid wastes in ancient cities before they were discharged to the environment. Human impacts on tropical Asian rivers were all-pervasive and reflected the development of ancient civilizations around great rivers, such as the Harappa and Mohenjo Daro cultures along the banks of the Indus River. Because of the considerable year-to-year variation in the duration and intensity of the monsoon, the region experienced frequent floods and droughts, which caused pronounced temporal variations in water supply and necessitated large hydrological schemes, including works of irrigation and water conservation, river control, drainage, and inland navigation on a scale unprecedented in the West. Examples of ancient diversion and containment of rivers in Asia include the 4000-year-old Ifugao rice terraces of the Philippines, the Cauvery Delta canals in India constructed during the 2nd century AD, the Barrai irrigation dam in Burma (now Myanmar) built in the 10th


century AD, and early attempts to dam the Mahaweli River in Sri Lanka (3rd century AD). Nowhere was the importance of river control and irrigation in agricultural production, settlement patterns, and the social economy greater than in China. The legendary Chinese hero-emperor Yu the Great (2205–2198 BC) was reputed to have cut canals through the hills to furnish outlets for the floods and to trace each river to its source and back again to its mouth, to clear its spring, regulate its course, deepen its bed, raise embankments, and change its direction. Other irrigation and major hydraulic works of ancient China include the Dujiangyan irrigation system (256 BC); the Zheng Guo (246 BC), Lingqu (214 BC), and Longshou (128 BC) canals; and the Grand Canal (6th century AD). The long history of water regulation must have had severe environmental and ecological consequences. Among these, river pollution would have been the most pervasive and conspicuous. Untreated sewage, manufactory effluents, and mine wastes were increasingly dumped into the rivers as the region became more and more densely populated. These polluted rivers represented a serious hazard to the population in terms of waterborne pathogens. Other effects would include the degradation of drainage basins through deforestation and overgrazing, which could have led to increased suspended sediment loads and extensive flooding, and might have affected land–water interactions and exchanges of material. Flow regulation would have reduced the peaks in seasonal flood discharge, thereby changing the magnitude and extent of floodplain inundation and land–water interactions. Fish breeding migrations might have been disrupted because dams block migration routes or change flow regimes and wipe out spawning grounds. Excessive floodplain siltation would have altered habitats, causing species decline or disappearance and reducing the availability of essential fish protein to the local population. The degradation of water resources no doubt had dramatic impacts on the quality and quantity of food available to sustain the large human populations congregated in the ancient communities that had sprung up in the river basins of Asia. It is interesting to note that Chinese emperors were classified as “good dynasty” or “bad dynasty” depending on whether waterworks were maintained carefully or allowed to fall into disrepair.

Air Pollution in Ancient Cities

The main sources of air pollution were odors from the open sewers; decomposing rubbish, dead animals, and excrement; wood smoke; metal smelting and fabrication; and the production of various goods. Available information from various sources provides indirect but positive indications that the levels of air pollution in ancient cities were significant. One of the first recorded episodes of air pollution occurred in the village of Hit, west of ancient Babylon, where the Egyptian King Tukulti described an offensive odor emanating from a bitumen mining operation that released high concentrations of pungent gases in 2900 BP. Air pollution in Rome, especially during the Classical Age, resulted from the large amounts of biomass fuel (including wood, other vegetal materials, and animal dung) burned domestically for heating and cooking by the huge population (estimated to be over 1 million, and as many as 2.5 million under Constantine’s reign) and to heat the public baths. Huge quantities of biomass fuels were also required by some industries in Rome, for making the bricks and roofing tiles used to build Rome and rebuild it after Nero’s famous fire, as well as to construct the enormous and magnificent edifices for which the city was famous (such as the Baths of Caracalla). Other sources of air pollution included the burning of animal and vegetable oils in terracotta lamps, the burning of candles and incense, and cottage industries. Lucius Annaeus Seneca, the Younger, marveled in AD 61: “As soon as I had gotten out of the heavy air of Rome, from the stink of the chimneys and the pestilence, vapors and soot of the air, I felt an alteration to my disposition.” Although some people might have suffered from nuisance exposure to volatile organic compounds from decaying rubbish, polluted indoor air was responsible for a significant fraction of the sickness and disease attributed to environmental causes in ancient times. The Chinese kang is an ancient technology developed approximately 2500 years ago during the Zhou Dynasty (722–481 BC). It is an integrated home system for cooking, sleeping, and domestic heating that is still widely used today. A recent large-scale study found kangs in nearly 85% of rural homes in northern China, where it is mostly cold and dry and the heating season is long, but the use of this device is spread across the country. Similar heating systems were also developed and adopted by other cultures, such as the ondol (heated floor) in Korea and the hypocaust in ancient Rome. Although the design varies slightly from region to region, the so-called grounded kang was the most widely used in China. There are two main drawbacks with grounded kangs: the burning efficiency (approximately 14%–18% for the body and 8%–10% for the stove) is low, which causes significant fuel waste (as the sketch below illustrates), and high levels of pollutants are released into the indoor air, especially when the stove and the kang body are put together in the same room. Furthermore, grounded kangs with bad ventilation design often result in smoke backflow and flue choking. In rural China, exposure to indoor air pollution from biomass combustion in kangs is associated with a number of morbidities, such as acute respiratory infections, chronic obstructive pulmonary disease, lung cancer, and asthma; the mortality attributed to indoor air pollution has also been estimated to be significant (approximately 425,000 deaths in China in 2000), and this has been happening since ancient times.
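To give a feel for what burning efficiencies of this order mean in fuel terms, the following minimal sketch converts the quoted kang body efficiency into daily fuel consumption for a given useful-heat demand. Only the 14%–18% efficiency range comes from the text; the biomass heating value and the daily heat demand are illustrative assumptions.

```python
# Rough fuel arithmetic for a grounded kang. Only the 14%-18% efficiency range
# comes from the text; the biomass heating value and daily useful-heat demand
# below are assumed, illustrative figures.
LHV_MJ_PER_KG = 15.0        # assumed lower heating value of air-dried biomass

def daily_fuel_kg(useful_heat_mj, efficiency):
    """Biomass burned per day to deliver a given amount of useful heat."""
    return useful_heat_mj / (LHV_MJ_PER_KG * efficiency)

DEMAND_MJ = 60.0            # hypothetical daily useful-heat demand of one household
for eff in (0.14, 0.18):
    burned = daily_fuel_kg(DEMAND_MJ, eff)
    print(f"efficiency {eff:.0%}: ~{burned:.0f} kg of fuel per day, "
          f"of which ~{burned * (1 - eff):.0f} kg-equivalent is lost as waste heat and smoke")
```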
The kang is an advanced technology compared to the simple and extemporaneous stoves and wood-burning devices employed by people in other parts of the world where higher morbidity and mortality rates from indoor air pollution would have been expected. Dario Camuffo has made some effort to estimate the degree of air pollution in Rome from deposits on the Trajan marble column built in AD 105 by the architect Apollodorus of Damascus. The deterioration of the column was typical of other ancient monuments with pitting, dissolution, traces of a so-called scialbatura (a gray-pink surface layer of calcium oxalate that covers the marble), and black crusts. Camuffo assumed that the patinas (especially the scialbatura) were formed by precipitation of calcium oxalates from oxalic acid secreted by microorganisms, such as lichens or blue algae, which colonized the


monuments. He showed that the scialbatura was generally absent when the population of Rome exceeded 150,000–200,000, a threshold he associated with lichen-toxic levels of pollutants in the ancient urban atmosphere. With the fall of Rome and the improvement in air quality, biological activity flourished again, and patina formation continued for the next 13 centuries. The high level of outdoor air pollution adduced by Camuffo has not been supported by the results of other researchers.

Environmentally Related Diseases in Ancient Cities

The nexus of urbanization, sanitation, and infectious diseases is an ancient one, and it still drives public health in many developing countries today. It is anchored in the fact that a critical population size is required to sustain an epidemic and for diseases to become endemic. Before the transition to urbanization, small and isolated cities were generally unable to generate a sufficiently large and constant flow of immunologically susceptible people to fuel epidemics. Epidemics that did arise as a result of an imported pathogen probably burned themselves out quickly (the sketch below works through the simple probability argument). The emerging large and overcrowded ancient urban centers provided the conditions for disease endemicity, whereas the flow of rural migrants provided a constant crop of susceptible individuals that served to fuel the epidemic cycles. Furthermore, as the connectivity of cultures across the ancient world grew (in the form of networks of cities linked by land and sea through the movement of people), the barriers to pathogen dispersal were lowered and the population size thresholds that once limited continual transmission were reduced significantly. The net result was the creation of disease pools in which pathogens were shared over large areas. The rapid urbanization of human populations and their expansion into new ecological zones represent some of the most important forces in the evolution of environmentally attributable disease. A paleopathological study of 1620 human skeletons by Luigi Capasso and colleagues has provided an interesting insight into the disease burdens of ancient Adriatic populations before and after the Roman conquest. Before the Roman conquest, the local populations had a life expectancy of 28–42 years; infant mortality was 5%–15% during the first decade of life; osseous evidence of chronic inflammatory diseases (which are linked to personal and environmental hygiene) was rare, limited to 2%–5% of the skeletal remains examined; and the large variation of 3.7%–18% in the number of traumatic lesions reflected the varying environmental and sociocultural contexts in which people lived. The most isolated groups, such as the communities of Val Fondillo (who lived from the 6th to the 4th century BC in a particularly inaccessible area), were characterized by low infant mortality (from 7% to 12%) and a very low frequency of environmentally related diseases. The pattern of disease in these preconquest populations was probably typical of many agrarian communities in the Mediterranean region in ancient times. After the Roman conquest, the populations had a shorter life span, with a reduction in life expectancy to approximately 27 years; infant mortality was greatly increased, reaching 25% in the Sulmona population (approximately fourth to second centuries BC); and inflammatory diseases became very common. These changes in morbidity were in all likelihood mediated by environmental risk factors. The reduced life expectancy and increased infant mortality could be the result of increased exposure to environmentally vectored pathogens, and the high prevalence of inflammatory diseases could have been associated with increased exposure to air pollution. Capasso and colleagues concluded that “as the Roman empire spread from Italy across Europe, so did these [environmentally attributable] adverse health effects.” From a public health perspective, the Roman culture, noted for its environmental brutality, can be said to have been mightier than the Roman sword.
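The burn-out of imported infections can be made concrete with a standard branching-process result: in the linear birth-death approximation of the early phase of an outbreak, a single imported case fails to spark a major epidemic with probability 1/R0 (for R0 greater than 1), so the chance that every one of k independent importations fizzles is (1/R0)^k. The following minimal sketch, with purely illustrative parameter values, shows how much more rarely a well-connected city (many importations per year) escapes a major outbreak than a poorly connected town.

```python
# Branching-process arithmetic behind the fade-out argument: in the linear
# birth-death approximation of an emerging outbreak, one imported case gives
# rise to a major epidemic with probability 1 - 1/R0, so all of k independent
# importations fizzle with probability (1/R0)**k. R0 and k are illustrative.
def p_all_imports_fizzle(r0: float, imports: int) -> float:
    return (1.0 / r0) ** imports

for r0 in (1.5, 4.0):
    for k in (1, 5, 20):    # importations per year: isolated town vs trade hub
        p = p_all_imports_fizzle(r0, k)
        print(f"R0={r0:>3}, imports={k:>2}: P(no major outbreak) = {p:.3f}")
```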
In addition to poor sanitation, famine and malnutrition were rampant, which increased susceptibility to parasites. Contamination of water supplies with human wastes led to outbreaks of cholera, a waterborne disease, whereas fleas infected with the plague bacillus and lice that carried typhus spread disease from person to person. High population densities enhanced the transmission of viral diseases such as measles, mumps, chicken pox, and smallpox, and respiratory transmission resulted periodically in cataclysmic plague epidemics, which killed thousands or millions of people and radically altered societies. The physicians of Assyria and Babylon left behind a body of knowledge on diseases that appears to contain relatively accurate descriptions of many types of infections from environmental exposures. There were frequent throat and upper airway infections that could have had both viral and bacterial causes. Pneumonia was said to be common in the winter months. As is to be expected, the risk for infectious disease outbreaks in urban areas is greatest where there is little or no sanitation, the population density is high, safe and adequate housing is lacking, and medical services are poor. These criteria would fit most ancient cities, where neighborhoods often lacked safe and adequate housing, as well as reliable clean water, sewage disposal, and waste management. Accounts of plague and pestilence of environmental origin are well documented in ancient literary records. The Akkadian texts alluded to outbreaks of diseases that resembled the plague in terms of symptoms but whose identity cannot be proved conclusively. Plagues were mentioned in the Papyrus Ebers around 3500 BP. According to the Old Testament, God punished the Egyptians with plague and pestilence (through some type of environmental modification) to force them to liberate the Jews from bondage. A number of environmentally mediated epidemics were reported in Greco-Roman times, including the Plague of Athens described by Thucydides (around 2430 BP) and the “Antonine Plague” (around AD 165) at the time of the Roman Emperor Marcus Aurelius. Poor living conditions and environmental filth were the main risk factors for diarrheas, and there is evidence that diarrheal diseases were of endemic and epidemic proportions in ancient times. The considerable detail in which the symptoms of and treatments for diarrheal diseases were described in Akkadian texts suggests that these diseases were extremely common and persistent in ancient Mesopotamia, reflecting the poor environmental conditions of that time. The writings of Hippocrates (460–377 BC) described diarrheal diseases as “abundant liquid stool at short intervals.” Celsus provided more clinical elements of the disease, noting that it was “an illness in which intestines are not able to retain anything, and in which almost everything in the alimentary


canal is lost as soon as it is eaten without digestion” and also that “the patient excretes blood which is usually mixed up with liquid excreta and at other times with mucus.” The record of poor sanitation in ancient cities that could be linked to ill-health is quite rich. Plato (427–347 BC) suggested that civil health inspectors should be appointed for streets and roads, dwellings, and water supply. According to the Mishnah Laws in the Talmud (AD 200–500), “Carcasses, cemeteries and tanneries must be removed from the town to a distance of fifty cubits. A tannery must not be established except on the east side of the town.” The Assyrian and Babylonian medical texts have 12 references that identified diseases associated with exposure to river water. Six texts mention cases that were exposed during bathing and other water contact routes and six that were infected through drinking the water. Most of the references described neurological abnormalities including seizure, vertigo, paralysis, twisting, and altered mentation, which pointed to ingestion of high levels of neurotoxins in the water. The following section from Seneca shows a good understanding of the health effects of water pollution: “Why does water taste differently in different places? For four reasons. The first is the type of soil from where it comes. The second depends on the soil, whether it arises from its transformation [they believed that the springs were the result of a direct transformation of the soil into water]. The third from the air which is transformed into water. The fourth is due to the corruption of the water when it is contaminated by pollution agents.”

Literary records of ancient times support the notion that cities were dangerous places to live. One of the fundamental legislative works of Solon in the 6th century BC included the rule that blacksmiths should transfer their activities outside the city of Athens to avoid noise and air pollution. During the siege of the city of Plataea in the Peloponnesian War (around 430 BC), Thucydides reported that the Peloponnesian troops surrounded the defensive walls of the city with wood mixed with asphalt and sulfur and ignited them, thus producing a considerable amount of smoke and SO2, with the aim of “smoking” the residents out of their city. Horace (65 BC–AD 8) wrote that Roman buildings turned dark from smoke, a phenomenon that has been confirmed by recent studies of patina on ancient monuments. Seneca (4 BC–AD 65), the teacher of Emperor Nero (AD 37–68), who was in poor health most of his life, was frequently advised by his physician to leave Rome. In one of his letters to Lucilius in AD 61, he expressed his intention to escape from the gloomy smoke and kitchen odors of Rome to get better. Asthma was probably pandemic in ancient times. An important clinical pattern of this disease is its association with environmental causes and with psychological stresses. Although there has been debate over the degree to which environmental factors can explain the marked rise in asthma incidence during the 20th century, there is little argument about the adverse effects on individual asthmatics of air pollutants such as wood smoke, bioaerosols (animal dander, dust mites, and mold spores), pollen, and volatile organic compounds, or of viral infections. The five key symptoms (cough, wheeze, dyspnea, chest tightness, and increased mucus production) are readily identifiable and were reported often by ancient medical practitioners. Asthma has been known and reported for thousands of years among different ancient cultures. The oldest canon of internal medicine, the “Nei Ching,” reputedly authored by Huang Ti (2698–2598 BC), provided a good description of the pathophysiological features of asthma, including labored and noisy breathing and dyspnea (shortness of breath during walking), often associated with thick phlegm and nasal mucus. Asthma-like symptoms were reported 3500 years ago in the famous Egyptian manuscript called the Ebers Papyrus, which mentioned its seasonal nature, and in ancient medical compendia of Akkadia, Sumeria, and Mesopotamia dating back to 5000 BP. The word “asthma” was also used in Homer’s “Iliad” to describe short-drawn breath. The first mention of asthma in the Greco-Roman era has been dated to the Corpus Hippocraticum of Hippocrates (460–357 BC), which noted the spasmodic nature of asthma and held its onset to be caused by moisture, occupation, and climate. Hippocrates’s descriptions of asthma among tailors, fishermen, and metal workers represent some of the earliest reported cases of occupational asthma. Cornelius Celsus (AD 25–50) modified the Hippocratic concepts in his De Medicina, and Aretaeus the Cappadocian (c. AD 81–138) is generally credited with the first accurate description, specifically reporting wheezing, dry productive cough, and difficulty sleeping in bed as characteristics of asthma. The numerous mentions of “asthma” in the extensive writings of Galen of Pergamum (AD 130–200) are in general agreement with the Hippocratic texts and, to some extent, with the statements of Aretaeus, indicating that asthma was very common and a major problem in Greco-Roman times.
The fact that asthma became pandemic after the urban transition points to a possible causal association with increased levels of exposure to environmental pollution in urban areas. Histological assessments of the lungs of ancient human mummies have shown that anthracosis (a chronic reduction in the function of the ciliated respiratory epithelium, often associated with exposure to indoor air pollution) was a regular disorder in many ancient societies, such as those of Egypt, Peru, and Aleutia. The only human mummy recovered from ancient Rome, the so-called Grotta Rossa mummy, also revealed severe anthracosis despite the young age of the person at the time of death. These records point to a widespread incidence of respiratory diseases of environmental origin.

Impacts of Mining on Human Health and the Environment in Ancient Times

Mining has changed not only the environment but also the history of humankind. The discovery of metals that could be fashioned into tools capable of inflicting death liberated human beings from paradise and made them masters over all other forms of life on earth. The inseparable association of metals with human culture has been canonized by naming critical periods in human history after metals, with the Copper Age, Bronze Age, and Iron Age representing well-defined golden ages in technological development.

Table 1 Environmental and health impacts of mining in ancient times

Environmental change: destruction of natural habitat at mining and waste disposal sites; alteration of natural habitat by mine-related emissions and discharges; changes in river flow regime and the ecology of the river basin; land degradation due to deforestation for fuel wood; development of population centers around mining sites; abandoned slag and mine wastes.

Environmental impact: water pollution and formation of acid mine drainage; air pollution from mineral processing and smelting operations; reduced biodiversity and seafood resources; erosion and depletion of soil fertility due to deforestation; soil contamination; leaching of pollutants from mine spoils and contaminated soils; human wastes from population centers.

Environmental/occupational exposure and health impacts: exposure of workers to hard labor and toxic dusts and fumes; inhalation of polluted air from smelters and wood combustion; ingestion of contaminated waters; exposure to soil-based pollutants and fugitive emissions from mine spoils; exposure to physical risks around mining sites; exposure to unsanitary conditions in mining communities; creation of environmental refugees; influx of people with new infective pathogens into mining communities; wars and conflicts for resource control.
Although metals have benefited human beings immeasurably, the mining and smelting of metals have also dramatically altered landscapes in many parts of the world and left toxic legacies in the form of massive mining spoils and smelting slag, which have been exhaling pollution into the surrounding environment and poisoning people for millennia. Wars, imperialism, and colonialism were fueled by the quest for mineral resources, but the dramatic collateral damage to the environment is never romanticized in the annals of human history. Roman and Greek writers depicted mining as an abuse of mother earth, and this assault has been going on continuously for the past 10,000 years. The oldest mining operation in the world is believed to be the Ngwenya hematite mines of Swaziland, which have been dated to approximately 43,000 BP; the iron mineral was used primarily as a cosmetic and for medicinal purposes. Mining of copper and lead began between 7000 and 6000 BP, and the exploitation of two other metals of antiquity (gold and silver) must have predated the discovery of copper and lead. Mining of mercury and iron began later in ancient times. Most of the major ore deposits of the seven metals of antiquity were discovered in the Old World and Asia (especially India, China, and Japan) and exploited intensively. The environmental impacts of mining can begin at any stage, from the exploration and discovery of ore deposits, through the extraction, processing, and smelting of ores, to the transportation of the refined metal to the site of consumption. Even after the ores are exhausted, wastes in abandoned mines represent a toxic legacy that can be spread over tens of kilometers. Table 1 shows some of the important environmental effects of mines, which can be categorized into physical impacts, pollution impacts, and occupational and environmental health impacts. Mines produce large amounts of waste because the metal of interest is only a very small fraction of the material mined and the ores smelted. In surface mining with the typical ore tenor of 5%–10% worked in ancient times, the amount of waste ranged from 10 to 20 times the total volume of crude metal; for precious metals, the waste-to-ore ratio can be over 1000 (the sketch below reproduces this arithmetic). The tailings and waste rocks discarded close to the mining areas were the primary sources of environmental pollution, because the metals can be leached into the local surface water and groundwater and dispersed more widely, whereas wind erosion can release contaminated dusts into the atmosphere to be deposited in surrounding ecosystems. These impacts are local to a large extent and regional in some cases. Smelting and refining processes produce particulate and gaseous emissions, wastewaters, and solid wastes (slag). The slags produced by some of the ancient mines have become famous because of their enormity. Cyprus has more than 40 slag heaps containing more than 4 million tons of historic slag. The Mitterberg region of Austria has hundreds of slag heaps, some of more than 500 tons, left over from copper mining during the Bronze Age. A huge copper mine dating from around 400 BC that has been excavated at Tonglushan, China, covered an area of 2 × 1 km and left behind tens of thousands of tons of slag and wastes. The great silver mines of Laurion near Athens yielded approximately 3500 tons of silver and 1.4 million tons of lead, and over their 300-year life span must have produced hundreds of thousands of tons of waste.
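The waste ratios quoted above follow directly from the ore grade; a minimal sketch of the arithmetic (ignoring smelting losses, which would only increase the figures):

```python
# Waste generated per ton of metal as a function of ore tenor (metal mass
# fraction). For the 5%-10% tenors quoted in the text this gives roughly
# 10-20 tons of waste per ton of crude metal; for precious-metal ores the
# ratio runs into the thousands. Smelting losses are ignored in this sketch.
def waste_per_ton_metal(tenor):
    """Tons of waste rock and tailings per ton of metal recovered."""
    return 1.0 / tenor - 1.0

for tenor in (0.10, 0.05, 0.001):      # 10%, 5%, and a 0.1% precious-metal ore
    print(f"ore tenor {tenor:.1%}: ~{waste_per_ton_metal(tenor):,.0f} tons of waste per ton of metal")
```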
Over 5000 years of mining in Spain has left behind 16–20 million tons of slag in the Rio Tinto region, 6–7 million tons in the Tharsis mine region, and 3–4 million tons in 60 or more other sites. The claim that Emperor Trajan took 5 million pounds of gold and 10 million pounds of silver from Dacia (Balkan states) may be exaggerated but cannot be disputed because of the large number and quantity of ancient mine wastes in the region. The long history of mining and metallurgy in Asia Minor is reflected in the large number of ancient slag heaps that dot the region. The copper mining district at Fenan in the southern Levant was in operation from the Bronze Age through the Iron Age, Nabatean, Roman, and Byzantine periods. The most famous sites were in the Sinai Desert, at Timna near Eilat; others stretch from southern Sinai to northern Israel and Jordan. The legacy of such extensive ancient metallurgical enterprises remains today in the form of major spoil tips from more than 300 sites near Timna alone, and over 400 in the Sinai Peninsula. These visible scars from pillaging of the earth’s resources that dot the


landscape of the ancient world were remarkable for exhaling contaminated dusts into the air and toxic metals into local watercourses, and they remain environmental hazards even today.

It has been estimated that worldwide production of lead was 11 million tons during the Copper Age (4000–2100 BC), 9 million tons during the Bronze Age (2100–1200 BC), 17 million tons during the Iron Age (1200–50 BC), and 21 million tons during the Roman Empire (50 BC–AD 500). The cumulative production of lead in ancient times is estimated to be 58 million tons (revised from the compilation by Nriagu in 1983), and the amount of lead released to the atmosphere during that time must have been enormous. The Romans accounted for approximately 36% of all the lead produced in ancient times, and during the Roman Empire the production of lead is estimated to have exceeded 100,000 tons per year. It is not surprising that the mining and smelting of huge quantities of lead ores have been associated with a significant increase in atmospheric lead pollution in the Northern Hemisphere during Roman times. A study of the historical records in Arctic snow layers by Hong and colleagues found a fourfold increase in tropospheric lead pollution during Roman times. Historical records in lake sediments and bogs all over Europe also document similar regional increases in airborne lead pollution at that time. The huge increase in the production of copper in China and the Western world during classical times has also been associated with increased deposition of this metal in Arctic snowfields as well as in lake and bog sediments throughout the Northern Hemisphere. The levels of air pollution in ancient times were not matched again until after the Industrial Revolution (the sketch below cross-checks the production figures).

The energy demand occasioned by mining and smelting operations was a critical driver of deforestation in ancient times. Once the smelting of metallic ores developed from pottery-making, the use of wood fuel accelerated, and by the time the Bronze Age was well under way, wood was being consumed around the Eastern Mediterranean on a scale that could not possibly be sustained on a long-term basis. The ancient mining, smelting, metal-working, shipbuilding, pottery-making, and construction industries all had massive appetites for fuel, and practically all the available fuel was wood. Mesopotamia was particularly susceptible to wood shortage. Because of its geography and dry climate, it could not provide the wood fuel to support a Bronze Age civilization that worked metal, built large cities, and constructed canals and ceremonial centers that used wood, plaster, and bricks. Most timber had to be imported (appropriated) from the surrounding mountains, precipitating a deforestation that, in a climate subject to occasional torrential storms, would have led to severe erosion and soil impoverishment. An ecological collapse triggered by mining and deforestation would have given rise to innumerable environmental refugees, who formed the pool of cheap labor for wars, mines, and construction projects, further exacerbating the conflicts and social breakdown. The ontology of the environmental (and mediated health) problems associated with ancient mining was clearly complex, replete with feedbacks and other nonlinearities. Fuel shortage was most likely the single most serious constraint on metal production from the Bronze Age onward in some areas. For example, the Rio Tinto mines in Spain probably needed 260 tons of wood a day even in Roman times.
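The epoch totals quoted above are internally consistent, as a couple of lines of arithmetic confirm:

```python
# Cross-check of the ancient lead-production figures quoted in the text
# (millions of tons per epoch, revised from Nriagu's 1983 compilation).
production_mt = {
    "Copper Age (4000-2100 BC)": 11,
    "Bronze Age (2100-1200 BC)": 9,
    "Iron Age (1200-50 BC)": 17,
    "Roman Empire (50 BC-AD 500)": 21,
}
total = sum(production_mt.values())
roman_share = production_mt["Roman Empire (50 BC-AD 500)"] / total
print(f"cumulative ancient production: {total} million tons")  # -> 58
print(f"Roman share of the total: {roman_share:.0%}")          # -> 36%
```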
Smelting of metal sulfides in particular requires a great deal of fuel. To produce 1 kg of copper by smelting 30 kg of sulfide ore would require approximately 300 kg of charcoal, and to make 1 ton of charcoal takes somewhere between 12 and 20 m3 of wood (these ratios are combined into a rough annual wood budget in the sketch following the passage from Plato below). Archeologists have estimated that the Bronze Age copper mines at Mitterberg, in the Austrian Alps near Salzburg, must have employed approximately 180 miners and smelters to produce approximately 20 tons of copper a year, which probably required approximately 19 acres of forest to be felled just for the smelters. Over a timescale of 10 years and longer, these mining operations must have caused local deforestation on a large scale and ever-increasing costs for hauling the wood to keep the furnaces going. By late medieval times, even the productive forests of Germany could support iron smelting for only 3 months a year. The problems might have been more severe in the Mediterranean countries, where dry weather conditions make the vegetation and landscape highly vulnerable. In Cyprus, the magnificent pine forest that once covered the island was said to have been cleared in a comparatively short time, mainly to make charcoal for smelting. The huge slag heaps on the island suggest a total production of perhaps 200,000 tons of copper, which would have required fuel equivalent to 200 million pine trees, harvested from forests 16 times the total area of the island. Even if Cypriot forests could regenerate quickly under the right conditions, shortage of wood fuel was probably the externality that led to the collapse of the Cypriot copper industry around AD 300, and it must have remained a persistent problem on the island for both domestic and industrial activities. Another famous victim of wood shortage was the island of Elba, once called “Aethaleia,” “the smoky island,” because of the massive smelting industry there. The Romans were forced to give up smelting Elban ores on the island itself in the 1st century BC because they ran out of wood, and the ores were subsequently shipped to Populonia on the mainland to continue the industry. The great silver mines of Laurion, near Athens, required not only the fuel to smelt the ores but also the fuel to build and maintain the water cisterns. Based on the 3500 tons of silver and 1.4 million tons of lead produced for classical Athens over a period of approximately 300 years, Wertime has estimated that the Laurion mines consumed 1 million tons of charcoal and 2.5 million acres of forest. The decline of these mines has been attributed to fuel costs that had risen to the point where they were uneconomic to run. Literary records show that deforestation accompanied by soil erosion was already a severe problem in Attica, the region surrounding Athens. In the “Critias,” Plato, the famous Greek author, decried the incredible environmental damage that had been done to the Attica Peninsula in his time:

But in those days the damage had not taken place, the hills had high crests, the rocky plain of Phelleus was covered with rich soil, and the mountains were covered by thick woods, of which there are some traces today. For some mountains which today will only support bees produced not so long ago trees which when cut provided roof beams for huge buildings whose roofs are still standing. And there were a lot of tall cultivated trees which bore unlimited quantities of fodder for beasts.
The soil benefitted from an annual rainfall which did not run to waste from the bare earth as it does today, but was absorbed in large quantities and stored in retentive layers of clay, so that what was drunk down by the higher regions flowed downwards into the valleys and appeared everywhere in


a multitude of rivers and springs. And the shrines which still survive at these former springs are proof of the truth of our present account of the country.
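The charcoal and wood ratios quoted earlier in this section can be combined into a rough annual wood budget for a mine like Mitterberg. A minimal sketch, using only figures from the text; the arithmetic is order-of-magnitude only:

```python
# Rough annual wood-fuel budget for ancient copper smelting, combining the
# ratios quoted in the text: ~300 kg of charcoal per kg of copper, and
# 12-20 m^3 of wood per ton of charcoal. The output figure is Mitterberg's
# (~20 t of copper a year, per the text).
CHARCOAL_T_PER_T_COPPER = 300.0        # 300 kg charcoal per kg copper = 300 t/t
WOOD_M3_PER_T_CHARCOAL = (12.0, 20.0)

def annual_wood_m3(copper_t_per_year):
    charcoal_t = copper_t_per_year * CHARCOAL_T_PER_T_COPPER
    return charcoal_t, tuple(charcoal_t * w for w in WOOD_M3_PER_T_CHARCOAL)

charcoal, (low, high) = annual_wood_m3(20.0)
print(f"~{charcoal:,.0f} t of charcoal per year, "
      f"i.e. roughly {low:,.0f}-{high:,.0f} m^3 of wood per year")
```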

Health Effects Associated With Mining in Ancient Times

Lead is said to have poisoned the first person who set his eyes on it at its birth from its ore, and as such it deserves the distinction of being one of the earliest occupational diseases contracted by humankind. From the Bronze Age onward, metal mining became a major industry that employed large numbers of people in different parts of the world. Polybius observed that the 40,000 workers employed in the lead–silver mines near the city of New Carthage produced approximately 36 tons of silver per year from mining approximately 12,000 tons of lead. These figures translate into roughly three occupationally exposed individuals per ton of lead produced (the scaling is worked through in the sketch at the end of this section). On the basis of such production figures in ancient texts, it has been estimated that approximately 180,000 workers per year engaged in lead mining and smelting were occupationally exposed to this metal during the time of the Roman Empire. An approximately equal number of people can be assumed to have been involved in the mining of copper, zinc, and other metals and to have been exposed to toxic metals; most of the ore deposits exploited in ancient times were polymetallic, and any by-product lead from the production of other metals went into the smoke. The total number of workers exposed to lead can thus be estimated at between 350,000 and 400,000 per year during Roman times. Since no deliberate efforts were made to curtail personal exposure to emissions from the forges and crucibles, it is reasonable to assume that the miners, smelters, and artisans who worked with lead experienced acute or chronic lead poisoning. It is easy to see how the ancient slave miner who crawled through the deep galleries and chiseled the lead–silver ores could have contracted lead poisoning. The slave miner handled lead, inhaled its particles, and ingested some of it commingled with his meager diet. In short, he wallowed in lead. Ancient literary sources were quite silent on the health effects of metal mining and smelting on workers. This may be explained by the fact that mining was primarily a job for slaves, and the Roman aristocrats and literati, as well as the nobility of many other ancient cultures, regarded labor of any sort as beneath their dignity and lived oblivious to the sufferings of this particular occupational group. There are good reasons for suspecting that a large percentage of the population in mining communities received unhealthy doses of toxic metals in their air and water. To the miners and smelters of each community, one has to add the woodcutters, carpenters, charcoal burners, and carters, who cut, carried, and processed the wood needed for the gallery timbers and the fuel for the furnaces, and then the farmers who produced the food to feed all these people. Mining was a large-scale operation by ancient standards. Since there were no means of transportation, most people lived near the mining and smelting operations. Excavations at Laurion show that houses and the theater were jumbled together with washeries, cisterns, adits, shafts, and slave barracks, increasing the vulnerability to local lead pollution. Xenophon observed that the areas around the Laurion silver mines were considered unhealthy and thus not worthy of a visit. According to Pliny, the exhalations from the silver mines were dangerous to all animals but especially to dogs. Lucretius (De Rerum Natura) warned against the deadly emanations from mines and lamented the hardship of miners:

And where there is mining for veins of gold and silver,
Which men will dig for deep down in the earth,
What stenches arise, as at Scaptensula!
How deadly are the exhalations of gold mines!
You can see the ill effects in the miners’ complexions.
Have you not heard and seen how short is the life
Of a miner compelled to remain at this terrible task?

Vitruvius also wrote about water pollution around the mines and its potential adverse health implications: But when gold, silver, iron, copper, lead and the like are mined, abundant springs are found, mostly impure. They have the impurities of hot springs, sulfur, alum, bitumen; and when the water is taken into the body and, flowing through the veins, reaches the muscles and joints, it hardens them by expansion. Therefore, the muscles swelling with expansion are contracted in length. In this way men suffer from cramps and gout, because they have the pores of the vessels saturated with hard, thick and cold particles.

The mechanistic explanation proffered by Vitruvius may be gibberish, but the author was right about gout and joint pains being symptoms of poisoning by lead in drinking water. Hippocrates (On Airs, Water and Places) also reckoned that waters “having iron, copper, silver, gold, sulfur, alum, bitumen and nitre” (pollutants typical of mine effluents) are bad for every purpose and for health. The possibility that lead poisoning of the aristocracy contributed to the decline of Rome has been mentioned in many publications. The primary route of lead exposure during the period of the Roman Empire was through contamination of foods and drinks with sugar lead (lead acetate). Details of this hypothesis and supporting evidence are provided in the book Lead and Lead Poisoning in Antiquity by this author.
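The occupational-exposure estimate cited earlier in this section rests on a simple scaling from Polybius’ figures for New Carthage; the sketch below reproduces it. Only the 40,000-worker, 12,000-ton, and 180,000-worker figures come from the text; the implied empire-wide output is derived arithmetic, not a historical datum.

```python
# Scaling behind the occupational-exposure estimate in the text. Polybius'
# figures for New Carthage (40,000 workers, ~12,000 t of lead a year) give a
# labor intensity of ~3.3 exposed workers per ton of lead; the text's estimate
# of ~180,000 exposed lead workers per year empire-wide then corresponds to
# an annual output of roughly 54,000 t at that labor intensity.
workers_new_carthage = 40_000
lead_output_t = 12_000
workers_per_ton = workers_new_carthage / lead_output_t
print(f"labor intensity: ~{workers_per_ton:.1f} exposed workers per ton of lead")

estimated_roman_workers = 180_000           # empire-wide estimate from the text
implied_output_t = estimated_roman_workers / workers_per_ton
print(f"implied annual lead output: ~{implied_output_t:,.0f} t")
```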

See also: Environmental Health, Planetary Boundaries and Limits to Growth; Evolving Concepts of Environmental Health; History of the Dose Response; Occupational Cancer: Modern History.


Further Reading

Alves, R.R.N., da Silva Policarpo, J., 2018. Animals and human health: Where do they meet? In: Alves, R.R.N., Albuquerque, U.P. (Eds.), Ethnozoology. Academic Press, New York, pp. 233–259.
Armelagos, G.J., Brown, P.J., Turner, B., 2005. Evolutionary, historical and political economic perspectives on health and disease. Social Science and Medicine 61 (4), 755–765.
Bridges, P.S., 1992. Prehistoric arthritis in the Americas. Annual Review of Anthropology 21, 67–91.
Brooks, N., 2006. Cultural responses to aridity in the Middle Holocene and increased social complexity. Quaternary International 151, 29–49.
Brown, N., 1994. Climate change and human history: Some indications from Europe, AD 400–1400. Environmental Pollution 83, 37–43.
Winterhalder, B., Kennett, D.J., 2006. Behavioral ecology and the transition from hunting and gathering to agriculture. In: Kennett, D., Winterhalder, B. (Eds.), Behavioral ecology and the transition to agriculture. University of California Press, Berkeley, pp. 1–21.
Büntgen, U., Tegel, W., Nicolussi, K., McCormick, M., et al., 2011. 2500 years of European climate variability and human susceptibility. Science 331, 578–582.
Butzer, K.W., 2005. Environmental history in the Mediterranean world: Cross-disciplinary investigation of cause-and-effect for degradation and soil erosion. Journal of Archeological Science 32, 1773–1800.
Camuffo, D., 1993. Reconstructing the climate and the air pollution of Rome during the life of the Trajan column. Science of the Total Environment 128 (2–3), 205–226.
Capasso, L., 2000. Indoor pollution and respiratory diseases in ancient Rome. Lancet 356 (9243), 1774.
Capasso, L., D'Anastasio, R., Pierfelice, L., Di Fabrizio, A., Gallenga, P.E., 2003. Roman conquest, lifespan, and diseases in ancient Italy. Lancet 362, 668.
Carmichael, A.G., 2009. Plague: Historical. In: Encyclopedia of microbiology. Elsevier, Amsterdam, pp. 58–72.
Chase-Dunn, C., Alvarez, A., Pasciuti, D., 2005. World systems in the biogeosphere: Three thousand years of urbanization, empire formation and climate change. Research in Rural Sociology and Development 10, 311–331.
Cohen, S.G., 1992. Asthma in antiquity: The Ebers papyrus. Allergy Proceedings 13 (3), 147–154.
Vallero, D.A., 2007. The changing face of air pollution. In: Fundamentals of air pollution, 4th edn. Elsevier, Amsterdam, pp. 3–51.
Epstein, R., 1992. Pollution and the environment: Some radically new ancient views. Dharma Realm Buddhist University Public Lecture Series, Talmage, California. Vajra Bodhi Sea: A Monthly Journal of Orthodox Buddhism 30 (1), 36–40.
Epstein, R., 1999. Pollution and the environment: Some radically new ancient views. Vajra Bodhi Sea: A Monthly Journal of Orthodox Buddhism 30 (Pt. 1), 36–43.
Finch, C.E., 2007. The human life span: Present, past and future. In: The biology of human longevity. Elsevier, Amsterdam, pp. 373–416.
Gomes, C.S.F., Silva, J.B.P., 2007. Minerals and clay minerals in medical geology. Applied Clay Science 36 (1–3), 4–21.
Gordon, B.R., 2008. Asthma history and presentation. Otolaryngologic Clinics of North America 41, 375–385.
Goudsblom, J., 2004. Fire: A socioecological and historical survey. In: Encyclopedia of energy. Elsevier, Amsterdam, pp. 669–681.
Harkins, K.M., Stone, A.C., 2015. Ancient pathogen genomics: Insights into timing and adaptation. Journal of Human Evolution 79, 137–149.
Hackman, R.M., Stern, J.S., Gershwin, M.E., 2003. Asthma and allergies. In: Complementary and alternative medicine, 2nd edn. Elsevier, Amsterdam, pp. 70–92.
Haug, G.H., Gunther, D., Peterson, L.C., Sigman, D.M., Hughen, K.A., Aeschlimann, B., 2003. Climate and the collapse of Maya civilization. Science 299, 1731–1735.
Jalut, G., Dedoubat, J.J., Fontugne, M., Otto, T., 2009. Holocene circum-Mediterranean vegetation changes: Climate forcing and human impact. Quaternary International 200, 4–18.
Karatzas, U.K., 2000. Preservation of environmental characteristics as witnessed in classic and modern literature: The case of Greece. Science of the Total Environment 257, 213–218.
Larsen, C.S., 2006. The agricultural revolution as environmental catastrophe: Implications for health and lifestyle in the Holocene. Quaternary International 150 (1), 12–20.
Lessler, M.A., 1988. Lead and lead poisoning from antiquity to modern times. Ohio Journal of Science 88 (3), 78–84.
Makra, L., 2015. Anthropogenic air pollution in ancient times. In: Wexler, P. (Ed.), History of toxicology and environmental health. Elsevier, Amsterdam, pp. 21–40.
Makra, L., Brimblecombe, P., 2004. Selections from the history of environmental pollution, with special attention to air pollution. International Journal of Environment and Pollution 22 (6), 641–656.
Maravelaki-Kalaitzaki, P., 2005. Black crusts and patinas on Pentelic marble from the Parthenon and Erechtheum (Acropolis, Athens): Characterization and origin. Analytica Chimica Acta 532 (2), 187–198.
Markham, A., 1994. A brief history of pollution. St. Martin's Press, New York.
Marshall, G.D., Roy, S.R., 2007. Stress and allergic diseases. In: Psychoneuroimmunology, 4th edn. Elsevier, Amsterdam.
McMichael, A.J., 2002. Population, environment, disease, and survival: Past patterns, uncertain futures. Lancet 359, 1145–1148.
Mitchell, P.D., 2017. Human parasites in the Roman world: Health consequences of conquering an empire. Parasitology 144, 48–58.
Nriagu, J.O., 1983. Lead and lead poisoning in antiquity. Wiley, New York.
Nunn, P.D., 2007. Climate, environment and society in the Pacific during the last millennium. Developments in Earth and Environmental Sciences 6, 1–302.
Peset, J.L., 2004. Plagues and diseases in history. In: International encyclopedia of the social and behavioral sciences. Elsevier, Amsterdam, pp. 11463–11466.
Pyatt, F.B., Gilmore, G., Grattan, J.P., Hunt, C.O., McLaren, S., 2000. An imperial legacy? An exploration of the environmental impact of ancient metal mining and smelting in Southern Jordan. Journal of Archeological Science 27, 771–778.
Reale, O., Dirmeyer, P., 2000. Modeling the effects of vegetation on Mediterranean climate during the Roman classical period part I: Climate history and model sensitivity. Global and Planetary Change 25, 163–184.
Richerson, P.J., Boyd, R., Bettinger, R.L., 2001. Was agriculture impossible during the Pleistocene but mandatory during the Holocene? A climate change hypothesis. American Antiquity 66 (3), 387–411.
Roberts, C.A., 2007. A bioarcheological study of maxillary sinusitis. American Journal of Physical Anthropology 133, 792–807.
Ruddiman, W.F., 2003. The anthropogenic greenhouse era began thousands of years ago. Climatic Change 61, 261–293.
Saavedra-Delgado, A.M.P., Cohen, S.G., 1991. Huang-Ti, the yellow emperor and the Nei Ching: Antiquity's earliest reference to asthma. Allergy Proceedings 12 (3), 197–198.
Scurlock, J.A., Andersen, B.R., 2005. Diagnoses in Assyrian and Babylonian medicine: Ancient sources, translations, and modern medical analyses. University of Illinois Press, Urbana-Champaign, IL.
Sianto, L., Chame, M., Silva, C.S.P., Gonçalves, M.L.C., et al., 2009. Animal helminths in human archaeological remains: A review of zoonoses in the past. Revista do Instituto de Medicina Tropical de São Paulo 51 (3), 119–130.
Sigerist, H.E., 1956. Landmarks in the history of hygiene. Oxford University Press, London.
Smil, V., 2004. World history and energy. In: Encyclopedia of energy, vol. 6. Elsevier, Amsterdam, pp. 549–561.
Sundell, J., 2004. On the history of indoor air quality and health. Indoor Air 14 (supplement 7), 51–58.
Taylor, C., 2005. The disposal of human waste: A comparison between ancient Rome and Medieval London. Past Imperfect 11, 53–72.
Ugawa, K., 2004. Urban history. In: International encyclopedia of the social and behavioral sciences. Elsevier, Amsterdam, pp. 16021–16026.
Vercellotti, G., Piperata, B.A., Agnew, A.M., Wilson, W.M., 2014. Exploring the multidimensionality of stature variation in the past through comparisons of archaeological and living populations. American Journal of Physical Anthropology 155, 229–242.
Wijbenga, A., Waterstaat, P., 1984. Chemicals, man and the environment: A historic perspective of pollution and related topics. Naturwissenschaften 71, 239–246.
Wilcox, B.A., Gubler, D.J., Pizer, H.F., 2008. Urbanization and the social ecology of emerging infectious diseases. In: The social ecology of infectious diseases. Elsevier, Amsterdam, pp. 113–137.
Yeh, H.Y., Mitchell, P.D., 2016. Ancient human parasites in ethnic Chinese populations. The Korean Journal of Parasitology 54 (5), 565–572.
Nguendo Yongsi, H.B., Dovie, D.B.K., 2007. Diarrheal diseases in the history of public health. Archives of Medical Research 38, 159–163.
Zahid, H.J., Robinson, E., Kelly, R.L., 2016. Agriculture, population growth, and statistical analysis of the radiocarbon record. Proceedings of the National Academy of Sciences 113 (4), 931–935.
Zhuang, Z., Li, Y., Chen, B., Guo, J., 2009. Chinese kang as a domestic heating system in rural northern China: A review. Energy and Buildings 41 (1), 111–119.

Environmental Reservoirs of Antimicrobial Resistance of Foodborne Pathogens

Vangelis Economou, Aristotle University of Thessaloniki, Thessaloniki, Greece
Panagiota Gousia, Department of Food Testing and Research Laboratories, Hellenic Food Authority, Thessaloniki, Greece
Hercules Sakkas and Chrissanthy Papadopoulou, University of Ioannina, Ioannina, Greece
© 2019 Elsevier B.V. All rights reserved.

Glossary

AmpC β-lactamases Enzymes that confer resistance to penicillins, narrow-spectrum cephalosporins, oxyimino-β-lactams, and cephamycins and that are not susceptible to β-lactamase inhibitors such as clavulanic acid.
Antibiotic A substance produced by, or a semisynthetic substance derived from, a microorganism and able in dilute solution to inhibit or kill another microorganism.
Antibiotic resistance Antimicrobial resistance of bacteria.
Antimicrobial resistance The ability of a microorganism (such as bacteria, viruses, and some parasites) to stop an antimicrobial (such as antibiotics, antivirals, and antimalarials) from working against it. As a result, standard treatments become ineffective, and infections persist and may spread to others.
Antimicrobial resistance gene A gene that gives microbes the ability to resist the effects of one or more antibiotics.
Carbapenemases β-lactamases with versatile hydrolytic capacities that can hydrolyze penicillins, cephalosporins, monobactams, and carbapenems.
Extended-spectrum β-lactamases β-lactamases capable of conferring bacterial resistance to the penicillins; first-, second- and third-generation cephalosporins; and aztreonam (but not the cephamycins or carbapenems) by hydrolysis of these antibiotics, and which are inhibited by β-lactamase inhibitors such as clavulanic acid.
Horizontal gene transfer The nonsexual transmission of genetic material between unrelated genomes.
Multidrug resistance Antimicrobial resistance shown to at least one antibiotic in three or more drug classes.
Pan-drug resistance Resistance to all antibiotics.
Plasmid A small, circular, double-stranded DNA molecule that is distinct from a cell's chromosomal DNA.
Resistant isolate An isolate that is resistant to one or more antibiotics.
Resistome The collection of all genes that directly or indirectly contribute to antibiotic resistance.
Superbugs Strains of bacteria that are resistant to the majority of antibiotics commonly used today.
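The resistance categories defined above amount to a simple counting rule over an isolate's susceptibility profile. The following minimal Python sketch bins a hypothetical isolate according to the glossary definitions; the drug classes and test results are invented for illustration, and real laboratory workflows apply standardized interpretive breakpoints (e.g., EUCAST or CLSI) rather than raw counts.

# Classify a hypothetical isolate using the glossary's counting definitions.
def classify(resistant_by_class, total_classes):
    # resistant_by_class maps drug class -> number of agents the isolate resists
    classes_hit = sum(1 for n in resistant_by_class.values() if n >= 1)
    if classes_hit == total_classes:
        return "pan-drug resistant"         # resistant to all antibiotics tested
    if classes_hit >= 3:
        return "multidrug-resistant (MDR)"  # >= 1 agent in >= 3 drug classes
    if classes_hit >= 1:
        return "resistant isolate"
    return "susceptible"

# Invented example: resistant to agents in 3 of the 5 classes tested
isolate = {"penicillins": 2, "cephalosporins": 1, "fluoroquinolones": 1,
           "aminoglycosides": 0, "macrolides": 0}
print(classify(isolate, total_classes=5))   # -> multidrug-resistant (MDR)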

Antimicrobial Resistance

Antibiotics are substances used to inhibit the growth and proliferation of bacteria. They are categorized as naturally occurring, semisynthetic, and synthetic compounds with antimicrobial activity, and their application can vary, since they can be administered parenterally, orally, or topically. The introduction of antimicrobials in therapy and other uses was a breakthrough in human and veterinary medicine. The golden era of antimicrobial discovery was between 1945 and 1970, when an ample number of new substances were discovered. Antimicrobials have succeeded, among other things, in lowering the burden of infectious diseases, mainly those of bacterial origin. Apart from human therapy, antibiotics have been used to control animal diseases in both the terrestrial and the aquatic environment. Moreover, antibiotics were used for disease prevention (prophylaxis and metaphylaxis) and for growth promotion with excellent results, until the emergence of antibiotic resistance and its correlation with the overuse of antibiotics in humans and animals came to light. Antibiotic use for growth promotion has been banned in the European Union since 2006 and has more recently been discouraged in the United States. Still, several countries do not regulate the use of antibiotics, or have not yet established effective mechanisms to control antibiotic use in humans and animals. This is evident from the emergence of new resistant strains, mainly of animal origin. And since humans, animals, and animal products circulate the globe, new resistance traits can emerge in almost every part of the world. Microbial resistance is not a new phenomenon. Microbes have an able arsenal of mechanisms to overcome survival dangers in their environment. These dangers can be posed by changing environmental conditions (temperature, salinity, redox potential, low nutrient concentration, etc.), by the production of antibiotic substances by plants and animals, and most frequently by the production of antibiotics by other microbes trying to find their place in the available niches. Most antibiotics in use have a precursor produced by microbes: benzylpenicillin was produced by Penicillium notatum, a fungus that is abundant in terrestrial environments, and several other antibiotics, such as the aminoglycosides, are produced by microorganisms in nature; streptomycin, for example, was derived from Streptomyces griseus. The hypothesis that resistance mechanisms existed before the therapeutic administration of antibiotics is strengthened by several observations of resistant bacteria soon after the antibiotics' first use.



Staphylococcus aureus strains isolated from British patients in 1948 were found to be resistant to penicillin. Similarly, Mycobacterium tuberculosis isolates were found to be resistant to streptomycin soon after its first use. Currently, antimicrobial resistance is one of the major health concerns of this century. The use and abuse of antimicrobials has resulted in the emergence of antibiotic-resistant bacteria, treatment failure, and the possible death of the patient. From the above, it is clear that antibiotic-resistant bacteria are widespread among humans and animals. Still, the environment appears to make a major contribution to the emergence of this phenomenon. This can be explained by the fact that antibiotics or their metabolites can be excreted to the environment and finally gather in some sort of environmental reservoir. This affects and modifies the bacterial microecosystem and possibly forms a pool in which the environmental pressure is steady and lasts for long periods of time. The route and fate of each substance are governed by its pharmacokinetic and pharmacodynamic properties in the organism and in the reservoir as well. Awareness of antibiotics in the environment has led to the proposal that they be used as environmental indicators for pharmaceuticals in general. Although there is growing awareness of antibiotic occurrence in the environment, to our knowledge there are no limits for antibiotic concentrations in the environment and therefore no available regulation in the United States or Europe.

Antimicrobial Resistance in Bacteria That Can Contaminate the Environment

Hospital Acquired

Most of the acknowledged emerging bacterial pathogens share a common trait: the exhibition of multidrug resistance (MDR). With the term multidrug resistance, or multiple drug resistance, we refer to antimicrobial resistance shown by a species of microorganism to multiple classes of antimicrobial drugs. As an example, Mycobacterium tuberculosis, a well-recognized pathogen that has attracted attention because of its MDR form, causes disease in both developing and industrialized nations. The most recognized pathogens that cause disease and can probably end up in the environment are those causing nosocomial infections. Pathogens such as the Gram-negative Acinetobacter baumannii, Burkholderia cepacia, Campylobacter jejuni, Citrobacter freundii, Enterobacter spp., Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Proteus mirabilis, Pseudomonas aeruginosa, Salmonella spp., Serratia spp., and Stenotrophomonas maltophilia, or the Gram-positive Clostridium difficile, Enterococcus faecium, Enterococcus faecalis, Staphylococcus aureus, Staphylococcus epidermidis, and Streptococcus pneumoniae are typical examples. Although there are several contamination pathways for the above-mentioned bacteria, the hospital-acquired strains are of utmost importance, since they circulate between antibiotic-treated patients in hospital settings and acquire strong, almost incurable, resistance to antibiotics. When the term "superbugs" is used, it refers to microbes with enhanced morbidity and mortality due to high antibiotic resistance, specifically to the classes used for their treatment. In such cases the therapeutic options are limited; hospitalization is extended, raising costs through greater treatment expenses, reduced work time, and possible deaths. In some cases, certain superbugs have acquired advanced virulence traits and enhanced transmissibility. Antibiotic resistance can therefore be regarded as a virulence factor, and indeed one of the most prominent emerging traits of virulence nowadays. The Infectious Diseases Society of America has focused on certain antibiotic-resistant bacterial pathogens that cause most of the life-threatening diseases in hospital settings and elsewhere, using the acronym ESKAPE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp.). ESKAPE pathogens are at least multidrug-resistant, sometimes exhibiting extensive multi-, extended-, or pan-drug resistance, a characteristic that poses a great challenge for clinical practice. The acronym refers to the ability of these strains to evade antibiotic treatment, since they exhibit a variety of resistance mechanisms to most of the commonly used antibiotics. Although these pathogens can circulate in most premises, including environmental sources such as water bodies and soil, the hospital-acquired strains pose the major risk. Since it is quite common for resistance mechanisms to reside on transferable genetic elements, the possibility of resistance transfer to environmental deposits, or through environmental sources to humans, cannot be ruled out. The clearance of these pathogens is also hindered by the formation of biofilms, a trait that is quite common in environmental sources, permitting further dispersal and survival of these strains.

Enterococcus spp.

Enterococci are commensal bacteria that have long been used for food fermentation and preservation. They are intestinal tract colonizers of mammals and birds and are considered indicators of enteric contamination of food and water. They can withstand quite rough conditions, since they survive high or low temperatures and pH shifts and are well adapted to saline environments such as seawater. Although enterococci normally live harmlessly in the mammalian intestine, in certain cases (especially E. faecalis) they can cause bacteremia, endocarditis, and intra-abdominal, pelvic, soft-tissue, and urinary tract infections. Over the last two decades, enterococci have emerged as an important cause of nosocomial and community-acquired infections, with treatment failure being common, since enterococci have exhibited remarkable resistance to antibiotics. The mechanisms of antibiotic resistance are usually target modification or enzymatic drug inactivation. Resistance can also be intrinsic, as in the case of β-lactams, aminoglycosides, and macrolides/lincosamides, or acquired, through mutations or the acquisition of exogenous genetic material. E. faecalis is considered the better recipient of such material and is less affected by host or origin; E. faecium, in contrast, usually expresses higher resistance and virulence. Among other antibiotics, enterococci have shown resistance to vancomycin, a glycopeptide commonly used for the treatment of methicillin-resistant Staphylococcus aureus in nosocomial infections and for the treatment of infections by other bacteria in cases of resistance or allergic reactions to β-lactams.


The rapid increase of vancomycin-resistant enterococci (VRE) isolated from livestock and related food products was probably a result of the use or misuse of glycopeptide antimicrobials such as avoparcin in food-producing animals. The emergence of VRE has, as a result, hindered the therapeutic utility of vancomycin. Enterococcus faecium and E. faecalis are used as indicators of fecal contamination of water and food, since they are hardy microorganisms that can withstand harsh environmental conditions, and their occurrence in these matrices is a result of direct or indirect contact with feces. Moreover, the ability of enterococci to acquire genetic resistance traits makes them a good indicator of antimicrobial resistance of Gram-positive bacteria in general. Animals are considered a reservoir for human contamination by E. faecalis and E. faecium of animal origin, either directly through contact with fecal material or indirectly through animal products. Enterococcal strains of animal origin show evidence of the selective pressure of antibiotic usage. As a result, the European Union member states have selected E. faecalis and E. faecium as suitable organisms for gathering and evaluating antimicrobial resistance data from animals, both on farms and at slaughterhouses. The genetic background of antibiotic resistance in enterococci is rather interesting, since they are renowned for their genome plasticity. They can integrate and utilize mobile genetic elements such as plasmids, transposons, insertion sequences, and prophages, by which the transfer of antibiotic resistance is performed; less commonly, new genetic traits are acquired through mutations. The ability of enterococci to exchange genetic resistance traits is shown by the fact that foodborne enterococci are not identified as a direct cause of enterococcal disease in humans. This is also reflected in the description of two distinct clades of E. faecium isolates: clade A strains, causing nosocomial infections, and clade B isolates, mainly encountered in the community. Moreover, resistance determinants can be transferred to strains of the same species, to other species of the same genus, and to numerous pathogenic or nonpathogenic bacteria, even in the absence of selective pressure. The exchange is facilitated in habitats where different species are in close proximity, such as the intestinal tract, with the exchange of resistance traits from enterococci involving bacteria of other genera, namely Staphylococcus spp. and Listeria spp. Considering these traits, Enterococcus spp. are regarded as reservoirs of antimicrobial resistance genes (ARGs) and as indicators of antibiotic resistance.

Staphylococcus aureus

S. aureus is a common human and animal pathogen that was characterized as resistant to antimicrobials quite early, with penicillin resistance in S. aureus observed from 1948. Nowadays, up to 90% of human S. aureus isolates are resistant to penicillin and to many commonly used antibiotics. The β-lactamase-resistant penicillins were developed to treat penicillin-resistant S. aureus and are still the drugs of choice for first-line treatment. Methicillin was introduced in 1959, and only 2 years later methicillin-resistant S. aureus (MRSA) was reported in England. MRSA is regarded as a major cause of severe healthcare-associated (HA) infections. It seems, though, that during the last decade the incidence of HA-MRSA has decreased, whereas the incidence of community-associated MRSA (CA-MRSA) infections has risen. Alongside the growing awareness of MRSA in clinical settings, increasing attention has been paid to livestock-associated (LA) MRSA. MRSA has been regarded as a major cause of hospital-acquired infections for more than three decades; MRSA clonal complex 398 (MRSA CC398), however, was an emerging strain originating from livestock. Nasal carriage of LA MRSA CC398 has been widely reported among farmers and other persons in contact with animals. MRSA CC398 is quite frequently carried by pigs; still, it does not seem to have host specificity, since it has been isolated from a variety of other animals including cattle, dogs, horses, and chickens. This lack of host specificity makes MRSA CC398 capable of colonizing and infecting humans. Resistance to methicillin is mediated via the mec operon, which is situated on the staphylococcal cassette chromosome mec (SCCmec). The mecA gene confers resistance to methicillin by encoding an altered penicillin-binding protein (PBP2a or PBP2') with a lower affinity for β-lactams (penicillins, cephalosporins, and carbapenems). The efficacy of all β-lactam antibiotics is therefore reduced, rendering them of no use in MRSA infections. These mobile genetic elements appear to have been acquired in parallel by different lineages, indicating no common ancestor of HA-MRSA, CA-MRSA, and LA-MRSA. Concerning glycopeptide resistance, and specifically vancomycin resistance, this is mediated by the vanA gene, which is commonly found on the Tn1546 transposon of resistant enterococci and codes for an enzyme that produces an alternative peptidoglycan to which vancomycin will not bind.
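Because markers such as mecA are defined at the sequence level, genotypic screening for MRSA is, at its core, a sequence search. The toy Python sketch below flags a presumptive MRSA by marker presence; the marker and contig strings are invented placeholders rather than real mecA sequence, and practical pipelines align reads or assemblies against curated reference genes (e.g., with BLAST) instead of using exact substring matching.

# Toy genotypic screen: flag an isolate if a resistance marker is present.
MARKER = "ATGAAAGGATCCTTAGGTACC"  # invented placeholder, not real mecA sequence

def carries_marker(contig, marker=MARKER):
    # Check both strands; a simplistic stand-in for proper local alignment.
    complement = str.maketrans("ACGT", "TGCA")
    rc = marker.translate(complement)[::-1]  # reverse complement
    return marker in contig or rc in contig

contig = "GGCC" + MARKER + "TTAA"  # hypothetical assembled contig
print("presumptive MRSA" if carries_marker(contig) else "marker not detected")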

Enterobacteriaceae

Enterobacteriaceae such as Enterobacter spp., Escherichia coli, Klebsiella spp., Salmonella spp., and Yersinia enterocolitica are typically foodborne pathogens. These bacteria have nevertheless emerged as MDR organisms, with resistance to cephalosporins and carbapenems being of greatest concern.

Extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae

Extended-spectrum β-lactamases (ESBLs) are enzymes that hydrolyze most penicillins and cephalosporins, including the oxyimino-β-lactams (cefuroxime, third- and fourth-generation cephalosporins, and aztreonam), but not cephamycins or carbapenems. Most ESBLs belong to the Ambler class A β-lactamases and are inhibited by β-lactamase inhibitors such as clavulanic acid, sulbactam, or tazobactam, and by diazabicyclooctanones (avibactam). ESBL-producing Gram-negative bacteria have been reported in Europe and worldwide, with most of them being multidrug-resistant and susceptible only to carbapenems.


Infections caused by multidrug-resistant ESBL bacteria are associated with high morbidity and mortality, high health care costs, and limited therapeutic options. The first ESBL-producing strains were identified in 1983 and have since been observed worldwide. The global spread is a result of the clonal expansion of producer organisms, the horizontal transfer of ESBL genes, and, less commonly, their de novo emergence. By far the most clinically important group of ESBLs are the CTX-M enzymes, followed by the SHV- and TEM-derived ESBLs. The bacteria that produce ESBLs are mostly Enterobacteriaceae. ESBL-producing Enterobacteriaceae were first isolated in clinical and hospital settings, later in nursing homes, and, since the new millennium, in the community, namely in outpatients, healthy carriers, sick and healthy animals, and more recently in food products. An increasing incidence of infection with ESBL-producing E. coli has been observed in food animals such as cattle, broiler chickens, and pigs. The most frequently encountered ESBL-producing species are E. coli and K. pneumoniae, an observation that can be linked to the clinical significance of these two species. However, all other Enterobacteriaceae can be ESBL producers, with phenotypic and genotypic resistance exhibited in most of the clinically relevant species. The prevalence of ESBL-positive isolates depends on various factors, including species, geographic locality, clinical setting, group of patients, and type of infection, with large deviations reported between different studies. The spread of invasive K. pneumoniae among European countries has reached the point that, according to EARS-Net data for 2015, at least 25%, and in some countries even 50%, of isolates were nonsusceptible to third-generation cephalosporins. Most of these isolates were presumed to be ESBL producers on the basis of local ESBL test results, except for Greece and Italy, where high percentages of KPC-type carbapenemase-producing isolates are recorded. The vast majority of ESBLs are not intrinsic but acquired enzymes, with the responsible genes usually situated on plasmids, making them easily transferable to other, sensitive bacteria. The dissemination of ESBL-producing strains has recently been reported in healthy food animals in Europe, Asia, and the United States. The acquired ESBLs are expressed at various levels and differ significantly in biochemical characteristics such as activity against specific β-lactams (e.g., cefotaxime, ceftazidime, aztreonam). The level of expression and the properties of an enzyme, together with the copresence of other resistance mechanisms (other β-lactamases, active efflux, altered permeability), result in the large variety of resistance phenotypes observed among ESBL-positive isolates. Animals, food, and the environment can therefore be sources of contamination with ESBL-producing bacteria. The prevalence of ESBL-producing Enterobacteriaceae is reported to be rather high, with prevalence rates of up to 88% for sampled carcasses and 72% for sampled broiler ceca at slaughter. As expected, the presence of genes encoding ESBL production is similarly high, with almost 80% of retail chicken meat testing positive. Even more alarming is the fact that genetic analysis of ESBL genes from human rectal samples and poultry meat reveals that the predominant ESBL-coding genes are identical.
Foodborne transmission is therefore the most probable route of human contamination.

Acquired AmpC β-lactamase-producing Enterobacteriaceae

AmpC-type cephalosporinases are Ambler class C β-lactamases that inactivate penicillins, cephalosporins (including the third-generation but not the fourth-generation compounds), and monobactams. The AmpC-type enzymes are usually not inhibited by ESBL inhibitors, especially clavulanic acid. The first isolates producing acquired AmpCs were identified during the 1980s. Since then, a universal spread has been observed as a result of clonal spread and horizontal transfer, the latter attributable to the positioning of AmpC genes on plasmids. The transferable AmpC genes have been characterized according to their original producers. They are therefore categorized into the Enterobacter group (MIR, ACT), the Morganella morganii group (DHA), the C. freundii group (CMY-2-like, LAT, CFE), the Aeromonas group (CMY-1-like, FOX, MOX), the Hafnia alvei group (ACC), and the Acinetobacter baumannii group (ADC). The most widespread are the CMY-2-like enzymes, although some of the other groups have spread extensively too. AmpCs are found in E. coli, K. pneumoniae, Salmonella enterica, and P. mirabilis. Isolates harboring these enzymes have been found in hospitalized and community patients, and in farm animals and food products (in E. coli and S. enterica), even earlier than the classical ESBL enzymes. AmpCs are produced by Enterobacteriaceae and some other Gram-negative bacilli, either at trace level (e.g., E. coli, Shigella spp.) or after induction (e.g., Enterobacter spp., C. freundii, M. morganii, P. aeruginosa). The spread of the acquired AmpCs is significant and has been recorded in multicenter studies of enterobacterial resistance to third-generation cephalosporins. Still, their frequency is lower than that of ESBLs, at least in Europe. However, the epidemiological significance of AmpC-producing organisms may further increase.

Carbapenemase-producing Enterobacteriaceae (CPE)

Carbapenemases are β-lactamases that hydrolyze penicillins, in most cases cephalosporins, and, to various degrees, carbapenems and monobactams (the latter are not hydrolyzed by metallo-β-lactamases). The enzymes contributing to carbapenem resistance can be categorized into three major groups: the class A carbapenemases, the class B metallo-β-lactamases (MBL), and the class D β-lactamases (OXA). Class A includes the SME enzymes, with three types associated with S. marcescens; the IMI enzymes, associated with E. cloacae; GES, with variants found in P. aeruginosa, K. pneumoniae, and E. coli; and KPC, namely the K. pneumoniae carbapenemase. In Europe, carbapenemases were first noticed in the second half of the 1990s in several Mediterranean countries, mainly in Pseudomonas aeruginosa strains. In the early 2000s, an epidemic in Greece of the Verona integron-encoded metallo-β-lactamase (VIM) from Klebsiella pneumoniae strains was followed by an epidemic related to the K. pneumoniae carbapenemase (KPC). Nowadays, the OXA-48 carbapenemases comprise the fastest-growing group of carbapenemases in Europe. In Mediterranean countries, and most notably in Greece and Italy, more than one third of invasive K. pneumoniae strains are resistant to carbapenems. According to the European surveillance data for 2015, more than one third of European countries have reported that CPE are either endemic or exhibit interregional spread, with only three countries not identifying CPE within their territory. Further, the situation has worsened considerably since the previous European report, published only 2 years earlier.


The situation seems worrisome in the United States too, with CPE frequently isolated from both healthy carriers and clinical cases. The New Delhi metallo-β-lactamases are mainly encountered in the Indian subcontinent and the Middle East, although their occurrence elsewhere is not unusual because of travel and commerce. Carbapenemases are a source of concern because they are transferable and may confer resistance to almost all β-lactams, and the strains producing them are usually multidrug-resistant. Infections with CPE are therefore associated with high mortality rates. Carbapenemases are of high epidemiological importance, especially when they confer decreased susceptibility to carbapenems such as imipenem, meropenem, ertapenem, and doripenem. The resistance conferred by carbapenemases is usually acquired, with the encoding genes harbored on transposable elements on plasmids. The expression, the characteristics, and the activity against individual β-lactams differ significantly between carbapenemases. In addition, the simultaneous occurrence of other resistance mechanisms results in the various resistance phenotypes present in carbapenemase-producing isolates. Enterobacteriaceae resistance to carbapenems can also result from the occurrence of extended-spectrum β-lactamases (ESBLs) or AmpC enzymes, especially when these occur along with decreased permeability due to alteration or down-regulation of porins, and possibly also of penicillin-binding proteins. CPE usually have decreased susceptibility to carbapenems and in most cases are resistant to extended-spectrum cephalosporins, such as cefotaxime, ceftriaxone, ceftazidime, and cefepime. Still, this is not obligatory, since in some cases carbapenemase-producing strains harboring, for example, OXA-48 do not show decreased susceptibility to cephalosporins. Currently, most CPE isolates also produce enzymes that hydrolyze cephalosporins, such as CTX-M-type ESBLs, making them cephalosporin-resistant as well.

Salmonella spp.

Salmonella spp. are one of the primary causes of foodborne disease worldwide, with resistance traits similar to those exhibited by other members of the Enterobacteriaceae. A microorganism of great variability in itself, Salmonella has been under the microscope, metaphorically and literally, for decades, since the societal and economic burden of the disease is too large to be ignored. In addition, mortality is significant, with an estimated 90,300 deaths from nontyphoidal and 178,000 deaths from typhoidal salmonellosis in 2015. The emergence of Salmonella antibiotic resistance in farms, animals, food, and humans is therefore monitored by international bodies of acknowledged importance, such as the National Antimicrobial Resistance Monitoring System (NARMS) for the United States and the European Food Safety Authority (EFSA), in close collaboration with the European Centre for Disease Prevention and Control (ECDC), for Europe. Salmonella is perhaps the best example of a strong correlation between the use of antibiotics in animals and the emergence of resistance in humans. Although this link seems self-evident, limited data are available concerning the importance of the environment in transmission to humans and the possible reservoirs of the pathogen or of the resistance genetic elements. Salmonella isolates frequently exhibit multidrug resistance. This is not new, since the resistance of Salmonella to several antibiotics has been observed since the 1960s. Resistance has been recorded for kanamycin, β-lactam antibiotics (penicillins and cephalosporins), tetracyclines, chloramphenicol, sulfonamides, and streptomycin, with an increasing trend for antibiotics such as ceftriaxone, ceftiofur, amoxicillin/clavulanic acid, and nalidixic acid. The antibiotics of greatest concern are the extended-spectrum cephalosporins, since ceftriaxone is the drug of choice for severe salmonellosis in children. The most common multidrug resistance phenotype confers resistance to ampicillin, chloramphenicol, streptomycin, sulfonamides, and tetracyclines (ACSSuT). MDR Salmonella Typhimurium DT104, a strain of bovine origin that has since spread globally but has been declining in recent years, exhibits this quite common multidrug phenotype. Considering that the transmission of Salmonella is mainly zoonotic, the surveillance of Salmonella resistance among animals is of extreme importance. According to the annual EFSA/ECDC summary report, the antibiotic-resistant serotypes most frequently isolated from broilers and laying hens, the principal animals implicated in Salmonella contamination, are S. Enteritidis, S. Infantis, S. Kedougou, S. Kentucky, S. Livingstone, S. Mbandaka, S. Typhimurium (including monophasic S. Typhimurium), and S. Senftenberg. MDR is high in humans, broilers, and laying hens alike, with at least 25% of strains exhibiting multidrug resistance and some exhibiting pan-drug resistance. Similar observations have been made by NARMS for the United States, where MDR is high but decreasing, whereas S. Dublin, a serotype mainly of bovine origin, is emerging as a dominant human pathogen.

Campylobacter spp.

Campylobacteriosis is probably the most common foodborne disease worldwide. The disease is generally considered mild, since it is self-limiting, resolving within a couple of days, and low or null mortality is expected. In contrast, the economic and public health burden is considerable, bearing in mind that the disease is generally underreported because of low hospitalization rates and the complex identification of the causative agent. On the other hand, the disease can be complicated in young children and immunocompromised patients, in whom chronic disease can develop. Serious syndromes such as Guillain–Barré syndrome are related to infection by Campylobacter spp., with occasional deaths occurring. Therapy, when needed, usually relies on macrolides and tetracyclines, antibiotics to which Campylobacter spp. are exhibiting increasing resistance. Campylobacter exhibits intrinsic resistance to several antibiotics, including bacitracin, novobiocin, rifampin, streptogramin B, trimethoprim, and vancomycin. Acquired resistance to aminoglycosides, ampicillin and other β-lactams, chloramphenicol, macrolides and lincosamides, quinolones, and tetracycline is on the rise. Concerning macrolides, Campylobacter coli exhibits higher resistance to erythromycin than C. jejuni, in contrast with the quinolones, where resistance is higher in C. jejuni than in C. coli. Resistance to quinolones has been linked to the use of fluoroquinolones, and specifically enrofloxacin, in veterinary medicine. Tetracyclines are an alternative for the treatment of Campylobacter spp., but resistance to them shows increasing trends with major geographical differences and is therefore problematic. In contrast, resistance of Campylobacter to aminoglycosides is low, with resistance to gentamicin reported in less than 2% of isolates.


Still, increasing trends have been reported, making susceptibility testing against gentamicin advisable. Concerning the genetic background of Campylobacter resistance, fluoroquinolone resistance is mediated by point mutations in DNA gyrase A. The multidrug efflux pump CmeABC functions synergistically with the gyrA mutation by reducing the accumulation of these agents in Campylobacter cells. All of the known Campylobacter fluoroquinolone resistance determinants are chromosomally encoded. Macrolide resistance is mainly associated with target modification and active efflux, with point mutations in domain V at positions 2074 and 2075 of the 23S rRNA recognized as the most common mechanism for macrolide resistance in both C. jejuni and C. coli. Active efflux also contributes to macrolide resistance in Campylobacter. The target mutations and active efflux confer resistance not only to macrolides but also to ketolides (telithromycin). The mechanisms of intrinsic resistance to several antibiotics are unclear, but they likely involve the low permeability of the membrane and active efflux by multidrug efflux transporters.
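Since the macrolide-resistance mechanism described above is tied to specific residues (positions 2074 and 2075 in domain V of the 23S rRNA), calling it from sequence data reduces to comparing those positions with a wild-type reference. A minimal Python sketch follows; the wild-type bases and the example sequence are placeholders, and real analyses operate on aligned, quality-checked sequences with consistent coordinate numbering.

# Check the named 23S rRNA positions against an assumed wild-type reference.
WILD_TYPE = {2074: "A", 2075: "A"}  # assumed wild-type bases (placeholder)

def mutated_positions(rrna_seq):
    # Return the 1-based positions that differ from the wild-type base.
    hits = []
    for pos, base in WILD_TYPE.items():
        if len(rrna_seq) >= pos and rrna_seq[pos - 1] != base:
            hits.append(pos)
    return hits

# Hypothetical sequence carrying a substitution at position 2075
seq = "A" * 2074 + "G" + "A" * 800
print(mutated_positions(seq))  # -> [2075]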

Pseudomonas spp.

P. aeruginosa is a bacterium found in water and soil that is pathogenic to plants, animals, and humans. In both humans and animals it is an opportunistic pathogen. In humans, P. aeruginosa can cause community and nosocomial infections, especially in immunocompromised patients and patients with cystic fibrosis. In animals it can cause skin infections such as pyoderma, otitis, and infections of the urinary tract in companion animals, mastitis in dairy cows, and endometritis in horses. It has been reported that animal P. aeruginosa isolates have a nonclonal epidemic structure and can therefore pass between species. Moreover, P. aeruginosa strains associated with human disease have been found in animals too, though not the highly pathogenic ones. P. aeruginosa is intrinsically resistant to a wide range of antimicrobials, including benzylpenicillins, aminobenzylpenicillins, carboxypenicillins, first- and second-generation cephalosporins, chloramphenicol, and tetracycline. This is attributed to its several drug efflux systems and porins and to the low permeability of the Pseudomonas cell wall. In addition to this intrinsic resistance, P. aeruginosa can acquire diverse resistance mechanisms through mutation or by horizontal gene transfer. Its pathogenicity and survival in the environment are further enhanced by its ability to form biofilms. P. aeruginosa exhibits quite high rates of resistance to fluoroquinolones, with resistance to ciprofloxacin and levofloxacin ranging from 20% to 35%. P. aeruginosa isolates also exhibit high resistance rates against β-lactams (70%–100%) and sulfonamides (80%–90%). Resistance rates against quinolones and aminoglycosides are more variable. It is nevertheless alarming that the rate of multidrug-resistant strains is on the rise. Resistance of P. aeruginosa to quinolones has been related to point mutations in the DNA gyrase genes gyrA and gyrB and/or the topoisomerase IV genes parC and/or parE. The most common among the β-lactamases are penicillinases of the class A serine β-lactamases (the PSE, CARB, and TEM families). Concerning extended-spectrum β-lactamases, class A ESBLs have been reported (TEM, SHV, CTX-M, PER, VEB, GES, and IBC ESBLs), and some class D OXA-type enzymes have been encountered. Concerning metallo-β-lactamases, all four major families (IMP, VIM, SPM, and GIM) have been detected in Pseudomonas cells.

Acinetobacter baumannii

A. baumannii is an important nosocomial pathogen usually affecting immunocompromised patients. It has been associated with nosocomial epidemics and is able to survive for prolonged periods in hospital environments. It has also been related to community-acquired infections, as single events or case series. This ubiquitous microorganism is highly prevalent in nature and has been isolated from various environmental locations such as soils contaminated with petroleum hydrocarbons, vegetables, surfaces, manured agricultural soil, pig slurry, and aquaculture environments. It also has the ability to form biofilms on abiotic surfaces and can therefore survive even in unfavorable conditions. Since the 1970s, when susceptible A. baumannii were first isolated from clinical settings, its extraordinary ability to upregulate or acquire resistance determinants has resulted in the emergence of infections with multidrug- or pan-drug-resistant isolates, with most of the resistance genes acquired from Pseudomonas, Salmonella, or Escherichia. All genomic variants of A. baumannii contain a noninducible chromosomal AmpC cephalosporinase, also known as the Acinetobacter-derived cephalosporinase. A. baumannii also possesses an intrinsic class D oxacillinase belonging to the OXA-51-like group, with OXA-51-like enzymes able to hydrolyze penicillins (benzylpenicillin, ampicillin, ticarcillin, and piperacillin) and carbapenems (imipenem and meropenem). Concerning AMR genes in animal A. baumannii, the emergence of a carbapenemase-producing clone has been reported in livestock and companion animals, with blaOXA-23 being a widely distributed carbapenemase gene. Resistance traits are often organized in AbaR resistance islands, genetic structures that increase virulence and that, in the case of Acinetobacter, are organized on a transposon backbone of approximately 16.3 kb that facilitates horizontal gene transfer.

Antibiotic Pressure in the Environment

Although genes encoding resistance have been found in materials as old as 30,000 years, the link between the emergence of resistance and human activities is not in question. From the time of the first use of antibiotics in the 1940s, ever larger amounts of antibiotics have been produced, used, and finally released into the environment. There is therefore a constant selective pressure on bacteria in all environments. Governmental bodies such as the EMA (European Union) and the FDA (United States) annually provide more or less precise figures for the quantities of antimicrobials used; still, it is highly unlikely that these numbers are accurate.


Millions of metric tons of antibiotics released into the biosphere over the last 70 years have been used for (i) growth promotion/therapy/prophylaxis in animals; (ii) therapy/prophylaxis in humans; (iii) therapy/prophylaxis in aquaculture; (iv) pest control/cloning for plants; and (v) biocides in beauty, hand care, and household cleaning products. It is interesting to note that therapeutic use in humans is responsible for less than half of all antibiotic application, although certain antibiotics are reserved for first-line therapy. Although specific bacterial isolates can produce antibiotics in their natural environment, the exogenous introduction of antibiotics is far greater, making their implication in the emergence of antibiotic resistance highly probable. The most important interactions between animals, humans, and the environment concerning antibiotics and AMR bacteria and genes are presented graphically in Fig. 1. Anthropogenic activity and the accumulation of antibiotics in certain ecological habitats augment the selective pressure for resistant strains, which consequently form a large reservoir of antibiotic-resistant bacteria and genes in the environment. The exchange of resistance genes can happen through conjugative and mobilizable elements: the former contain all the information required to transfer between cells, whereas the latter use the functions of conjugative elements such as plasmids or transposons to transfer to another cell. Bacteriophages can also facilitate DNA exchange between cells by transduction, a process in which bacterial DNA is incorporated into the phage DNA and further injected into the recipient cell. It is to the bacteria's ecological advantage that resistance to several environmental factors, including antibiotics, is carried on easily transferable genetic elements such as plasmids. Interestingly, however, ARGs and their encoding integrons have been isolated from people not exposed to antibiotics, calling into question the absolute responsibility of anthropogenic antibiotic accumulation. Concerning the location of resistance genes within bacterial genetic structures, plasmids seem to be mainly responsible for the emergence of resistance and its transmission. The evolution of the resistance encoded by plasmid genes seems to be related to antibiotic use, since the examination of pathogen collections predating the "antibiotic era" showed that plasmids were common but resistance genes rare.

Human Effluents

Antibiotic usage is greatest in the developed world, where sewage treatment is widely practiced. Sewage treatment is essential to safeguard the environment and human health against sewage-related pathogens, toxic contaminants, and generally harmful substances. Sewage treatment plants are places where the selective pressure on the bacteria present is quite high, since they are characterized by high microbial density, high nutrient content, and subinhibitory concentrations of antibiotics, biocides, and metals. It can be postulated that the subinhibitory concentrations of antimicrobials exert the greater selection pressure, since they give bacteria the time to adapt to hostile conditions. The most frequently encountered compounds in sewage treatment plants include ciprofloxacin, enrofloxacin, erythromycin-H2O, norfloxacin, ofloxacin, oxytetracycline, roxithromycin, sulfadiazine, sulfamerazine, sulfamethazine, sulfamethoxazole, tetracycline, and trimethoprim. These antibiotics can be encountered in both influents and effluents at concentrations ranging from a few ng/L to tens of mg/L, implying that removal before the water reaches the receiving water bodies is not complete. As a result, antibiotics accumulate in the main water bodies. In lakes especially, the antibiotics usually detected are sulfonamides, tetracyclines, quinolones, macrolides, lincosamides, β-lactams, quinoxalines, and polyether and amphenicol antibiotics. Apart from accumulation, the concentration of antibiotics depends on the degradation and adsorption that take place in lake water and sediments, which vary according to the individual lake's characteristics and the antibiotic group. Tetracyclines, fluoroquinolones, and macrolides can be readily adsorbed onto particles and sediments. β-lactams are easily hydrolyzed when conditions are weakly acidic or alkaline and can be removed by ozonation or photocatalysis; they can therefore be detected at low concentrations in certain environments, despite the burden. Sulfonamides and quinolones are the most commonly investigated groups in lakes worldwide.
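The influent/effluent comparison implied above can be made explicit as a removal efficiency, the fraction of the incoming load eliminated during treatment. A minimal Python sketch, with all concentrations invented for illustration:

# Removal efficiency per compound: 1 - (effluent / influent), in percent.
influent_ng_per_L = {"ciprofloxacin": 900.0, "sulfamethoxazole": 1500.0, "tetracycline": 400.0}
effluent_ng_per_L = {"ciprofloxacin": 250.0, "sulfamethoxazole": 600.0, "tetracycline": 30.0}

for drug, c_in in influent_ng_per_L.items():
    c_out = effluent_ng_per_L[drug]
    removal = 100.0 * (1.0 - c_out / c_in)
    print(f"{drug}: {removal:.0f}% removed, {c_out:.0f} ng/L discharged")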

Fig. 1 Interactions between animals, humans and the environment regarding antibiotics and AMR bacteria.


Animal Husbandry

The use of antibiotics in animal husbandry started as early as 1950. In addition to veterinary therapy, antibiotics have been administered as growth promoters and as prophylactic or metaphylactic drugs in order to prevent the occurrence of certain diseases. Antibiotic usage is related to the degree of intensification of farming: high animal density in contained premises increases the likelihood of disease, since stress reduces the animals' immune response and the close contact between animals helps the spread of pathogens. It is not easy to calculate the amount of antibiotics used, and concerning worldwide consumption there is much controversy. In 1999, antibiotics in the EU were used 65% in human medicine, 29% in veterinary medicine, and 6% as growth promoters. Since the ban on antibiotics as growth promoters in 2006, however, a decline in the antibiotics used in agriculture has been observed. From 2005 to 2012, data from the European Medicines Agency show that approximately 300 tons of antibiotics were used annually for food-producing animals, including horses, on average per European Union member state. By contrast, the amount of antibiotics used in the United States for livestock farming is considerably larger. In China in 2013, an estimated 92,700 tons of antibiotics (36 antibiotics) were used, of which 48% were used in human therapeutics and the remaining 52% in animals. The mode of administration depends on the animal species. In large animals, such as cattle, sheep, goats, and pigs, antibiotics are administered individually; administration to birds (poultry, turkeys) or to fish in aquaculture is done in large quantities, usually via the drinking water or, in the case of fish, the feed. In these cases, individual treatment is extremely difficult and time-consuming and is, as a result, not practiced. The metabolism of each antibiotic depends on its chemical structure and, of course, on the animal to which it is administered, with different animals exhibiting different metabolic pathways. The antibiotic is metabolized and excreted after a certain period. Although antibiotics can accumulate, they are usually optimized to reduce accumulation, to be highly effective at low doses, and to be excreted after short periods of time. Excretion rates vary according to the substance, the mode of application, the species of the animal, and the time after administration, and the rates can vary considerably. Degradation rates of tetracyclines and sulfonamides vary between 40% and 90%, with the degradation rate of sulfamethoxazole being up to 85%, whereas amoxicillin degrades by only 10%–20%. Excretion rates of antibiotics also depend on the chemical structure and the site of action of the drug: if the antibiotic is degraded within the body, it is usually excreted in the feces, whereas antibiotics that are not metabolized are usually excreted unchanged into the environment, in which they tend to persist. To complicate matters further, after excretion into the environment the metabolites can be transformed back into their parent compound, and these metabolites can themselves be bioactive. Certain antibiotics can form acetylated metabolites that do not exert biological activity and cannot be detected by common analytical methods.
In manure, though, where for example fluoroquinolones and sulfonamides can be strongly adsorbed, the acetyl group can be cleaved, releasing the original compound with its original biological activity. Since these antibiotics can resist the aeration of manure and the elevated temperatures occurring in manure piles, they are released into the environment unaltered.
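The degradation figures above imply a simple mass balance for the active fraction excreted toward manure. A minimal Python sketch, with hypothetical doses and single point estimates taken from the ranges quoted in the text:

# Active fraction excreted = administered dose x (1 - fraction degraded in the body).
dose_mg = {"tetracycline": 500.0, "sulfamethoxazole": 400.0, "amoxicillin": 300.0}
degraded_fraction = {
    "tetracycline": 0.65,      # text: 40%-90% for tetracyclines and sulfonamides
    "sulfamethoxazole": 0.85,  # text: up to 85%
    "amoxicillin": 0.15,       # text: only 10%-20%
}

for drug, dose in dose_mg.items():
    active_out = dose * (1.0 - degraded_fraction[drug])
    print(f"{drug}: ~{active_out:.0f} mg excreted in active form")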

Environmental Pollution by Antibiotic Residues

Antibiotics in soil

Soil can be contaminated by antibiotics through irrigation, sludge and manure applied as fertilizer, or landfill. The concentration of antibiotics can vary according to the type of soil, regardless of the level of contamination, reaching mid- to upper-range values of mg of antibiotic per kg of soil. Manure, and animals in general, are implicated in soil contamination, since the highest concentrations have been detected in soil adjacent to animal farms. The antibiotics detected include chlortetracycline, oxytetracycline, and sulfonamide-group antibiotics such as sulfadiazine and sulfamethoxazole. In vegetable production, and especially in organic farming practices where manure is used as a fertilizer, the existence of antibiotic residues is of particular concern. Fluoroquinolone residues have been reported in vegetable-growing areas in China, with high concentrations of ciprofloxacin and ofloxacin observed. The irrigation of soil with reclaimed water or wastewater containing high concentrations of antibiotics (e.g., oxytetracycline and sulfonamides) has also been implicated, although the resulting antibiotic concentrations are lower than those near farms.

Antibiotics in aquatic ecosystems

The environment and, above all, aquatic ecosystems can provide an ideal foundation for the acquisition and spread of resistance, as they are often directly affected by anthropogenic activities. The aquatic environment is not only a route for spreading antibiotic-resistant bacteria between human and animal populations, but also a path through which ARGs enter natural bacterial ecosystems. Many of these genes are not predominantly resistance genes, but belong to a hidden set of genes capable of being transformed into antibiotic resistance genes, in both pathogenic and nonpathogenic bacteria. Since antibiotics are among the most important drugs for the treatment of infectious diseases, large amounts of these compounds are released into urban wastewater because of their excessive consumption and the disposal of unused quantities. Antibiotics used in medicine, in the prevention and cure of diseases in animals and plants, and in accelerating the growth of animals in livestock farming are released in huge amounts into natural ecosystems. Resistance to antibiotics also develops in bacteria due to the effect of industrial production of antimicrobial agents on bacterial communities. Genetic "reactors" are places where genetic development takes place because of high biological connectivity and genetic diversity. Beyond mutations, important genetic variants are
encountered through genetic "exchanges" between organisms within communities. Therefore, antibiotics and resistant bacteria can enter surface and underground aquatic environments via different pathways. An overview of the environmental pathways of antibiotics shows that antibiotic drugs circulate between different environments, such as the medical environment, the agricultural environment, the aquaculture environment, pharmaceutical industries and the wider environment. A large percentage of the antibiotics used worldwide is released into the environment in an active form, through the excretion of drugs in urine and feces. The antibiotics therefore exert selection pressure on bacteria in humans, animals and plants, owing to their excessive use. Antibiotics usually occur in the environment at average concentrations of 1–10 ng L⁻¹. The inhibitory effects of antibiotics on aquatic biota depend on the type and dose of antibiotic, as well as on the organisms sampled. Despite the almost universal presence of antibiotics in water bodies, it is essential to calculate the ecological risk they pose, since in several instances this risk is not significant. Among the antibiotics encountered, erythromycin, clarithromycin and azithromycin are included on a watch list of substances that could pose a significant ecological risk for aquatic environments in countries of the European Union.
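One common screening approach to the ecological risk calculation mentioned above is the risk quotient, RQ = MEC/PNEC, where MEC is the measured environmental concentration and PNEC the predicted no-effect concentration. The sketch below is illustrative only: the PNEC values are hypothetical placeholders, and only the 1–10 ng L⁻¹ MEC range echoes the text.

# Minimal sketch of risk-quotient (RQ) screening: RQ = MEC / PNEC.
# PNEC values below are hypothetical placeholders, not measurements.

def risk_quotient(mec_ng_per_l: float, pnec_ng_per_l: float) -> float:
    return mec_ng_per_l / pnec_ng_per_l

measurements = {  # MECs chosen within the 1-10 ng/L range cited in the text
    "erythromycin": 8.0,
    "clarithromycin": 5.0,
}
hypothetical_pnec = {"erythromycin": 40.0, "clarithromycin": 60.0}

for drug, mec in measurements.items():
    rq = risk_quotient(mec, hypothetical_pnec[drug])
    verdict = "potential risk" if rq >= 1 else "low risk at this site"
    print(f"{drug}: RQ = {rq:.2f} -> {verdict}")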

Environmental Reservoirs of Resistance

Soil

Soil is an important reservoir of microbial diversity, with the majority of the antimicrobials currently used in human and animal medicine having been isolated from soil microorganisms. A Darwinian "arms race" between antibiotic producers and resistant strains is frequently invoked to explain the diversity and origin of ARGs. It should not be forgotten, however, that ARG abundance and exposure to an antibiotic molecule are not systematically correlated; other factors evidently contribute to the appearance, selection and transmission of environmental resistance genes. Soil, a heterogeneous biotope, harbors large genetic diversity on a small spatial scale, favoring the exchange of genetic material through horizontal gene transfer. This contributes to the spread of ARGs between bacteria and, ultimately, to their acquisition by pathogen genomes, the greatest risk being that antibiotic therapies are threatened. Our knowledge of the abundance and diversity of soil resistance genes has been broadened by the postgenomic revolution and by high-throughput sequencing technologies. Soil bacteria naturally produce antibiotics as a competitive mechanism, with the concomitant evolution, and exchange by horizontal gene transfer, of a range of antibiotic resistance mechanisms. Surveys of bacterial resistance elements in edaphic systems have originated primarily from human-impacted environments, with relatively little information from remote and pristine environments, where the resistome may comprise the ancestral gene diversity. Remote and virgin soils, with minimal anthropogenic influence and selection pressure, reflect only the range of natural antibiotics and related resistance mechanisms, with little or no genetic influence from the selection pressures introduced by the influx of synthetic and semisynthetic antibiotics in the 20th century. Several studies have investigated antibiotic resistance in such virgin soils: in soils from remote Alaskan environments, abundant β-lactamases were detected; genes encoding resistance to tetracycline and glycopeptide antibiotics were found in ancient DNA; and isolated cave samples contained multiple antibiotic resistance genes for macrolide glycosylation. As far as Antarctica is concerned, evidence of resistance was observed in Antarctic seawater, although there has been a relatively small anthropogenic influence from the ongoing research projects carried out in its territories. Soil is one of the largest and most diverse microbial habitats and the natural habitat of the actinomycete genus Streptomyces, whose species are among the most potent producers of antibiotics of natural origin. The soil microbiota is proof of the evolutionary origin of antibiotics as survival factors. Along with the antibiotics, the soil microbiota has developed resistance mechanisms encoded by ARGs that, after the massive use of antibiotics and transfer to clinical isolates in the modern era, have emerged as clinical ARGs. The high concentrations of antibiotics accumulated in soil due to anthropogenic activities have accelerated selection toward resistance. Moreover, the use of manure, usually rich in antibiotics and the metabolites excreted in animal feces after treatment, is one of the main pathways of antibiotic contamination of soil. There is a correlation between ARG diversity and abundance in soils where manure has been applied.
Metagenomic analyses have enabled investigation of the complexity of manure-treated soil, revealing that, apart from already known AMR bacteria such as ESBL-producing E. coli, several other ARGs are present. Manure-treated soil greatly surpasses untreated soil in the variability and abundance of AMR bacteria and ARGs. Among them reside pathogenic bacteria that are mainly zoonotic, making the risk higher. Interestingly, a positive correlation commonly exists between the levels of antibiotics, ARGs, human pathogenic bacteria and AMR human pathogenic bacteria. The highest relative abundance of ARGs in manure is shared by tetracycline resistance genes and MDR genes. Irrigation water quality can also stimulate ARGs in soil. It seems that irrigation by itself, regardless of the microbial quality of the water used, can enhance the diversity and occurrence of AMR bacteria and ARGs; when wastewater is used, the effect is more profound. Several ARGs have been detected by both conventional and molecular techniques, with the aminoglycoside and beta-lactam ARGs reported to be the most abundant. Livestock manure is also an important reservoir of AMR bacteria, ARGs and transferable plasmids carrying ARGs. Antibiotic residues found in the gut of animals after treatment or contact can alter the animals' gut microbiota and increase the prevalence of antibiotic resistance in manure. Food-producing animals, such as swine, chicken and cattle, have gained much attention, since resistant gut bacteria can contaminate meat and easily reach food consumers and humans in general. For example, in a study performed in China (2000), most of the pig and chicken E. coli isolates were MDR strains with resistance to tetracycline, sulfamethoxazole, ampicillin, streptomycin, and trimethoprim-sulfamethoxazole. Also, high resistance to fluoroquinolones (levofloxacin,
ciprofloxacin, and difloxacin) was observed in E. coli, whereas class 1 integrons were identified in isolates of swine and chicken origin. Similar findings have been reported for E. coli isolates from most food animals. The most abundant ARGs in livestock manure are tetracycline ARGs (tetB, tetM, tetO, tetW), sulfonamide ARGs (sulI, sulII, sulIII and sulA) and the class 1 integrase gene, with sulfonamide ARGs being more abundant than, or roughly equal to, tetracycline ARGs. The development of molecular methods, namely high-capacity qPCR and metagenomic analyses, has permitted the simultaneous detection of a multitude of resistance genes. These techniques have permitted the discovery of several unique genes in manure-treated soil, which is several times richer in ARGs than untreated soil. This attribute, along with the abundance of transferable genetic elements, facilitates the horizontal transfer of ARGs in manure-treated soil. Several environmental factors influence the fate of ARGs in soil. For example, the migration of antibiotics to deeper soil layers is followed by a similar increase of ARGs there. In the selection of ARGs in deeper soils, other factors, such as heavy metals, exert selection pressure along with the antibiotics. A systematic assessment of the selection parameters is needed, including more field studies, in order to identify the relative contribution of each factor to the fate of ARGs. Although a significant positive correlation between antibiotic concentrations and ARGs is usually observed, this is not always the case. Perhaps the different kinetics of antibiotic residue and ARG transport in soil, different degradation rates, or different physicochemical parameters (e.g., total organic matter) are responsible for these inconsistencies. The persistence of ARGs selected in the past, and their triggering by even small concentrations of antibiotics, also prevents a simple quantitative correlation of ARG abundance versus antibiotic concentration, which further depends on possible coselection and cross-selection effects. Many ARGs are found on the same plasmid or mobile genetic element, therefore cotransferring resistance even to antibiotics that are not present; resistance genes to heavy metals, which are abundant in deeper soil layers, can be carried simultaneously. Therefore, further studies are needed.
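A minimal sketch of the kind of correlation test these field studies rely on is shown below. The data vectors are invented for illustration, and scipy is assumed to be available; as the text cautions, a weak or absent correlation does not rule out past selection, coselection by heavy metals, or transport differences.

# Hypothetical sketch: testing whether soil antibiotic concentrations and
# ARG abundances are positively correlated. All data values are invented.
from scipy.stats import spearmanr

soil_tc_ug_per_kg = [5, 12, 40, 85, 150, 300]               # hypothetical tetracycline levels
tet_w_per_16s = [1e-4, 5e-4, 2e-3, 1.5e-3, 8e-3, 6e-3]      # hypothetical tetW relative abundance

rho, p = spearmanr(soil_tc_ug_per_kg, tet_w_per_16s)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")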

Water

Antibiotic resistance in water bodies is directly related to antibiotic usage at the release source. Within water bodies, both antibiotic residues and ARGs are subject to long-distance migration and diffusion. Anthropogenic influence is most pronounced in streams and rivers; the effect of environmental conditions is smaller in large water bodies, and in the oceans the direct effect of anthropogenic activities, although present, is small. The environmental conditions that affect ARG migration include physical and chemical factors and the chemical pollutants present in the areas of study. ARG spread ranges from active transmission, driven by contaminants exhibiting antimicrobial activity (antibiotics), to passive transmission related to other contaminants (heavy metals, organic pollutants) and to physical and chemical factors. Metal exposure in particular, as a selective factor for AMR bacteria and therefore for ARGs, has not been studied extensively within a framework of ecological relevance. In water bodies, ARGs are regarded as an emerging pollutant; antimicrobials and the effects they exert on the environment, observed mainly as ARGs and AMR bacteria, are therefore becoming an important topic in environmental science. The aquatic environment can harbor antibiotic residues, AMR bacteria and ARGs. In certain water bodies they can accumulate considerably, making these aquatic sites major reservoirs of antimicrobial resistance. Effluents from wastewater treatment plants, industry, hospitals and pig farms, for example, will all eventually reach some water source. However, certain properties of aquatic environments largely influence the accumulation of these factors. Due to hydraulic characteristics, the concentration of contaminants in river sediment gradually decreases downstream of the source. Although rivers have received the most attention among aquatic environments, it is in lakes that longer contaminant residence times and greater accumulation of AMR bacteria and ARGs are expected. Since lakes are the main source of fresh drinking water, their importance in possible human contamination, directly through potable water or indirectly through irrigation and food from aquatic organisms, must be evaluated. Research has detected more than 130 different ARGs in wastewater from both human and animal effluents, active against at least 12 types of widely used antibiotics. ARGs related to tetracycline resistance (tet) and sulfonamide resistance (sul) are commonly detected in aquatic environments, since these antibiotics are extensively used worldwide and the genes can persist in the environment for long periods. ARGs can be found on mobile elements, namely integrons and transposons, that contribute significantly to horizontal transfer among bacterial species. Specifically, the integrase gene (intI1) has been proposed as a marker of pollution by resistant bacteria and anthropogenic pollutants because of its rapid response to diverse environmental pressures. Commonly found tetracycline resistance genes include the efflux pump genes tetA, tetC, and tetG, the ribosomal protection genes tetM, tetO, tetQ, and tetW, and the enzymatic modification gene tetX. In effluent systems, the initial effluent concentrations of ARGs (10¹–10⁷ copies/mL) are augmented in sludge samples (10⁷–10¹¹ copies/g).
However, tet genes tend to be inactivated during sewage treatment to a larger extent than sul genes. Apart from tet and sul ARGs, several other genes have been reported, including quinolone ARGs (qnr), macrolide ARGs (erm) and the multidrug-resistance New Delhi metallo-β-lactamase gene (blaNDM-1). In lakes and rivers, as in effluents, sulfonamide resistance genes (sul) and tetracycline resistance genes (tet) are the most studied ARGs. In lake water, the abundance of sul1 is usually higher than that of sul2, although this general observation can vary depending on the lake studied. The sul genes have been studied in several lake waters around the globe, including urban surface waters in China and Switzerland, where the reported concentrations were 10⁻²–10⁻³ copies per 16S rRNA gene copy in the water and 10⁶–10¹⁰ copies per gram of sediment. As expected, similar results have been reported for ARGs from rivers, with sediment being richer in ARGs than water. Several studies have concluded that the antibiotic resistance reported in waterborne bacteria cannot be explained by gene mutations alone. This supports the view that horizontal gene transfer (HGT) is of major importance for AMR in water bodies. The HGT frequency of sulfonamide and tetracycline ARGs in aquatic bacteria is higher than that in bacteria from other habitats,
with the main mechanisms of HGT being transformation, transduction, and conjugative transfer. Mobile genetic elements, such as plasmids, transposons, insertion sequences and integrons, are the most important carriers in HGT of ARGs. The AMR bacteria most frequently isolated from lake water and sediment are Escherichia coli, Enterococcus spp. and Pseudomonas spp., and more recently the relevant ARGs have been elucidated. The sediments of lakes are usually rich in organic matter, permitting the proliferation of bacteria and the intensification of selection. In Swiss lake sediments, blaTEM has been found in almost half of the E. coli isolates and one third of the Enterococcus spp. isolates. Among ESBL genes in a Chinese lake, blaTEM was, as expected, the most common, with blaSHV and blaCTX-M also detected at smaller percentages. In Chinese urban lakes, AMR has been reported more frequently among Gram-negative than Gram-positive isolates, with most isolates carrying at least two ARGs. As expected, highly contaminated lakes, such as Kazipally Lake in India, are reported to harbor more ARGs than nonpolluted ones, such as Nydalasjön Lake in Sweden.
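The normalization used throughout the qPCR literature summarized here divides ARG copies by 16S rRNA gene copies, which allows water and sediment to be compared despite very different biomass. The numbers below are hypothetical, chosen only to echo the water/sediment contrast described above.

# Illustrative ARG normalization: copies per 16S rRNA gene copy.
# All sample values below are hypothetical placeholders.

def relative_abundance(arg_copies: float, rrna_copies: float) -> float:
    return arg_copies / rrna_copies

samples = {
    # (sul1 copies per mL or g, 16S copies per mL or g) - hypothetical
    "lake water": (2.0e4, 5.0e7),
    "lake sediment": (5.0e8, 2.0e11),
}
for matrix, (sul1, rrna) in samples.items():
    print(f"{matrix}: sul1/16S = {relative_abundance(sul1, rrna):.1e}")
# Sediment typically shows higher absolute and often higher relative
# abundance, consistent with its role as an ARG reservoir.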

Modes of Spread to Humans

There are numerous possible transport routes between animals and humans. The most likely modes of interaction are (a) transmission through the food chain, (b) direct or indirect contact with people working in close contact with animals (farmers and animal health workers) and (c) transmission via manure-contaminated environments and aquaculture. The role of the environment is particularly important, as it can serve as a reservoir of antibiotic resistance genes. The immediate risk from AMR food-borne pathogens is more visible and comprehensible; however, the greatest risk is posed by the transfer of antimicrobial resistance traits through the genetic library contained in bacteria, bacteriophages or DNA fragments. HGT is the most basic mechanism by which bacteria can transfer ARGs; it can occur in all matrices, although it is more likely in matrices containing large numbers of microbial cells. Coexistence of these agents with pathogenic bacteria in various environments, especially the human intestine or treatment plants, may lead to the emergence of resistant strains. In vitro experiments have demonstrated the transfer of erythromycin resistance genes from lactic acid bacteria to L. monocytogenes. In addition, the transfer of tetracycline and erythromycin resistance genes from Enterococcus faecalis to L. monocytogenes was demonstrated in the gastrointestinal tract of mice. Correspondingly, ampicillin resistance has been transferred from Salmonella Typhimurium to E. coli in milk and minced veal, while tetracycline resistance has been transferred from E. faecalis to Listeria innocua in meat. Transfer of resistance is well documented in bacteria of the same species as those inhabiting the human gut. The transfer of mobile genetic elements among E. coli strains has been demonstrated in vitro, and there is ample evidence that this can occur in vivo. Identical ESBL genes have been traced at several points, including the environment, food and the human gut, making direct transfer of resistance to β-lactam antibiotics quite possible. Similarities of ESBL genes among poultry, the farm environment and the human resistome largely support this view. Therefore, bacteria that contain antimicrobial resistance genes can be an indirect public health hazard, regardless of their pathogenicity, as they increase the available genetic pool of resistance.

Conclusions

The serious effects on human health of the presence and spread of antibiotic-resistant food-borne pathogens are well documented. They consist predominantly of increased numbers of hospitalizations and increased morbidity and mortality associated with resistant strains of Salmonella spp. and Campylobacter spp. The effects of infections with antibiotic-resistant pathogens are summarized in the following points:
1. Delayed or unsuccessful treatment. Antibiotics are initially administered to patients empirically, before antibiogram results are available; for that reason, treatment often fails, and the weakening of the patient over time may be lethal.
2. Inadequate selection of antimicrobials. Due to the emergence of antibiotic-resistant strains, the choice of antimicrobials available for these infections is significantly limited. Moreover, the frequent use of the remaining effective antimicrobials itself poses a risk.
3. Selection of resistant pathogenic isolates when antibiotics are used for the treatment of other diseases.
4. Coexistence, and possibly coregulation, of pathogenicity genes with resistance genes. As a result, highly pathogenic AMR strains can emerge; for example, the multiple antibiotic resistance of S. Typhimurium DT104 is located in the SGI1 gene cluster, which also contains virulence protein genes.
The occurrence of AMR bacteria in the environment, and the consequent cycle of survival between the large reservoirs, namely soils and lakes, animals, food and humans, can maintain and actually augment the antimicrobial resistance phenomenon. The control of factors influencing the selection of antimicrobial resistance in the environment is complex, and the problem is difficult to contain. Moreover, antimicrobial resistance is a naturally occurring characteristic of bacteria present in many, even pristine, environments. A serious effort should nevertheless be made to keep the anthropogenic burden, specifically the environmental disposal of antibiotics and ARGs, at a minimum, considering the serious effects that AMR bacteria can have on the health of future generations.

See also: Food Safety and Risk Analysis.

Further Reading

Aarestrup, F.M., 2006. Antimicrobial resistance in bacteria of animal origin, 1st edn. ASM Press, Washington, DC.
Amabile-Cuevas, C.F., 2016. Antibiotics and antibiotic resistance in the environment, 1st edn. CRC Press, Boca Raton.
Bengtsson-Palme, J., Kristiansson, E., Larsson, D.G.J., 2018. Environmental factors influencing the development and spread of antibiotic resistance. FEMS Microbiology Reviews 42 (1). https://doi.org/10.1093/femsre/fux053.
Davies, J., Davies, D., 2010. Origins and evolution of antibiotic resistance. Microbiology and Molecular Biology Reviews 74 (3), 417–433.
Hashmi, M.Z., Strezov, V., Varma, A., 2017. Antibiotics and antibiotics resistance genes in soils: Monitoring, toxicity, risk assessment and management, 1st edn. Springer, Singapore.
Karkman, A., Do, T.T., Walsh, F., Virta, M.P.J., 2018. Antibiotic-resistance genes in waste water. Trends in Microbiology 26 (3), 220–228. https://doi.org/10.1016/j.tim.2017.09.005.
Keen, P.L., Montforts, M.H.M.M., 2012. Antimicrobial resistance in the environment, 1st edn. Wiley–Blackwell, Hoboken.
Luangtongkum, T., Jeon, B., Han, J., Plummer, P., Logue, C.M., Zhang, Q., 2009. Antibiotic resistance in Campylobacter: Emergence, transmission and persistence. Future Microbiology 4 (2), 189–200.
Nishida, H., Oshima, T., 2019. DNA traffic in the environment, 1st edn. Springer, Singapore.
Qiao, M., Ying, G.G., Singer, A.C., Zhu, Y.G., 2018. Review of antibiotic resistance in China and its environment. Environment International 110, 160–172. https://doi.org/10.1016/j.envint.2017.10.016.
Schwarz, S., Cavaco, L.M., Shen, J., 2018. Antimicrobial resistance in bacteria from livestock and companion animals, 1st edn. ASM Press, Washington, DC.
Singer, A.C., Shaw, H., Rhodes, V., Hart, A., 2016. Review of antimicrobial resistance in the environment and its relevance to environmental regulators. Frontiers in Microbiology 7, 1728. https://doi.org/10.3389/fmicb.2016.01728.
Van Hoek, A.H., Mevius, D., Guerra, B., Mullany, P., Roberts, A.P., Aarts, H.J., 2011. Acquired antibiotic resistance genes: An overview. Frontiers in Microbiology 2, 203. https://doi.org/10.3389/fmicb.2011.00203.
Yang, Y., Song, W., Lin, H., Wang, W., Du, L., Xing, W., 2018. Antibiotics and antibiotic resistance genes in global lakes: A review and meta-analysis. Environment International 116, 60–73. https://doi.org/10.1016/j.envint.2018.04.011.

Environmental Risks Associated with Waste Electrical and Electronic Equipment Recycling Plants
Ourania Tzoraki, University of the Aegean, Mytilene, Greece
Michael Lasithiotakis, Greek Atomic Energy Commission (EEAE), Athens, Greece
© 2019 Elsevier B.V. All rights reserved.

Introduction

Electronic waste (e-waste) or waste electrical and electronic equipment (WEEE) is one of the most hazardous parts of municipal solid waste, even though it represents only 2% of the volume deposited in landfills. Almost 20–50 million tons of e-waste are produced annually on a global scale, and it is estimated that only 15%–20% of this is recycled. The United States, the European Union, Australia, Japan, and South Korea generate enormous e-waste volumes that are transferred for recycling to China, India, Brazil, Mexico, Nigeria, Thailand, and Singapore (Fig. 1). Recycling of e-waste enables the recovery of huge amounts of raw materials and protects land that would otherwise be claimed for landfills. In developing countries, and especially in China, the world's leading electronics manufacturing country, e-waste, especially waste that is recycled informally, is a rapidly growing waste stream and therefore a lasting problem. E-waste contains significant amounts of materials of commercial interest, such as commodity metals, precious metals, high-quality plastics, and other components, which can, and should, be recovered. The chemical composition of e-waste depends on the type and age of the electronic object discarded; e-waste usually contains persistent organic pollutants, dioxins and various metal alloys. The various substances, elements and pollutants associated with e-waste are presented in Table 1. Four distinct processes are mainly used in e-waste recycling: fragmentation, size reduction, homogenization, and a metals reclamation step (most notably for copper). Dismantling electrical and electronic equipment mainly yields ferrous and nonferrous metals, glass, plastic, electromechanical parts, and other elements. The latter include precious metals such as Au and Ag and hazardous substances such as Pb and Hg; although hazardous, these elements do not cease to be useful and valuable. Other substances present are As, Ba, Be, Cd, Cr, Li, Ni, Se, rare earths, zinc oxides, and radioactive substances (e.g., 241Am). In general, the presence of hazardous substances poses a challenge to occupational health specialists, since worker contact with these pollutants during recycling cannot be avoided, even where strict safety protocols are followed. Thus, the huge environmental benefit of e-waste facilities is opposed by a high environmental risk in the close vicinity of recycling plants. The majority of the facilities, especially the informal ones, may be close to land used for agricultural purposes. Increased heavy metal concentrations are measured in soils, plants and water bodies (streams, wells, wetlands) close to e-waste recycling facilities. The plants in these areas in particular tend to bioaccumulate the metals in their roots, shoots, tissues and fruits. Human exposure to e-waste dismantling processes (such as at the informal e-waste recycling site in Guiyu, China, which has been operating for the last two decades) is associated with higher levels of heavy metals in the blood of children and babies; several studies connect this with learning disabilities, liver damage, and hearing loss. WEEE discharged at dedicated urban deposit sites does not usually produce leachates with concentrations of heavy metals exceeding environmental limits.

Fig. 1 Known e-waste source and destination countries.

Table 1 Potential environmental pollutants produced during e-waste processing procedures

Elements | Component of e-waste | Exposure
Persistent organic pollutants: brominated flame retardants, polybrominated biphenyls (PBB), polybrominated diphenyl ethers (PBDE), polychlorinated biphenyls (PCBs) | Flame retardants, fluorescent lamps, ceiling fans, dielectric fluids, dishwashers, electric motors | Dust, air, soil, sediment, water, food
Dioxins: polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs or furans), dioxin-like biphenyls (PCBs), polyaromatic hydrocarbons (PAHs) | Byproducts of combustion, dielectric fluids, capacitors, electric motors, etc. | Released as combustion byproduct and/or dust, air, soil, sediment, water, food
Radioactive substances | Medical devices, fire detectors, active sensing element in smoke detectors | Dust, air, soil, sediment, water, food
Resins | Printed circuit boards | Dust, air
Teflon | – | Dust, air
Phenols, amines, anilines, furans, benzenes, and halogenated organic compounds such as chloroanilines | Thermal decomposition products of resins | Dust, air, soil, sediment, water
Fluoride halides | Thermal decomposition products of Teflon | Dust, air, soil, sediment, water
Antimony (Sb) | Fire retardants, plastics | Dust, air, soil, sediment, water, food
Arsenic (As) | Small amounts in the form of gallium arsenide within light-emitting diodes | Dust, air, soil, sediment, water, food
Barium (Ba) | CRT screens | Dust, air, soil, sediment, water, food
Beryllium (Be) | Power boxes containing silicon-controlled rectifiers and x-ray lenses | Dust, air, soil, sediment, water
Cadmium (Cd) | Rechargeable nickel-cadmium batteries, fluorescent coatings (CRT screens), printer inks and toners, photocopying machines (printer drums) | Dust, air, soil, sediment, water
Chromium (Cr) | Data tapes, diskettes | Dust, air, soil, sediment, water
Copper (Cu) | Wiring | Dust, air, soil, sediment, water, food
Lead (Pb) | CRT monitors, batteries, printed circuit boards | Dust, air, soil, sediment, water, food
Lithium (Li) | Li batteries | Dust, air, soil, sediment, water, food
Mercury (Hg) | Fluorescent lamps that provide backlighting to LED displays, some alkaline batteries, and mercury relays | Dust, air, soil, sediment, water
Nickel (Ni) | Rechargeable NiCd or NiMH batteries, the electron gun in CRT screens | Dust, air, soil, sediment, water
Selenium (Se) | Old photocopiers (photo drums) | Dust, air, soil, sediment, water
Tin (Sn) | Soldering, LCDs | Dust, air, soil, sediment, water
Zinc (Zn) | Interior of CRT screens, blended with rare metals | Dust, air, soil, sediment, water
Rare earths | Phosphoric coating (CRT monitors) | Dust, air, soil, sediment, water
However, the chemical cocktail created as a leakage stream by some electronics was found to be toxic to aquatic organisms. In addition, the usual practice of handling (compaction) before or during landfilling may increase the levels of toxic substances released, due to the disruption of the various parts of electronic circuits. For this reason, it has been proposed to cement the waste so as to increase the pH and reduce or prevent the flow of aqueous solutions through the waste. Combustion before landfilling increases the mobility of the heavy metals contained in WEEE, as they are released into the atmosphere through aerosol formation. On the other hand, WEEE recycling processes involve dismantling and destroying the individual parts in order to recover various materials of commercial value. Through recycling, 95% of the useful materials of a computer and 45% of the materials of a cathode ray tube can be recovered. Recycling methods have a smaller environmental impact when combined with appropriate techniques. With practices such as child labor, open burning of WEEE with the emission of various pollutants into the air, pollution of ground and surface waters, and so on, the final balance of environmental benefits is not always positive. Also, any environmental benefit from recycling is eliminated when the waste is transported over long distances, due to the negative environmental impact of the energy consumed for transport. Recycling of WEEE, however, generally has a lower ecological footprint than disposal or incineration. The chemical composition of e-waste depends on the type and age of the object being discarded. It usually contains various metal alloys, mainly Cu, Al, and Fe, coated with or mixed with various plastics or ceramics. The various components are shown in Table 1. Some of these, such as heavy metals, are used for the production of electronic materials, while others, such as
dioxins and polycyclic aromatic hydrocarbons (PAHs), are produced by heating at low temperatures. Combustion of plastic-insulated cables in open air produces 100 times more dioxins than controlled incineration of household waste. Metals released into the environment are strongly related to human diseases, especially in areas that receive and process e-waste. E-waste recycling performed in an amateur and unregulated occupational mode, in particular, has become a growing environmental concern in some parts of the world. Surveys of e-waste workers' urinary levels of Pb, Cd, Mn, Cu, and Zn indicated that urinary Cd levels were elevated. Studies of blood lead levels in children under 6 years of age showed that children living close to electronics recycling sites had significantly higher lead levels. Recent studies suggest that early childhood exposure in e-waste-polluted areas may be an important risk factor for hearing loss. The environmental impact of cathode ray tube (CRT) waste in particular is mainly due to its lead content. The primary effect of lead toxicity is on the human central nervous system. High exposure to lead can also severely damage brain cells and the kidneys and can lead to miscarriages. High lead levels can also affect the development of children's brains as well as the organs responsible for sperm production in men.

Hazardous Products and By-Products of the E-waste Recycling Process

Persistent Organic Pollutants, Brominated Flame Retardants and Their Correlation With Bromine-Dioxin Emissions

Brominated flame retardants (BFRs) are used in plastic materials at up to 20% w/w. Consumption in Europe was 64,000 tons in 1995 and 42,000 tons in 2001, and worldwide consumption has reached up to 239,000 tons. Some of the most common brominated flame retardants are tetrabromobisphenol A (TBBP-A), polybrominated biphenyls (PBB), polybrominated diphenyl ethers (PBDE), hexabromocyclododecane (HBCD), brominated phenyl ethers, and bisphenols. There are four main routes by which dioxins form from such precursors: thermal, chemical, photochemical, and biological. Thermal formation can be subdivided into de novo synthesis and "precursor formation". Evidence shows that adding bromine to a furnace during combustion actually increases the amounts of dioxins produced. After the incineration of waste containing BFRs, mixed dioxins have been shown to predominate; in particular, brominated and mixed bromo-chloro dibenzofurans replace polychlorinated dibenzofurans as the analogous substances. Precursor chemicals such as brominated and chlorinated phenols can form halogenated dioxins and/or furans. These reactions often occur at temperatures between 250°C and 500°C, usually on catalytically active surfaces, but sometimes spontaneously. Precursor chemicals can also produce dioxins during the heat treatment of materials (such as extrusion or casting) and by reduction within a temperature range of 350°C–400°C. It has been shown that bromodioxins can also be formed during UV irradiation of decabromodiphenyl ether. In theory, there are a total of 5,020 brominated, chlorinated, or mixed bromo-chloro dibenzo-p-dioxin and dibenzofuran congeners, since mixed dioxins can form when halogen positions are occupied by both bromine and chlorine. Under thermal stress, brominated retardants release HBr. HBr inhibits the spread of fire by scavenging the most active radicals, H and HO, in the polymer chain; with antimony(III) oxide as a synergist, the fire-inhibiting effect is increased. During this reaction, the formation of bromodioxins by resynthesis is a possible mechanism, the most likely route being condensation or recombination of molecular fragments. During the recycling of printed circuit boards, high temperatures and high pressures can be caused by friction and by the impacts of hammers and knives. In the recycling of electrical and electronic devices in particular, extremely high concentrations of dioxins are found where polybrominated diphenyl ethers are present. The percentage of bromodioxins formed increases in the presence of antimony(III) oxide, water, and iron(III) oxide. The distribution of bromodioxins often shifts toward higher or lower concentrations when metal-containing and metal-free plastic residues are compared.
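For illustration, the radical-scavenging cycle behind HBr flame inhibition can be sketched as follows (standard combustion chemistry, added here as a sketch rather than taken from the source; R-H denotes a polymer segment):

\[
\mathrm{HBr} + \mathrm{H}^{\bullet} \rightarrow \mathrm{H}_2 + \mathrm{Br}^{\bullet}, \qquad
\mathrm{HBr} + \mathrm{HO}^{\bullet} \rightarrow \mathrm{H_2O} + \mathrm{Br}^{\bullet}, \qquad
\mathrm{Br}^{\bullet} + \mathrm{R{-}H} \rightarrow \mathrm{HBr} + \mathrm{R}^{\bullet}
\]

The third step regenerates HBr, which is what makes the inhibition effectively catalytic while the flame-propagating H and HO radicals are consumed.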

Dioxins in Recycling: Sources

Dioxins are a class of structurally and chemically related polyhalogenated aromatic hydrocarbons, mainly including polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs or furans), and dioxin-like biphenyls (PCBs). They are a group of persistent contaminants and usually appear as a mixture of related congeners. The greatest releases of these chemicals today come from the heat treatment of household waste and refuse, medical waste, fires in landfills, and agricultural and forest fires. In addition to documenting the toxicity of dioxins and their presence in the environment, many scientists have shown that they are highly resistant to biodegradation. This resistance may be due to their low solubility in water. The available data show that PCDDs and PCDFs, especially the tetra- and higher-chlorinated congeners, are extremely stable compounds under most environmental conditions. The environmentally significant breakdown process for these congeners is believed to be photodegradation in the gas phase and in contact with water. PCDD/PCDFs entering the atmosphere are removed either by photodegradation or by deposition. Burial in the ground, resuspension into the air, or leaching from soil into water appear to be the predominant fates of the PCDD/PCDFs absorbed into soil. The final environmental sink of PCDD/PCDFs is believed to be aquatic sediment. The toxicity of dioxins is expressed as a toxic equivalent amount, in which the most toxic congener, TCDD, is assigned a value of 1.0 and less toxic congeners are assigned fractions thereof. Several adverse health effects have been associated with dioxins (sarcomas, lymphomas, cutaneous lesions, stomach cancer, biochemical abnormalities in the liver, elevated blood lipids, immune system damage, and neurological effects).
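A minimal sketch of the toxic-equivalent calculation just described is TEQ = Σ TEFᵢ · Cᵢ over the congeners present. The TEF values below follow the WHO 2005 scheme for the congeners shown; the sample concentrations are hypothetical.

# Toxic equivalents (TEQ): TEQ = sum(TEF_i * C_i) over congeners.
# TEFs per the WHO 2005 scheme; sample concentrations are hypothetical.

WHO_2005_TEF = {
    "2,3,7,8-TCDD": 1.0,      # reference congener
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,7,8-TCDF": 0.1,
    "OCDD": 0.0003,
}

sample_pg_per_g = {  # hypothetical congener concentrations in one sample
    "2,3,7,8-TCDD": 0.5,
    "1,2,3,7,8-PeCDD": 1.2,
    "2,3,7,8-TCDF": 6.0,
    "OCDD": 250.0,
}

teq = sum(WHO_2005_TEF[c] * conc for c, conc in sample_pg_per_g.items())
print(f"TEQ = {teq:.2f} pg TEQ/g")  # 0.5 + 1.2 + 0.6 + 0.075 = 2.38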

Metals

It is estimated that 40% of the lead found in landfills in the United States comes from electrical and electronic equipment. One study found that 40.2% of the lead in household waste comes from household electronics, with television screens accounting for 35.8 percentage points and other household electronics for the remaining 4.4. Many studies relate high concentrations of metals in dust, air, soil, sediment and water to e-waste facilities. The metals most commonly implicated are Cd, Cu, Pb, Zn, Se, As, Sn, Hg, Ni, and Sb (Table 1). For CRTs in particular, the main environmental impact is the release of lead oxides from glass fragments (e.g., broken conical CRT glass) when mixed with acidic waters in landfills.

Construction of Integrated Circuit Boards: Resins

With respect to the organic fraction, the main part of printed circuit boards is resins, namely epoxy resins. Epoxy resins, also known as polyepoxides, are polymeric thermosetting plastics formed by the reaction of a resin with polyamides, which play the role of a "hardener". Epoxy resins have a wide range of applications, including high-tech materials such as fiber-reinforced plastics, other composite materials, and general-purpose adhesives.

(Structure: polymerized epoxy resin.)

Epoxy resins, upon thermal degradation, usually break down at the O–CH2 and C–N bonds to produce phenols, amines and other relatively small molecules. Generally, during the thermal decomposition of the resins, various monomeric organic compounds can be produced, such as anilines, furans, phenols, benzenes and other cyclic hydrocarbons, as well as halogenated organic compounds such as chloroanilines.

Polytetrafluoroethylene (Teflon)

Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer of tetrafluoroethylene that finds many applications. Its best-known brand name is Teflon, established by the DuPont Company. PTFE is a high-molecular-weight fluorocarbon solid composed entirely of carbon and fluorine. PTFE is hydrophobic: neither water nor water-containing substances wet PTFE, as fluorocarbons exhibit attenuated London dispersion forces owing to the high electronegativity of fluorine. Teflon has one of the lowest coefficients of friction of any solid. It is used as a nonstick coating for pans and other cooking utensils. It is very unreactive, partly because of the strength of carbon–fluorine bonds, and is therefore often used in containers and piping for reactive and corrosive chemicals. When used as a lubricant, Teflon reduces friction, wear, and energy consumption of machinery. During thermal decomposition, Teflon produces a series of simple fluorocarbons (fluorinated halides), the composition of which depends on the pyrolysis temperature. These are particularly toxic, and recent studies further implicate Teflon in carcinogenesis.

(Scheme: generation of fluoroalkene compounds as a function of thermal decomposition temperature.)

Nevertheless, the great strength of the C–F bonds imparts considerable resistance to thermal decomposition, so the first decomposition products do not appear below 600°C. In any case, however, the unsaturated halides released are particularly toxic; with this in mind, thermal decomposition must be taken seriously into account.

Environmental Impact of Hazardous E-waste

Air/Dust

Most materials used in electrical and electronic devices are safe for humans, and hazardous materials make up only a small part of them. Even these small amounts, however, can be of high concern in recycling plants, since they may affect the health of workers. E-waste recycling processes produce airborne particles that may have important consequences for occupational health. It is therefore essential to estimate worker exposure to this particulate matter and to optimize recycling methodology and site ergonomics in order to minimize the associated risks. During WEEE recycling operations, possibly the most toxic chemical substances in common electronics that escape to the environment in gaseous or aerosol form are brominated flame retardants (BFRs) and the inorganic ingredients of cathode ray tubes (CRTs) and printed circuit boards (PCBs), such as heavy metals, metal oxides, and phosphoric coatings. BFRs are the precursors of bromodioxins, as they can form halogenated dioxins and/or furans with Br. They are used in the printed circuit boards of WEEE, sometimes at up to 20 wt%. Consumption of BFRs in Europe was 32,000 tons in 2000 and, throughout the world, over 540,000 tons in the same year. BFRs can produce brominated dioxins during industrial processes that involve heat. This can take place by chemical reduction in the temperature range of 350°C–400°C. Production of brominated dioxins can even occur spontaneously, or during sunlight exposure. During WEEE recycling processes in particular, high temperatures are caused by the hammers and knives of mills and shredders. The occurrence of bromodioxins is enhanced by antimony(III) oxide, water, and iron(III) oxide, substances that are present in common waste electronics. CRTs constitute a vast category of end-of-life computer screens and televisions. They contain a variety of specifically hazardous substances such as Pb, Cd, Hg, Ba, Al, P, and Sb. The main hazardous substance of CRTs is lead (Pb), which is contained in the CRT funnel (22–28 wt%) and, to a lesser extent, in the screen or panel (3–5 wt%). This classifies CRTs as absolutely hazardous waste. Barium is a relatively soft, silvery-white metal used at the front (panel) of CRTs; regarding its risk, studies have shown that short-term exposure can harm the human body. The improperly termed "phosphoric" coatings, which in fact use and contain no phosphorus, are mixed salts of Zn, Mg, and Y with other elements (Cd, V, Se, Eu). Each CRT manufacturer formulates its own phosphoric coating, making a common approach to the recycling, collection and reuse of this material difficult. The environmental impact of waste CRTs is mainly due to their Pb content, as Pb is a hazardous and toxic material. All the abovementioned hazardous components may become airborne in the form of aerosol particles, with sizes varying from a few nanometers to tens of microns, during the different stages of recycling operations (shredding, milling, cutting, and further processing). Airborne nanoparticles (i.e., particles with diameters smaller than 100 nm) are one of the most common occupational health hazards in industry. They can adversely affect human health, as they can travel and deposit deep in the respiratory system. Airborne particulate matter exposure poses the most direct health hazard to workers.
A growing number of studies has shown a strong correlation between respiratory and cardiovascular diseases and airborne nanoparticles. The mechanisms of interaction between the human body and airborne nanoparticles have not hitherto been fully described. However, an increasing number of toxicological and epidemiological studies suggests that the smaller the particles, the more intense their effects on the human body.
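A first-order screening estimate of the worker exposure called for above can be sketched as inhaled dose = concentration x breathing rate x exposure time. The sketch below is illustrative only: the concentration, breathing rate, and shift length are assumed values, not measurements from the source.

# Hedged sketch of a first-order inhalation dose estimate for a worker.
# All input values are hypothetical placeholders.

def inhaled_dose_ug(conc_ug_m3: float, breathing_m3_h: float, hours: float) -> float:
    """Mass of particulate matter inhaled over a work shift (ug)."""
    return conc_ug_m3 * breathing_m3_h * hours

pm_conc = 150.0   # hypothetical PM concentration at a shredding station, ug/m3
breathing = 1.25  # assumed light-work breathing rate, m3/h
shift = 8.0       # hours

print(f"Inhaled dose ~ {inhaled_dose_ug(pm_conc, breathing, shift):.0f} ug/shift")
# 150 * 1.25 * 8 = 1500 ug; the fraction actually deposited in the
# respiratory tract is size dependent, and higher for nanoparticles
# reaching the alveoli.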

Soils and Plants Implications

In various countries (e.g., China, Argentina, Nigeria, Pakistan) the land that houses formal or informal e-waste recycling plants is severely polluted by heavy metals, especially lead, cadmium, copper, and chromium (Table 2). The dust released during e-waste recycling processes is enriched in heavy metals, which are dispersed and transported by wind currents to the surrounding areas. Several studies have examined the effect of e-waste on the surrounding environment at various facilities (India, China, Nigeria, Vietnam, Greece, Argentina, France) and have shown the extent and impact of the pollution.

Table 2 Heavy metals content of soils of various e-waste recycling sites (unit: ppm)

Sampling site | Location | Cr | Cu | Zn | Cd | Pb
Surface soils of e-waste recycling area | Mytilene, Greece | 38 | 162 | 155 | 1.0 | 202
Abandoned e-waste recycling area | Taizhou, China | 771 | 2,364 | 5,996 | 42.3 | 6,083
Nearby abandoned area | Taizhou, China | 70 | 157 | 276 | 2.2 | 167
E-waste incineration site | Longtang, China | – | 11,140 | 3,690 | 17.1 | 4,500
Vegetable garden close to e-waste incineration field | Longtang, China | – | 324 | 122 | 0.9 | 96
Large scale e-waste recycling plants | Wenling in Taizhou, China | 89 | 158 | 300 | 2.6 | 164
Surface soils of e-waste recycling areas | Mandoli industrial area, Delhi | – | 116 | 777 | 1.3 | 2,645
Soils close to metal scrap recycling factory | SW Nigeria | 1.5–2.0 | – | 9.2–24.7 | 0.3–0.5 | 0.8–3.1
Agricultural field close to a former battery recycling plant | Córdoba, Argentina | – | – | 4.4–26.3 | – | 18–641
Gardens close to a 50-year old lead recycling plant | Bazoches-les-Gallerandes, Loiret, France | – | – | – | – | 164 (28–1,522)
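One common way to turn soil measurements like those in Table 2 into a pollution grade is Müller's geoaccumulation index, Igeo = log2(Cn / (1.5 Bn)). The sketch below is illustrative: the measured value comes from Table 2, but the background value Bn is a hypothetical reference, not taken from the source.

# Geoaccumulation index Igeo = log2(Cn / (1.5 * Bn)); the background
# value below is a hypothetical regional reference.
import math

def igeo(measured_ppm: float, background_ppm: float) -> float:
    return math.log2(measured_ppm / (1.5 * background_ppm))

pb_taizhou = 6083.0   # Pb in the abandoned e-waste recycling area soil (Table 2)
pb_background = 25.0  # hypothetical regional background for Pb, ppm

print(f"Igeo(Pb) = {igeo(pb_taizhou, pb_background):.1f}")
# Values above 5 fall in the most severe ('extremely polluted') class.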

Fig. 2 Concentrations of Cd, Cu, Pb, and Zn (mg kg⁻¹ DW) in the shoots of vegetables and rice from a former e-waste incineration site in Longtang, Guangdong province of south China.

Mainly the concentrations of Cd, Cu, Pb, Sn, Hg, Ni, and Sb were enriched in the topsoil of the e-waste facilities in Guiyu, China. The concentrations of Cd, Cu, Pb, Se, As, and Zn were also higher in surface soils of the e-waste recycling areas of the Mandoli industrial area, Delhi, India, compared with those of a clean reference site. Examination of five households in a Vietnamese village located in a non-intensive e-waste processing area found that garden soil and dust are the main contributors to the daily Pb intake of humans. The concentrations of Pb and Zn were found to be above the permitted levels in the soils of areas neighboring a former battery recycling plant in Argentina. Increased Pb and Cd concentrations have been measured in the soils of gardens close to a 50-year-old secondary lead smelter (France), and the garden vegetables there had higher lead content in their tissues than usual (with lettuce showing a high Pb content), partly owing to direct foliar transfer (Fig. 2). At longer distances as well, even 350 m from a scrap metal recycling factory in southwestern Nigeria, significantly higher heavy metal concentrations are found in the soils compared with unaffected reference soils. Weather conditions have a slight effect on the pollution pattern: heavy metal concentrations in the soils during the wet season are slightly higher than in the dry season, most probably because rain events strengthen dust transport and deposition onto the soils. Plants growing in the polluted soils of e-waste recycling sites (vegetables, rice, wild plants) are subject to pollution patterns similar to those of the soils. Through the food chain they are consumed by humans, resulting in gradual accumulation of heavy metals in the human body. China is one of the world leaders in electronic equipment recycling, and three typical informal e-waste recycling sites (Guiyu, Taizhou, and Longtang) are representative examples of areas polluted with remnants of e-waste. Special attention is given to agricultural products produced in these areas, and especially to rice production: rice is the main agricultural product of China, and it is essential to maintain high production quality standards so as to protect human health. The soils of former informal open-air e-waste incineration facilities (Longtang, Guangdong province of south China) show the highest heavy metal concentrations (i.e., Cd, Cu, Pb, and Zn), and the surrounding paddy fields and vegetable soils showed higher Cd and Cu concentrations relative to their corresponding reference soils. Cd in particular seems to be more extractable in paddy fields and vegetable soils, which implies that Cd has higher mobility than the other metals. Rice and vegetables (i.e., Chinese cabbage, colocasia, lettuce) planted close to e-waste facilities, or directly over former recycling plants, are strongly polluted by heavy metals. Fig. 2 shows the concentrations of Cu and Pb in the shoots of vegetables and rice growing in the Longtang fields; there is an obvious severe risk caused by the bioaccumulation of Pb (the maximum allowable concentration, MAC, set as the safety criterion for milled rice by NY5115-2002 is 0.20 mg/kg) and Cu in rice, which is an important part of human nutrition. The content of Cd in rice (MAC 0.20 mg/kg) is lower than in the vegetables, but close to the allowable levels.
Consequently, the polluted vegetables and soils of these former incineration sites and current e-waste recycling facilities pose a high environmental risk, and the remediation of these sites should rank first among current and future remediation actions.
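An illustrative check of shoot Pb against the milled-rice limit quoted above (MAC = 0.20 mg/kg, NY5115-2002), together with a simple soil-to-plant transfer factor TF = C_plant / C_soil, is sketched below. The shoot concentration is a hypothetical value; the soil value echoes Table 2.

# Transfer factor and MAC check; the shoot concentration is hypothetical.

MAC_PB_RICE = 0.20  # mg/kg, safety criterion for milled rice (NY5115-2002)

def transfer_factor(c_plant_mg_kg: float, c_soil_mg_kg: float) -> float:
    return c_plant_mg_kg / c_soil_mg_kg

rice_shoot_pb = 0.43  # hypothetical rice shoot Pb, mg/kg DW
soil_pb = 96.0        # Longtang vegetable-garden soil Pb (Table 2), ppm

print(f"TF(Pb) = {transfer_factor(rice_shoot_pb, soil_pb):.4f}")
print("exceeds MAC" if rice_shoot_pb > MAC_PB_RICE else "within MAC")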

Water and Sediment Implications

The operation of e-waste recycling plants negatively affects the quality of surface water and groundwater, whose biota adapt to the pollution by tolerating it. Several studies, especially in China, reveal the existence of pollutants originating from e-waste recycling facilities in adjacent water bodies (wetlands, streams, rivers, ponds). In the majority of the studies, surface water pollutants such as PAHs, toxic organic pollutants (TOPs), and heavy metals are measured in the sediment matrix rather than in the water phase, owing to their tendency to accumulate, precipitate and sorb in the sediment. In China, the sediments of rivers and reservoirs located near e-waste recycling facilities are strongly polluted. The average PAH concentration in the surface sediment of Meiliang Bay, Taihu Lake, China (2,563 ng/g) was much higher than the PAH concentrations of the Yalujiang River, China (290 ng/g) and of the Guiyu water reservoir, China (average 45 ng/g). The composition of the PAHs found in the sediments of the Guiyu water reservoir is indicative of kerosene, grass, coal, and wood combustion, even though the PAH values remain very low. However, the PAHs in the sediments derived from both combustion and petroleum sources, owing to the open burning of e-waste using liquid fossil fuels and the burning of circuit boards. Studies on PCB transport at e-waste sites also show vertical migration to shallow groundwater. Reservoirs and shallow groundwater are used for drinking purposes, and the presence of persistent organic pollutants is a serious threat to human health, since these compounds are responsible for carcinogenic and various other diseases. Decades are needed for the biodegradation and remediation of these pollutants and for their reduction or elimination at the source. The high concentrations of heavy metals and total organic pollutants measured in the sediments of the Nanyang and Beigang Rivers around Guiyu town, China, indicate that the two rivers have faced an increasing burden of metal pollution (copper, zinc and cadmium) during the last decade. Concentrations of other pollutants, such as polybrominated diphenyl ethers (PBDEs, tri- to hepta-BDEs; 2.1–3.2 × 10³ ng/g), in sediment samples taken in 2014 at locations around Guiyu town were similar to those in sediment samples taken in 2004. Despite stricter rules and legislation and recent advances in knowledge related to e-waste recycling technologies, sediment pollution has increased or remains at past levels. Remediation measures have to be considered for the mitigation of persistent organic pollutants, controlling their accumulation in sediments and their vertical transport, in order to decrease the health risk associated with surface water bodies and shallow groundwater. The combined pollution of TOPs and heavy metals (especially bioavailable lead) affects the aquatic microbiota of fresh waters at both the taxonomic and functional levels. Degradation of the organic pollutants in the sediment might be performed mainly by some of the dominant species detected in situ. Heavy metals are also bioaccumulated in edible freshwater fish, such as Cyprinus carpio Linnaeus and Pelteobagrus fulvidraco caught in Meiliang Bay, Taihu Lake, China; although the levels measured remain low and safe for human consumption, the total amount consumed should be controlled under the Chinese Food Health Criterion to avoid excessive intake of Pb.
The concentrations of Pb found in the hair of people living close to an e-waste recycling site in Accra, Ghana, possibly originated from contaminated soils, fish, and foodstuffs.

Processes to Avoid/Eliminate Hazardous Substances Generated During E-waste Recycling

Dismantling and, especially, combustion of e-waste release most of the pollutants, such as dioxins, into the environment. The formation of dioxins in incinerator flues is based on precursors but also on resynthesis at temperatures of 300°C–500°C. The dioxin concentration in the flue gas varies from 1 to 500 ng/m³; it is therefore important to treat the flue gases so as to reduce the concentration to an acceptable limit (0.1 ng/m³) before release to the environment. Various technologies have been developed to eliminate the generation and release of hazardous substances (Table 3); some are applied with high removal rates, while others are still at the pilot-testing scale. E-waste recycling comprises various process stages, and numerous pollutants are released to the environment. To illustrate this complex procedure, the example of CRT recycling is presented in the current article.
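The arithmetic behind the treatment requirement quoted above is straightforward: the removal efficiency needed to meet the 0.1 ng/m³ limit follows directly from the inlet concentration. A minimal sketch, using only the figures given in the text:

# Minimal sketch: flue-gas treatment efficiency required to meet the
# 0.1 ng/m^3 dioxin limit quoted in the text, for inlet concentrations
# spanning the reported 1-500 ng/m^3 range of untreated gas.

LIMIT_NG_M3 = 0.1  # emission limit from the text

def required_removal(inlet_ng_m3: float, limit_ng_m3: float = LIMIT_NG_M3) -> float:
    """Fraction of dioxins to remove so that the outlet meets the limit."""
    if inlet_ng_m3 <= limit_ng_m3:
        return 0.0
    return 1.0 - limit_ng_m3 / inlet_ng_m3

for inlet in (1.0, 50.0, 500.0):
    print(f"inlet {inlet:6.1f} ng/m^3 -> removal >= {required_removal(inlet):.2%}")
# At the upper end (500 ng/m^3), >= 99.98% removal is required, which is
# why processes with ~99% removal (Table 3) may need to be combined.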

Regulatory Requirements for Waste CRTs

To minimize the environmental impact of CRT waste, a number of regional, national, and transnational governing bodies have issued regulations worldwide. In particular, the European Waste Electrical and Electronic Equipment (WEEE) Directive was adopted by the European Parliament on 13 February 2003, was transposed into the legislation of the Member States by 13 August 2004, and entered into force on 13 August 2005. Since April 2007, at least seven US states (Arkansas, California, Maine, Massachusetts, Minnesota, New Hampshire, and Rhode Island) have banned the landfilling of various types of electronic waste, including CRTs. Four states (California, Maine, Maryland, and Washington) have passed comprehensive recycling legislation. In addition, at least 16 states, including New York, proposed recycling legislation in 2007 adopting either an advance recovery fee or an extended producer responsibility system.

Table 3 Technologies to eliminate the release of hazardous substances during e-waste recycling

Process | Description of the process | Effect
Particle collection | Filter roast iron ores, filter cloth, and electrostatic precipitators are used to collect particulates. | Collect dust and eliminate dioxin molecules
Absorbent scrubbers or spray with electrostatic precipitators | Lime slurry is pulverized and placed in the spray tower. | Reduce dioxin emissions
Adsorbent injection procedures (flow injection) | Injection of crushed coke or hard coal mixed with limestone, lime, or inert material into the waste gas stream at a temperature of about 120°C. | Collection of dioxins, HCl, HF, and SO2; 99% removal of dioxins
Fluidized bed procedure with absorber recycling | The gas passes through the grate from the bottom and forms a fluid bed of coke (usually derived from bituminous coal) and inert material (limestone) at a temperature of about 100°C–120°C. | Acidic components such as HCl, HF, and SO2 can be collected
Fixed bed or removable bed | Coke moves slowly from top to bottom, while waste gas flows in the opposite direction. | Decomposition of dioxins
Catalytic decomposition | The catalysts are mostly oxides of Ti, V, and W; in addition, Pt and Au oxides are effective for the destruction of dioxins at 200°C. | Decomposition of dioxins
Electron irradiation | Action of ionizing radiation on components of the gas macromolecules. | Decomposition of dioxin and furan isomers
Thermal process | Heat is applied to waste to sterilize it. | —
Nonthermal plasma | — | Still in research phase; decomposition of dioxins
UV radiation (photolysis) | Semiconductor films such as TiO2, ZnO, Cd, and Fe2O3 under UV sunlight. | Decomposition of dioxins
Chemical reaction | Use of reagents. | Decomposition of PAHs
Mechanochemical method | Mechanical energy is transferred from the grinding instruments to the solid system via pressure or compression, depending on the device being used; an important part of the milling action is converted into heat, and a small part causes compression at the microscopic and macroscopic level to drive a reaction. | In laboratory experiments, polyhalogenated pollutants such as polychlorinated biphenyls are destroyed to biphenyl and phenol, respectively

Lead Content

The lead content is not constant across the parts of a CRT. The area with the highest lead content, and thus the most readily exploitable, is the funnel, the largest part of the CRT, with a lead content of 22–28 wt% (Table 4). The screen, or panel, contains much smaller amounts of lead, usually around 3%–5%. This quantity approximates the levels required for the manufacture of radiation-protection glass, which makes crushed CRT a suitable raw material for making such glasses. However, the funnel does not consist of pure glass only; it carries a conductive coating usually consisting of graphite particles, iron oxides, and other substances.

Phosphor Coatings

Phosphor coatings are mixed salts of Zn, Mg, Y, and other materials. Each CRT maker manufactures its own phosphor coating, which makes it difficult to tackle recycling in general and to collect and reuse the phosphor. The exploitation and recycling of CRTs currently face this serious difficulty. In addition, these materials are toxic and fall under the Hazardous Waste Regulation. Therefore, the processing of CRTs is now done mostly by hand, which makes it laborious and time-consuming. Semiautomatic methods have been developed, but even these require human intervention at some point.

Table 4 Lead in parts of screens and its contents

Part | Type of screen | Quantity (kg) | Lead content of each segment (wt%)
Funnel | CRT | 0.910 | 22–28
Panel | CRT | 0.180 | 0–4
Neck | CRT | 0.012 | 26–32
Frit | CRT | 0.026 | 70–80
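Table 4 also allows a rough estimate of the recoverable lead per tube; the short sketch below multiplies each part's mass by the midpoint of its lead-content range (a simplification; actual yields depend on the tube model and the separation process):

# Illustrative calculation from Table 4: approximate lead mass per CRT,
# taking the midpoint of each part's lead-content range.

parts = {
    # part: (mass in kg, (min wt%, max wt%)) as listed in Table 4
    "funnel": (0.910, (22, 28)),
    "panel":  (0.180, (0, 4)),
    "neck":   (0.012, (26, 32)),
    "frit":   (0.026, (70, 80)),
}

total_pb = 0.0
for name, (mass_kg, (lo, hi)) in parts.items():
    pb = mass_kg * ((lo + hi) / 2) / 100.0  # kg of lead in this part
    total_pb += pb
    print(f"{name:6s}: {pb * 1000:6.1f} g Pb")
print(f"total : {total_pb * 1000:6.1f} g Pb per tube (midpoint estimate)")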


Sorting is done, or should be done, initially by separating the funnel from the screen. This is manual work supported by semiautomatic machinery. In general, detachment of the screen from the funnel takes place in a closed or enclosed space, either mechanically, that is, by cutting with diamond tools (Fig. 3, left), thermally by means of a heated wire placed around the joint of the funnel with the panel (Fig. 3, right), or by other mechanical means (Fig. 4). Smaller recycling plants use simple airtight gloveboxes (Fig. 5). These can be made in an improvised manner from simple commercial materials without the need to purchase heavy machinery.

Fig. 3 Left: removing the funnel from the panel mechanically (diamond cutting); right: thermal separation by wire heating (shown as a thin gray band surrounding the CRT).

Fig. 4 Semiautomatic detachment of the CRT panel from the neck. The whole procedure is done in a glass canopy to avoid inhalation of aerosols of the phosphor coating.

Fig. 5 A typical glovebox.

Fig. 6 Suction pump connected to air extraction to absorb the poisonous phosphor.

A typical glovebox and some simple electric or hand-held diamond cutters are sufficient. A powerful vacuum cleaner with a scraper can be used to remove the phosphor coatings (Fig. 6). Identical or similar insulated boxes are used to handle hazardous and radioactive materials. The average granule size of the phosphor coatings is very small (4–10 µm), which multiplies the risk. In addition to its toxicity, this grain size is capable of causing irritation of the lungs and pneumoconiosis. Therefore, additional precautions must be taken, because dust leakage, albeit small, may occur when moving materials into and out of the box.

Conclusions: Recommendations

E-waste constitutes a serious environmental problem, especially in developing countries, and specific approaches should be followed to address it. Priority should be given to adopting the best available technologies, improving existing ones, and adopting best environmental practices. Developed nations should focus on improving the operation of existing facilities by taking advantage of novel cutting-edge technologies. Developing nations, on the other hand, should put stricter environmental laws and legislation into force and pay more attention to the electrical equipment that is not recycled. Polluted areas in particular should be delineated by soil field surveys and soil analysis, and special measures should be adopted according to the pollution status. Special effort should be devoted to public awareness in order to increase the quantities of e-waste that are recycled and to reduce the number of informal e-waste facilities. Subsidies for e-waste facilities should also be considered.

Further Reading

Huo, X., Peng, L., Xu, X., Zheng, L., Qiu, B., Qi, Z., Zhang, B., Han, D., Piao, Z., 2007. Elevated blood lead levels of children in Guiyu, an electronic waste recycling town in China. Environmental Health Perspectives 115 (7), 1113–1117.
Khaliq, A., Rhamdhani, M.A., Brooks, G., Masood, S., 2014. Metal extraction processes for electronic waste and existing industrial routes: A review and Australian perspective. Resources 3, 152–179.
Owoade, O.K., Awotoye, O.O., Salami, O.O., 2014. Ecological vulnerability: Seasonal and spatial assessment of trace metals in soils and plants in the vicinity of a scrap metal recycling factory in southwestern Nigeria. Environmental Monitoring and Assessment 186 (10), 6879–6888.
Rodriguez, J.H., Salazar, M.J., Steffan, L., Pignata, M.L., Franzaring, J., Klumpp, A., Fangmeier, A., 2014. Assessment of Pb and Zn contents in agricultural soils and soybean crops near to a former battery recycling plant in Córdoba, Argentina. Journal of Geochemical Exploration 145, 129–134.
Savage, M., 2006. Implementation of the Waste Electric and Electronic Equipment Directive in the EU. EUR 22231 EN. Institute for Prospective Technological Studies, European Communities.
Wang, H., Han, M., Yang, S., Chen, Y., Liu, Q., Shen, K., 2011. Urinary heavy metal levels and relevant factors among people exposed to e-waste dismantling. Environment International 37 (1), 80–85.

Relevant Website

http://ewasteguide.info/hazardous_substances — A knowledge base for the sustainable recycling of e-Waste: Hazardous Substances in e-waste, 2010.

Environmental Specimen Bank for Human Tissues
GA Wiesmüller, Environmental Specimen Bank for Human Tissues, Westphalian Wilhelms University Münster, Münster, Germany
A Gies, Federal Environment Agency, Dessau-Rosslau, Germany
© 2011 Elsevier B.V. All rights reserved.

Abbreviations
AAS atomic absorption spectrometry
BBzP butylbenzyl phthalate
CV-AAS cold vapor atomic absorption spectrometry
DEHP di(2-ethylhexyl) phthalate
DiBP di-iso-butyl phthalate
DiNP di-iso-nonyl phthalate
DnBP di-n-butyl phthalate
ESB German Environmental Specimen Bank
ESB-Human Environmental Specimen Bank for Human Tissues
HBM-I human-biomonitoring value I
HBM-II human-biomonitoring value II
HCB hexachlorobenzene
HRGC/HRMS high-resolution gas chromatography/high-resolution mass spectrometry
HR-ICP-MS high-resolution inductively coupled plasma mass spectrometry
ICP-OES inductively coupled plasma optical emission spectrometry
IS-ESB Information System of the German ESB
I-TEQ international toxicity equivalent
LC-MS-MS liquid chromatography coupled with tandem mass spectrometry
MBzP monobenzyl phthalate
MEHP mono(2-ethylhexyl) phthalate
5OH-MEHP mono(2-ethyl-5-hydroxyhexyl) phthalate
5oxo-MEHP mono(2-ethyl-5-oxohexyl) phthalate
5cx-MEPP mono(2-ethyl-5-carboxypentyl) phthalate
MiBP monoisobutyl phthalate
7OH-MMeOP mono(4-methyl-7-hydroxyoctyl) phthalate
7oxo-MMeOP mono(4-methyl-7-oxooctyl) phthalate
MnBP mono-n-butyl phthalate
ODBC open database connectivity
PBDE polybrominated diphenyl ethers
PCB polychlorinated biphenyls
PCDF polychlorinated dibenzofurans
PCDD polychlorinated dibenzodioxins
PCP pentachlorophenol
PFBA perfluorobutanoic acid
PFBS perfluorobutane sulfonate
PFC perfluorinated compounds
PFDA perfluorodecanoic acid
PFDoA perfluorododecanoic acid
PFDS perfluorodecane sulfonate
PFHpA perfluoroheptanoate
PFHpS perfluoroheptane sulfonate
PFHxA perfluorohexanoate
PFHxS perfluorohexane sulfonate
PFNA perfluorononanoate
PFOA perfluorooctanoic acid
PFOS perfluorooctyl sulfonate
PFPA pentafluoropropionic anhydride


PFTeA perfluorotetradecanoic acid
PFTriA perfluorotridecanoic acid
PFUnA perfluoroundecanoate
RTM real-time monitoring
RM retrospective monitoring
SOP Standard Operating Procedures

Introduction

The Environmental Specimen Bank for Human Tissues (ESB-Human) is part of the National Environmental Specimen Bank of the Federal Republic of Germany (German Environmental Specimen Bank, ESB). The German ESB is an archive of environmental specimens from representative ecosystems as well as of tissues, tissue parts, and body fluids of plants, animals, and humans for monitoring and evaluating exposures to potentially harmful substances in Germany. The main focus of the ESB-Human is documenting and assessing trends of human exposure via real-time monitoring (RTM) of body burden and long-term storage of samples under stable deep-freezing conditions for later retrospective analyses. The German ESB is funded by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety and organized by the Federal Environment Agency. Logistically, the human part of the ESB is integrated in the medical institutions of the Westphalian Wilhelms University Münster and has an independent status. Environmental influences are reflected in the individual exposure of every human being. After appropriate standardized sampling and analytical investigation, these effects are recorded in the documentation of the individual person's life, and the material is stored under stable conditions for later use. Under strict personal data protection, the affiliated data bank administers and evaluates scientific information characterizing the human body fluid specimens, including analytical data. The investigations of the ESB-Human serve both current stock-taking and the long-term protection of human beings in their self-created environment.

Archive of Human Specimens

Development of the ESB-Human

In 1974 Professor Fritz H. Kemper took the initiative to install an archive for human tissues and body fluids. After a pilot phase, in which conditions of optimal storage were examined, systematic recruitment and sampling started in Münster in 1981. In 1985 the German ESB, including its human part, began routine operation. The ESB was intended as a control and monitoring instrument for the newly developed German and European chemicals legislation. The routine annual sampling process consists of recruiting volunteer students aged 20–29 years under defined boundary conditions. Since 1997, annual routine sampling has been expanded to the three German cities of Halle/Saale, Greifswald, and Ulm.

Objectives of the ESB-Human

Average values measured mainly in blood and urine, but also in human milk and similar specimens, reveal the level of human exposure to environmental noxae. Long-term trends can be analyzed by repeated testing of comparable groups of individuals at regular intervals. The detection of these long-term trends in human exposure to harmful substances is important for the development of legal regulations and restrictions as well as for the evaluation of their success. Concentrations of substances currently recognized as environmental noxae are monitored continuously so that correlations with the occurrence of certain health disorders may be identified at an early stage. The long-term storage of collected specimens under reliably stable conditions is the prerequisite for carrying out further tests at a later time, or for repeating tests with improved measurement technologies after decades have elapsed. In this way, environmental noxae can also be identified retrospectively when they were not yet known, could not yet be analyzed, or were not considered significant at the time of specimen collection.

Sampling, Sample Characterization, and Sample Treatment

Sampling Sites of the ESB-Human

The collection of human specimens is restricted to only four sampling sites in different parts of Germany. It is not intended to mirror the sampling sites selected for environmental specimens on the basis of ecosystems according to the concept of the German ESB. Although human beings are without doubt part of their environment, they can scarcely be related to representative


ecosystems because of their mobility and widely varying living and exposure conditions at home and at workplace. Consumed food is rarely of local origin and represents global distribution of substances. Human specimens are sampled from living persons at four selected areas of Germany (Figure 1):

• Münster (since 1985; specimens from a pilot phase before 1985 are partly available)
• Halle/Saale (since 1995)
• Greifswald (since 1996)
• Ulm (since 1997).

Routine sampling is done once a year and consists of recruiting healthy collegiate volunteers aged 20–29 years. The total number of students sampled is approximately 500 per year, about 125 per area per year (Figure 2). Students are regarded as a proper subgroup for analyzing general trends of exposure in Germany because they are a homogeneous subgroup with similar sociodemographic features, high mobility, and little or no occupational or accidental exposure. Restriction to the aforementioned age interval excludes age- and disease-influenced body burden in most instances. The students come to the universities and colleges of the four aforementioned cities from different regions of Germany (Figure 3). With individuals and groups moving home frequently in a mobile society, it is assumed that almost the entire country is represented.

Sample Characterization: Metadata

Personal metadata (e.g., sex, age, and place of birth), medical history (e.g., health and dental status, body height, body weight, and medication), as well as information about individual behavior (e.g., nutrition, use of body care products, and use of other chemicals) and other sources of exposure (e.g., home and living environment) are collected using a standardized questionnaire filled in by the test persons themselves. The questionnaire is available from the authors.

Specimen Types

Until 2004, 24 h-urine, blood (whole blood and blood plasma), scalp and pubic hair, as well as salivary samples had been collected; since 2005, saliva as well as scalp and pubic hair are no longer taken routinely; instead, the sampling of perinatal specimens (placenta, umbilical cord blood, umbilical cord, amniotic fluid, amnion, newborn urine, maternal blood, and maternal urine) was started. Since 1984, human milk has been collected in separate maternal collectives in Münster. The present specimen stock, without perinatal specimens, is shown in Figure 4.

Sample Treatment

Blood samples are prepared (plasma extraction) and portioned immediately after withdrawal. In 24 h-urine samples, volume, density, and conductivity are measured immediately after delivery. All samples are stored temporarily in a mobile nitrogen tank (−150 °C) for transportation to Münster (Figure 5).

Sample Storage

In Münster all samples are stored under deep-freezing conditions. Until 2005, the human specimens were kept at a temperature of −80 to −85 °C in two walk-in freezer rooms with a total volume of 65 m³ (Figure 6). A cooling system with graduated multiple security protection is used to maintain continuously low temperatures in the freezing rooms. Two deep-cooling generators are employed for each room, although one alone would be sufficient to maintain the necessary temperatures. In the event of a general power failure, the generators can be supplied by an emergency power unit; if this also fails, liquid nitrogen at a temperature of −196 °C can be conveyed directly into the freezing rooms. Since 2006, the human specimens have been stored under stable deep-freezing conditions in the gaseous phase above liquid nitrogen at a temperature of −150 °C (Figure 7). This will allow the future additional implementation of effect monitoring in the concept of the ESB-Human. Currently, a former sanitary depot in Münster-Wolbeck (Figure 8) is being set up for the cryogenic storage of human specimens.

Sample Analyses

After completion of all sampling processes of a year, laboratory analyses are performed sequentially, that is, separately according to the year of sampling (RTM). Sixty-four inorganic and five organic substances (Table 1) are analyzed to give a measure of background exposures and their time dependence. To be able to detect geographic differences, all samples of a year are measured in random order. Inorganic elements are measured via high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), cold vapor atomic absorption spectrometry (CV-AAS), atomic absorption spectrometry (AAS), or inductively coupled plasma optical emission spectrometry (ICP-OES), depending on the element analyzed. Samples are wet-digested via microwave heating with high-pressure (Teflon) vessel technology using nitric acid and hydrogen peroxide as oxidation agents.


Figure 1 Sample sites of the German Environmental Specimen Bank (ESB): map of the sampling areas across Germany, distinguishing nearly natural, agricultural, freshwater, forest, urban industrial, and marine ecosystems (BR, Biosphere Reserve; NP, National Park), as well as the human specimen sampling sites Münster, Halle/Saale, Greifswald, and Ulm. Data sources: ESRI Data & Maps (2000), Gemeindegrenzen (2001), GTOPO30, U.S. Geological Survey (1998); layout: M. Bartel (January 2005), University of Trier, Germany.


Figure 2 Composition of the whole student collective at the four routine sampling sites of the Environmental Specimen Bank for Human Tissues (ESB-Human).


Figure 3 Regional provenances of voluntary participants (students aged 20–29 years) from 1996 to 2006 of the Environmental Specimen Bank for Human Tissues (ESB-Human).

The analysis of pentachlorophenol (PCP), hexachlorobenzene (HCB), and the polychlorinated biphenyl (PCB) congeners PCB-138, PCB-153, and PCB-180 in blood plasma is done via gas chromatography–mass spectrometry (GC-MS) after extraction with hexane + acetone (4 + 1, pH 2.0) and derivatization with CH2N2. Important clinical parameters (blood plasma: proteins, cholesterol, creatinine, and triglycerides; 24 h-urine: density, conductivity, volume, and creatinine), acting as possible modulators of internal exposure, are also determined using common clinical chemical methods. Substances that are not routinely analyzed in RTM are measured retrospectively, by indication, in the stored human specimens. Indications for retrospective monitoring (RM) are mainly the availability of valid analytical methods or the assessment of concentration trends of substances of current interest in toxicology and environmental medicine.

Quality Assurance

Sampling, analysis, and archiving are performed according to published Standard Operating Procedures (SOP). Furthermore, all analytical determinations of inorganic and organic parameters are conducted under the conditions of the German external quality assessment scheme.


Figure 4 Number of specimens of the Environmental Specimen Bank for Human Tissues (ESB-Human) in January 2007: c. 180 000 single samples from approximately 11 000 subjects.

Figure 5 Mobile nitrogen tank (−150 °C) for transportation of human samples from the sampling sites to Münster.

Figure 6 Sample storage of the Environmental Specimen Bank for Human Tissues (ESB-Human) at the University of Münster: one of two walk-in freezer rooms with a temperature of approximately −80 to −85 °C. Under these conditions all human specimens were stored up to and including 2005.

Data Management

All collected data regarding human as well as environmental specimens are included in the Information System of the German ESB (IS-ESB), which is the basis for correspondence, reporting, and integrated evaluation of the current environmental state as well as environment-related health protection. All data are consistently documented, and quality-assured data are provided for experts as well as the general public. In the IS-ESB, all information, beginning with the conception and operating guidelines and extending over sample taking, sample transport, and sample analyses to sample storage and sample provision, is collected, edited, and cross-linked according to the responsibilities of the involved institutions of the German ESB. Currently, the IS-ESB structure consists of client–server applications (UNIX/Windows Server 2003) and a relational ORACLE® database (version 9i/10g). Access to the database is carried out via specially developed MS Office ACCESS® clients (open database connectivity (ODBC) interface). At the Internet portal of the German ESB (http://www.umweltprobenbank.de), quality-assured data sets can be researched, visualized, and downloaded. Thus, the IS-ESB is an important instrument for the assessment of the environmental status in Germany.
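The client-over-ODBC architecture described above can be illustrated with a short sketch. The data source name, credentials, table, and column names below are hypothetical, since the actual IS-ESB schema is not published in this article; the sketch merely shows the kind of query such a client issues:

# Hypothetical sketch of ODBC access to a specimen database, analogous
# to the IS-ESB client architecture described above. DSN, credentials,
# table, and column names are invented placeholders.
import pyodbc

conn = pyodbc.connect("DSN=is_esb;UID=reader;PWD=secret")
cursor = conn.cursor()

# Example query: yearly mean HCB concentration in blood plasma.
cursor.execute(
    """
    SELECT sampling_year, AVG(concentration_ug_l)
    FROM specimen_analyses
    WHERE analyte = ? AND matrix = ?
    GROUP BY sampling_year
    ORDER BY sampling_year
    """,
    ("HCB", "blood plasma"),
)
for year, mean_conc in cursor.fetchall():
    print(year, mean_conc)
conn.close()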


Figure 7 Sample storage of the Environmental Specimen Bank for Human Tissues (ESB-Human) at the University of Münster: nitrogen tanks at −150 °C in the gaseous nitrogen phase. Under these conditions all human specimens have been stored since 2006.

Figure 8 Sample storage of the Environmental Specimen Bank for Human Tissues (ESB-Human) at a former sanitary depot in Münster-Wolbeck: nitrogen tanks at −150 °C in the gaseous nitrogen phase. All human specimens will be stored under these conditions as soon as possible.

Table 1 Substances analyzed routinely every year in 20–29-year-old students in the German cities Greifswald, Halle/Saale, Münster, and Ulm

Elements: essentially the whole periodic table is screened; depending on the element, cold vapor atomic absorption spectrometry (CV-AAS), graphite furnace atomic absorption spectrometry (GF-AAS), inductively coupled plasma optical emission spectrometry (ICP-OES), or high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS) is used; for some elements no valid measuring method is available; indium (In) serves as an internal standard for HR-ICP-MS.

Organic xenobiotics: hexachlorobenzene (HCB), pentachlorophenol (PCP), and the polychlorinated biphenyls (PCB) PCB-138, PCB-153, and PCB-180.

Statistical Analyses

Assessing human-biomonitoring data often means dealing with fragmentary prior knowledge and a complex set of variables. Routinely, the human-biomonitoring data of the RTM are analyzed with standard statistical approaches for differences in mean body-burden levels between sampling sites and sexes and for time trends; these results are reported annually. To obtain high-level descriptive summary information on human exposure and to identify and quantify relevant risk factors, data mining tools are combined with classical statistical approaches.
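As an illustration of the routine trend analyses described here, the following sketch fits a log-linear time trend and tests a sex difference on simulated data; all values are invented, and the actual ESB evaluation procedures are more elaborate:

# Hypothetical sketch of a routine trend analysis: linear regression of
# log-transformed concentrations on sampling year, plus a t-test for a
# sex difference. All data are simulated, not ESB measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.repeat(np.arange(1995, 2007), 20)
# Simulated declining body burden with log-normal scatter.
conc = np.exp(0.5 - 0.04 * (years - 1995) + rng.normal(0, 0.3, years.size))

slope, intercept, r, p_trend, se = stats.linregress(years, np.log(conc))
print(f"annual change: {(np.exp(slope) - 1) * 100:+.1f}% per year (p = {p_trend:.3g})")

# Sex comparison on one (simulated) sampling year.
males = rng.lognormal(mean=0.0, sigma=0.3, size=60)
females = rng.lognormal(mean=-0.2, sigma=0.3, size=60)
t, p_sex = stats.ttest_ind(np.log(males), np.log(females))
print(f"male vs. female difference: p = {p_sex:.3g}")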

Real-Time Monitoring until 2006

Trend Monitoring

In the following paragraphs, the time trends of the five organochlorine compounds and of selected elements are presented. As analytical methods and recruitment sometimes changed during the history of the ESB-Human since 1981, the periods of the time trends presented differ. Among the five organochlorine compounds analyzed, concentrations of HCB in blood plasma (Figure 9), of PCP in blood plasma and 24 h-urine (Figure 10), and of the PCB congeners PCB-138, PCB-153, and PCB-180 in blood plasma (Figure 11) decreased markedly over time. The elements arsenic (As), cadmium (Cd), copper (Cu), lead (Pb), mercury (Hg), selenium (Se), uranium (U), and zinc (Zn) give the following picture: the concentrations of As and Cd in 24 h-urine (Figures 12 and 13), of selenium in blood plasma (Figure 14), and of Cu in the blood plasma of males (Figure 15) remained unchanged over time. The concentrations of Cd and U in blood plasma (Figures 13 and 18), as well as the Cu concentration in the whole blood of females (Figure 15), increased over time. The concentrations of Cu in 24 h-urine (Figure 15), of Pb in whole blood (Figure 16), of Hg in whole blood and 24 h-urine (Figure 17), of U in blood plasma and 24 h-urine (Figure 18), and of Zn in blood plasma and 24 h-urine (Figure 19) decreased more or less considerably over time. The decrease in PCP body burden results from the German Pentachlorophenol Prohibition Ordinance, which came into force in 1989. The decrease in lead body burden is a direct consequence of the German Leaded Petrol Law of 1971 (amended in 1994). The decrease in Hg body burden seems to be due to a better dental status of the students, less use of dental amalgam, and, in the case of amalgam fillings, the use of non-γ-2 amalgam with better Hg binding. The increase in Cu in female blood plasma results primarily from the use of birth control pills.


Figure 9 Real-time monitoring: hexachlorobenzene (HCB) concentrations in blood plasma of 20–29-year-old German male and female students of the Environmental Specimen Bank for Human Tissues (ESB-Human).


Figure 10 Real-time monitoring: pentachlorophenol (PCP) concentrations in blood plasma and 24 h-urine of 20–29-year-old German male and female students of the Environmental Specimen Bank for Human Tissues (ESB-Human).


Figure 11 Real-time monitoring: concentrations of polychlorinated biphenyl congeners PCB-138, PCB-153, and PCB-180 in blood plasma of 20–29-year-old German students of the Environmental Specimen Bank for Human Tissues (ESB-Human).

Predictors of Chemical Body Burden in the Student Collective 2006

With the exception of cadmium (whole blood) and uranium (whole blood and blood plasma), all analyzed variables exhibited highly significant differences between males and females (p < 0.001), for example arsenic in 24 h-urine (59%, males > females) and copper in blood plasma (41%, males < females). Inconsistent matrix differences appear for copper, where males have approximately 10% higher levels in 24 h-urine, whereas females have 27% and 40% higher levels in blood plasma.


Figure 12 Real-time monitoring: arsenic (As) concentrations in 24 h-urine of 20–29-year-old German students of the Environmental Specimen Bank for Human Tissues (ESB-Human).


Figure 13 Real-time monitoring: cadmium (Cd) concentrations in whole blood and 24 h-urine of 20–29-year-old German students of the Environmental Specimen Bank for Human Tissues (ESB-Human).

Samples from Ulm differ from those of the other sampling sites (Greifswald, Halle/Saale, and Münster) because of significantly (p < …) different levels.

Fukushima Nuclear Disaster: Emergency Response to the Disaster

Some residents would have received doses > 5 mSv in the first year if they were not evacuated. After the Chernobyl accident of 1986, the Fukushima nuclear accident is the second most severe accident in the history of the nuclear industry.


Establishment of Local Headquarters and Relocation of the Headquarters to the Fukushima Prefectural Office

The Cabinet Office in Japan is responsible for planning basic disaster management policies and responding to large-scale disasters in collaboration with related government organizations. In the event of a large-scale disaster, the Cabinet Office is engaged in the collection and dissemination of accurate information, reporting to the Prime Minister, and establishing the emergency activities system, including the Government's Disaster Management Headquarters. The Ministry of Economy, Trade and Industry (METI) established the Nuclear Emergency Preparedness Headquarters and the onsite local nuclear emergency response headquarters, which are central to the implementation of countermeasures during a nuclear emergency. An Emergency Technical Advisory Body was established under the Nuclear Safety Commission of Japan (NSC Japan). Immediately after receiving notification under Article 10 of the Special Measures Law for Nuclear Emergency Preparedness at 15:42 on March 11, 2011, the government established a Regional Nuclear Emergency Response Headquarters at the offsite center. The local nuclear emergency response headquarters (local headquarters) was set up at 19:03 the same day, when an emergency was declared pursuant to Article 15 of the same law. Plant information systems, the Emergency Response Support System (ERSS), and the System for Prediction of Environmental Emergency Dose Information (SPEEDI) could not be used for a certain period at the offsite center. Subsequently, owing to the progress of the nuclear emergency, the rise in the radiation dose, and the shortage of fuel and food caused by the congested traffic around the site, it became difficult for the local office to continue effective operation at the offsite center. As an alternative facility, the local headquarters was moved to the Fukushima prefectural government building on March 15, 2011. There were serious problems in communication among the Cabinet Office, the Nuclear Emergency Response Headquarters Secretariat in the Nuclear and Industrial Safety Agency, and TEPCO during the early phase of the Fukushima accident, owing to the complex nature of the disaster. Ultimately, the Prime Minister declared the nuclear emergency and established the Nuclear Emergency Response Headquarters on the night of March 11, 2011 (about 5 h after the earthquake).

Initial Governmental Actions in Response to the Nuclear Disaster

After the Fukushima accident, the Japanese government needed to implement urgent countermeasures in response to the nuclear disaster. Government bodies collected information on the disaster situation and established various countermeasure offices in response to the nuclear emergency, as described below (Fig. 1). Owing to the potential impact of the accident at FNPP1, many residents living nearby had to be evacuated. After the declaration of a nuclear emergency, the Japanese government ordered the mandatory evacuation of inhabitants around FNPP1. Approximately 177,000 Fukushima residents were evacuated in response to the Fukushima radiological incident.

Fig. 1 Initial actions following the Fukushima Dai-ichi Nuclear Power Plant accident.


Evacuation from a 3 km radius around FNPP1 was ordered at 21:23 on March 11, 2011. As the accident worsened, the evacuation zone was extended and finally set at a radius of 20 km from FNPP1 within about 24 h after the reactor accident. Although some individuals in neighboring prefectures evacuated voluntarily, mandatory evacuation was ordered by the Japanese government only in Fukushima Prefecture. To protect individuals from damage to the thyroid gland, the Central Nuclear Emergency Response Headquarters directed on March 16, 2011, that stable iodine (potassium iodide, KI) be administered to evacuees younger than 40 years. However, most individuals in the affected areas did not receive KI because of the confusion caused by the complex disaster. Fukushima residents, including medical staff, were evacuated from the zone 20–30 km from the damaged reactor. Consequently, hospital inpatients and elderly individuals with special evacuation needs were left behind without sufficient medical support. Therefore, the Ministry of Health, Labour and Welfare (MHLW) ordered the evacuation of the 1700 patients sheltered (in-house evacuation) in hospitals and nursing homes within 20–30 km of the damaged reactor, which was carried out from March 15 to 18, 2011. There were 8 hospitals and 17 nursing care facilities within a 20 km radius of FNPP1. The evacuation of hospital inpatients and elderly individuals was one of the major problems related to the Fukushima accident. It was reported that > 50 patients died either during or soon after evacuation owing to medical problems and adverse evacuation conditions. The evacuation was carried out without sufficient medical support because of the tense emergency situation; medical personnel could not accompany patients during transportation. This underlines the danger of unprepared evacuation and the effectiveness of indoor sheltering for hospital inpatients and elderly individuals during the passage of a radioactive plume. The MHLW established disaster headquarters a few minutes after the earthquake and established regional headquarters in Iwate, Miyagi, and Fukushima Prefectures the next day. It also strengthened communication between local governments and the central government. The MHLW ordered Disaster Medical Assistance Teams (DMATs) to stand by; these teams consist of medical specialists trained to provide emergency treatment and patient transport during an emergency. In the initial stage, from March 11 to 22, 2011, about 380 DMATs were dispatched to provide emergency medical assistance at local hospitals and to transport a wide range of patients. The MHLW also dispatched a medical team to conduct contamination tests and to address concerns about residents' radiation exposure. On April 7, 2011, the MHLW disseminated a compact leaflet regarding the care of children and pregnant women with reference to radiation exposure.

Information Service

The national, social, and local media provided general information on the nuclear emergency situation. In response to public anxiety about radiation exposure, the Japanese government made efforts to provide scientific knowledge about the effects of radiation on humans. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) responded to requests for consultation by establishing a hotline that provided health information on radiation, and it set up a health consultation desk at the National Institute of Radiological Sciences (NIRS). Further, the MHLW responded to issues concerning the safety of food and tap water and the mental health of Fukushima residents and radiation workers, especially the psychological care of children, and provided counseling services and scientific information on its website. The National Center of Neurology and Psychiatry (NCNP) also created a web page providing information for health care workers and those who support victims. Furthermore, a "mental care team" (6 teams comprising 24 individuals in total) was dispatched to the affected areas upon the request of the MHLW. Despite these efforts, scientific communication was one of the major issues after the Fukushima accident. Evacuees around FNPP1 could not access sufficient information, and consequently deep fears increased public anxieties associated with radiation. Communication difficulties among local governments, scientific experts, and local citizens arose during the early radiation safety response. Confusing messages about declarations of safety and the health effects of radiation exposure in the initial phase caused severe difficulties in risk communication in later phases.

Implementation of Environmental Monitoring

The Japanese government established a number of measures to ensure the safety of residents, such as radiation monitoring of affected areas, food monitoring, monitoring of residents' health, and risk communication on the health effects of radioactive substances. According to the basic disaster prevention plan, local governments are in charge of environmental monitoring after a nuclear accident. TEPCO continued onsite measurements of dose rates and radioactive materials and reported the current situation, such as the radioactive material emission status, to the local nuclear emergency response headquarters. Owing to the complex nature of the disaster, with the earthquake, tsunami, and nuclear accident, 23 of the 24 monitoring posts in Fukushima Prefecture became unusable. On March 15, 2011, the staff of the offsite center, which was the nuclear accident response center in Fukushima Prefecture, had to be evacuated owing to adverse conditions. Under such circumstances, the MEXT assumed responsibility for environmental monitoring after March 16, 2011, as a result of coordination among the related government agencies. To improve the accuracy of SPEEDI predictions, NSC Japan advised the MEXT to improve monitoring, for example by measuring the concentration of radioactive material in suspended atmospheric dust. In addition, NSC Japan evaluated the results of the MEXT monitoring and explained the evaluation results to the media from March 25, 2011.


Information Sharing With the Public, and With the World Health Organization (WHO) and Its Member States

Receiving accurate information on damage, evacuations, and medical and logistic needs and supplies was crucial for residents, local governments, concerned organizations, and individuals across Japan to enable them to make proper decisions and take appropriate actions. The MHLW began issuing a situation report on March 11, 2011, the day the earthquake occurred, in both Japanese and English. In addition, the MHLW was committed to sharing timely and accurate information on damage and radiation contamination with WHO and its Member States through the International Health Regulations (IHR). The MHLW regularly updated the information about radioactive materials on the IHR event information site until May 31, 2011, with a particular focus on water and food; the MHLW also responded to enquiries. Member States disseminated this information through their embassies in Japan, in collaboration with the Japanese government and with each other.

Human Exposure Pathways Related to the Fukushima Accident

Human bodies are exposed to radioactive elements both externally and internally (Fig. 2). External exposure mainly occurs through contaminated soil. Emergency and recovery workers are often subjected to external exposure to gamma and beta radiation during their work at highly contaminated sites. Internal exposure, on the other hand, occurs through the intake of air, water, food, and other substances containing radioactive materials, via inhalation, oral intake, dermal absorption, and wound penetration. The general public may receive external doses from the decay of radionuclides in contaminated soil, or internal doses from the consumption of contaminated food and water.

Effects of Radiation on Humans

An absorbed dose to the whole body, or significant partial-body irradiation, of > 1000 mGy over a short time period causes acute radiation syndrome (ARS). Radiation victims develop vomiting, headache, diarrhea, fever, and confusion during the first 48 h of the prodromal phase. Subsequently, after a latent phase, patients present hematopoietic disorders, gastrointestinal disturbances, and cardiovascular disorders. ARS pertains to "tissue reactions" that have a threshold below which adverse effects do not occur, and its severity is determined by the radiation dose. Fetal abnormalities and temporary infertility in males occur at organ threshold doses > 100 mSv of local exposure. Furthermore, severe mental retardation has been reported among the atomic bomb survivors of Hiroshima and Nagasaki who experienced in-utero exposure of 120–200 mSv.

Fig. 2 Radioactive substances in the environment and health risks after a nuclear disaster.


The number of blood cells transiently decreases after exposure of the bone marrow to > 500 mGy, with a reduction of hematopoietic capacity. The radiation exposure threshold for cataracts was initially estimated at 1500 mGy; this value was recently revised to a lower threshold of 500 mGy according to the Life Span Study cohort of atomic bomb survivors of Hiroshima and Nagasaki. On the other hand, cancer induction and mutations are stochastic effects with no clear thresholds, and their probability is thought to depend on the absorbed dose. Among the atomic bomb survivors of Hiroshima and Nagasaki who were exposed to high doses of ionizing radiation, the incidence of leukemia increased within a few years of the bombing and peaked 6–7 years later, while solid cancer risks increased in persons over the age of 40 years, the so-called cancer-prone age. Epidemiological studies of cancer risks from natural radiation have been carried out in areas of high natural background radiation in Kerala, India, and Yangjiang in Guangdong Province, China. Chronic exposure at low dose rates, even at high cumulative doses, did not show an increase in cancer risks in these studies. Another study, on the risk of medical exposure and childhood cancer, also showed no evidence of an association. In contrast, a nationwide cohort study on the risk of childhood cancer due to background radiation, based on the Swiss National Censuses, suggested a positive association. Health risks associated with exposure to low levels of ionizing radiation have been a major concern following the Fukushima accident. However, the risks of exposures below around 100 mSv cannot be estimated directly from these epidemiological data. So far, the cancer risks associated with low-dose radiation remain uncertain because of insufficient scientific evidence, and further investigations are necessary to clarify the effect of radiation levels below around 100 mSv on cancer risks. Heritable genetic effects of radiation have been reported in various species, but not in humans. In experimental animal models, radiation-responsive genes were used as markers for the detection of radiation-induced mutations. The general germline mutation induction rate is likely to be much lower in humans than estimated in past specific-locus studies on other species.

Food and Drinking Water Safety

Based on the experience of the Chernobyl accident, food monitoring and restriction after the Fukushima accident enabled the government to mitigate internal radiation exposure from contaminated food. After the accident, the MHLW set provisional regulation values (PRVs) for radioactive materials in food and drinking water by adopting the "Indices for Food and Beverage Intake Restriction" in order to mitigate internal radiation exposure from the intake of contaminated food. The PRVs are based on a thyroid equivalent dose of 50 mSv/year for radioactive iodine and on an effective dose of 5 mSv/year, following the recommendations of the International Commission on Radiological Protection (ICRP) and others. The concentrations of radioactive materials in food were determined as follows. Radioactive iodine, tellurium, cesium, strontium, uranium, plutonium, and alpha-emitting transuranic radionuclides were considered target radionuclides. Foods were grouped into five categories: drinking water, milk and dairy products, vegetables, grains, and others (meat, eggs, fish, nuts, etc.), with an annual dose limit of 1 mSv for each food group. In this calculation, the average concentration in contaminated food was assumed to be half of the peak concentration. The MHLW established the food monitoring system using the manual prepared by the National Institute of Public Health in 2002, after the JCO criticality accident in Tokai village on September 30, 1999. Ingestion of radioactive tap water was avoided with the help of the Japanese Society of Radiation Safety Management; radioactivity levels in tap water were inspected periodically, and the guidelines for food and beverage intake restriction prescribed by NSC Japan were adopted. Several water companies instructed users to refrain from using tap water for babies when radiation levels continued to exceed regulatory values for several days in March. Some local governments, including the Tokyo metropolitan government, provided separate water supplies for infants during this period. After the Food Safety Commission of Japan (FSCJ) reported its risk assessment of radioactive nuclides in foods, the MHLW revised the PRVs into the present standard limits, applicable to the existing exposure situation, on April 1, 2012. The new radiological standards for foods are based on 1 mSv/year as the maximum permissible dose through food consumption. The target radionuclides were limited to long-lived radioactive materials, because 1 year had passed since the accident. Since uranium levels at the nuclear power plant site were almost the same as in the natural environment, uranium was excluded from the target radionuclides. The FSCJ reports concluded that particular care should be taken with food safety because children are more susceptible to radiation effects than adults. The safety of drinking water was a major concern after the accident; infant food, milk, and water were therefore treated as separate categories. Thus, foods were classified into four categories: drinking water, infant foods, milk, and general foods. The intervention level assigned to general foods was 100 Bq/kg of radioactive cesium (compared with 500 Bq/kg under the previous PRVs), based on food intake and dose coefficients for specific age categories.
The ratio of radioactive strontium to radioactive cesium was considered in deriving these levels. The basic concepts of the PRVs and of the present standard limits for food control in the Fukushima radiological emergency have been explained in other papers. Food with radioactivity levels exceeding these values should not be consumed or distributed in the market. In Japan, a dose of approximately 1 mSv/year is derived from food containing natural radioactive materials such as potassium-40 and polonium-210. The MHLW evaluated the impact of food contamination using the food monitoring data collected after the Fukushima accident (http://www.mhlw.go.jp/english/topics/2011eq/index_food_radioactive.html). The median total committed effective dose was estimated at 0.1 mSv (Fig. 3). However, some residents consumed highly contaminated local food or water before the food restrictions were implemented. Therefore, further investigations assessing the radiation dose received by each person might be worthwhile.
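The dose arithmetic underlying such standards is a simple summation: committed effective dose = Σ concentration (Bq/kg) × annual intake (kg) × ingestion dose coefficient (Sv/Bq). The sketch below uses invented concentrations and intakes; the Cs-137 coefficient is the ICRP adult ingestion value:

# Illustrative committed-effective-dose calculation for food intake.
# Concentrations and intakes are invented; 1.3e-8 Sv/Bq is the ICRP
# adult ingestion dose coefficient for Cs-137.

DOSE_COEFF_CS137 = 1.3e-8  # Sv per Bq ingested (adult)

foods = {
    # food: (assumed Cs-137 concentration in Bq/kg, assumed annual intake in kg)
    "drinking water": (5.0, 600.0),
    "general foods": (50.0, 300.0),
    "milk": (20.0, 40.0),
}

total_sv = sum(conc * intake * DOSE_COEFF_CS137 for conc, intake in foods.values())
print(f"committed effective dose: {total_sv * 1000:.3f} mSv/year")
# Calculations of this kind are used to verify that, even with food at
# the intervention levels, the annual dose stays below 1 mSv/year.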


Fig. 3 Natural background radiation (world average and Japan; components: cosmic rays, terrestrial radiation, food, and radon) and the estimated effective dose from radionuclides in food due to this accident.
Emergency Dose Limits for Radiation Workers

According to the ordinance on the prevention of ionizing radiation hazards, the dose limit for radiation workers is set at 100 mSv over 5 years, not exceeding 50 mSv in any single year, in a planned exposure situation. The corresponding equivalent-dose limits for the eye lens and skin are 150 mSv and 500 mSv, respectively. Emergency activities such as recovery of the cooling system, stabilization of the reactor, mitigation of radioactive material emissions to the environment, and water decontamination were indispensable tasks in the early phase of the emergency exposure situation after the Fukushima disaster. Following debates of the Radiation Council in Japan and the ICRP recommendation that an exposure dose < 250 mSv may not cause ARS, the emergency dose limit for radiation workers was transiently increased from 100 to 250 mSv on March 14, 2011, for emergency situations. The emergency dose limit for newly engaged emergency workers was separately set at 100 mSv from November 1, 2011. Dose limits for emergency workers, except for specialists highly trained and experienced in operating the reactor cooling systems and in maintaining the facilities for suppressing radioactive emissions, were restored to 100 mSv after December 16, 2011. Finally, from April 30, 2012, the same dose limit was applied to radiation workers as under a planned exposure situation. Similarly, the dose limit for decontamination work was set at 100 mSv over 5 years, not exceeding 50 mSv in any single year.
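The planned-exposure limits stated above combine a five-year budget with a single-year cap, which can be expressed compactly; a minimal sketch:

# Minimal sketch of the planned-exposure dose limits stated above:
# 100 mSv per 5 years and no more than 50 mSv in any single year.
from typing import Sequence

def complies(annual_doses_msv: Sequence[float]) -> bool:
    """Check a worker's last five annual effective doses (mSv)."""
    if any(d > 50.0 for d in annual_doses_msv):
        return False
    return sum(annual_doses_msv) <= 100.0

print(complies([10, 20, 15, 25, 20]))  # True: 90 mSv total, no year > 50
print(complies([45, 45, 5, 5, 5]))     # False: 105 mSv over 5 years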

Radiation Doses for Emergency Workers

Approximately 600 workers were involved in firefighting and other emergency work during the first day of the Chernobyl accident; ARS was reported in 134 of these emergency workers, and 28 died within the first 4 months. ARS was also reported in three workers exposed to high doses of neutron radiation in the JCO accident. In contrast, no ARS was reported after the Fukushima accident. By the end of October 2012, about 25,000 workers had been engaged in mitigation and other activities at the FNPP; about 15% of them were employed directly by the plant operator (TEPCO), while the rest were employed by contractors or subcontractors. According to their records, six emergency workers exceeded the dose limit of 250 mSv, and the maximum dose was 678.8 mSv during the first month of the emergency exposure situation. This worker's high internal exposure was presumably due to the improper use of the charcoal filter cartridge in his respiratory protective equipment. The average effective dose among emergency workers during the Fukushima incident was 12.4 mSv; 65% of the workers were exposed to a radiation dose of < 10 mSv, and almost 99% of the workers at the FNPP were exposed to < 100 mSv. Biodosimetry using the dicentric chromosome assay was carried out by the NIRS from March 21 to July 1, 2011, for the 12 emergency workers who were thought to have been exposed to a high dose of radiation. The results showed that the estimated maximum exposure dose for these workers was < 300 mGy, with a mean value of approximately 101 mGy. According to a report of the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), these 12 workers may have received a high absorbed dose to the thyroid, in the range of 2–12 Gy, owing to the inhalation of I-131. Approximately 2000 workers at FNPP1 were administered KI during the emergency response phase to protect their thyroids. Medical interviews were needed to prevent side effects of KI among workers with iodine hypersensitivity due to allergic reactions or thyroid disease.


The NSC of Japan advised that workers who were exposed to an equivalent dose of 100 mSv or more to the thyroid should take stable iodine in the form of 50 mg KI tablets: two tablets on the first day (100 mg of KI) and one tablet on each subsequent day, for a maximum of 14 days. Although most workers took fewer than 10 tablets, the maximum taken by any worker was 87. A thyroid function test was carried out for the 229 workers who either received KI continuously for 14 days or consumed > 20 tablets. Thyroid dysfunction, such as increased levels of thyroid-stimulating hormone and decreased levels of thyroxine, was observed only transiently in the emergency workers, and these parameters returned to normal values after KI distribution was terminated. The UNSCEAR report of 2013 pointed out the uncertainty of the dose assessment for workers involved in the accident; for instance, it identified research priorities regarding differences in the histories of individual workers, dose rates while workers rested at work, the use of protective measures, and the use of common personal dosimeters. Further, the report acknowledged the importance of estimating the dose to the eye lens of workers involved in the site mitigation strategy, with reference to exposure to beta rays from the high concentration of Sr-90. Furthermore, a dose reconstruction study initiated by the MHLW is in progress as part of the worker epidemiological research being conducted by the Radiation Effects Research Foundation (RERF), to improve dose estimates for workers with reference to both internal and external exposure; in addition, a probabilistic assessment of the uncertainty in worker doses arising from the uncertainty of the ingestion scenarios has been conducted. Future research plans include the creation of a database of information on emergency operations, performance testing of personal dosimeters for different irradiation geometries, and an examination of the effects of the use of protective devices and shielding. One problem of this epidemiological study is the low participation rate: by FY 2011, only 5465 individuals had participated out of the target population of about 20,000. Efforts to improve participation in this important investigation continue. First responders, including the Self-Defense Forces, police, and fire services, initially engaged in emergency response activities at the Fukushima accident, such as firefighting and plumbing work at the nuclear reactor, rescue of victims, searches for missing persons, and evacuation guidance for Fukushima residents. The Japan Cabinet Office reported the integral doses of 3000 first responders: the cumulative doses of most of these workers were below 5 mSv, < 5% were exposed to doses of 5–10 mSv, and none exceeded 100 mSv.

Occupational Health Tracking
The MHLW has tackled various issues such as radiation exposure monitoring, the implementation of emergency medical examinations, and the establishment of onsite medical care systems for radiation workers (Fig. 4). On May 17, 2011, the policy of "Immediate Actions for the Assistance of Nuclear Sufferers" (also called the "Government Roadmap") was presented for the long-term healthcare management of radiation workers. The MHLW established an occupational health tracking system to follow up all workers who engaged in emergency work, in order to provide long-term healthcare (Fig. 4), and on October 11, 2011 it published and disseminated the Guidelines on Maintaining and Promoting the Health of Emergency Workers at the FNPP of TEPCO. These guidelines describe methods for sustaining long-term health, the development of an occupational health tracking database, and many other forms of support provided by the government for emergency workers. All emergency workers were required by law to receive basic medical examinations during and after their involvement in radiation-related work. Workers exposed to an effective dose of > 50 mSv underwent eye lens examinations, and those exposed to an effective dose of > 100 mSv underwent thyroid tests and cancer screenings. The MHLW and TEPCO are monitoring 97.6% of all emergency workers (18,874 out of 19,346) for long-term health control and cancer screening based on these guidelines. However, an information system for offsite workers has not yet been established, owing to requests from the workers themselves, explained mainly by their low radiation exposure levels. The Industrial Accident Compensation Insurance system for radiation workers in Japan is briefly introduced here. By September 2018, accident compensation insurance had been approved for three leukemia patients, one lung cancer patient, and one thyroid cancer patient following the Fukushima accident, based on judgments that considered these cases as occurring in the course of employment (COE) and arising out of employment (AOE). Approval is an administrative decision to provide insurance against an industrial accident, injury, or disease occurring "in the course of employment." The Labor Standards Bureau notification for leukemia is determined as follows: myelocytic leukemia and lymphocytic leukemia that develop at least 1 year after initial exposure are eligible for insurance in workers exposed to radiation doses exceeding 5 mSv multiplied by the number of years engaged in radiation-related work. It should be noted that the approval of Industrial Accident Compensation Insurance is an administrative decision that is not based on scientific evidence regarding radiation-induced leukemia; the applicability of this compensation scheme to radiation risks below around 100 mSv is still being argued and is under intensive investigation, as described earlier.
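The leukemia notification criterion described above amounts to a latency condition plus a cumulative-dose condition; a minimal worked sketch follows (names and example values are hypothetical, and this does not reproduce the administrative procedure):

# Illustrative encoding of the Labor Standards Bureau leukemia criterion described
# above: onset at least 1 year after first exposure, and cumulative dose exceeding
# 5 mSv multiplied by the number of years engaged in radiation-related work.
def meets_leukemia_criterion(cumulative_dose_msv, years_engaged, years_since_first_exposure):
    latency_ok = years_since_first_exposure >= 1
    dose_ok = cumulative_dose_msv > 5 * years_engaged
    return latency_ok and dose_ok

# Example: 60 mSv accumulated over 8 years of radiation work, with onset 9 years
# after first exposure -> 60 > 5 * 8 = 40, so the dose condition is met.
print(meets_leukemia_criterion(60, 8, 9))  # True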

Radiation Protection of Remediation/Decontamination Workers
The Act on Special Measures Concerning the Handling of Environmental Pollution by Radioactive Materials Discharged by the Nuclear Power Plant Accident Associated with the Tohoku District off the Pacific Ocean Earthquake of March 11, 2011 came into force on January 1, 2012. The Japanese Government decided to carry out decontamination works (e.g., the clean-up of buildings and the remediation of contaminated land) in order to reduce the radiation exposure of residents, in the interest of radiological health, and to restore the environment to its original condition, in the interest of human rights and mental health care.

Fig. 4 Long-term health care for emergency workers.

Over 30,000 TEPCO workers and subcontractors were involved in the clean-up and recovery activities after the accident. Measures to prevent radiological contamination of the workers, and management of the waste resulting from decontamination works, are needed. Although radiation doses to workers were not very high, owing to the self-absorption of radiation within the waste, the government needs to provide sufficient radiation protection for decontamination workers: employers should manage the radiation exposure from decontamination works and provide workers with opportunities to improve their knowledge of safety and health. Decontamination should be started only after dose rates have been measured, and the decontamination plan for areas with dose rates higher than 2.5 μSv/h (equivalent to 5 mSv per year) must be submitted to the relevant labor standards inspection office.

Dose Estimation Among the Public
Fukushima Prefecture launched the Fukushima Health Management Survey to support and promote long-term healthcare for Fukushima residents. This cohort study, which enrolled all Fukushima residents, consists of a basic survey to estimate external radiation doses, internal radiation dose assessment through whole-body counting (WBC), thyroid ultrasound examination, and a comprehensive health check (Fig. 5). The purpose of the basic survey is to estimate external and internal radiation exposure among the public in Fukushima Prefecture; it was implemented as a part of the Fukushima Health Management Survey from the end of May 2011. The NIRS developed an external dose estimation system based on behavioral records and chronological ambient-dose-rate maps of Fukushima Prefecture. Almost all individuals (99.9%) received an external effective dose of < 10 mSv. The thyroid doses of children in Iwaki City, Kawamata Town, and Iitate Village of Fukushima Prefecture were surveyed from March 24 to 30, 2011; although high background radiation prevented measurement in some cases, the absorbed doses to the thyroids of younger individuals were limited in the Fukushima accident. Internal radiation levels of Fukushima residents in the areas where the possibility of internal exposure was relatively high were measured using WBC units within 2 years after the disaster. Almost all individuals (99.9%) received a committed effective dose of < 1 mSv; only 26 of the 90,024 individuals tested exhibited doses over 1 mSv, and the maximum internal exposure level was 3 mSv for the assumed annual intake.
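The committed-dose figures quoted here follow the standard intake-times-coefficient relation; as a sketch (the Cs-137 coefficient is the ICRP adult ingestion value, cited for orientation only):

E(50) = \sum_j I_j \, e_j(50)

where I_j is the intake of radionuclide j (Bq) and e_j(50) is its committed effective dose coefficient (Sv Bq^{-1}). For example, with e(50) \approx 1.3 \times 10^{-8} Sv Bq^{-1} for ingested Cs-137 in adults, an intake of about 7.7 \times 10^{4} Bq corresponds to roughly 1 mSv.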

Fig. 5 Framework of the Fukushima Health Management Survey.

Thus, Fukushima-area residents and emergency responders were apparently not exposed to radiation doses higher than the thresholds for tissue reactions. International organizations such as the WHO, UNSCEAR, and the International Atomic Energy Agency (IAEA) reported assessments of the health risks of the Fukushima accident. By September 2011, the WHO had urgently assessed the health risks to residents from radiation exposure in the first year after the Fukushima accident. This report concluded that any increase in disease due to the radiation released by the accident would be undetectable, and that no increase in risk was expected in neighboring countries or in Japan, except in some areas of Fukushima Prefecture. UNSCEAR attempted to assess the situation as realistically as possible by examining the scientific information available in the second year after the accident. Although UNSCEAR reported a possibility that the risk of thyroid cancer could increase among the children most exposed to radiation, the committee noted that the occurrence of a large number of radiation-induced thyroid cancers in Fukushima Prefecture could be discounted, because absorbed doses to the thyroid after the Fukushima accident were substantially lower than those reported after the Chernobyl accident. Both reports nevertheless contain uncertainties in their dose assessments, owing to the limitations of the actual data, and UNSCEAR has continued to revise its radiation risk assessment as further information is obtained. Following the Chernobyl accident, no actions to restrict contamination and the exposure of affected individuals were implemented, especially in the early period; this caused an increased risk of thyroid cancer, owing to high internal radiation exposure in early childhood (before the age of 5 years) through the consumption of contaminated milk containing I-131. In contrast, following the Fukushima accident, thyroid ultrasound examinations were conducted for all Fukushima children aged 18 years or younger, to identify and treat thyroid cancer in children. In this survey, thyroid cancers were found in children in their late teenage years, but no cases were found in the most vulnerable group, children aged under 5 years at the time of the accident. The IAEA report therefore concluded that the thyroid abnormalities detected in the survey were unlikely to be associated with radiation exposure due to the accident. On the other hand, one paper claimed to demonstrate a radiation-induced increase in thyroid cancer incidence, but it did not show any clear dose–response trend or adjust for potential confounding factors. Because of the weaknesses and inconsistencies in this study, the UNSCEAR report did not consider that the study by Tsuda et al. presented a serious challenge to the findings of its 2013 report.

Safety and Revitalization of Fukushima Prefecture
Resumption of the usual daily life of Fukushima residents has long been impossible owing to the disruption of local communities. Although the average air dose rate in Fukushima has decreased remarkably compared with the levels observed in November 2011, 370 km2 of the Fukushima area (3% of Fukushima Prefecture) is still under evacuation orders. Fukushima disaster victims turned to the government, scientists, and the media for an appropriate initial response to the nuclear accident; the shortfalls of that initial response unduly amplified suspicion and psychological anxiety related to various issues, including the effect of radiation on health and concerns for the health of children and grandchildren. Most families with young children evacuated soon after the disaster, whereas the elderly preferred to stay in their hometowns. Further, the accelerated aging of the population, also called the super-aging of society, has caused various social issues in present-day Fukushima. To prevent the isolation of evacuees, the government provided medical treatment, nursing care, and mental health care through support centers and consulting staff. The Reconstruction Agency was established in 2012, after the Great East Japan Earthquake, to implement countermeasures related to the provision of health and living support, the restoration of towns and housing, the revival of industries and livelihoods, and the revitalization and reconstruction of Fukushima.


New housing was rebuilt on higher land to prevent the effects of future tsunamis. Housing reconstruction is in progress, and approximately 14,000 private houses have been or are being rebuilt for the restoration of towns and housing. According to a report of the Reconstruction Agency, by August 2017 the number of evacuees had decreased from over 470,000 to 90,000. However, many evacuees of the Fukushima accident continue to live in temporary housing. Obviously, long-term efforts on low-dose radiation risk assessment are necessary, and several challenges related to knowledge sharing and stakeholder collaboration with local governments need to be addressed. The gap between Fukushima and other prefectures in the severity of damage from this disaster also continues to be a major problem, and continued support for livelihood rehabilitation in Fukushima is required.

Decommissioning of TEPCO's FNPP
Fuel debris (solidified melted fuel assemblies, control rods, and many other reactor materials) remains in Units 1–3 of TEPCO's FNPP. Continuous water injection to cool the nuclear reactors has contributed to maintaining a stable state; however, countermeasures to handle contaminated water and waste are essential during the decommissioning of the FNPP, and completing the decommissioning will be a long uphill road over the next several decades. In December 2011, the Japanese government and TEPCO developed the Mid-and-Long-Term Roadmap for the complete decommissioning of TEPCO's FNPP Units 1–4. This roadmap is revised continually according to the current situation of the nuclear reactors and newly acquired knowledge, in order to promote the decommissioning process. The Mid-and-Long-Term Roadmap consists of three phases (Fig. 6). The first phase commenced in December 2011, after achievement of the cold shutdown state of the nuclear reactors and the prevention of the release of large amounts of radioactive materials; it focused on the removal of the fuel in the spent fuel pools over 2 years. The second phase, currently being implemented, aims to design, examine, and select an appropriate fuel debris retrieval method. The last phase entails the complete dismantling of the facility. It is difficult to ascertain the detailed conditions inside the nuclear reactors owing to the presence of highly contaminated radioactive material. The decommissioning of TEPCO's FNPP is an arduous task of a kind never conducted before, and the project is therefore being implemented under severe circumstances involving substantial uncertainty. In response to such uncertainty, safety must be ensured and preparations made for any unexpected accident.

Preparedness for Future Public Health Emergency: Addressing a Combined Disaster
Lessons learnt from the Fukushima experience relate to the various actions taken by regulatory bodies and international organizations. Many domestic and overseas researchers have contributed to the process by providing reliable scientific evidence on the effects of the nuclear disaster on human health. The triple combined disaster of an earthquake, a tsunami, and a nuclear power plant accident made the disaster relief operations more complex and difficult. To counter radiological and nuclear emergencies, timely detection of, and effective response to, potential radiological and nuclear hazards, events, and emergencies are required. Such activities should be implemented in collaboration with the sectors responsible for radiation emergency management. The Basic Disaster Management Plan for Japan included plans for natural disasters (earthquake, tsunami, storm, heavy rain, and volcanic eruption) and accidental disasters (maritime, aviation, railroad, and nuclear disasters); however, a combined disaster was not included. Further, resources and knowledge related to radiation protection were limited among those who respond to natural disasters. Monitoring of residents was limited because of concerns about causing disturbance.

Fig. 6 Three phases of the Mid-and-Long-Term Roadmap toward the decommissioning of TEPCO's FNPP Units 1–4. December 2011: stabilization of the nuclear reactors (achievement of the cold shutdown state; prevention of the release of radioactive materials). First phase (2 years from the start of the roadmap, to November 2013): countermeasures for the removal of the fuel in the spent fuel pools. Second phase (within 10 years of the start of the roadmap, to December 2021): determination of fuel debris retrieval policies for each unit. Last phase (30–40 years from the start of the roadmap): start of fuel debris retrieval at the first implementing unit and completion of the decommissioning of the reactors.

On the other hand, expected resources such as electricity and lines of communication at offsite centers were not available for responding to the nuclear emergency. Furthermore, these offsite centers were severely contaminated by the huge amount of radioactive material emitted from the damaged nuclear plants. The MHLW remains committed to sharing the lessons learnt from this new category of emergency with the international community to strengthen global disaster preparedness.

See also: Fukushima Nuclear Disaster: Monitoring and Risk Assessment; Fukushima Nuclear Disaster: Multidimensional Psychosocial Issues and Challenges to Overcome Them; Radiation Exposures Due to the Chernobyl Accident; Thyroid Cancer Associated with the Chernobyl Accident.

Further Reading
IAEA, 2011. Evaluation of the amount released into the atmosphere from the NPS. Additional report of the Japanese Government to the International Atomic Energy Agency: The accident at TEPCO's Fukushima Nuclear Power Stations (second report). http://www.iaea.org/newscenter/focus/fukushima/japan-report2/ (Accessed 30 January 2018).
ICRP, 2012. ICRP statement on tissue reactions and early and late effects of radiation in normal tissues and organs: Threshold doses for tissue reactions in a radiation protection context. ICRP Publication 118.
MEXT, 2011. Results of airborne monitoring by the Ministry of Education, Culture, Sports, Science and Technology and the U.S. Department of Energy. http://www.mext.go.jp/component/english/__icsFiles/afieldfile/2011/05/10/1304797_0506.pdf (Accessed 30 January 2018).
Nakamura, N., 2018. Why genetic effects of radiation are observed in mice but not in humans. Radiation Research 189 (2), 117–127.
Shimura, T., Kunugita, N., 2018. Lessons learned on public health from the Fukushima Daiichi nuclear power plant accident. Journal of the National Institute of Public Health 67 (1). https://www.niph.go.jp/journal/data/67-1/j67-1.html.
UNSCEAR, 2014. Developments since the 2013 UNSCEAR report on the levels and effects of radiation exposure due to the nuclear accident following the Great East-Japan earthquake and tsunami. UNSCEAR 2013 Report.
WHO, 2013. Health risk assessment from the nuclear accident after the 2011 Great East Japan earthquake and tsunami, based on a preliminary dose estimation. http://apps.who.int/iris/bitstream/10665/78218/1/9789241505130_eng.pdf (Accessed 30 January 2018).

Fukushima Nuclear Disaster: Monitoring and Risk Assessment
Ichiro Yamaguchi and Naoki Kunugita, National Institute of Public Health, Saitama, Japan
© 2019 Elsevier B.V. All rights reserved.

Introduction
The Great East Japan Earthquake Caused the Accident at the Fukushima Nuclear Power Plants
The Great East Japan Earthquake occurred at 14:46 on March 11, 2011. It is the biggest disaster in the history of Japan and the fourth largest earthquake in the world since 1900. The earthquake caused a large tsunami, and together these disasters caused a death toll of 19,630, with 2569 people still missing as of March 2018. These disasters also caused the Tokyo Electric Power Company (TEPCO) Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident.

Emergency Environmental Monitoring Response After the Accident
At 16:36 on March 11, the government set up a nuclear accident emergency response office at the prime minister's official residence. The Emergency Response Support System (ERSS) monitors the condition of a nuclear reactor and predicts the progress of an accident during a nuclear emergency. However, it suffered an error in its data transmission function immediately after the accident, so the necessary information could not be obtained from the plant and the relevant functions of the system could not be used. The System for the Prediction of Environmental Emergency Dose Information (SPEEDI) rapidly predicts atmospheric concentrations of radioactive materials and the resulting radiation doses in emergency situations in which large amounts of radioactive material may be released from nuclear reactor facilities; the Nuclear Safety Technology Center therefore shifted SPEEDI into emergency mode. SPEEDI predicts the distribution of gamma-ray dose rates (absorbed dose to air) from radioactive nuclides, including noble gases, and is expected to calculate forecasts by combining emission source information (radiation monitoring data transmitted from the reactor facility) with weather conditions from the Japan Meteorological Agency and terrain data. During this accident, however, it could not quantitatively predict the concentrations of atmospheric radioactive materials or the air dose rates, because the radioactive source information could not be obtained through the ERSS; only calculated results for a unit release of radioactive nuclides could be shown. On March 16, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) took responsibility for managing the implementation of environmental monitoring and publishing the results. Japan's Nuclear Safety Commission (NSC) was responsible for evaluating the monitoring information.
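Although SPEEDI could not be driven by actual source-term data, dispersion calculations of this kind are linear in the source term, so unit-release results can in principle be rescaled once an emission estimate becomes available. The following is a minimal illustrative sketch of that scaling idea, with hypothetical numbers; it is not the SPEEDI code.

# Dispersion is linear in the source term, so a concentration field computed for a
# unit release (1 Bq/h) can be rescaled by an estimated emission rate.
# The field below is a hypothetical 3 x 3 grid of unit-release concentrations.
unit_release_field = [  # Bq/m3 per (Bq/h) released
    [1.0e-9, 5.0e-10, 2.0e-10],
    [4.0e-10, 2.0e-10, 1.0e-10],
    [1.0e-10, 5.0e-11, 2.0e-11],
]
estimated_emission_rate = 1.0e12  # Bq/h, hypothetical source term

scaled_field = [[c * estimated_emission_rate for c in row] for row in unit_release_field]
print(scaled_field[0][0])  # predicted concentration (Bq/m3) at the first grid cell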

Monitoring Overview
The recorded environmental monitoring data are available from the website of the Nuclear Regulation Authority (http://radioactivity.nsr.go.jp/en/). As of February 2012, there were 2700 radiation-monitoring posts measuring ambient radiation doses in Fukushima, with all data shown in real time on the website (Fig. 1); the total number of real-time radiation monitoring posts was around 3700 in 2016. Because the Fukushima Dai-ichi nuclear accident occurred simultaneously with the earthquake and tsunami, 23 of the 24 radiation monitoring posts in the Fukushima prefecture became unusable (although they are operated automatically under normal conditions), so decision-making became very difficult. As a result, each ministry and local government (such as the Fukushima Prefecture) had to focus on responding to the earthquake disaster, including monitoring.

Wide Area Dose Rate Monitoring
From March 15, MEXT conducted overland monitoring around the FDNPP by measuring ambient dose rates, using up to 15 monitoring cars, to investigate the dispersal and diffusion of radioactive materials in the overland area beyond 20 km from the FDNPP. This was conducted in collaboration with the Japan Atomic Energy Agency (JAEA), the Fukushima Prefecture, the National Police Agency, the Ministry of Defense, and electric power companies. In addition, MEXT estimated the cumulative radiation dose for 1 year after the accident, based on data such as the monitored ambient dose rates, and reported the dose rate map to the NSC of Japan on April 10, 2011 (it was released by the Nuclear Emergency Response Headquarters on April 11, 2011); this map was utilized in establishing the Deliberate Evacuation Area. Based on wind direction and topographical features, MEXT selected main monitoring points in each direction from the FDNPP and took periodic measurements at the same points. It released a plan to improve monitoring activities on March 21, performed 24-h monitoring of the cumulative dose rates using portable radiation survey meters at 15 points from March 23, and from April 12, 2011 started measuring the cumulative ambient radiation doses in the Fukushima Prefecture.


Fig. 1 Real-time monitoring posts. The upper figure shows the posts as of April 2011; the lower figure shows the posts as of April 2018.


Then, on April 13, 2011, MEXT announced the results of the car-borne survey conducted in collaboration with the Fukushima Prefecture and the JAEA in Minami Soma City, Iitate Village, Namie Town, and elsewhere. It also announced a policy of disclosing a "cumulative radiation dose map" reflecting the latest data, to share with residents (Fig. 2). In this observation, the highest value (170 μSv h−1 on March 17, 2011) was detected at five points located 30 km northwest of the FDNPP. In addition, the highest value of 330 μSv h−1 was observed from 20:40 to 20:50 on March 15, at a point approximately 20 km northwest of the FDNPP. MEXT then assessed the accumulation of radioactive materials on the ground (including in the planned evacuation areas) in collaboration with the Ministry of Defense, TEPCO, the U.S. Department of Energy (hereinafter "U.S. DOE"), and relevant research organizations such as the JAEA, using aircraft carrying radiation-measuring instruments to monitor the field. Thus, from April 6, 2011, MEXT and the U.S. DOE investigated the wide-ranging fallout of radioactive materials and evaluated the accumulation of radiation doses in the planned evacuation areas (Fig. 3). From May 18, 2011, MEXT conducted a second airborne surveillance within a radius of 80–100 km of the FDNPP. It also conducted a third airborne surveillance within a radius of 80 km of the FDNPP from May 31, 2011 in collaboration with the

Fig. 2 Example of car-borne monitoring.

Fig. 3 Airborne monitoring conducted by the United States.

Fig. 4 Ambient dose rates monitored from aircraft, shown as maps with 10, 20, and 30 km range rings around the FDNPP; the legend bands run from 0.1–0.2 up to >19.0 μSv h−1. Source: Nuclear Regulation Authority.

Ministry of Defense. They also collaborated with the U.S. Department of Defense to analyze the data from the monitoring activities. Further, during the period from March 24 to April 1, 2011, a Department of Defense aircraft equipped with a dust sampler measured the concentrations of radioactive materials in the atmosphere at 5000 ft, from the Ibaraki Prefecture to the Niigata Prefecture. MEXT and the U.S. DOE then analyzed the spectral energy inherent to each nuclide using a NaI gamma-ray spectrometer and conducted a nuclide analysis of the gamma rays observed on the ground. In addition to confirming the degree of accumulation of radioactive cesium on the ground surface, the accumulation of radioactive iodine was also confirmed through a more sophisticated analysis. The trend of ambient dose rates monitored by the airplanes is shown in Fig. 4.
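Converting a dose rate measured at flight altitude to a ground-level (1 m) value is commonly done with an empirical exponential altitude-attenuation factor. The sketch below illustrates the idea only; the attenuation coefficient is a placeholder, not a value from the MEXT/DOE processing.

import math

# Convert a dose rate measured at flight altitude to an estimated value at 1 m
# above ground, assuming a single-exponential attenuation model:
#   D_ground = D_altitude * exp(mu * h)
# mu (per metre) is determined empirically from test flights at several altitudes;
# the value used here is a placeholder for illustration only.
def ground_dose_rate(d_altitude_usv_h, altitude_m, mu_per_m=0.007):
    return d_altitude_usv_h * math.exp(mu_per_m * altitude_m)

print(round(ground_dose_rate(0.5, 300), 2))  # 0.5 uSv/h at 300 m -> ~4.08 uSv/h at 1 m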

Air Dust Monitoring
Measurement of the concentrations of radioactive substances in atmospheric dust (Bq m−3) was started by MEXT on March 18. Atmospheric dust and environmental samples were measured using a germanium semiconductor detector, with a limited preset counting time that depended on the sample, under high-background circumstances.
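The high-background circumstances noted above directly raise the detection limit of such gamma-spectrometric measurements. One standard way to quantify this is Currie's detection limit; a minimal sketch with illustrative values follows (the function name and inputs are hypothetical).

import math

# Currie detection limit for a counting measurement with paired background counting:
#   L_D = 2.71 + 4.65 * sqrt(B)   (counts), for 5% false-positive/false-negative rates.
# Dividing by efficiency, gamma emission probability, and counting time converts
# the limit to an activity (Bq). All numbers below are illustrative.
def minimum_detectable_activity(background_counts, efficiency, emission_prob, live_time_s):
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * emission_prob * live_time_s)

# Example: 400 background counts, 2% full-energy-peak efficiency, 85% emission
# probability (Cs-137, 662 keV), 1000 s live time -> ~5.63 Bq.
print(round(minimum_detectable_activity(400, 0.02, 0.85, 1000), 2))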

Measures for Drinking Water
On March 19, the Ministry of Health, Labor and Welfare (MHLW) notified the water supply facilities in each prefecture (and the prefectural waterworks operators) to implement intake restrictions in case the radioactive substances contained in tap water exceeded the guidance values set by the NSC. In addition, the MHLW disclosed the measurement results for tap water provided by the related local governments. On April 4, 2011, the MHLW formulated a "Water Quality Monitoring Policy for the Future Tap Water" and requested local governments to conduct tap water inspections, mainly in the 10 prefectures around Fukushima Prefecture; it also asked the relevant scientific societies to assist the water operators in taking the measurements. The MHLW promptly announced the results of the tests for radioactive substances in food and tap water, and issued instructions on distribution and intake restrictions, thereby appropriately setting and publishing regulatory values.
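As a worked illustration of this screening logic, a measured tap-water sample can be compared against the provisional values later shown in Table 1 (radioactive iodine 300 Bq kg−1, radioactive cesium 200 Bq kg−1), together with the stricter 100 Bq kg−1 iodine guidance applied for infants; the function below is a hypothetical sketch, not the MHLW procedure.

# Screen a tap-water sample against the provisional regulation values (Table 1):
# radioactive iodine 300 Bq/kg (100 Bq/kg where the water is used for infants),
# radioactive cesium 200 Bq/kg.
def tap_water_restricted(iodine_bq_kg, cesium_bq_kg, for_infants=False):
    iodine_limit = 100 if for_infants else 300
    return iodine_bq_kg > iodine_limit or cesium_bq_kg > 200

print(tap_water_restricted(210, 15))                    # False for general use
print(tap_water_restricted(210, 15, for_infants=True))  # True: exceeds infant guidance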


Measures for Agricultural Foodstuffs
Because radioactive substances were detected in the surroundings of the FDNPP following the accident, the MHLW announced provisional regulation values (PRVs) for foods on March 17, implementing the guideline values on food intake restriction published before the accident (Article 6, item 2 of the Food Sanitation Law), as shown in Table 1. As shown in Table 2, the derived intervention levels of radioactive cesium in foods adopted by Japan were harmonized with international standards in terms of risk management. To derive these standards, the emission ratio of each radionuclide from the FDNPP and the transfer factors of each radionuclide to each food were considered. The available data for fish products were limited, so in setting the new standards the fraction of the diet affected by radioactive cesium was assumed to be 50%. Under these conditions, the maximum permissible concentration for each food category was derived, taking into consideration food consumption and the dose conversion coefficients for each age category (Fig. 5). The number of foods monitored until March 2012 is shown in Table 3; the number monitored after implementation of the present standard limits, from April 1, 2012 to January 23, 2018, is shown in Table 4; and the violation ratios of the regulatory values for radioactive materials in foods, by month, are shown in Fig. 6. Effective doses due to food consumption from March 2011 to December 2012 are shown in Table 5, which compares the effects of the countermeasures virtually, by simulation. Committed effective doses due to the intake of radiocesium after the Fukushima nuclear power plant accident are shown in Fig. 7, comparing 2011 through 2014. Even immediately after the accident, ingestion doses were limited, owing to the countermeasures and to the particular circumstances of an accident caused by a severe earthquake, which prevented local food consumption at that time. The MHLW disclosed the inspection results collected from local governments. For food items exceeding the PRVs, if the area over which the PRVs were exceeded was extensive, the Prime Minister, as Head of the Nuclear Emergency Response Headquarters, ordered the prefectural governor to impose shipment restrictions according to Article 20, Paragraph 3 of the Act on Special Measures Concerning Nuclear Emergency Preparedness; in particular, intake restrictions were issued for items with very high concentrations. In addition, on March 25, April 26, and May 6, 2011, the Ministry of Agriculture, Forestry and Fisheries (MAFF) notified related parties of how to dispose of vegetables and raw milk (including distribution-restricted vegetables, etc.) in which radioactive materials had been detected, based on technical advice from the Emergency Technical Advisory Body of the NSC.

Table 1 Derived intervention levels for radioactive materials in foods adopted in Japan after the Fukushima nuclear accident: provisional regulation values (Bq kg−1) under the Food Sanitation Law (Law #233, 1947)

Radioactive iodine (representative nuclide of mixed nuclides: 131I): drinking water, 300; milk and dairy products(1), 300; vegetables (except root vegetables and potatoes), 2000; seafood, 2000
Radioactive cesium: drinking water, 200; milk and dairy products, 200; vegetables, 500; grains, 500; meat, eggs, fish, etc., 500
Uranium: baby food, 20; drinking water, 20; milk and dairy products, 20; vegetables, 100; grains, 100; meat, eggs, fish, etc., 100
Plutonium and alpha-emitting transuranic nuclides (sum of the radioactivity concentrations of 238Pu, 239Pu, 240Pu, 242Pu, 241Am, 242Cm, 243Cm, 244Cm): baby food, 1; drinking water, 1; milk and dairy products, 1; vegetables, 10; grains, 10; meat, eggs, fish, etc., 10

(1) Instructions should be issued not to use milk containing over 100 Bq kg−1 of radioactive materials for baby formula or for drinking by babies.
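The derivation summarized above (a reference dose level combined with food consumption, dose coefficients, and the affected fraction) can be illustrated with a back-of-the-envelope calculation; all parameter values below are placeholders rather than the figures actually adopted by the MHLW.

# Derive a permissible concentration (Bq/kg) from an annual reference dose level:
#   C_limit = dose_limit / (annual consumption * dose coefficient * fraction affected)
def derived_concentration_limit(dose_limit_sv, consumption_kg_per_y,
                                dose_coeff_sv_per_bq, fraction_affected):
    return dose_limit_sv / (consumption_kg_per_y * dose_coeff_sv_per_bq * fraction_affected)

# Illustrative only: 1 mSv/y reference level, 500 kg/y of general foods,
# 1.3e-8 Sv/Bq (adult ingestion coefficient for Cs-137), 50% of food affected.
print(round(derived_concentration_limit(1e-3, 500, 1.3e-8, 0.5)))  # ~308 Bq/kg

The adopted limit of 100 Bq kg−1 for general foods is more conservative than this single-case estimate because the most restrictive age and sex category is adopted (Fig. 5).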

Table 2 Derived intervention levels of radioactive cesium in foods (Bq kg−1) adopted by Japan, compared with the United States, the European Union, and the Codex Alimentarius Commission

Food category              USA(a)       EU(b)        CAC(c)       Japan, provisional regulation values(d)   Japan, present standard limits
Drinking water             1200         1000         1000         200                                       10
Milk                       1200         1000         1000         200                                       50
General foods              1200         1250         1000         500                                       100
Infant foods               1200         400          –            –                                         50
Intervention level(e)      5 mSv/year   1 mSv/year   1 mSv/year   5 mSv/year                                1 mSv/year
Fraction of food affected  30%          10%          10%          50%                                       50%(f)

(a) These criteria are thought to apply under emergency exposure situations (Accidental Radioactive Contamination of Human Food and Animal Feeds: Recommendations for State and Local Agencies). (b) Council Regulation (Euratom) 2016/52 of 15 January 2016 laying down maximum permitted levels of radioactive contamination of food and feed following a nuclear accident or any other case of radiological emergency, and repealing Regulation (Euratom) No 3954/87 and Commission Regulations (Euratom) No 944/89 and (Euratom) No 770/90. (c) Codex General Standard for Contaminants and Toxins in Food and Feed (Codex Stan 193-1995). (d) Foods were grouped into five categories: (1) drinking water, (2) milk and dairy products, (3) vegetables, (4) grains, and (5) meat, eggs, fish, and others. (e) For radionuclides in foods, the WHO proposed 5 mSv/year for emergency exposure situations, and the IAEA indicated 1 mSv/year for existing exposure situations, as reference levels. (f) The fraction of food affected is 100% for milk and infant foods.

Fig. 5 Derived radioactive concentrations for general foods for each age category, adapting a reference level of 1 mSv/year.

Table 3 Number of foods monitored until March 2012

Food category            Number of tests   Number of violations
Vegetables               21,121            451
Fishery products         9408              245
Milk and dairy products  2991              23
Meat, eggs               94,155            286
Grains                   5553              2
Others                   3808              197
Subtotal                 137,036           1204

The Nuclear Emergency Response Headquarters established inspection plans and food distribution restrictions by reviewing the accumulated inspection results. Specifically, based on technical advice from the NSC, the headquarters announced on April 4, 2011 that (1) the boundary of a restricted distribution area would basically be the prefectural boundary, with the prefecture and/or local governments managing these areas, and (2) inspection would be conducted once a week (basically covering several municipalities) in each restricted distribution area, including for the procedure for lifting a restriction. After April 8, the distribution restrictions for items and areas that met these conditions were lifted.

Table 4 Number of foods monitored after implementation of the present standard limits, from April 1, 2012 to January 23, 2018

Food category        Number of tests            Number of violations
Agricultural products  206,384                  1242
Livestock products     1,424,202 (1,413,999)(1)  4 (2)(1)
Fishery products       122,728                  1509
Milk, infant foods     23,572                   0
Wild animal meat       7967                     1898
Drinking water         5046                     13
Others                 55,814                   212
Subtotal               1,845,713                4878

(1) The number of beef samples is shown in parentheses.

Fig. 6 Violation ratios of the regulatory values for radioactive materials in foods, by sampling month (March 2011 to March 2017). The open circles, squares, triangles, X marks, and crosses represent wild animal meat, mushrooms, fish products, beef, and spinach, respectively. Note that two violation ratios for wild animal meat are above 50%: 53% for February 2013 and 54% for March 2013. Except for wild animal meat, the violation ratios for each food decreased over time; however, higher concentrations can still readily be measured in wild mushrooms and other wild local foods.

Table 5 Radiation doses (mSv) due to food consumption from 15 March 2011 to 20 December 2012, for males by age group

Median
                                                          1–6 y   7–12 y   13–18 y   19– y
Committed equivalent dose to thyroid, no regulation(1)    6.00    3.16     2.09      1.04
Committed equivalent dose to thyroid, PRVs applied(2)     1.72    0.90     0.77      0.49
Committed equivalent dose to thyroid, present limits(3)   1.71    0.90     0.76      0.49
Committed effective dose, no regulation                   0.40    0.29     0.30      0.24
Committed effective dose, PRVs applied                    0.16    0.14     0.19      0.17
Committed effective dose, present limits                  0.16    0.14     0.19      0.17

95th percentile
                                                          1–6 y   7–12 y   13–18 y   19– y
Committed equivalent dose to thyroid, no regulation       21.39   8.91     7.80      5.43
Committed equivalent dose to thyroid, PRVs applied        4.57    1.88     1.75      1.31
Committed equivalent dose to thyroid, present limits      4.57    1.88     1.75      1.30
Committed effective dose, no regulation                   1.20    0.63     0.72      0.64
Committed effective dose, PRVs applied                    0.33    0.22     0.31      0.30
Committed effective dose, present limits                  0.33    0.22     0.31      0.30

(1) Calculated on the assumption that foods exceeding the standard limits or the PRVs were distributed. (2) Calculated on the assumption that marketed foods did not exceed the PRVs. (3) Calculated on the assumption that marketed foods did not exceed the standard limits.
Source: Yamaguchi, I., Terada, H., Kunugita, N., Takahashi, K. (2013). Dose estimation from food intake due to the Fukushima Daiichi nuclear power plant accident. Journal of the National Institute of Public Health 62 (2), 138–143 (in Japanese).

Fig. 7 Committed effective doses due to the intake of radiocesium after the Fukushima nuclear power plant accident, for Sendai city, Fukushima city, and Tokyo, 2011–2014. 2011: estimated by the market basket method; 2012–14: estimated by the duplicate portion method. Note that before the accident (2007–09), the committed effective dose from one year of food consumption was estimated at 0.02–0.34 mSv for Cs-137 and 0.15–0.81 mSv for Po-210; among 10 cities, Sr-90 was detected in only one city, with a dose of 0.68 mSv.

Regarding radioactive iodine in fish products, guideline values had not been established at the time of the accident, although the need for further consideration had been noted before the accident. Based on reports of considerable amounts of radioactive iodine detected in fish products, the MHLW decided to apply to fish products the same provisional regulation values used for radioactive iodine in vegetables, since these had been set up provisionally in preparation for potential arrangements, with reference to technical advice from Japan's NSC, and notified each prefecture of the decision (Table 1). Regarding rice cultivation, before the arrival of the sowing season the Nuclear Emergency Response Headquarters announced radiation safety measures for rice cultivation on April 8, based on technical advice from Japan's NSC.

Discharge Monitoring
Regarding the monitoring of emissions of radioactive substances outside the site, the ventilation and sampling systems were interrupted after the external power supply was lost on March 11, making it impossible to monitor releases from the site. Measurement results from the stack air monitors of some units were recorded until March 12, but these results were assumed to reflect radioactivity deposited around the measurement system, since the dust sampling system had stopped.

Environmental Samples Including Soil
For the radioactive analysis of the soil around the FDNPP, soil samples were collected at five locations on March 21 and 22, and a plutonium analysis was performed. In view of the measured plutonium isotope activity ratios, it was found that some plutonium may have been released by the accident, rather than originating from past atmospheric nuclear tests. Regarding the detected concentrations, Pu-239 and Pu-240 were within the range of fallout observations from past atmospheric nuclear tests (1978–2008), but Pu-238 was slightly above that range. Thereafter, samples were taken regularly, including for plutonium and strontium.
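The attribution argument above rests on the Pu-238/(Pu-239 + Pu-240) activity ratio, which is low in global fallout and considerably higher in reactor-derived material. A minimal sketch of the comparison follows; the threshold and sample activities are illustrative approximations, not the measured values.

# Distinguish accident-derived plutonium from global fallout using the
# Pu-238 / (Pu-239 + Pu-240) activity ratio. Global fallout shows a ratio of
# roughly 0.03; reactor-derived plutonium shows a much higher ratio.
FALLOUT_RATIO = 0.03  # approximate global-fallout value, for illustration

def activity_ratio(pu238_bq_kg, pu239_240_bq_kg):
    return pu238_bq_kg / pu239_240_bq_kg

sample_ratio = activity_ratio(0.50, 0.45)  # hypothetical soil activities (Bq/kg)
print(sample_ratio > 10 * FALLOUT_RATIO)   # True suggests a reactor contribution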

Seawater Monitoring
TEPCO started collecting seawater samples from the southern discharge canal for the radioactive analysis of seawater near the drainage of the FDNPP, and conducted this radioactive analysis from March 21 as a form of surrounding environmental monitoring. Radioactive substances were detected, and as a result TEPCO continued the radioactive analysis with increasing frequency and coverage from March 22.

Ocean Soil Offshore of the FDNPP
To conduct a radioactive analysis of the marine soil around the FDNPP, TEPCO collected samples of marine soil at two locations (3 km off the Komagawa River and the Iwasawa coast) on April 29, 2011, and confirmed relatively high concentrations of radioactive iodine and radioactive cesium. For offshore area monitoring, the Japan Coast Guard started measuring the concentrations of radioactive materials in atmospheric dust, seawater, and marine soil, and the ambient dose rates above coastal waters in the Fukushima and Ibaraki Prefectures, in collaboration with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), the JAEA, and TEPCO. The Nuclear Emergency Response Headquarters issued a governmental announcement on April 22, 2011 to strengthen monitoring


in the area; MEXT therefore decided on a new monitoring plan on April 25, 2011. Because MEXT predicted the diffusion of radioactive materials in the ocean area and requested wide-area ocean monitoring, it was announced on May 6, 2011 that the related ministries would expand the ocean monitoring area. Subsequently, MEXT announced the results of the analysis of seawater samples taken on the coast of Ibaraki prefecture on April 29, 2011 using a survey vessel of the Japan Coast Guard.

Environmental Samples Offsite of the FDNPP
From March 18 onwards, the measurement of environmental samples (weeds, pond water, and soil) commenced, including in the evacuation areas, to investigate the distribution and accumulation of radionuclides more than 20 km from the FDNPP. The analysis was conducted by the JAEA, the JCAC, and the Fukushima Prefecture. The concentrations (Bq kg−1) of radioactive substances in soil and weeds more than 20 km from the FDNPP were also measured.

Monitoring System and Countermeasures Preparedness in Japan After the Accident
Since the 2011 Great East Japan earthquake, the Japanese government has focused considerable effort on strengthening the national radiation and nuclear safety systems. The Nuclear Regulation Authority was established in 2012 to enhance overall nuclear safety and radiation preparedness by acting on lessons learned, strengthening whole-of-government planning, and ensuring regulatory oversight of the nuclear industry. The Nuclear Regulation Authority is responsible for the licensing and oversight of the storage and disposal of nuclear waste and other radioactive materials, while the Ministry of the Environment is responsible for offsite decontamination related to the FDNPP accident, including the removal of affected soil and related wastes. The Nuclear Regulation Authority is required to coordinate with relevant stakeholders, including the MHLW. Since 2014, the Nuclear Regulation Authority has also functioned as the secretariat for the Nuclear Disaster Management Council, as well as for the Nuclear Emergency Response Headquarters in the Cabinet Office during a response. The Nuclear Regulation Authority's Guide for Emergency Preparedness and Response further describes the national emergency response systems and the elements required at the prefectural level, including the provision of a network of designated medical facilities. There are currently five "high-level" medical centers capable of handling the most complex cases, and 32 "base hospitals" (among the 24 prefectures that have an urgent protective action planning zone related to a nuclear facility) that have at minimum a decontamination room and radiation-measuring devices. Based on internal and external expert consultations, as well as lessons learned from annual exercises, the national radiation emergency response plan is periodically revised, most recently in 2015; this revision established a "Local Nuclear Disaster Management Council" to enhance community involvement in radiation safety.

Enhanced Countermeasures After the Disaster
Following the experience from the Fukushima incident, Japan has further enhanced existing systems and now has well-resourced national emergency preparedness and emergency planning systems, including comprehensive surveillance for radiation hazards. Japan has developed adequate occupational protection measures, including protection for first responders. Annual exercises are conducted that involve all levels of government, with documentation and follow-up of lessons learned. Specific laws and regulations have been established and revised following the nuclear accident to support national nuclear and radiation safety, with clear and comprehensive national coordination across all levels of government.

Risk Assessment Overview
The UNSCEAR assessed that "in general, doses were low and that therefore associated risks were also expected to be low. Nevertheless, the report noted a possibility that an increased risk of thyroid cancer among those children most exposed to radiation could be theoretically inferred, although the occurrence of a large number of radiation-induced thyroid cancers in Fukushima Prefecture, such as occurred after the Chernobyl accident, could be discounted because absorbed doses to the thyroid after the accident at Fukushima were substantially lower." However, after the Great East Japan Earthquake, the occurrence of physical and psychological problems such as lifestyle diseases, anxiety, and psychological distress has been higher among residents of the Fukushima Prefecture than in other areas. To maintain and develop the current care network in Fukushima, cooperation among various resources, including external experts, is vital.

Dose Assessment
The main objective of monitoring and dose assessment in radiation accidents is to provide information for evaluating the degree of radiation-induced health risk and for judging the necessity of radiation protection measures, because "for people most affected by the accident, provision of sound, accurate information should assist with their healing process" (http://www.who.int/ionizing_radiation/chernobyl/backgrounder/en/).


It is also recognized, through the experience of the FDNPP accident, that information regarding dose assessment should be used not only for decisions by authorities or experts, but also for supporting residents through the related communications. The importance of paying attention to ethical, legal, and social issues (ELSI) and to psychological issues is emphasized in the recent EU recommendations for the preparedness and health surveillance of populations affected by a radiation accident; the framework for monitoring and assessment therefore becomes even more important. In this context, a network-type organization was established consisting of 34 citizens' radiation measurement labs across Japan; the accident compelled people to open these labs, more than 34 in all, and to start such activities voluntarily. Much effort has been made to obtain external and internal dose assessments for residents of the Fukushima Prefecture. One common view in the related publications by Japanese experts is that the exposure dose related to the 2011 nuclear disaster in Fukushima is minimal for the majority of residents, with possible exceptions of limited additional exposure; it is thus expected that radiation-induced health effects would not be clinically detectable. This should be attributed largely to the prompt radiation protection measures implemented for the residents. A number of individual dose measurements utilizing sophisticated tools were useful in providing information on the levels of exposure received in daily life.

External dose assessment under the emergency exposure situation

The residents of the municipalities near the FDNPP were ordered to evacuate their hometowns, designated as the precautionary action zone (within approximately 5 km) and the urgent protective action planning zone (within approximately 30 km), shortly after the accident. The first exploratory committee of the Fukushima Health Management Survey (FHMS) was held at the end of May 2011, and an external dose assessment for all residents of the Fukushima Prefecture (approximately 2 million people) was proposed as a part of the FHMS (the Basic Survey). The Fukushima Prefecture commissioned Fukushima Medical University (FMU) as the organization responsible for the FHMS. For this survey, it was decided that external doses would be assessed from self-administered questionnaires collecting personal behavioral data after the accident. To facilitate this, the National Institute of Radiological Sciences (NIRS; now reorganized as one of the directorates of the National Institutes for Quantum and Radiological Science and Technology, QST) developed an external dose estimation system utilizing both the personal behavioral data and the chronological ambient-dose-rate maps of the Fukushima Prefecture. The system calculated the effective doses from external irradiation for the first 4 months after the accident; it was intended to be usable by each resident through a web browser, although this was not realized owing to the complex social situation. The maximum effective dose obtained was 19 mSv, for a delayed-evacuation scenario from one location in Iitate village, which the Japanese government designated as a deliberate evacuation area for relocation in April 2011. The number of residents whose external doses were estimated reached 552,298 as of June 30, 2017. Individual external doses for the first 4 months were below 3 mSv for 99.4% of the 421,394 residents assessed (excluding radiation workers). The arithmetic mean and maximum doses for residents of the coast region, covering the municipalities within the restricted zone (within a 20 km radius of the FDNPP), were 0.8 and 25 mSv, respectively. Regarding the mean dose, the doses in the middle-north and middle regions of the Fukushima Prefecture, where no evacuation orders were issued, were higher than those in the coast region; this suggests that the prompt evacuation may have significantly reduced the exposure doses of residents living near the FDNPP. Several municipalities in the Fukushima Prefecture initiated external dose measurements for residents using passive-type and active-type personal dosimeters developed after the nuclear accident. After adding the results of the Basic Survey, the first-year average effective dose estimates for Fukushima city, Date city, Nihon-matsu city, Tamura city, and Koriyama city were 2.1, 1.9–3.3, 2.4–2.5, 1.2–1.4, and 1.7 mSv, respectively (Table 6). These estimates are smaller than those from the UNSCEAR 2013 report.
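Conceptually, the Basic Survey dose reconstruction is a time-weighted sum of ambient dose rates along each person's behavioral record, with location-dependent shielding factors. A minimal sketch under illustrative assumptions follows: the 0.4 indoor reduction factor for wooden houses is a commonly used assumption in Japanese assessments, the dose rates and schedule are hypothetical, and the NIRS system itself is far more detailed.

# External dose as a time-weighted sum over a behavioural record:
#   E = sum_i (ambient dose rate at place i) * (hours spent) * (reduction factor)
# Reduction factor ~1.0 outdoors; ~0.4 indoors for a wooden house is a commonly
# used assumption in Japanese assessments. Dose rates below are hypothetical.
daily_schedule = [
    # (ambient dose rate in uSv/h, hours per day, reduction factor)
    (2.0, 8.0, 1.0),   # outdoors
    (2.0, 16.0, 0.4),  # indoors, wooden house
]

daily_dose_usv = sum(rate * hours * factor for rate, hours, factor in daily_schedule)
four_month_dose_msv = daily_dose_usv * 120 / 1000.0  # ~4 months, uSv -> mSv
print(round(four_month_dose_msv, 2))  # ~3.46 mSv under these assumptions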

External dose assessment under the existing exposure situation

Under the existing exposure situation, more precise measurements were expected, given the discrepancy between the projected doses estimated from ambient dose rates monitored at monitoring posts and the individual external doses directly measured with personal dosimeters (PDs), in terms of risk communication.

Table 6 Effective doses of residents due to external exposure during the first 4 months: number of respondents (excluding radiation workers) by area and effective dose band (mSv)

Area                     <1       1–<2     2–<3    3–<4   4–<5   ≥5    Total     Maximum dose (mSv)   Average dose (mSv)
Kempoku (North-central)  23,669   77,265   13,811  433    39     29    115,246   11                   1.4
Kenchu (Central)         53,547   41,613   7115    369    5      2     102,651   5.9                  1.0
Kennan (South-central)   21,892   2826     12      0      0      0     24,730    2.6                  0.6
Aizu                     37,114   254      16      1      0      0     37,385    3.6                  0.2
Minami-Aizu              3775     29       0       0      0      0     3804      1.9                  0.1
Soso                     54,509   12,266   1621    576    449    898   70,319    25                   0.8
Iwaki                    66,634   595      25      3      1      1     67,259    5.9                  0.3
Total (number)           261,140  134,848  22,600  1382   494    930   421,394   –                    –
Total (ratio, %)         62.0     32.0     5.4     0.3    0.1    0.2   100.0     –                    –


The Japanese government recognized the importance of monitoring individual doses, and it is thought that evaluation should be based on individually measured PD readings rather than on the ambient dose rates measured at monitoring posts, to avoid overestimation. In this context, NIRS and JAEA investigated the relationship between the two doses at several places in the Fukushima Prefecture. As a result, it was found that the radiation doses measured by PDs worn by adult males of average body size were about 0.7 times the doses estimated from the ambient dose rates monitored at monitoring posts at the same places. Therefore, individual external dose measurements have been performed using the latest technologies, utilizing data loggers and GPS, especially to enable individuals to decide whether it is safe to return to their hometowns. These devices can provide individuals with information on when and where they receive the most radiation exposure in daily life. Furthermore, to visualize radioactive sources in the environment, radiation detectors able to discriminate the direction of incident radiation were deployed by several local governments. These devices are useful for considering how to reduce exposure effectively, and as risk communication tools for understanding the radiation sources encountered in everyday life, together with explanations by experts.
Issues in personal dose estimation due to external exposure
As a part of the FHMS, the external doses of about one quarter of the target subjects (the total target population is 2 million residents of the Fukushima Prefecture) during the first 4 months were estimated. The remainder has not been completed, partially owing to a lack of cooperation on the part of the residents. Regarding the representativeness of this study, the arithmetic mean was confirmed by an additional survey of the remaining higher-dose groups, including the non-evacuated group and the group that consumed wild foods. Another issue is how to estimate the additional radiation doses due to the accident while separating out the background radiation, which itself varies widely, because radiation protection aims at controlling the additional radiation. Further, because the dose rate data for each community before the accident were limited, a newly developed method for estimating background radiation levels (utilizing counts from photons above 1.3 MeV) was employed, and it was confirmed to be useful.
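The ratio of about 0.7 reported above provides a simple first-order conversion from monitoring-post-based estimates to expected individual doses; a one-function sketch with hypothetical input follows.

# Expected personal-dosimeter reading from an ambient (monitoring-post) based dose
# estimate, using the empirically observed ratio of about 0.7 for average-build adults.
PD_TO_AMBIENT_RATIO = 0.7

def expected_personal_dose_msv(ambient_based_dose_msv):
    return PD_TO_AMBIENT_RATIO * ambient_based_dose_msv

print(expected_personal_dose_msv(2.0))  # 1.4 mSv for a 2.0 mSv ambient-based estimate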

Internal dose assessment under the emergency exposure situation

Regarding internal exposure, the most disturbing radionuclide in the early phase was 131I, which is of particular concern for small children. Shortly after the accident, the Japanese government restricted the distribution and consumption of food and drinks whose radioactive concentrations exceeded the PRVs. In addition, the collapse of the food supply chain, due to damage to distribution facilities, the lack of transportation vehicles or electricity, and the closure of retail stores, is likely to have significantly reduced the consumption of contaminated foodstuffs even before the restriction orders were implemented. However, there were still possible pathways for internal exposure very early after the accident, such as the inhalation of radioactive plumes and the ingestion of local food and water; it should be noted that the tap water system in some areas of Fukushima is unusual; for instance, the coverage ratios of the water-supply system for Katsurao and Kawauchi villages were 0%. Therefore, UNSCEAR estimated settlement-average absorbed doses to the thyroid for evacuees, including the contribution of external exposure (Table 7). After the accident, the Nuclear Emergency Response Local Headquarters of the Japanese government conducted screening for thyroid exposure in Kawamata town, Iwaki city, and Iitate village at the end of March 2011, targeting 1080 children (aged ≤15 years). The 90th percentiles of the thyroid equivalent doses for each location were 7.3, 15.9, and 14.7 mSv, respectively. Fukushima Prefecture gradually started individual monitoring using whole body counting (WBC) units to deal with growing concern about internal exposure among residents. According to JAEA, as of January 2012, the maximum committed effective dose (CED) was 3 mSv, and 25 cases exceeded 1 mSv, possibly due to the consumption of traditional wild local food.

Table 7 Estimated settlement-average absorbed doses to the thyroid (mGy) for evacuees for the first year following the accident

Precautionary evacuated settlements(a):
Age group | Before and during evacuation | At the evacuation destination | First year total
Adults | 0–23 | 0.8–16 | 7.2–34
Child, 10-year-old | 0–37 | 1.5–29 | 12–58
Infant, 1-year-old | 0–46 | 3–49 | 15–82(b)

Deliberately evacuated settlements(c):
Age group | Before and during evacuation | At the evacuation destination | First year total
Adults | 15–28 | 1–8 | 16–35
Child, 10-year-old | 25–45 | 1.1–14 | 27–58
Infant, 1-year-old | 45–63 | 2–27 | 47–83

(a) Precautionary evacuation refers to the evacuation of settlements instructed between 12 and 15 March 2011 as an urgent protective action to prevent high exposure. The dose assessment considered evacuation scenarios 1–12 (see appendix C of "Levels and Effects of Radiation Exposure Due to the Nuclear Accident After the 2011 Great East-Japan Earthquake and Tsunami," UNSCEAR 2013 Report to the General Assembly, with Scientific Annexes, Volume I, Scientific Annex A) for the towns of Futaba, Okuma, Tomioka, Naraha, and Hirono, and parts of the cities of Minamisoma, Namie, and Tamura and the villages of Kawauchi and Katsurao.
(b) These absorbed doses to the thyroid were principally due to internal exposure from inhalation during the passage of the airborne radioactive material through the affected areas before and during evacuation in the early days of the accident, and from ingestion over the subsequent period.
(c) Deliberate evacuation refers to the evacuation of settlements (based upon environmental measurements) instructed between late March and June 2011. The dose assessment considered evacuation scenarios 13–18 (see appendix C) for Iitate village and parts of Minamisoma City, the towns of Namie and Kawamata, and Katsurao village.


These estimates assumed that the intake of radionuclides occurred on March 12, 2011; the common intake scenario was acute inhalation. Furthermore, some detections in higher-sensitivity WBC measurements might partly reflect slight surface contamination on subjects' clothes, brought back from houses in the affected areas during temporary visits. To avoid such false-positive detections, subjects were asked to change into contamination-free gowns before WBC measurements were taken. The most challenging issue in internal dose assessment is estimating the early internal doses due to short-lived radionuclides, since direct measurements of internal exposure to 131I were carried out for only about 1300 cases. Most of these data were obtained from a screening based on a simplified measurement technique that struggled with high background conditions. Therefore, early internal doses have been investigated by analyzing the archived filter tapes of suspended particulate matter (SPM) monitors at air quality monitoring stations, which captured hourly records of atmospheric radionuclides just after the Fukushima accident, through measurements of Cs-137 and I-129.
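As a rough illustration of how a CED can be back-calculated from a WBC measurement, the sketch below uses a single-exponential retention function and the adult ingestion dose coefficient for Cs-137 from ICRP Publication 72; the simplified retention model, the assumed intake date, and the measured values are our own illustrative assumptions, not the procedure actually used by JAEA:

import math

DOSE_COEFF_CS137 = 1.3e-8      # Sv per Bq ingested (adult, ICRP Publication 72)
EFFECTIVE_HALF_LIFE_D = 110    # simplified adult effective half-life (days)

def ced_mSv(body_burden_bq, days_since_intake):
    """Estimate committed effective dose (mSv) from a measured Cs-137 body burden."""
    # Back-calculate the intake from the fraction retained at measurement time
    retained_fraction = math.exp(-math.log(2) * days_since_intake / EFFECTIVE_HALF_LIFE_D)
    intake_bq = body_burden_bq / retained_fraction
    return intake_bq * DOSE_COEFF_CS137 * 1000.0   # Sv -> mSv

# Hypothetical example: 10,000 Bq measured 300 days after the assumed intake
print(f"{ced_mSv(10_000, 300):.2f} mSv")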

Internal dose assessment under the existing exposure situation

More than 50 WBCs (including mobile units) were operated in Fukushima Prefecture in 2017, with the total number of WBC measurements reaching 327,434 as of October 2017. The number of persons whose CED was greater than or equal to 1 mSv was 26, showing only 4 additional persons compared with the results from 2012. These cases were attributed to the ingestion of wild food items and tended to occur in elderly persons.

Unexpected Transfer of Radioactive Substances to Beef Due to Straw

On July 8, 2011, beef was found to contain radioactive cesium in excess of the PRV. The cause was that the cattle had been fed rice straw that had been left outdoors after the accident. The MAFF instructed farmers not to feed cattle inadequately preserved feed and set the provisional tolerable radioactive cesium level for feed, including rice straw, at 300 Bq/kg. The value was revised to 100 Bq/kg as a new standard limit on February 3, 2012. The MHLW ordered local governments to trace suspect beef and to recall contaminated beef that exceeded the PRV. After the contamination was reported, beef purchases dropped dramatically. The Japanese government therefore supported local governments in conducting beef monitoring. As a result, more than 70% of the one million food samples monitored by May 2015 were beef. Beef monitoring is still being conducted (as of 2017), not only in Fukushima Prefecture and the neighboring prefectures but also in other areas.
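The screening logic described here reduces to comparing measured concentrations against the applicable limit; the toy sketch below (with hypothetical sample names and values) checks feed samples against both the provisional 300 Bq/kg level and the 100 Bq/kg standard adopted in 2012:

LIMITS = {"provisional_2011": 300, "standard_2012": 100}   # Bq/kg

samples = {"rice_straw_A": 250, "rice_straw_B": 80}        # hypothetical Bq/kg

for name, concentration in samples.items():
    for rule, limit in LIMITS.items():
        status = "EXCEEDS" if concentration > limit else "ok"
        print(f"{name}: {concentration} Bq/kg vs {rule} ({limit} Bq/kg): {status}")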

Unexpected Transfer of Radioactive Substances to Milk Due to the Drying Process

One of the milk companies in Japan initiated a recall of approximately 400,000 cans of its infant milk product from December 6, 2011, because measurements carried out by an NPO found that it unintentionally contained radioactive cesium. It was assumed that the radioactive substances had been contained in the hot air the company used for the drying process from March 14 to 20, 2011, since relatively large volumes of air are necessary for drying. The detected concentration was 31 Bq/kg in the powdered milk, below the limit set by the MHLW at that time (200 Bq/kg for powdered milk, in force until April 2012). Although the concentration was below the PRV and the estimated radiation dose to infants was small, the company voluntarily recalled the infant milk powder.

Cancer Risk Assessment

UNSCEAR concluded that "in general, doses were low and that therefore associated risks were also expected to be low. A discernible increase in cancer incidence in the adult population of Fukushima Prefecture that could be attributed to radiation exposure from the accident was not expected. Nevertheless, the report noted a possibility that an increased risk of thyroid cancer among those children most exposed to radiation could be theoretically inferred, although the occurrence of a large number of radiation-induced thyroid cancers in Fukushima Prefecture, such as occurred after the Chernobyl accident, could be discounted because absorbed doses to the thyroid after the accident at Fukushima were substantially lower." However, anxiety about thyroid cancer remains a major political issue: at its September 2016 meeting, the Fukushima prefectural assembly unanimously adopted a petition to maintain, rather than remove, the thyroid examination in the Fukushima prefectural health survey. Meanwhile, the participation ratio for the thyroid survey in the FHMS is changing, especially in the older age groups. In FY 2015, the participation ratio was 76.5% for those under 7 years old, 93.4% for 8–12-year-olds, and 86.6% for 13–17-year-olds, but only 23.4% for 18–22-year-olds, a significant decrease.

Increased Lifestyle-Related Diseases After the Nuclear Disaster

Many residents in the surrounding areas were forced to evacuate their homes and, as a result, had to change their way of life. Through the FHMS, the potential impact of disaster-related risk factors on cardiovascular disease (CVD) was investigated. A longitudinal study of lifestyle-related diseases in the FHMS found that residents who evacuated after the disaster showed an increased proportion of overweight/obesity and were more likely to suffer from hypertension, diabetes, dyslipidemia, liver dysfunction, atrial fibrillation, and polycythemia vera. In addition, the prevalence of diabetes and dyslipidemia increased from 2011 to 2012, after the disaster. These results indicate that disaster evacuees may be susceptible to CVD.


For this reason, to improve the health situation after disasters, prevention programs for obesity, hypertension, diabetes, and dyslipidemia are being carried out in collaboration with local governments and communities. After the Great East Japan Earthquake, the occurrence of physical and psychological problems such as lifestyle diseases, anxiety, and psychological distress has been higher among residents of Fukushima Prefecture than in other areas. These physical and psychological problems can lead to the onset of CVD in the future. Therefore, from the viewpoint of health condition management, continuous efforts in collaboration with local governments and local communities are necessary.

Psychosocial Effects of the Fukushima Disaster

The Great East Japan Earthquake was a major disaster, but the subsequent nuclear accidents additionally influenced the mental health of residents. Natural and nuclear disasters differ significantly in their psychosocial impacts, which are associated with many factors: not only human and property losses but also psychological acceptance, community adherence, disgust, and media influence. These effects include chronic psychiatric symptoms such as depression and alcohol abuse, reactions after mental trauma, and self-destructive behaviors such as suicide, which also increased. In addition to these psychiatric problems, residents of Fukushima were subjected to stigma, by others and by themselves, regarding radiation exposure and health effects. In particular, negative risk perception concerning the genetic effects of radiation is related to depressive symptoms among evacuees, and responses to this cognitive bias are required. Beyond the general public, significant fatigue and various types of depressive symptoms have been reported among rescue workers and municipal officials in Fukushima, who need more intensive care and treatment. To maintain and develop the current care network in Fukushima, cooperation among various resources, including supportive external experts, is vital. Domestic violence has a social background, and its structure can be explained from the perspective of power and control. In fact, after the nuclear accident, the number of child abuse consultations increased in Fukushima Prefecture, with the total number doubling to 956 cases in FY 2016. This increase might partially be due to reporting biases, but various investigations have revealed that parents raising children remain in a continuing high-stress situation. In addition, the Fukushima Prefecture Mental Health Welfare Center reported an increase in suicides after the disaster, including among the younger generation. Therefore, the necessity of countermeasures was recognized, and various other projects are being carried out, such as providing cartoons regarding radiation issues.

Radiation Dose and Risk Assessment Conducted by International Organizations

Since the 2013 UNSCEAR report on the levels and effects of radiation exposure due to the accident following the Great East Japan earthquake and tsunami, further developments have been published, including three white papers up to 2017. The 2017 White Paper sets guidelines for the Scientific Committee's future program of work and continues the ongoing effort, which includes the 2015 and 2016 White Papers, to systematically monitor and evaluate relevant new scientific information that has emerged since the launch of the Committee's 2013 Report on the FDNPP accident. The 2017 White Paper presents a scientific analysis of recent information, including releases and dispersion into the atmospheric and marine environments; transfer of radionuclides in terrestrial and freshwater environments; evaluations of doses for the public and workers; health implications for workers and the public; and doses and effects for nonhuman biota. The 2017 White Paper also summarizes major research projects currently ongoing in Japan, including the FHMS, mentioning that "thyroid screening is a complex issue and judgments on its scope, nature and/or continuation following the accident at FDNPP require considerations of factors that go far beyond the purely scientific issues alone (e.g., those of a socio-economic, public health, legal, ethical or human rights nature). Should the screening continue, the systematic collection and storage of biopsy material (including related information on exposures) and open access to it for meritorious research, would help investigations to be made of biomarkers and molecular signatures of radiation-induced thyroid cancer (i.e. in a similar manner to the Chernobyl Tissue Bank)."

Insoluble Spherical Cesium-Bearing Radioactive Microparticles

Another topic concerning radiation exposure is insoluble spherical cesium-bearing radioactive microparticles. Two types of radioactive particles have been discovered in a soil sample collected near the FDNPP. It was estimated that the small particles were released from units 2 and/or 3 on March 15, 2011. The particles were amorphous and highly oxidized, contained elements derived from the nuclear fission process, the reactor, and the fuel material, and comprised glassy globules formed from a molten mixture of nuclear fuel and reactor materials. Because these insoluble radioactive Cs particles are widespread, the Japan Health Physics Society has developed a radiation-dose estimation model and is investigating their health effects and long-term behavior in the environment.

See also: Fukushima Nuclear Disaster: Emergency Response to the Disaster; Fukushima Nuclear Disaster: Multidimensional Psychosocial Issues and Challenges to Overcome Them; Radiation Exposures Due to the Chernobyl Accident; Thyroid Cancer Associated with the Chernobyl Accident.


Further Reading

Government of Japan, 2011. Report of Japanese Government to the IAEA Ministerial Conference on Nuclear Safety: The Accident at TEPCO's Fukushima Nuclear Power Stations. Available at: https://japan.kantei.go.jp/kan/topics/201106/iaea_houkokusho_e.html (accessed 30 January 2018).
WHO, 2013. Health risk assessment from the nuclear accident after the 2011 Great East Japan earthquake and tsunami, based on preliminary dose estimation. Available at: http://apps.who.int/iris/bitstream/10665/78218/1/9789241505130_eng.pdf (accessed 30 January 2018).
UNSCEAR, 2014. The 2013 UNSCEAR Report on the Levels and Effects of Radiation Exposure due to the Nuclear Accident Following the Great East Japan Earthquake and Tsunami. Available at: http://www.unscear.org/unscear/en/fukushima.html (accessed 30 January 2018).
UNSCEAR, 2017. Developments since the 2013 UNSCEAR Report on the Levels and Effects of Radiation Exposure due to the Nuclear Accident Following the Great East Japan Earthquake and Tsunami: A 2017 white paper to guide the Scientific Committee's future program of work. Available at: http://www.unscear.org/unscear/en/publications/Fukushima_WP2017.html (accessed 30 January 2018).
Saito, T., Kunimitsu, A., 2011. Public health response to the combined great East Japan earthquake, tsunami, and nuclear power plant accident: Perspective from the Ministry of Health, Labour and Welfare of Japan. Western Pacific Surveillance and Response Journal 2 (4), 7–9. https://doi.org/10.5365/wpsar.2011.2.4.008.
Ohno, K., Hamada, M., 2014. Radiation about ourselves. Department Promotion Project, Kyoto Seika University. Available at: http://www.worldvillage.org/jishin/data/houshasen_comic.pdf (accessed 30 June 2018).
Shimura, T., Kunugita, N., 2018. Lessons learned on public health from the Fukushima Daiichi Nuclear Power Plant accident. Journal of the National Institute of Public Health 67 (1). https://www.niph.go.jp/journal/data/67-1/e67-1.html (accessed 30 March 2018).

Relevant Websites

https://japan.kantei.go.jp/incident/health_and_safety/index.html - Government of Japan. Health and safety, Reconstruction following the Great East Japan Earthquake. Official Website of the Prime Minister of Japan and His Cabinet (accessed 30 January 2018).
http://www.cas.go.jp/jp/seisaku/icanps/eng/ - Investigation Committee on the Accident at the Fukushima Nuclear Power Stations (accessed 30 January 2018).
https://www.iaea.org/newscenter/focus/fukushima - IAEA. Fukushima Nuclear Accident (accessed 30 January 2018).
http://www.who.int/ionizing_radiation/en/ - WHO. Ionizing radiation (accessed 30 January 2018).
http://www.mhlw.go.jp/english/topics/2011eq/workers/index.html - MHLW. In Focus: Radiation Protection at Works Relating to TEPCO's Fukushima Daiichi Nuclear Power Plant Accident (IRPW).

Fukushima Nuclear Disaster: Multidimensional Psychosocial Issues and Challenges to Overcome Them
Masaharu Maeda and Michio Murakami, Fukushima Medical University, Fukushima City, Japan
Misari Oe, Kurume University, Kurume City, Japan
© 2019 Elsevier B.V. All rights reserved.
Encyclopedia of Environmental Health, 2nd edition, Volume 3. https://doi.org/10.1016/B978-0-12-409548-9.10981-9

Abbreviations

CAGE Attempts to cut back on drinking, being annoyed at criticisms about drinking, feeling guilty about drinking, and using alcohol as an eye opener
FCDMH The Fukushima Center for Disaster Mental Health
FDNPP The Fukushima Daiichi Nuclear Power Plant
FY Fiscal year
K6 Kessler's six-item questionnaire
LHpLE Loss of happy life expectancy
MHLS The Mental Health and Lifetime Survey
PCL PTSD Checklist
PTSD Posttraumatic stress disorder
SDQ Strengths and Difficulties Questionnaire

Introduction

The Great East Japan Earthquake and subsequent tsunami caused not only physical damage to coastal areas in Tohoku, Japan, but also a serious nuclear crisis inside and outside Fukushima. The explosions at the Fukushima Daiichi Nuclear Power Plant (FDNPP) resulting from the station blackout caused a great deal of confusion among residents, and many evacuated from communities located close to, or even far from, the FDNPP. The experiences of evacuees in the initial phase formed robust traumatic memories that will afflict them for a long time. People were exposed to a great deal of information, rumors, and recommendations about the health effects of radiation from various sources, including so-called experts of varying reliability. In many cases, residents received much of their information through the Internet from sources that were inconsistent and sometimes unreliable. People were swayed by such uncertain information and often had no choice but to decide their future plans according to what they read online. The FDNPP accident influenced several dimensions of life at the individual, familial, community, and national levels. In this article, we review mental health issues among people affected by the Great East Japan Earthquake, subsequent tsunami, and FDNPP accident, including posttraumatic stress responses, depression, alcohol abuse, and self-destructive behaviors such as suicide. In order to demonstrate the comprehensive, longitudinal aspects of psychological problems among evacuees, we focus on the results of a major mental health survey conducted by Fukushima Medical University after the accident. This survey has yielded fruitful results and findings useful for creating mental health policies as well as for developing efficient psychological interventions. In addition, we examine substantial relationships between individual health risk perception towards radiation and other significant factors related to psychosocial issues. Although some of these phenomena have never been seen in natural disasters and the Fukushima disaster is still ongoing, evidence and findings are accumulating. Lastly, we describe current initiatives implemented by local resources to improve mental health among people affected by the disaster.

Development of the Fukushima Disaster

The FDNPP accident followed the Great East Japan Earthquake and tsunami on March 11, 2011. A huge tsunami struck the coastal areas of the Soso area in Fukushima Prefecture, including the FDNPP. According to the Japan Meteorological Agency, the height of the tsunami was 9.3 m in the Soso area, and 1817 people were presumed dead. The tsunami also caused a total loss of electric power, and hydrogen-air explosions at the FDNPP resulted in the diffusion of radioactive substances. Meltdowns of the reactors were reported 9 months after the disaster. The Japanese government established an evacuation order for residents living within a 20-km radius of the FDNPP within the first 5 days after the accident. On April 22, the government ordered residents of areas where the effective dose could reach 20 mSv/year to evacuate within 1 month. However, residents lacked information, such as in which direction to evacuate and how dangerous it really was to stay home.


As a result, some people could not avoid camping in very high radiation areas while trying to evacuate. Moreover, relief efforts for socially vulnerable people within the forced evacuation area were delayed in the initial turmoil after the accident; many patients at hospitals, especially aged or highly disabled people, passed away because of insufficient support. The acute situation was chaotic, and people were afraid and uncertain. Surprisingly, it is estimated that evacuees changed their location approximately four times on average during the first year after the disaster (Yabe et al., 2014). Among the approximately 2 million people living in Fukushima Prefecture at the time of the disaster, more than 160,000 had been evacuated as of May 2012. Although 37% of evacuees, especially families with small children, relocated far from Fukushima Prefecture, approximately 60% of residents remained within the prefecture or the surrounding area. This was partially because local governments had set up provisional town offices in some areas and it was convenient for people to stay near these offices. Some people also wished to live in areas near their hometown where radioactivity levels were low, so that they could visit their home easily during the daytime. Consequently, many evacuees decided to relocate to Iwaki City, which is located about 30–40 km south of the FDNPP. As of April 2014, the population of Iwaki City had increased by more than 30,000 (an approximately 10% increase), and this caused new social conflicts between the original residents and the evacuees. Some residents complained about traffic congestion and overcrowded hospitals. Others criticized the financial compensation given to evacuees, which they felt might have been used for drinking or gambling. The serious social issues that emerged from such discordance are described in more detail below. Perceived uncertainty about the risk of radioactive substances led parents and schoolteachers to restrict children from playing outdoors. For this reason, the proportion of children aged 5–17 years with obesity in Fukushima Prefecture in 2012 was the highest among all prefectures in Japan. Relocated evacuees also experienced difficulties with their living situation. Before the disaster, many had lived in large houses in rural areas, and it was common for three generations to live together. After the disaster, however, they lived in small temporary housing, and many families decided to live separately (e.g., elderly people, such as grandparents, wanted to stay near their hometown). Multiple factors, such as fear of exposure to radioactivity, along with residential restrictions, compensation, employment, and/or personal reasons, were associated with the separation of families and communities. The Japanese government has been gradually lifting living restrictions in the evacuation area in Fukushima according to the progression of decontamination. However, many evacuees, especially young evacuees, remain hesitant to return to their hometowns for various reasons, such as the unclear future of the towns, poor social resources (e.g., medical, welfare, and commercial facilities), changes in personal circumstances, and radiation-related issues. As a result, many towns that have become habitable again face low return rates of original residents and high proportions of aged people. As of April 2018, approximately 50,000 people remained evacuated both inside and outside of Fukushima Prefecture.

Mental Health Issues Among Evacuees

Mental Health and Lifetime Survey

Purpose and method

Soon after the disaster, Fukushima Medical University established the Radiation Medical Science Center in order to measure and explore external radiation doses and the long-term effects of the disaster on the thyroid, pregnancy, comprehensive physical condition, and mental health of people in Fukushima. Here, we introduce the results of a major population-based mental health survey (the Mental Health and Lifetime Survey: MHLS), sent by mail every year since January 2012 (the 2011 fiscal year runs from April to March). The MHLS has several purposes. The first is to clarify current mental health problems and lifestyle-related issues among people who lived in the evacuation order area at the time of the disaster, using the questionnaires described below. The second is to provide brief intervention, including psychoeducation and advice by telephone or mail, mainly to people at risk of posttraumatic stress disorder (PTSD), depression, and other behavioral problems, based on the results of the MHLS. The third is to share adequate information with available resources both inside and outside of Fukushima Prefecture, including psychiatric clinics and local health centers, as needed (Yasumura et al., 2012). The target population of the MHLS is people who were registered as residents of municipalities that are or have been designated as evacuation order areas after the disaster, namely, Hirono, Naraha, Tomioka, Kawauchi, Okuma, Futaba, Namie, Katsurao, Iitate, Minami-soma, Tamura, Kawamata, and part of Date. The total number of target participants reached 208,044 on October 31, 2017. Participants are divided into five groups according to age: 0–3 y/o, 4–6 y/o, 7–12 y/o (primary school age group), 13–15 y/o (middle school age group), and 16 y/o or more (adult group). The average response rate obtained each year has been 20–30%.
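The age stratification can be expressed as a simple lookup; the sketch below (our own illustrative code, with band edges taken from the text above) assigns a respondent to one of the five MHLS age groups:

def mhls_age_group(age_years: int) -> str:
    """Map an age in years to the MHLS survey age group."""
    if age_years <= 3:
        return "0-3 y/o"
    if age_years <= 6:
        return "4-6 y/o"
    if age_years <= 12:
        return "7-12 y/o (primary school age)"
    if age_years <= 15:
        return "13-15 y/o (middle school age)"
    return "16 y/o or more (adult)"

print([mhls_age_group(a) for a in (2, 5, 10, 14, 30)])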

Results

With regard to general psychological states, based on the results of Kessler's six-item questionnaire (K6) (Kessler et al., 2003), which was used to identify psychological distress and other psychiatric symptoms including anxiety and depressive symptoms, the prevalence of adults at risk of depression and related disorders was extremely high, at 14.6%, in 2012, the first survey year (Fig. 1). Thereafter, the prevalence gradually decreased during the first 3 years but remained unchanged at around 7% during the last 3 years (2015–17). The most recent prevalence rate is still considerably high compared with that in the general population in Japan (3.0%) (Kawakami, 2007). These results suggest that many evacuees are still suffering not only from subclinical mental health problems (psychological distress) but also from psychiatric symptoms, especially depressive symptoms. Indeed, over 60% of respondents indicated that they had some sleep difficulties (Fukushima Medical University, 2018).


Fig. 1 Prevalence of evacuees at risk of depression. Annual change in the prevalence of evacuees at risk of depression, based on Kessler six-item scores of 13 or higher: 14.6% (2012), 11.7% (2013), 9.7% (2014), 7.7% (2015), 7.1% (2016), and 6.8% (2017) (Fukushima Medical University, 2018). The broken line indicates the prevalence in the general population in Japan (3.0%) (Kawakami, 2007).

Moreover, the total number of disaster-related suicides during the first 5 years after the disaster (officially certified by the Japanese Police Agency) reached over 100, far higher than the numbers in Iwate and Miyagi, the other two prefectures affected mainly by the tsunami (Japanese Cabinet Office, 2017). Kunii et al. (2016) examined the K6 data obtained in the first survey year and found that psychological distress in each evacuation zone was significantly positively associated with the radiation levels in the environment, concluding that such a close relationship could be brought about by the great psychological burden of evacuation life. Moreover, other studies using K6 scores from the MHLS (Suzuki et al., 2015; Oe et al., 2016) indicated a close association between psychological distress, including depressive symptoms, and negative risk perception of radiation. This unique relationship has never been seen in natural disasters and is described in more detail later. In addition, cross-sectional studies examining K6 scores obtained from the MHLS showed that psychological distress among evacuees was associated with changes in daily drinking patterns before and after the disaster (Ueda et al., 2016) and with difficulties in living independently among aged people (Harigane et al., 2017). Regarding PTSD symptoms, which are important factors significantly affecting evacuees' mental health after natural disasters, the MHLS revealed that the prevalence of respondents at risk of PTSD was as high as 21.6% based on the PTSD Checklist (PCL) in 2012 (Yabe et al., 2014). This prevalence rate was almost equal to that of rescue personnel after the 9/11 World Trade Center attacks in the United States, using the same cutoff value for the PCL (Stellman et al., 2008). In the most recent MHLS results, over 10% of the same population still showed significant PTSD symptoms (Fukushima Medical University, 2018). Another longitudinal study (Miura et al., 2017), using data obtained from the MHLS during the first 3 years after the disaster, revealed that women evacuees who believed that their health was substantially affected by the nuclear disaster were at increased risk of poor mid-term mental health, whereas most participants (80.3%) were resistant to PTSD. The MHLS furthermore examined problematic behaviors among children (e.g., hyperactivity, irritability, emotional instability, truancy) and adolescents (15 y/o or less) using the Japanese version of the Strengths and Difficulties Questionnaire (SDQ) (Matsuishi et al., 2008), completed by caregivers, in many cases mothers. In the first survey year, the prevalence of respondents at risk of having some problematic behaviors was extremely high in all age groups (see Fig. 2) (Fukushima Medical University, 2018). In particular, many caregivers with young children tended to recognize that their children had some behavioral problems. The prevalence in all age groups, however, decreased quickly and approached that in population groups not affected by the disaster, as seen in Fig. 2. This suggests that the worries of caregivers about their children might have been mitigated. In addition, other cross-sectional studies using SDQ scores in the MHLS demonstrated a significant association between the prevalence of problematic behaviors among young respondents and sleep difficulties (Itagaki et al., 2018) or daily exercise habits (Itagaki et al., 2017).
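For readers unfamiliar with the K6 screening used above, the following minimal sketch (with hypothetical respondent data) shows the arithmetic: each of the six items is scored 0–4, totals range from 0 to 24, and a total of 13 or more flags risk of severe psychological distress:

CUTOFF = 13   # K6 total indicating risk of severe psychological distress

respondents = [
    [0, 1, 0, 2, 1, 0],   # total 4  -> not at risk
    [3, 2, 4, 1, 2, 3],   # total 15 -> at risk
    [2, 2, 2, 2, 2, 2],   # total 12 -> not at risk
]

at_risk = sum(1 for items in respondents if sum(items) >= CUTOFF)
prevalence = 100.0 * at_risk / len(respondents)
print(f"{at_risk}/{len(respondents)} respondents at risk ({prevalence:.1f}%)")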
Another longitudinal study (Oe et al., 2018) that analyzed SDQ subscale scores ("emotional symptoms" and "peer relationship") in the MHLS during the first 3 years after the disaster revealed that experiencing the nuclear plant accident and insufficient physical activity were associated with a very severe trajectory toward problematic behaviors, and that there was a major gender difference between the two SDQ subscales: males were more likely to have problematic behaviors on the emotional symptoms subscale and females on the peer relationship subscale. The MHLS also examined problem drinking among evacuees using CAGE (an acronym for "attempts to Cut back on drinking, being Annoyed at criticisms about drinking, feeling Guilty about drinking, and using alcohol as an Eye opener"). The results showed that, despite a high prevalence (20.5%) of males at risk of problem drinking in 2012, the risk decreased annually and was 17.1% in the most recent survey year, 2016 (Fukushima Medical University, 2018). The active campaigns promoting moderation in drinking among evacuees, which started in 2012 as a measure to prevent suicide in Fukushima, might have contributed to this decrease.


Fig. 2 Prevalence of children at risk of problematic behaviors. Annual change (2012–17) in the prevalence of children at risk of problematic behaviors in the 4–6, 7–12, and 13–15 y/o age groups, based on Strengths and Difficulties Questionnaire scores of 16 or higher (Fukushima Medical University, 2018). The broken line (9.5%) indicates the prevalence in those living in other, non-affected areas in Japan (Matsuishi et al., 2008).

Other Studies on People Affected by the FDNPP Disaster

In addition to the MHLS, other studies have revealed significant findings regarding mental health problems among people affected by the nuclear accident. A retrospective survey was conducted on outpatients newly visiting psychiatric clinics in Fukushima Prefecture in the early phase of the disaster (Miura et al., 2012). The results showed that 13.9% of the patients (n = 1321) had symptoms of PTSD or adjustment disorder, and 17.2% were experiencing depressive episodes. Additionally, the symptoms of more than 30% of these patients were estimated to be possibly associated with the nuclear accident. Another study (Matsuoka et al., 2012), performed in the same early phase, focused on rescue workers working near the FDNPP and revealed that psychological distress in workers was strongly associated with concern about radiation exposure. In addition, studies (Shigemura et al., 2012, 2014) conducted on Tokyo Electric Power Company plant workers 2–3 months after the accident showed that high psychological distress was associated with discrimination and slurs. Moreover, a study using diagnostic interviews of public employees working in the disaster area revealed that the prevalence of depression among them was as high as 17.8%, and approximately 70% had sleep difficulties (Maeda et al., 2016). These studies indicate a great psychological burden among various types of workers in the disaster area in Fukushima, and adequate care and psychiatric treatment are necessary for Fukushima to fully recover. Considering that the MHLS and many other studies reported significant increases in PTSD and depressive symptoms among people in Fukushima, it should be noted that these psychiatric symptoms might lead to more serious consequences, such as an increase in suicide. Reviews (Kõlves et al., 2013; Matsubayashi et al., 2013) that shed light on the relationship between natural disasters and suicide among affected people have raised concern about an increase in suicide after a major natural disaster. In addition to the prominent number of disaster-related suicides in Fukushima described above, studies examining the tendency towards suicide using panel data in Japan revealed a significant probability of an increase in suicide among people in Fukushima affected by the disaster. The standardized suicide mortality ratio (SMR) in Fukushima decreased after the 2011 disaster (108 in 2010, 107 in 2011, 94 in 2012, and 96 in 2013) but then increased to 126 in 2014, exceeding predisaster levels (the reference for the SMR is the average suicide rate in the general population of Japan) (Ohto et al., 2015). This pattern, an increase after a short-term drop, is similar to that pointed out by Kõlves et al. (2013). Another study (Orui et al., 2018) that analyzed panel data for Fukushima Prefecture during the first 3 years after the disaster found that the changing pattern of suicide rates among people previously living in the evacuation zone differed considerably from that among people living in other areas of Fukushima, suggesting great psychological stress and burden due to long-term evacuation. Given the prominent number of disaster-related suicides in Fukushima Prefecture, establishing a care network to provide more prompt and efficient interventions is still needed, as is useful screening to identify risk of suicide.
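The SMR arithmetic cited above is simply the observed number of suicides divided by the number expected if the reference (national) rate applied to the local population, multiplied by 100; the one-line sketch below uses hypothetical counts:

def smr(observed, expected):
    """Standardized mortality ratio: observed/expected deaths x 100."""
    return 100.0 * observed / expected

print(smr(63, 50))   # hypothetical counts -> SMR = 126.0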
With regard to mental health issues among children in Fukushima, another study (Lieber, 2017) using the SDQ was conducted on approximately 3650 elementary and middle school students in Fukushima in 2012. The results showed that children relocated to another town (Koriyama City) had significantly higher SDQ scores than children who were native to Koriyama, as well as a control group living outside of Fukushima Prefecture (Lieber, 2017). Given the instability of children affected by the disaster reported by the MHLS and other studies, the mental health status of their caregivers, especially mothers, should be considered. Similar to the strong psychological reaction among young mothers after the Chernobyl accident (Bromet and Havenaar, 2007), many young mothers living in Fukushima became very concerned about the health effects of radiation on their children, especially in the initial phase after the accident (Kitajo, 2011).


As well as mothers with young children, pregnant women were affected: a major survey of 8196 pregnant women living in Fukushima found a high prevalence (28%) of those at risk of depression (Goto et al., 2015). Many mothers decided to leave Fukushima with their children and relocate to other, less affected places, even though their husbands wanted to stay behind. Such decisions often resulted in long separations of families, or even eventual divorce. It is considered that close interactions between mothers and their children influenced the whole family system and weakened familial ties (Maeda et al., 2017). On the other hand, a recent epidemiological study (Goto et al., 2017) using data from two independent prefecture-wide surveys of pregnant women showed resilience in parenting, although the women's experiences and concerns in the aftermath of the nuclear disaster were associated with depressive symptoms. Given the recent SDQ scores obtained from the MHLS (Fig. 2), it seems that substantial interactions between caregivers and their children in Fukushima allowed them to rise above the vicious circle of the early phase after the disaster and gradually change their situation for the better.

Psychological Distress, Radiation Risk Perception, and Interventions

Psychological distress has been regarded as a serious health risk issue after a nuclear disaster. In particular, the 1986 Chernobyl accident revealed that psychological distress becomes severe in association with high radiation anxiety or risk perception (Bromet, 2012). The magnitude of psychological distress risk, however, has not yet been quantified in a way that enables comparison with the radiation risk of cancer mortality. This comparison is essential for prioritizing policy measures. One of the potential reasons for the lack of such an interdisciplinary comparison is the unavailability of indicators that allow various kinds of risks to be assessed. Radiation causes a potential loss of life expectancy through cancer incidence, whereas psychological distress can cause a decline in quality of life or well-being. Here, we developed a novel risk indicator, which we called "loss of happy life expectancy" (LHpLE), based on the concept that risks include not only death or disability but also losses in subjective well-being (Murakami et al., 2018a,b). HpLE is defined as the lifespan that people live with a subjective emotional feeling of well-being and is calculated from objective survival probabilities and a simple question regarding emotional happiness: "Did you experience a feeling of happiness yesterday [yes/no]?" (Kahneman and Deaton, 2010). The difference in HpLE between the presence and absence of a risk event (i.e., an increase in mortality or a decline in well-being) can be used as a risk comparison indicator, LHpLE. LHpLE was used to compare the risks of psychological distress and radiation among residents of the 13 municipalities subjected to evacuation orders after the Fukushima disaster (Murakami et al., 2018a,b). Radiation doses followed the United Nations Scientific Committee on the Effects of Atomic Radiation report (United Nations Scientific Committee on the Effects of Atomic Radiation, 2014). It should be noted that doses and corresponding risks were overestimated under the concepts of radiological protection (Murakami, 2018), based on the assumptions that residents returned to their homes during the second year, that decontamination was not implemented, and that weathering did not occur; cancer mortality was estimated using linear-quadratic models at low doses. On the other hand, the increase in severe psychological distress was based on observation, using a cutoff of K6 scores ≥13 (Kessler et al., 2003). The additional prevalence from the fourth year onward was projected based on the assumption that high psychological distress decreased at a constant rate, as found in the first 3 years. Only a decline in emotional happiness was considered as a result of psychological distress; no additional mortality risk due to psychological distress was considered. As a result of the comparison, the LHpLE of psychological distress and of cancer due to radiation in the entire population was 54 days and 2.0 days, respectively (Fig. 3) (Murakami et al., 2018a,b). The risk of psychological distress was higher in women than in men and increased with age. The risk of cancer due to radiation did not include the decline in well-being caused by suffering from cancer incidence itself; however, a sensitivity analysis showed that including this suffering approximately doubled the LHpLE due to radiation exposure. Overall, even with this inclusion, the risk of psychological distress was 1–2 orders of magnitude higher than that of cancer due to radiation exposure.
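Under strong simplifying assumptions (one-year steps, a single survival curve, age-independent happiness prevalence, and hypothetical input values rather than the published ones), the LHpLE calculation reduces to the difference between two happiness-weighted life expectancies, as in the sketch below:

def hple(survival_by_year, happiness_by_year):
    """Expected years lived with a feeling of emotional happiness."""
    return sum(s * h for s, h in zip(survival_by_year, happiness_by_year))

years = 60
survival = [max(0.0, 1.0 - 0.01 * t) for t in range(years)]   # toy survival curve

baseline_happiness = [0.80] * years                            # hypothetical share answering "yes"
# Scenario: distress lowers happiness prevalence for the first 3 years
with_distress = [0.75] * 3 + [0.80] * (years - 3)

lhple_years = hple(survival, baseline_happiness) - hple(survival, with_distress)
print(f"LHpLE = {lhple_years * 365:.0f} days")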
Strikingly, the risk of psychological distress remained high from the fourth year onward. This result clearly highlights that psychological distress after a disaster becomes prolonged and should be a priority for policy measures. A strong potential risk factor for psychological distress is radiation risk perception. There are various types of radiation risk perception (e.g., dread risk and unknown risk (Slovic, 1987), delayed risk and genetic risk (Lindell and Barnes, 1986)); among them, perceived genetic risk was strongly associated with psychological distress (Suzuki et al., 2015). Perceived genetic risk can be measured with the following question: "What do you think is the likelihood that the health of your future (i.e., as yet unborn) children and grandchildren will be affected as a result of your current level of radiation exposure? (1 = very unlikely, 2 = unlikely, 3 = likely, 4 = very likely)." The odds ratio for severe psychological distress in 2011FY (fiscal year; from April to March) was 2.17 for perceived genetic risk (very likely), higher than the 1.50 for bereavement (Suzuki et al., 2015). Lower psychological distress (K6 < 13) in 2012FY was inversely associated with perceived genetic risk in 2011FY (odds ratio = 0.64) and positively associated with a lowering of that perception from 2011FY to 2012FY (odds ratio = 1.35) (Murakami et al., 2018a,b). Furthermore, lowering of perceived genetic risk promoted the frequency of laughter, another indicator of emotional well-being, via alleviation of mental health distress. Reduction of radiation-related anxiety was also reported to increase well-being in another study (Murakami et al., 2017a,b). High radiation risk perceptions among residents of the 13 municipalities subjected to evacuation orders gradually decreased over the 6 years after the Fukushima disaster (Fig. 4) (The Radiation Medical Science Center of Fukushima Medical University, 2018). However, approximately 40% of evacuees still perceived genetic risk as likely, although no discernible increase in heritable disease is expected (United Nations Scientific Committee on the Effects of Atomic Radiation, 2014). In addition, annual public opinion surveys conducted by the Fukushima Prefectural government, targeting residents throughout Fukushima Prefecture, reported that overall radiation-related anxiety differed among regions (Suzuki et al., 2018a,b).
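The odds-ratio arithmetic behind these associations can be illustrated with a simple 2x2 table; the counts below are hypothetical (the published estimates came from regression models adjusting for other factors) and were chosen only to reproduce an odds ratio of about 2.17:

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds of the outcome in the exposed group divided by the odds in the unexposed group."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical 2x2 table: severe distress by perceived genetic risk ("very likely" vs. rest)
print(f"OR = {odds_ratio(120, 480, 200, 1736):.2f}")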


Fig. 3 Risk comparison between psychological distress and cancer due to radiation, using the indicator of loss of happy life expectancy (in days) among residents of 13 municipalities subjected to evacuation order areas, shown by sex and age group and distinguishing exposure in the first to third years from exposure from the fourth year onward (Murakami et al., 2018a,b). For the whole population, LHpLE was 54 days for psychological distress and 2.0 days for cancer due to radiation.

Fig. 4 Secular trends of ratios of high radiation risk perception among residents of 13 municipalities subjected to evacuation order areas (The Radiation Medical Science Center of Fukushima Medical University, 2018). Risk perceptions were measured based on responses to the following questions using a four-point Likert scale: genetic risk, “What do you think is the likelihood that the health of your future (i.e., as yet unborn) children and grandchildren will be affected as a result of your current level of radiation exposure?”; delayed risk, “What do you think is the likelihood of damage to your health (e.g., cancer onset) in later life as a result of your current level of radiation exposure?” High risk perceptions represent responses of either “likely” or “very likely.”

Overall radiation-related anxiety among residents in the less affected regions showed larger reductions than that among residents in the evacuation order areas. Various factors affect radiation risk perception. A systematic review after the Fukushima disaster stated that factors governing radiation risk perception can be classified into demographics (e.g., sex, age, presence of children), trusted information (e.g., rumors, the internet), disaster-related stressors (e.g., evacuation experience), and radiation-related variables (e.g., absence of persons to consult) (Takebayashi et al., 2017).


Fig. 5 Relationships of well-being, psychological distress, radiation risk perception, and their factors. In the diagram, four groups of factors feed positively into radiation risk perception: demographics (women, older age, presence of children, lower education, living far from the plant after the accident, job); trusted information (rumor, internet, local newspaper, radio; national newspaper (−), central government (−)); disaster-related stressors (evacuation, traumatic reaction, house damage, bereavement, living arrangement, loss of income); and radiation-related variables (safety behavior such as information seeking, no one to consult, no written contract). Radiation risk perception is positively related to psychological distress, which in turn is negatively related to well-being. The figure was prepared based on previous studies (Murakami et al., 2018a,b; Suzuki et al., 2018a,b; Takebayashi et al., 2017).

Importantly, these factors include not only hard-to-change characteristics, which are linked to cultural worldview (Kahan et al., 2007) and were fixed before the disaster, but also radiation- or disaster-related factors, such as evacuation experience and traumatic reactions (Suzuki et al., 2018a,b). Well-being, psychological distress, radiation risk perception, and their factors are summarized in Fig. 5. These relationships were based on the designs or hypotheses of previous studies and may not reflect actual causality. However, this conceptual diagram is useful for illustrating what we should do and how we should do it. If we aim to reduce radiation-related anxiety, it is essential to mitigate traumatic reactions just after a disaster. Risk communication or education before a disaster may also be useful in keeping affected people free from strong anxiety. After a disaster, strategies could be developed to address groups with high risk perception. However, we should be mindful that reduction of radiation-related anxiety is not always the goal in itself. Of course, mitigation of perceived genetic risk is important for countering some social issues: specific risk communication or education aimed at reducing radiation-related anxiety is expected to help resolve perceived or actual stigma and discrimination (Sawano et al., 2018). On the other hand, if our goal were the promotion of health and well-being, psychotherapy would be more beneficial. It should be noted that low mental health distress had a stronger association with well-being than did lowering of radiation risk perception (Murakami et al., 2018a,b). Moreover, several evidence-based psychotherapies have been developed to alleviate mental health distress (Layard and Clark, 2015), whereas risk perception is generally likely to be stable. Indeed, a randomized controlled trial of a behavioral activation program for mothers in Fukushima showed that mental health can be improved without changes in radiation-related anxiety (Imamura et al., 2016). Since the 2011 disaster, many risk communication activities have been performed, and it is sometimes assumed that they aimed to reduce radiation-related anxiety. However, it is worth noting that professionals have continually implemented risk communication activities with the intention of supporting residents' decisions and promoting participatory public health, rather than just reducing radiation-related anxiety (Murakami et al., 2017a,b). Why and how we perform risk communication depends on our goals and should be well balanced to meet the social consensus on the kind of world we want to live in (Murakami et al., 2018a,b).

Psychosocial Issues

Differences From Natural Disasters

Needless to say, the Fukushima disaster was one of the most serious man-made disasters ever, and it caused very complicated, long-term psychological effects that are considered greatly different from those of natural disasters (see Table 1) (Maeda et al., 2017).

Table 1 Difference between natural disasters and the Fukushima disaster

Characteristic | Natural disasters | Fukushima disaster
Impact of trauma | Acute, instant | Chronic, continuous
Affected area | Visible, clear | Invisible, unclear
Physical loss | Apparent | Ambiguous
Psychological acceptance | Relatively easy | Very difficult
Anger or disappointment | Relatively low | Very strong
Compensation | Simple, limited | Complicated, unsettled
Groundless rumors | Rare | Common
Stigma and self-stigma | Rare | Common
Influence of media | Relatively low | Very strong
Voluntary evacuation | Few | Numerous
Remote evacuation | Few | Numerous
Cohesiveness of community | High | Low
Psychological recovery | Dependent on physical relief | Independent of physical relief

In contrast to natural disasters, the impact of the Fukushima nuclear disaster was largely invisible, which made it difficult to differentiate the disaster area from the non-disaster area. Moreover, the impact of a major nuclear disaster is more likely to be persistent, leading to chronic fear of fallout and radiation exposure among people living in and around the disaster area. In nuclear disasters, physical losses due to the disaster itself are unclear; therefore, it is not easy for many evacuees to accept substantial physical or psychological losses (Maeda et al., 2017). These ambiguous losses, furthermore, may elicit disappointment and anger among evacuees and diminish their hopes of someday returning to their hometown. Indeed, the concept of "ambiguous loss" proposed by Boss (1999), who has studied psychological issues among people with missing loved ones, such as families affected by the 9/11 terrorist attacks in New York City, seems well suited to the current situation in Fukushima (Maeda and Oe, 2015). In addition, many people affected by the disaster were highly influenced by the mass media, including social networks, and often endured groundless rumors or stigmatization from the general public (Maeda et al., 2017). Compared with evacuees from other areas affected by the tsunami, such as Iwate and Miyagi Prefectures, a much higher proportion of evacuees from Fukushima voluntarily moved to remote areas. In general, psychological recovery among evacuees after a nuclear disaster is often dissociated from physical recovery (e.g., rebuilding of infrastructure and decontamination), leading to great delays in returning home.

Multidimensional Discordance

In Fukushima, people often needed to decide on their own whether to leave or stay in their hometown, based on a large body of information about radiation that was mostly diverse, contradictory, or even inappropriate. In many cases, individual family members had very different ideas or perceptions of the health effects of radiation. This is known as "intra-familial discordance." A similar discordance exists between the families forming a community: for example, one family decided to leave while another did not, even though they had had a good relationship with each other before the disaster. This can be called "inter-familial discordance." These two types of discordance often resulted in a community crisis, whereas, in natural disasters, many communities are resilient and cohesiveness immediately improves (Maeda and Oe, 2015). As Kõlves et al. (2013) pointed out, preexisting social capital can be expected to play a very important role in reestablishing communities after natural disasters, preventing serious psychological consequences for community members, including suicide and other self-destructive behaviors. In addition to these discordances, there is another type of discordance between evacuees and the original community members inside and outside Fukushima Prefecture. In towns that received many evacuees, such as Iwaki City, original community members often have complicated and sometimes very negative feelings towards evacuees, for reasons such as the unclear evacuation timeline, the increase in land values, and worsening social circumstances. These situations are thought to be more similar to those of refugees than of evacuees from natural disasters (Maeda and Oe, 2015). Studies (Kessler et al., 1999) revealed that, in spite of the great destruction and psychological damage brought by natural disasters, the prevalence of people at risk of PTSD is estimated to be relatively low compared with that among people involved in man-made disasters, accidents, or crimes (rape, physical assault, motor vehicle accidents, etc.). It is conceivable that these differences in the prevalence of PTSD reflect the strength of community resilience. In natural disasters, people are more likely to help each other, eliciting cohesiveness and community bonds. In contrast, in man-made disasters and crimes, affected people tend to become isolated and vulnerable to posttraumatic symptoms. Although a huge tsunami also struck Fukushima and other prefectures on the Tohoku coast and caused many deaths, the FDNPP disaster seems to have made it difficult to build up community resilience, due to the various types of discordance described above.


Social Stigma and Self-Stigma Among Evacuees

Despite the provision of intensive risk communication, risk perception of radiation among many evacuees, as well as among the public in Japan, remains considerably negative (Fukushima Medical University, 2018). In particular, it is a significant sociological concern that the proportion of people worried about the genetic effects of radiation has exceeded the proportion worried about long-term effects (e.g., leukemia or thyroid cancer) in every survey year (2012–17) (Fukushima Medical University, 2018). Negative risk perception of genetic effects can induce social stigma regarding the marriage or reproduction of people in Fukushima affected by the disaster. Indeed, many evacuees relocating to other prefectures, especially young women, often tried to conceal the fact that they were evacuees from Fukushima Prefecture for fear that people might look down on them (Maeda and Oe, 2015). This internalization of public stigma by the affected people themselves, called “self-stigma,” may produce low self-esteem and self-efficacy and eventually result in depressive symptoms, as Corrigan et al. (2006) pointed out with regard to the stigmatization of mentally disabled individuals. The results obtained by the MHLS showed a close association between psychiatric problems (symptoms of depression or PTSD) and risk perception of genetic effects (Suzuki et al., 2015, 2018a; Oe et al., 2016). Taking this evidence into account, it is reasonable to assume that there are significant interactions between stigma (self-stigma), negative risk perception of the genetic effects of radiation, and psychiatric symptoms. Furthermore, there have recently been many media reports of malicious rumors regarding financial compensation, which may damage many evacuees psychologically and drive them further into isolation. Indeed, several outcomes in the recent MHLS data (K-6, PCL, SDQ, etc.) show that evacuees staying outside Fukushima Prefecture are more likely to have psychological problems than those staying inside Fukushima (Fukushima Medical University, 2018), suggesting a strong influence of public stigma and rumors. Alongside adequate risk communication, antistigma campaigns targeting the general public in Japan should be effective in decreasing such prejudice and in contributing to the mitigation and prevention of psychiatric symptoms among evacuees.

Mental Health Care System in Fukushima

Japan has a long history of numerous natural disasters, including earthquakes, tsunamis, and typhoons, that have caused physical and psychological damage to those affected. In spite of these awful and traumatic experiences, such natural events can enhance community resilience and, in many cases, even strengthen the bonds between residents. In recent decades, knowledge about posttraumatic responses to natural disasters has increased, and many people affected by different types of disasters in Japan have the opportunity to receive quality mental health care, treatment, and substantial support. In fact, after the Great East Japan Earthquake and nuclear accident, several organizations, including nongovernmental organizations, emerged to provide psychological support and health care to affected residents and evacuees. However, they often encounter difficulties seldom seen after natural disasters. For example, because the evacuees are dispersed over a large area, these organizations must cover very broad areas to provide adequate care and support. Before the disaster, the number of mental health professionals working in Fukushima was insufficient, and after the nuclear crisis in 2011, some mental health workers left Fukushima Prefecture out of concern about radiation health effects, further exacerbating the shortage of human resources. On the other hand, workers with a special interest in helping the rehabilitation efforts in Fukushima came from other prefectures; they, however, faced many unexpected difficulties that often bewildered and exhausted them. One mental health care facility, the Fukushima Center for Disaster Mental Health (FCDMH), was newly established after the disaster as a care resource dedicated to the unique difficulties in Fukushima. The history of this facility’s challenges and trials illustrates the essential issues caused by the Fukushima disaster.

FCDMH

The FCDMH, fully funded by the Japanese government, was established in 2012 to provide mental health care to people living in Fukushima Prefecture who were affected by the disaster. Predisaster care facilities across Fukushima Prefecture were not able to provide adequate support, mainly because of the shortage of human resources. Currently, the FCDMH employs about 40 mental health care professionals, including clinical psychologists, social workers, nurses, and other mental health specialists. To cover the broad area affected by the nuclear accident, the FCDMH currently has branches in five locations: Fukushima, Koriyama, Minamisoma, Aizuwakamatsu, and Iwaki. The FCDMH, in cooperation with local health care facilities, provides a wide range of activities, including various types of interventions such as outreach services (home visiting services), psychoeducation, and relaxation techniques. When the activities of the FCDMH started in Fukushima, the situation in the affected area was extremely complicated and confusing, and it was not easy for many of the FCDMH staff to develop good working relationships with the evacuees. Because the preexisting health care facilities in the region were insufficient and no one had experience in dealing with people suffering from the anxieties caused by a nuclear crisis, the staff of the FCDMH struggled to perform and develop their activities; they had to learn gradually through trial and error. Thanks to their continued efforts, however, they succeeded in convincing the preexisting facilities and other stakeholders in the region to recognize and acknowledge the FCDMH as one of the most useful and important care resources currently available in the region for providing the health care and support that evacuees need (Maeda et al., 2014).


While the FCDMH has been conducting different types of activities, the most valuable have been the outreach and visiting services. On average, 4000–5000 affected people are visited directly by FCDMH staff each year, in cooperation with municipal governments. Urgent crisis interventions are typically provided on a priority basis when people at risk of serious mental health problems and/or suicide are identified. These professionals usually conduct risk assessments, share information with other care facilities, and recommend that people at risk visit a nearby psychiatric clinic. In recent years, in addition to the prior focus on evacuees, the professional staff at the FCDMH have also begun to provide mental health care and support to the many public employees working at municipal offices who face burnout owing to the long-term challenges of recovery activities in the remaining disaster-affected areas. While these efforts have gradually produced good results, new, more difficult, and more diverse challenges and demands, which seem beyond the capabilities of the preexisting local health care facilities, are being encountered. Stressful long-term assignments at the FCDMH have gradually exhausted some of the staff and, as a result, many professionals left the FCDMH during its first several years. Therefore, to adequately address the mental health care needs of affected populations, it is important for both the government and policy makers to realize that specialized mental health care facilities are indispensable after a major nuclear accident and should be operated for a long time (a minimum of about 10 years). In addition to the FCDMH staff, many other local workers carry the bulk of the burden of providing actual mental health care and require continuous support from national and local governments, who should ensure a stable working environment, continuous training, and adequate financial remuneration to maintain the required number of staff. This nationwide, long-term support can also enhance the morale and sustained commitment of professional staff in the mental health field in Fukushima.

See also: Fukushima Nuclear Disaster: Monitoring and Risk Assessment; Fukushima Nuclear Disaster: Emergency Response to the Disaster; Radiation Exposures Due to the Chernobyl Accident; Thyroid Cancer Associated with the Chernobyl Accident.

References

Boss, P., 1999. Ambiguous loss: Learning to live with unresolved grief. Harvard University Press, Cambridge, MA.
Bromet, E.J., 2012. Mental health consequences of the Chernobyl disaster. Journal of Radiological Protection 32, 71–75.
Bromet, E.J., Havenaar, J.M., 2007. Psychological and perceived health effects of the Chernobyl disaster: A 20-year review. Health Physics 93, 516–521.
Corrigan, P.W., Watson, A.C., Barr, L., 2006. The self-stigma of mental illness: Implications for self-esteem and self-efficacy. Journal of Social and Clinical Psychology 25, 875–884.
Goto, A., Bromet, E.J., Fujimori, K., 2015. Immediate effects of the Fukushima nuclear power plant disaster on depressive symptoms among mothers with infants: A prefectural-wide cross-sectional study from the Fukushima Health Management Survey. BMC Psychiatry 26, 59.
Goto, A., Bromet, E.J., Ota, M., et al., 2017. The Fukushima nuclear accident affected mothers' depression but not maternal confidence. Asia-Pacific Journal of Public Health 29 (2 suppl), 139S–150S.
Harigane, M., Suzuki, Y., Yasumura, S., et al., 2017. The relationship between functional independence and psychological distress in elderly adults following the Fukushima Daiichi Nuclear Power Plant accident: The Fukushima Health Management Survey. Asia-Pacific Journal of Public Health 29 (2 suppl), 120S–130S.
Imamura, K., Sekiya, Y., Asai, Y., et al., 2016. The effect of a behavioral activation program on improving mental and physical health complaints associated with radiation stress among mothers in Fukushima: A randomized controlled trial. BMC Public Health 16, 1144.
Itagaki, S., Harigane, M., Maeda, M., et al., 2017. Exercise habits are important for the mental health of children in Fukushima after the Fukushima Daiichi disaster. Asia-Pacific Journal of Public Health 29 (2 suppl), 171S–181S.
Itagaki, S., Ohira, T., Nagai, M., et al., 2018. The relationship between sleep time and mental health problems according to the Strengths and Difficulties Questionnaire in children after an earthquake disaster: The Fukushima Health Management Survey. International Journal of Environmental Research and Public Health 30 (4), E633.
Kahan, D.M., Braman, D., Gastil, J., Slovic, P., Mertz, C.K., 2007. Culture and identity-protective cognition: Explaining the white-male effect in risk perception. Journal of Empirical Legal Studies 4, 465–505.
Kahneman, D., Deaton, A., 2010. High income improves evaluation of life but not emotional well-being. Proceedings of the National Academy of Sciences of the United States of America 107, 16489–16493.
Kawakami, N., 2007. National survey of mental health measured by K6 and factors affecting mental health status in research on applied use of statistics and information. In: Health Labour Sciences Research Grant (in Japanese).
Kessler, R.C., Borges, G., Walters, E.E., 1999. Prevalence of and risk factors for lifetime suicide attempts in the National Comorbidity Survey. Archives of General Psychiatry 56 (7), 617–626.
Kessler, R.C., Barker, P.R., Colpe, L.J., et al., 2003. Screening for serious mental illness in the general population. Archives of General Psychiatry 60, 184–189.
Kitajo, T., 2011. Effects of the Fukushima nuclear accident on children in Fukushima. Nishoikaiho 42, 119–121 (in Japanese).
Kõlves, K., Kõlves, K.E., De Leo, D., 2013. Natural disasters and suicidal behaviours: A systematic literature review. Journal of Affective Disorders 146, 1–14.
Kunii, Y., Suzuki, Y., Shiga, T., et al., 2016. Severe psychological distress of evacuees in evacuation zone caused by the Fukushima Daiichi Nuclear Power Plant accident: The Fukushima Health Management Survey. PLoS One 8 (7), e0158821.
Layard, R., Clark, D.M., 2015. Thrive: The power of psychological therapy. Penguin, London.
Lieber, M., 2017. Assessing the mental health impact of the 2011 Great Japan Earthquake, tsunami, and radiation disaster on elementary and middle school children in the Fukushima Prefecture of Japan. PLoS One 12, e0170402.
Lindell, M.K., Barnes, V.E., 1986. Protective response to technological emergency: Risk perception and behavioral intention. Nuclear Safety 27, 457–467.
Maeda, M., Oe, M., 2015. The Great East Japan Earthquake: Tsunami and nuclear disaster. In: Cherry, K.E. (Ed.), Traumatic stress and long-term recovery: Coping with disasters and other negative life events. Springer International Publishing, New York, pp. 71–90.
Maeda, M., Ueda, Y., Hiruta, G., 2014. What is the role of the Fukushima Center for Disaster Mental Health? Japanese Journal of Traumatic Stress 12, 5–12 (in Japanese).
Maeda, M., Ueda, Y., Nagai, M., Fujii, S., Oe, M., 2016. Diagnostic interview study of the prevalence of depression among public employees engaged in long-term relief work in Fukushima. Psychiatry and Clinical Neurosciences 70, 413–420.


Maeda, M., Suzuki, Y., Oe, M., 2017. Psychosocial effects of the Fukushima disaster and current tasks: Differences between natural and nuclear disasters. Journal of the National Institute of Public Health 67, 50–58.
Matsubayashi, T., Sawada, Y., Ueda, M., 2013. Natural disasters and suicide: Evidence from Japan. Social Science & Medicine 82, 126–133.
Matsuishi, T., Nagano, M., Araki, Y., et al., 2008. Scale properties of the Japanese version of the Strengths and Difficulties Questionnaire (SDQ): A study of infant and school children in community samples. Brain & Development 30, 410–415.
Matsuoka, Y., Nishi, D., Nakaya, N., et al., 2012. Concern over radiation exposure and psychological distress among rescue workers following the Great East Japan Earthquake. BMC Public Health 12, 249.
Miura, I., Wada, A., Itagaki, S., et al., 2012. Relationship between psychological distress and anxiety/depression following the Great East Japan Earthquake in Fukushima Prefecture. Japanese Journal of Clinical Psychiatry 41, 1137–1142 (in Japanese).
Miura, I., Nagai, M., Maeda, M., et al., 2017. Perception of radiation risk as a predictor of mid-term mental health after a nuclear disaster: The Fukushima Health Management Survey. International Journal of Environmental Research and Public Health 14 (9), E1067.
Murakami, M., 2018. Importance of risk comparison for individual and societal decision-making after the Fukushima disaster. Journal of Radiation Research 59, ii23–ii30.
Murakami, M., Harada, S., Oki, T., 2017a. Decontamination reduces radiation anxiety and improves subjective well-being after the Fukushima accident. The Tohoku Journal of Experimental Medicine 241, 103–116.
Murakami, M., Sato, A., Matsui, S., Goto, A., Kumagai, A., Tsubokura, M., et al., 2017b. Communicating with residents about risks following the Fukushima nuclear accident. Asia-Pacific Journal of Public Health 29, 74S–89S.
Murakami, M., Hirosaki, M., Suzuki, Y., Maeda, M., Yabe, H., Yasumura, S., et al., 2018a. Reduction of radiation-related anxiety promoted wellbeing after the 2011 disaster: “Fukushima Health Management Survey”. Journal of Radiological Protection 38, 1428.
Murakami, M., Tsubokura, M., Ono, K., Maeda, M., 2018b. New “loss of happy life expectancy” indicator and its use in risk comparison after Fukushima disaster. Science of the Total Environment 615, 1527–1534.
Oe, M., Maeda, M., Nagai, M., et al., 2016. Predictors of severe psychological distress trajectory after nuclear disaster: Evidence from the Fukushima Health Management Survey. BMJ Open 6, e013400.
Oe, M., Maeda, M., Ohira, T., et al., 2018. Trajectories of emotional symptoms and peer relationship problems in children after nuclear disaster: Evidence from the Fukushima Health Management Survey. International Journal of Environmental Research and Public Health 15 (1), E82.
Ohto, H., Maeda, M., Yabe, H., et al., 2015. Suicide rates in the aftermath of the 2011 earthquake in Japan. Lancet 386, 1727.
Orui, M., Suzuki, Y., Maeda, M., Yasumura, S., 2018. Suicide rates in evacuation areas after the Fukushima Daiichi nuclear disaster. Crisis 39, 353–363.
Sawano, T., Nishikawa, Y., Ozaki, A., Leppold, C., Tsubokura, M., 2018. The Fukushima Daiichi nuclear power plant accident and school bullying of affected children and adolescents: The need for continuous radiation education. Journal of Radiation Research 59, 381–384.
Shigemura, J., Tanigawa, T., Saito, I., Nomura, S., 2012. Psychological distress in workers at the Fukushima nuclear power plants. JAMA 308, 667–669.
Shigemura, J., Tanigawa, T., Nishi, D., et al., 2014. Associations between disaster exposures, peritraumatic distress, and posttraumatic stress responses in Fukushima nuclear plant workers following the 2011 nuclear accident: The Fukushima NEWS Project study. PLoS One 9, e87516.
Slovic, P., 1987. Perception of risk. Science 236, 280–285.
Stellman, J.M., Smith, R.P., Katz, C.L., et al., 2008. Enduring mental health morbidity and social function impairment in World Trade Center rescue, recovery, and cleanup workers: The psychological dimension of an environmental health disaster. Environmental Health Perspectives 116, 1248–1253.
Suzuki, Y., Yabe, H., Yasumura, S., Ohira, T., Niwa, S., Ohtsuru, A., et al., 2015. Psychological distress and the perception of radiation risks: The Fukushima Health Management Survey. Bulletin of the World Health Organization 93, 598–605.
Suzuki, S., Murakami, M., Nishikiori, T., Harada, S., 2018a. Annual changes in the Fukushima residents' views on the safety of water and air environments and their associations with the perception of radiation risks. Journal of Radiation Research 59, ii31–ii39.
Suzuki, Y., Takebayashi, Y., Yasumura, S., et al., 2018b. Changes in risk perception of the health effects of radiation and mental health status: The Fukushima Health Management Survey. International Journal of Environmental Research and Public Health 15, E1219.
Takebayashi, Y., Lyamzina, Y., Suzuki, Y., Murakami, M., 2017. Risk perception and anxiety regarding radiation after the 2011 Fukushima nuclear power plant accident: A systematic qualitative review. International Journal of Environmental Research and Public Health 14, 1306.
Ueda, Y., Yabe, H., Maeda, M., et al., 2016. Drinking behavior and mental illness among evacuees in Fukushima following the Great East Japan Earthquake: The Fukushima Health Management Survey. Alcoholism, Clinical and Experimental Research 40, 623–630.
United Nations Scientific Committee on the Effects of Atomic Radiation, 2014. Sources, effects and risks of ionizing radiation. In: UNSCEAR 2013 report to the General Assembly with scientific annexes. United Nations, New York.
Yabe, H., Suzuki, Y., Mashiko, H., et al., 2014. Psychological distress after the Great East Japan Earthquake and Fukushima Daiichi nuclear power plant accident: Results of a mental health and lifestyle survey through the Fukushima Health Management Survey in FY2011 and FY2012. Fukushima Journal of Medical Science 60, 57–67.
Yasumura, S., Hosoya, M., Yamashita, S., et al., 2012. Study protocol for the Fukushima Health Management Survey. Journal of Epidemiology 22, 375–383.

Relevant Websites

Fukushima Medical University, 2018. http://fmu-global.jp/download/outline-of-mental-health-and-lifestyle-survey-for-fy-2016-2/?wpdmdl=4417 (Accessed 29 October 2018).
Japanese Cabinet Office, 2017. Statistical data regarding suicide. http://www8.cao.go.jp/jisatsutaisaku/toukei/pdf/saishin_shinsai.pdf (Accessed 29 October 2018) (in Japanese).

Functional ‘Omics and Molecular Analysis of a Subtropical Harmful Algal Bloom Species, Karenia brevis☆
TI McLean and M Pirooznia, University of Southern Mississippi, Hattiesburg, MS, United States; and Johns Hopkins University, Baltimore, MD, United States
© 2019 Elsevier B.V. All rights reserved.

Abbreviations

BLAST Basic local alignment search tool
BLASTn A BLAST search of nucleotide databases using a nucleotide query
BLASTx A BLAST search of protein databases using a translated nucleotide query
CDK Cyclin-dependent kinase
cDNA Complementary DNA
CEGMA Core eukaryotic genes mapping approach
cob Cytochrome b
cox Cytochrome c oxidase
DASH Drosophila, arabidopsis, synechocystis, human
DNA Deoxyribonucleic acid
ER Endoplasmic reticulum
EST Expressed sequence tag
ESTdb EST database
G1 phase Gap 1 phase of the cell cycle
GAPDH Glyceraldehyde 3-phosphate dehydrogenase
GoMex Gulf of Mexico
HAB Harmful algal bloom
KR Ketoreductase
KS Beta-ketoacyl synthase
M phase The phase in which cells divide (mitosis)
Mb Megabases (millions of bases)
mRNA Messenger RNA
NAD Nicotinamide adenine dinucleotide
NADP Nicotinamide adenine dinucleotide phosphate
NCBI National Center for Biotechnology Information
NOAA National Oceanographic and Atmospheric Administration
nr Nonredundant
NSP Neurotoxic shellfish poisoning
PCNA Proliferating cell nuclear antigen
PCR Polymerase chain reaction
PKS Polyketide synthase
PPR Pentatricopeptide repeat
RNA Ribonucleic acid
rRNA Ribosomal RNA
Rubisco Ribulose-1,5-bisphosphate carboxylase oxygenase
S phase DNA synthesis phase of the cell cycle
SL Spliced leader
tRNA Transfer RNA

☆ Change History: February 2019. TI McLean updated text and further reading. This is an update of T.I. McLean, M. Pirooznia, Functional Genomics and Molecular Analysis of a Subtropical Harmful Algal Bloom Species, Karenia brevis, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 816–828.


Introduction

The power and utility of functional ‘omics analyses (e.g., genomics, transcriptomics, proteomics, etc.) have deepened the understanding of biology in such areas as how organisms develop, how they regulate their metabolisms, and how they interact with their environments and each other. The application of ‘omics-related techniques and analyses, which initially focused on biomedically important model organisms, has now been incorporated into the study of organisms representing almost all levels of taxonomy. Over the last 15 years, the sphere of application has expanded to include dinoflagellates. Dinoflagellates are a very important component of aquatic ecosystems, but the rush to apply advanced molecular tools to their study has been dampened by the complexity and singularity of their genomes (see below). Starting in the 1980s, a few labs successfully used molecular biological techniques to study a handful of isolated genes or proteins of interest in some dinoflagellate species. The first large-scale ‘omics-based studies, which did not take place until many years later, focused on a genomics analysis that led to the creation and interrogation of a number of expressed sequence tag (EST) libraries. Since then, reference transcriptomes and transcriptomes generated from different cultures under specific conditions have been produced and analyzed. Proteomic and metabolomic studies are just beginning to ramp up, and these studies may hold the most promise yet for understanding dinoflagellate biology. Investigators hope that using functional ‘omics approaches to study different species of dinoflagellates will help identify the relevant genes and proteins, and determine their roles, in such aspects of dinoflagellate biology as growth control, metabolism, regulation of gene expression, plastid function and evolution, and toxin production. This article reviews some new molecular data and briefly describes some examples of environment–gene interactions associated with a subtropical dinoflagellate, Karenia brevis, the causative agent of a harmful algal bloom (HAB).

The Dinoflagellate, Karenia brevis

From an ecological perspective, dinoflagellates are important for three reasons. First, they make up a large percentage of the biomass in marine and freshwater ecosystems, and, with ~50% of dinoflagellates capable of photosynthesis, they also contribute a large percentage of the primary productivity of these systems. Heterotrophic (and mixotrophic) dinoflagellates also serve as critical intermediates in the trophic transfer of energy and nutrients within multiple food webs. Second, dinoflagellates form many important symbiotic relationships with invertebrates and other protistan organisms; the most notable example is the endosymbiotic relationship between corals and zooxanthellae (dinoflagellates). Third, some have the ability to create HABs, often referred to as red tides. The last reason is the primary impetus for studying K. brevis (Davis) G. Hansen and Moestrup (= Ptychodiscus brevis, Gymnodinium breve). Karenia brevis is an unarmored dinoflagellate endemic to the Gulf of Mexico (GoMex). It is naturally found in offshore GoMex waters at background concentrations of 1000 cells L⁻¹, but during late summer and fall K. brevis can form large (>1000 km²), dense (>0.05–2 × 10⁷ cells L⁻¹) blooms in nearshore waters that can be nearly monospecific. Associated with K. brevis blooms is the presence of brevetoxins, a suite of neurotoxins that can kill a wide range of sea life and affect human health. Brevetoxins can bioaccumulate in shellfish, which, if consumed, can cause neurotoxic shellfish poisoning (NSP) in humans. NSP produces both gastrointestinal and neurological symptoms that typically last only a few days. Unlike other algal toxins, however, brevetoxins can affect human health via a second route: they are released from K. brevis cells when the cells break open, and the toxins can then become aerosolized. Exposure to aerosolized brevetoxin can irritate the eyes, nose, and throat and can induce breathing difficulties. Individuals who already have breathing difficulties, such as asthmatics, are particularly susceptible to aerosolized brevetoxins. Although it has not been studied directly, the possibility that long-term or repeated exposure to inhaled brevetoxins can produce chronic effects in humans is suggested by studies involving laboratory rats and postmortem assays of marine wildlife killed by brevetoxin exposure. The fact that brevetoxins can form deoxyribonucleic acid (DNA) adducts after direct exposure of cultured cells or after intratracheal exposure in live animals suggests that even acute exposures could potentially be carcinogenic. Incidents of K. brevis blooms in the GoMex date back hundreds of years. Historically, occurrences have been sporadic and infrequent. More recently, the frequency, duration, and coverage of blooms appear to be trending upward, causing ecological devastation and economic hardship in affected areas. The mechanisms responsible for bloom formation and maintenance are likely different, if not unique, for each geographic area in which K. brevis can bloom. Multiple theories as to the cause of K. brevis blooms have been proposed, many of which involve complex physical, chemical, and biological interactions. Some of the models can be supported by direct or circumstantial evidence, but none has satisfactorily explained the formation or maintenance of every bloom year over year. Much more work is clearly needed to test and to augment the current models.
In hopes of contributing to the body of knowledge surrounding K. brevis, molecular techniques are now being employed. Understanding K. brevis at the molecular level should facilitate attempts to decipher how this organism behaves physiologically and genetically, which, in turn, may help us comprehend how and why it produces toxins and forms blooms. This goal is based on the understanding that all aspects of an organism's biology are underpinned by the complement of genes it possesses and the mechanisms by which the expression of those genes is regulated. The first step is to determine what genes the organism has. Once these are identified, it will be necessary to unravel the intracellular mechanisms and environmental conditions that combine to regulate their expression. Then it may be possible to determine what the genes contribute, and what limitations they impose, in establishing the biogeography of K. brevis and in regulating its metabolism, growth, toxin production, and other aspects of its biology.


Genomics

Dinoflagellate genomes have been estimated to range from 1000 to 215,000 Mb in size; the K. brevis genome contains over 100,000 Mb distributed over 121 chromosomes. The unusual size and makeup of dinoflagellate genomes have prevented the full genome sequencing of all but seven species/clade members to date. (All seven genomes belong to the genus Symbiodinium, whose members are ecologically important as coral endosymbionts but, more importantly here, have some of the smallest dinoflagellate genomes known, around 1100–1500 Mb in length.) In addition to their size, dinoflagellate chromosomal composition and behavior are unique for a number of reasons, only a few of which are described here. Dinoflagellate chromosomes are morphologically indistinguishable: they have no heteromorphisms along their length (for example, no constrictions that would correspond to a tightly compacted centromere), but they are linear and do have telomeres that contain a characteristic repeated sequence. Throughout the entire cell cycle, the chromosomes are permanently condensed into a compact, liquid-crystal structure that is not predicated on the formation of nucleosomes, as it is in all other eukaryotes. The amount of protein relative to DNA in a dinoflagellate nucleus is closer to 1:1 than to the 10:1 found in other eukaryotes. The reduced protein content is due to the lack of histones. Instead, the few proteins in a dinoflagellate nucleus are histone-like proteins and dinoflagellate/viral nucleoproteins, likely acquired via horizontal gene transfer from bacteria or from viral infections, respectively. A species-specific percentage of the thymine in the DNA is substituted with the unusual base 5-hydroxymethyluracil. Lastly, dinomitosis, the process of mitotic cell division in dinoflagellates, proceeds with the following modifications: the nuclear envelope does not break down, and an extranuclear spindle coordinates the separation of duplicated chromosomes and daughter nuclei. The molecular study of dinoflagellates is still very much in its infancy, but data accumulated from studies of various dinoflagellates are beginning to form a cohesive picture of the molecular nature of their genomes. At least some genes have been found at very high copy number: ~1000 copies of the luciferin-binding protein gene and ~5000 copies of the peridinin-chlorophyll a-binding gene in Lingulodinium polyedrum (formerly Gonyaulax polyedra), ~150 copies of the form II Rubisco gene in Prorocentrum minimum, and ~280 ± 32 copies of the PCNA gene in K. brevis. In the first two examples, the gene copies appear to be arranged in tandem. In K. brevis, the tandem arrangement of any gene has not been specifically assessed. For the PCNA gene, at least 15% of these copies appear to be the result of retrotransposition. This result not only demonstrates that transposition events can occur in dinoflagellates but also indicates that copies of this gene, at least, are likely spread throughout the genome rather than all clustered in a single tandem repeat. If the majority of genes are shown to have similarly high copy numbers, it could go a long way toward explaining why dinoflagellate genomes are so large. Based on analysis of K. brevis expressed sequence tags (ESTs), it does in fact appear that many genes are represented by multiple copies in the genome.
Such evidence includes the presence of many single nucleotide polymorphisms within a significant percentage of sequence clusters and the presence of diverged but highly similar copies of particular genes, indicative of recent (and sometimes multiple) gene duplication events. Additionally, the study that described the many copies of the PCNA gene also detected between 2 and 100 copies of 10 other DNA replication-related genes in K. brevis. These numbers are likely underestimates of the actual copy number of each gene, but the results support the hypothesis that many (perhaps all) genes in a dinoflagellate genome are present in multiple copies. Alternatively, the high copy number of replication-related genes may specifically reflect the high demand for replication efficiency in these organisms, because they possess such large genomes, and/or may provide a competitive growth advantage relative to other organisms by avoiding a protracted S phase.
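As a rough plausibility check, the copy-number hypothesis can be examined with simple arithmetic. The sketch below multiplies an assumed number of distinct genes by an assumed mean copy number and mean gene length; all three inputs are illustrative assumptions rather than measured values, so the result shows only that uniformly high copy numbers could account for a substantial fraction of a ~100,000 Mb genome.

```python
# Back-of-envelope check: can high gene copy numbers account for a
# ~100,000 Mb dinoflagellate genome? All inputs are illustrative
# assumptions for this sketch, not measured values from the article.

GENOME_SIZE_BP = 100_000e6     # ~100,000 Mb reported for K. brevis
DISTINCT_GENES = 40_000        # assumed number of distinct genes
MEAN_COPY_NUMBER = 280         # e.g., the ~280 copies observed for PCNA
MEAN_GENE_LENGTH_BP = 2_000    # assumed average gene length (bp)

copy_bp = DISTINCT_GENES * MEAN_COPY_NUMBER * MEAN_GENE_LENGTH_BP
fraction = copy_bp / GENOME_SIZE_BP

print(f"Gene copies would occupy {copy_bp / 1e9:.1f} Gb "
      f"({fraction:.0%} of the genome)")
# 40,000 genes x 280 copies x 2000 bp = 22.4 Gb, or about 22% of a
# 100 Gb genome, so high copy numbers alone could plausibly account
# for a large share of the genome's bulk.
```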

EST Libraries for Functional Genomics

As of 2018, at least four EST libraries have been created from sequencing K. brevis complementary DNA (cDNA) collections (Table 1). Each was constructed from cells grown under different culture conditions or using different cultures of K. brevis. For the two libraries whose unigene sequences underwent BLAST analysis against the GenBank nr database (the Dark 1 and multi-strain libraries), the level of returned annotations was characteristically low. This low annotation success rate is not unique to K. brevis: dinoflagellate EST libraries have a much lower percentage of identifiable genes than those of other taxa of microalgae, and of eukaryotes in general. The low annotation rate is problematic for ‘omics-based analysis because even genes that are highly expressed, and therefore assumed to be important to the dinoflagellate, are unidentifiable. Note that two of the top five most abundantly expressed genes in the multi-strain library have no annotation (Table 1). It is unclear whether the low annotation success is due to the expression of truly novel genes in dinoflagellates or whether homologous sequences in dinoflagellates have diverged sufficiently to make similarity matching ineffective with standard bioinformatics parameters and search tools. It is also possible that many of these ESTs will never find a match because the sequences are not transcribed from genes but are noncoding transcripts that have some regulatory role (e.g., parts of long noncoding RNAs, antisense RNAs, pre-microRNAs, etc.) or exist as “transcriptional noise.” Further work (see below) is building a case that dinoflagellate gene expression is not, or at least not primarily, regulated at the transcriptional level but rather at one or more posttranscriptional and/or translational steps. It may be that most dinoflagellate genes, as well as other genomic regions, are under a constant state of active transcription, producing a large pool of RNAs that the cells can identify, sort, and use as necessary. Having a ready supply of all transcripts eliminates the need for signaling to the nucleus to activate transcription. Such a short-circuiting of normal environment–gene interactions may also provide a competitive advantage by allowing a more rapid response to a change in an environmental condition (e.g., the sudden availability of a limiting nutrient) via shifts in the translation of available mRNAs. These hypotheses need further exploration.
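The “% annotated” figures of the kind reported in Table 1 are typically derived by similarity searching each unigene against a protein database and counting queries with at least one acceptable hit. The sketch below assumes a BLASTx tabular report generated beforehand; the file names and the unigene total are hypothetical, and the e-value cutoff mirrors the e ≤ .0001 threshold noted in the table.

```python
# Minimal sketch: estimate the fraction of unigenes with a BLASTx hit.
# Assumes a tabular report produced beforehand with something like:
#   blastx -query unigenes.fasta -db nr -evalue 1e-4 -outfmt 6 -out hits.tsv
# File names and the unigene total below are hypothetical.

def annotation_rate(hits_tsv, total_unigenes, evalue_cutoff=1e-4):
    """Fraction of unigenes with at least one hit at or below the cutoff."""
    annotated = set()
    with open(hits_tsv) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query_id = fields[0]        # column 1 of outfmt 6: query id
            evalue = float(fields[10])  # column 11 of outfmt 6: e-value
            if evalue <= evalue_cutoff:
                annotated.add(query_id)
    return len(annotated) / total_unigenes

# Example: roughly 29% annotated, as reported for the Dark 1 library.
# print(f"{annotation_rate('hits.tsv', 5280):.0%} of unigenes annotated")
```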

Table 1 Comparison of four Karenia brevis expressed sequence tag (EST) libraries

                 Dark 1 library                       Dark 2 library
Total ESTs       7001 cDNAs sequenced from 5′ end     14,163 cDNAs sequenced from 5′ and 3′ ends
Singletons       4399a/3873                           4263
Clusters         881a/894                             2992
Unigenes         5280a/4767                           7255
% Annotated      29% (e ≤ .0001)

This result indicates that the carcinogenic potency of the chemicals maintained the same ranking in the two sexes. It also parallels the good correlation between the tumor profiles induced in males and females. The same analysis was applied to study the interspecies differences in carcinogenic potency. The TD50s were averaged over each species and then transformed into logarithms. The correlation coefficient between the log(TD50) values of rat and mouse was r = .81 (n = 171; p

lithium = uranium > selenate > boron. Studies have identified lithium toxicity levels in certain organisms (Table 7). Effective concentration (EC50) is the concentration of a material in water, from a single dose, that is expected to cause a biological effect in 50% of a group of test animals.

Table 7 Test results for environmental (aquatic) toxicity

Species     Latin name (common name)                            Compound  Exposure duration  EC50 (mg L⁻¹)  LC50 (mg L⁻¹)
Mollusc     Dreissena polymorpha (zebra mussel)                 LiCl      24 h               –              185–232
Crustacean  Daphnia magna (water flea)                          Li2SO4    24 h               33–197         –
Worm        Tubifex tubifex (tubicid worm)                      Li2SO4    24–96 h            9.3–44.8       –
Fish        Pimephales promelas (fathead minnow)                LiCl      26 days            1–6.4          1.2–8.7
Fish        Tanichthys albonubes (white cloud mountain minnow)  LiCl      48 h               –              9.2–62

Source: From Aral H and Vecchio-Sadus A (2008) Toxicity of lithium to humans and the environment – A literature review. Ecotoxicology and Environmental Safety 70: 349–356, with permission from Elsevier.


Lethal concentration (LC50) is the amount of a substance in air that, when inhaled over a specified period of time, is expected to cause death in 50% of a defined animal population. The presence of sodium is sufficient to prevent lithium toxicity to Pimephales promelas (fathead minnow), Ceriodaphnia dubia, and Elimia clavaeformis (a freshwater snail) in most natural waters. The acute environmental effect concentration (measured as EC50) for Daphnia magna was determined to be 33–197 mg L⁻¹, which is at least 1000 times higher than the level in freshwater. Both lithium chloride and lithium sulfate have high water solubility, and these compounds dissociate in aqueous environments. No lithium compounds are classified for adverse environmental effects. No data regarding bioaccumulation of lithium were found but, based on its low affinity for particles, lithium is not expected to bioaccumulate. In Salar de Uyuni in Bolivia, lithium is dispersed over a 9000 km² salt flat at 3600 m altitude in the Andes. Salar de Uyuni is classified by the tourist industry as a land of outstanding natural beauty. The area becomes a flamingo breeding ground from December to February; rain floods the surface of the salar between January and March. The discharge of the Rio Grande into the salar, adjacent to where the lithium concentration is highest, creates a permanent lagoon area used by the birds.
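EC50 and LC50 values such as those in Table 7 are usually estimated by fitting a sigmoidal dose–response curve to the fraction of test animals affected at each concentration. The sketch below fits a two-parameter log-logistic curve with scipy; the concentration–response data are made up for illustration and are not the Table 7 measurements.

```python
# Minimal sketch: estimate an EC50 by fitting a two-parameter
# log-logistic dose-response curve. The data below are made up for
# illustration; they are not the Table 7 measurements.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Fraction of test animals affected at a given concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

conc = np.array([10.0, 30.0, 60.0, 100.0, 200.0, 400.0])  # mg/L, hypothetical
frac_affected = np.array([0.05, 0.20, 0.45, 0.60, 0.85, 0.97])

(ec50, slope), _ = curve_fit(log_logistic, conc, frac_affected, p0=[100.0, 1.0])
print(f"Estimated EC50 = {ec50:.0f} mg/L (slope = {slope:.2f})")
# The same fit applied to mortality data would yield an LC50 instead.
```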

Terrestrial Environment

Lithium is taken up by all plants, although it is not an essential nutrient for their growth and development. In some cases, stimulation of plant growth has been observed. Lithium is relatively toxic to citrus plants. The amount of lithium in plants usually lies between 0.2 and 30 ppm, owing to preferential uptake or rejection across species. Plants such as Cirsium arvense and Solanum dulcamera accumulate lithium at concentrations three- to sixfold higher than other plants. Nightshade species may reach concentrations of up to 1 mg g⁻¹. Salt-tolerant plants such as Carduus arvense and Holoschoenus vulgaris may reach lithium contents of 99.6–226.4 mg g⁻¹. Lithium concentrations vary widely, from 0.01 ppm (dry basis) in bananas to 55 ppm in oats. There appears to be greater uptake of lithium by plants in acidic soils. Soil acidity increases the solubility of the heavier metallic elements such as iron, nickel, cobalt, manganese, and copper, and to some extent also aluminum, lead, and cadmium. Plant lithium levels are directly and significantly correlated with the concentrations of these elements. Calcium can be added to soils to prevent toxicity and the uptake of lighter minerals. Lithium in plants and animals interacts with sodium and potassium as well as with enzymes requiring magnesium. Its complexing properties are stronger than those of Na⁺ and K⁺ but weaker than those of Mg²⁺. At the concentrations attained during therapy, Li⁺ and Mg²⁺ are present in comparable amounts; thus, Li⁺ binds to sites not occupied by Mg²⁺. Once all Mg²⁺ sites are saturated, Li⁺ substitutes for Na⁺ and K⁺. All alkali metal ions are exchanged more than 1000 times more rapidly than Mg²⁺; this may explain why lithium preferentially affects the activity of Mg²⁺-containing enzymes. Chlorophyll mutants were produced in the progeny of Pisum abyssinicum plants treated with lithium nitrate in addition to other nitrates (Cu, Zn, Cr, Mn, Fe, Co, Ni, and Al); this was most likely due to the presence of the other, confirmed mutagenic metal nitrates. Yeast (Saccharomyces cerevisiae) has been shown to take up limited amounts of lithium, with growth inhibition exhibited at high levels (115–400 ppm). A high ability to accumulate lithium was exhibited by strains of the bacteria Arthrobacter nicotianae (1.0 mg g⁻¹ dry weight of cells) and Brevibacterium helvovolum (0.7 mg g⁻¹ dry weight of cells). Exposure of earthworms (Eisenia fetida) to lithium chloride produced mortality at concentrations of approximately 70 mg kg⁻¹ soil. A limited investigation of the levels of lithium and other elements in major emissions and waste streams was conducted in Denmark in 2001. In the Danish study, lithium was found in all environmental samples, especially compost, wastewater, sewage sludge, and sediment from road runoff retention basins (Table 8). The concentration in effluent from wastewater treatment plants was low and was not considered acutely toxic to aquatic organisms.

Mining-Related Pollution

Mining and mineral-processing industries producing lithium minerals, metals, and salts contribute to the lithium burden in the environment. The processing of lithium-containing minerals such as spodumene generally comprises crushing, wet grinding in a ball mill, sizing, gravity concentration, and flotation using a fatty acid (oleic acid) as the collector. The major lithium mineral in lithium ore is spodumene, which is considered insoluble in water and dilute acids. However, a small amount of dissolution may occur during processing of the ore, especially in the grinding and flotation stages, where some dilute (0.01 M) sulfuric acid is used (see Table 6). Tailings are discharged to storage areas, and the decanted water is usually recovered for reuse. Lithium concentrations in tailing dams increase gradually; the dissolved lithium found in the tailing dams of lithium mineral beneficiation plants can be as high as 15 mg L⁻¹. The repeated use of tailing waters without any treatment further increases the dissolved lithium levels in these waters. Some lithium minerals are more soluble than others. The manufacturing of lithium chemicals can also contribute to the lithium burden in the environment. Most lithium chemicals are more soluble than lithium minerals; therefore, the risk they pose to the environment could be higher than that introduced by the lithium minerals (see Table 5).

Consumer-Created Pollution

Man-made lithium pollution stems mainly from the use of lithium-based grease in vehicles and from the irresponsible disposal of lithium batteries. Lithium grease is a lubricant to which lithium hydroxide monohydrate is added to give the lubricant higher

Table 8 Levels of lithium in selected emissions and waste products in Denmark

Emission/waste type                                                Li concentration
Compost:
  Compost from household waste                                     4.64 mg kg⁻¹
  Compost from garden waste                                        4.69 mg kg⁻¹
Landfill leachate:
  Landfill 1                                                       0.2 mg L⁻¹
  Landfill 2                                                       0.049 mg L⁻¹
Stack gas from municipal solid waste incineration:
  Incinerator 1, semidry gas cleaning
  Incinerator 2, wet gas cleaning
Municipal solid waste gas cleaning residuals:
  Landfill leachate, semidry gas cleaning
  Landfill leachate, wet gas cleaning
Wastewater and sludge from municipal wastewater treatment plant:
  Plant 1, effluent
  Plant 2, effluent
  Plant 1, sludge
  Plant 2, sludge
Road runoff retention basins, sediment:
  Motorway 1
  Motorway 2

49% of the 3110 counties, a >44% increase from the number of counties that reported their presence in 1998. Increases in the number of reported cases and in the incidence of Lyme disease have been described globally, including in Canada and Europe. In Europe, where Lyme disease is not reportable in all countries, differences in surveillance and laboratory diagnostic approaches make comparisons difficult. About 65,500 cases are diagnosed annually in Europe and, of these, >26,000 are estimated to occur in France. During recent decades the number of cases has increased in several countries, a trend accompanied by the expansion of the geographical areas reporting infections.

The Pathogen and the Tick

Lyme disease is a zoonosis, a term used to describe infectious diseases caused by pathogens that normally are confined to animal reservoirs but occasionally gain the ability to cross the species barrier and cause human disease. In 1982, Burgdorfer and coworkers identified the etiologic agent of Lyme disease in the midgut of I. dammini ticks that they collected on Shelter Island, New York, an area endemic for Lyme disease. The pathogen, a Gram-negative spirochetal bacterium, was subsequently named Borrelia burgdorferi.

☆ Change History: April 2019. RA Stein prepared this update. Most sections of this article have been updated or changed. This is an update of R.A. Stein, Lyme Disease, Editor(s): J.O. Nriagu, Encyclopedia of Environmental Health, Elsevier, 2011, Pages 528–534.


Lyme disease is caused by certain members of the B. burgdorferi sensu lato complex, a group of spirochetes that includes over 20 genospecies. While Borrelia burgdorferi sensu stricto (Borrelia burgdorferi) was initially widely reported as the only genospecies involved in disease in North America, additional genospecies, including B. americana, B. andersonii, B. bissettii, B. garinii, B. kurtenbachii, and B. mayonii, have more recently been implicated. B. afzelii and B. garinii are the predominant genospecies linked to human disease in Europe and Asia, but B. burgdorferi, B. spielmanii, and B. bavariensis have also been implicated. After the pathogen was discovered, investigators showed that ticks of the genus Ixodes, also known as “hard ticks,” transmit the bacterium between hosts, and this represents the only known route by which humans become infected naturally. The main vectors are I. scapularis (the blacklegged tick) in northeastern and central North America, I. pacificus (the western black-legged tick) west of the Rocky Mountains, I. ricinus (the European sheep tick) in Europe, and I. persulcatus (the taiga tick) in Asia. Ixodes ticks are three-host ticks, which means that, as they complete their life cycles, they feed on three different animal species. Their life cycle includes three post-egg stages (larva, nymph, and adult) and lasts an average of 2 years in the United States. At each of its three developmental stages, the tick takes a blood meal that lasts for 3–5 consecutive days. These blood meals are separated by longer interstadial developmental stages, during which the ticks are independent of their hosts and their survival is shaped by environmental conditions. When larvae, nymphs, and adults seek a host, they adopt the “questing” position, climbing up the vegetation and extending their front legs as they wait for a passing host. During attachment to the host, ticks inject histamine-binding proteins, cytokine inhibitors, complement inhibitors, and anticoagulants, which explain the painless bite and the absence of an inflammatory response. Spirochetes from an infected animal enter the tick hemolymph, travel to the midgut, and may be transferred to a new host during a subsequent blood meal. The life cycle of the tick starts in the spring, when females lay eggs that are deposited on the ground. Ticks are uninfected when they hatch from eggs, but they may acquire the bacterium by feeding on reservoir hosts. During the summer, the six-legged larva that emerges after hatching climbs into the vegetation, starts its questing, or host-seeking, behavior, and attaches to small animals or birds to take its first blood meal. These hosts, if they have previously been infected with B. burgdorferi, now have an opportunity to transmit the spirochete to the tick during the larval feeding. The larvae subsequently detach and, on the ground, molt to the nymph stage the following spring. Nymphs feed on small mammals or birds. Four species of small mammals (the white-footed mouse (Peromyscus leucopus), the Eastern chipmunk (Tamias striatus), the short-tailed shrew (Blarina brevicauda), and the masked shrew (Sorex cinereus)) are thought to be responsible for infecting ~80%–90% of the ticks, and their ecology has attracted increasing interest in the context of the disease. During this second blood meal the ticks transmit the pathogen through their saliva, perpetuating the cycle. Infected nymphs are very efficient vectors, and a single bite is sufficient to infect a mouse.
During the autumn of the second year, the nymph falls to the ground and molts into the adult tick; adults subsequently climb to the tips of vegetation and wait for larger hosts, such as the white-tailed deer, to which they can attach and on which they feed. After mating, the female ticks lay eggs, and the cycle starts again. The nymph is the stage most likely to infect humans, who are not implicated in the B. burgdorferi cycle but represent dead-end hosts. During its complex life cycle, B. burgdorferi has to adapt to environments with very different characteristics. For example, it has to survive both in mammals, which have a body temperature of 37–39 °C, and in ticks, whose body temperature varies with environmental conditions. When the complete sequence of the B. burgdorferi genome became available in 1997, it revealed several characteristics that explain this pathogen's ability to adapt to environmental changes. In addition to its linear chromosome, B. burgdorferi has at least 21 extrachromosomal DNA elements, the largest number for any bacterium examined to date, and these harbor many of the genes involved in pathogenesis. Some of the B. burgdorferi survival strategies include adaptive responses that facilitate survival at various temperatures or pH values; the induction of anti-inflammatory cytokines for active immune evasion; localization to the so-called immunologically privileged organs, such as the nervous system and the eyes, where the bacteria are less accessible to components of the immune system; and the ability to undergo mutation and recombination to evade host antibody responses.

Forest Fragmentation and Forest Patches

The ecology of Lyme disease is very complex. Lyme disease requires that infected hosts, ticks, and susceptible humans coexist in close spatial proximity. This can come about in several ways, such as an increase in the number of infected ticks, an increased abundance of mouse and deer populations, or the relocation of humans to the vicinity of habitats that harbor ticks or animal reservoirs. Several anthropogenic ecosystem changes that alter the spatial structure of forests have been linked to the recent worldwide surge in Lyme disease. These include deforestation, reforestation, forest fragmentation, and urbanization. The expansion of human habitats into wildland areas, such as forests, has received particular attention, and several studies have reported that proximity to forests is a good predictor of human Lyme disease risk. Deforestation causes a 2%–3% annual loss of forest cover worldwide. In many locations globally, forested areas are increasingly being replaced with forest patches separated by suburban residential settlements. Based on current deforestation rates, forested regions are predicted to disappear within 150 years in Africa and within 250 years in South America. In a study that used satellite imaging in a suburban region around Lyme, Connecticut, Brownstein and colleagues reported that a decrease in the size of forest patches and an increase in the distance between patches were associated with higher tick densities and a higher prevalence of tick infections, but they also found a lower risk of human Lyme disease in the fragmented areas. Allan and collaborators studied highly fragmented forest patches in Dutchess County in southeastern New York State and revealed that nymphal tick density and


nymphal infection with B. burgdorferi were inversely correlated with patch area. The authors underscored the importance of reducing the fragmentation of deciduous forests, especially in areas endemic for Lyme disease, and advised against the establishment of forest fragments smaller than 1–2 ha, which are prone to low diversities of vertebrate hosts and high densities of white-footed mice, a very competent reservoir host. White-footed mice are overrepresented in small forest patches, especially those smaller than 2 ha, which are generally too inhospitable to harbor the larger species that, in natural environments, prey on mice and control their numbers. Tran and colleagues examined the incidence of Lyme disease between 2002 and 2006 in 13 northeastern US states with different landscape variables and revealed that greater fragmentation between forests and residential areas, together with climatic factors, led to higher Lyme disease incidence. Some studies found contradictory effects when they interrogated the link between forest fragmentation, the density of infected nymphs, and human Lyme disease incidence. It has been proposed that these seemingly contradictory patterns could be explained by variations in human behavior between landscapes. Larsen and colleagues suggested that a feedback cycle between forest fragmentation and Lyme disease risk could be a confounding factor in statistical analyses and could at least partly reconcile some of the divergent conclusions. Using longitudinal data analyses from 12 states and the District of Columbia, the authors revealed that, in states with a high incidence of Lyme disease, a higher incidence of the disease reduces the proportion of the population that settles at the wildland–urban interface, pointing for the first time toward a behavioral response by humans to Lyme disease risk. In a study that performed biological modeling of different land-use scenarios, Li and colleagues found that increasing woodland fragmentation strongly influences Lyme disease risk and underscored the importance of transitional areas of vegetation. The models predicted that, with increased woodland fragmentation, the prevalence of nymphal infection and the density of infectious nymphs increase in woodlands adjacent to non-vegetated areas. This is explained by the lower survival of infectious ticks and the lower prevalence of infection in reservoir hosts in grassland areas, which act as a sink for ticks. Based on these results, the authors hypothesized that the presence of grassland beside woodland would lower Lyme disease risk in the woodland because of the lower tick survival rates, and that strategies such as mowing or burning the grassland may increase Lyme disease risk in the adjacent woodland. Perturbations of forest ecosystems shape the dynamics of several infectious diseases. In undisturbed habitats, biodiversity is greater and more evenly distributed. A study in southeastern Brazil revealed that bird diversity decreases with decreasing forest fragment size, and the prevalence of ticks on birds was inversely correlated with bird diversity and richness. In the Amazonian rainforest, reduced forest cover contributed to an increasing prevalence of Anopheles darlingi larvae, raising the risk of malaria. In southern Ghana and southwestern Togo, the savannah blackflies of the Simulium damnosum complex, which transmit onchocerciasis, became more prevalent with deforestation.
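Analyses of the kind described above typically regress area-level case counts on landscape and climate covariates. The sketch below illustrates one common form, a Poisson regression with population as the exposure; the data are simulated, and the county counts, coefficient values, and fragmentation index are assumptions for illustration rather than any cited study's actual inputs.

```python
# Minimal sketch of a fragmentation-incidence analysis: Poisson regression
# of area-level Lyme case counts on a forest-fragmentation index, with
# population as the exposure. All data are simulated for illustration;
# this is not a reproduction of any cited study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                  # hypothetical counties
fragmentation = rng.uniform(0, 1, n)     # 0 = intact forest, 1 = highly fragmented
population = rng.integers(5_000, 500_000, n)

# Simulate counts so that incidence rises with fragmentation.
true_rate = np.exp(-9.0 + 1.2 * fragmentation)   # cases per person-year
cases = rng.poisson(true_rate * population)

X = sm.add_constant(fragmentation)
result = sm.GLM(cases, X, family=sm.families.Poisson(),
                exposure=population).fit()
print(result.params)   # fitted slope should be close to the simulated 1.2
```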
In western and central Africa, human cases of Ebola virus were documented mostly in hotspots of forest fragmentation. In 1998, a new pathogen, the Nipah virus, originating from fruit bats, caused a highly fatal disease among Malaysian pig farmers. It is widely believed that the destruction of forested areas, which removed flowering and fruiting trees, together with a drought caused by the 1997–98 El Niño Southern Oscillation event and the human encroachment on natural habitats as part of suburbanization, brought fruit bats closer to human settlements. This allowed bats to start feeding from fruit orchards, and the location of these orchards in the proximity of pigsties created opportunities for the cross-species transmission of the virus, initially to swine, and subsequently to humans.

The Dilution Effect

One of the major effects of forest fragmentation is its impact on species abundance and survival. Species are currently being lost an estimated 100–1000-fold faster than before humans emerged on the planet, and some authors have predicted that almost half of all species will be in danger of extinction by 2100. The perturbation of forest ecosystems impacts different species to different degrees. Although some species are more sensitive to habitat changes, others survive, and their relative abundance increases. White-footed mice have a wide tolerance to habitat changes and, as a result, they may become overrepresented in forest patches that have lost several other species. This effect is compounded by the loss of certain predators that normally attack mice. As a result, both the number of mice and the number of blood meals that ticks take from mice increase. Ostfeld and Keesing coined the term dilution effect to describe vertebrate communities with high species diversity in which the higher proportion of incompetent hosts decreases the number of blood meals that vectors, such as ticks, take from competent hosts. In other words, communities with higher host diversity and many incompetent hosts increase the proportion of tick bites that are "wasted," so that the pathogen does not persist. Thus, the probability that a tick will become infected with B. burgdorferi after a blood meal from a mouse depends not simply on the number of white-footed mice, but also on their relative proportion, in the same habitat, to other hosts, which may not function as good reservoirs.

Several studies validated the dilution effect model and concluded that the prevalence of tick infections and Lyme disease risk are lower in habitats with increased species diversity. The dilution effect has been described for other infectious diseases as well. Clay and colleagues demonstrated that increased species diversity reduces the frequency of contacts between deer mice, the reservoir for the Sin Nombre virus, and reduces transmission risk, whereas the more frequent encounters in communities with low species diversity facilitate transmission opportunities. In studies that examined avian communities from the eastern United States, Swaddle and Calos correlated increased species diversity with a lower incidence of human West Nile virus. The same phenomenon was also described in plants, where a monoculture had an almost three times higher pathogenic fungal load than a region that had been planted with 24 different grassland plant species to mimic natural diversity.
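In the spirit of published community-competence models (e.g., the framework of LoGiudice and colleagues), the core of the dilution effect can be summarized in a few lines of Python. The host names, feeding shares, and competence values below are hypothetical placeholders rather than field data; the point is only that expected nymphal infection prevalence is the feeding-share-weighted mean of host competence, so incompetent hosts that absorb blood meals lower it.

# Hypothetical communities: "share" is the fraction of larval blood meals
# taken on each host type; "competence" is the probability that a larva
# feeding on an infected individual of that host acquires B. burgdorferi.
diverse = {
    "white-footed mouse": (0.40, 0.90),
    "eastern chipmunk":   (0.20, 0.55),
    "opossum":            (0.25, 0.03),
    "white-tailed deer":  (0.15, 0.05),
}
mouse_dominated = {
    "white-footed mouse": (0.85, 0.90),
    "white-tailed deer":  (0.15, 0.05),
}

def nymphal_infection_prevalence(community):
    """Expected fraction of fed larvae (next year's nymphs) infected."""
    return sum(share * competence for share, competence in community.values())

print(nymphal_infection_prevalence(mouse_dominated))  # ~0.77
print(nymphal_infection_prevalence(diverse))          # ~0.49

The diverse community "dilutes" infection even though it contains the same highly competent mouse, because a larger share of tick bites falls on poor reservoirs.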


The dilution effect is not supported by all studies. One criticism has been that most analyses used proxies or indirect measures of species biodiversity, such as habitat fragment size or the abundance of white-footed mice. Certain models that describe the dynamics of Lyme disease took into consideration only the interactions between ticks, deer, and mice, and Granter and colleagues point out the fallacy of this approach: it ignores the presence of many other animal species, particularly considering that I. scapularis ticks can feed on at least 125 vertebrate species. This shortcoming is important because other species might respond to habitat change differently than the white-footed mouse. An analysis of forest fragments from the northeastern United States revealed that even though the prevalence of nymphal infection was influenced by variations in host competence, it was not reduced by increased host richness. Another study, conducted in Europe, found that the density of infected nymphs did not necessarily decrease with increased biodiversity, but was shaped by the species composition of the host community, and concluded that the dilution effect has to be considered in a more nuanced manner, with each Borrelia genospecies possibly exerting its own dilution effect.

In an analysis of 19 years of field data from a tick and small mammal trapping program in southeastern New York, Levi and colleagues incorporated variations in vertebrate host density and community composition into data-driven models of tick dynamics to examine tick redistribution. Mice, chipmunks, short-tailed shrews, and masked shrews were identified as important amplification hosts. Removal of squirrels and opossums increased the density of infected nymphs, but most other species examined were inconsequential hosts with respect to the larva-to-nymph transition. Based on these findings, the authors proposed that diverse host communities contain two types of dilution hosts. One type includes species such as squirrels and opossums, which are sufficiently abundant and heavily parasitized, and deflect tick blood meals away from the more competent species. The other type of dilution host reduces the abundance of the more competent hosts through interactions that include predation and competition.

Predation was proposed as a mechanism to explain the link between increased biodiversity and decreased Lyme disease risk. In the past half-century, a major predator-prey change in North America has been the expansion of coyotes (Canis latrans), which reduced the abundance of several small-mammal predators. The best studied among these is the red fox (Vulpes vulpes), which suppresses Lyme disease hosts. Studies in North America revealed a trophic cascade from wolves through coyotes to red foxes: coyotes outnumber foxes in regions where wolves were eliminated by humans, but red foxes outnumber coyotes in regions where wolves are present. In line with this model, Levi and colleagues showed that in several northeastern and midwestern US states, the increase in Lyme disease over the last three decades was often not correlated with deer abundance but, instead, with a decline in red fox abundance, and in four states, its incidence was predicted by coyote abundance and fox rarity. In the same analysis, the spatial distribution of Lyme disease in New York was positively correlated with the abundance of coyotes and negatively correlated with the abundance of foxes.
Suburbanization is another factor that influences Lyme disease. White-footed mice, eastern chipmunks, and short-tailed shrews are highly resilient species with an adaptive advantage in a variety of environments, and when species diversity is lost, they tend to become overrepresented relative to other species. Tick survival rates also vary across host species, providing another mechanism linking biodiversity to Lyme disease.

The Edge Effect

The junction between two different landscape elements, known as an "edge," may be changed by natural or anthropogenic factors and has been of considerable interest due to its impact on biodiversity. Changes in the shape and surface area of a forest patch, as a result of deforestation, lead to changes in its perimeter or edge area. Thus, forest fragmentation not only reduces the available habitat surface for certain species but also increases the length of the boundary that separates adjacent habitats. Sometimes, habitat edges have unpredictable cascade effects. For example, by changing the access of light, they may create a suitable environment for herbivorous insects, which attract birds, which in turn attract predators. Habitat edges may also be associated with increased air and soil moisture and temperature, which can shape species composition. Some species show a preference for habitat edges, whereas others avoid them. The "edge effect," which refers to changes in community structure that occur at habitat edges, affects species composition and diversity, and impacts community dynamics and functioning.

Several studies reported that ticks are more abundant at forest edges, a finding that is at least partly explained by the greater abundance of small vertebrate hosts in these regions. An ecological analysis that used data from passive surveillance reports between 1996 and 2000 in 12 Maryland counties found that each 10% increase in an index measuring the adjacency between forest and herbaceous cover was associated with a 34% increase in Lyme disease incidence. This underscored the importance of avoiding a high degree of interspersion between forest and herbaceous areas during landscape design as a measure to control Lyme disease. The edge effect is also relevant for other infectious diseases that are influenced by ecological perturbations. Suzán et al. found that in Panama, where extensive deforestation and habitat fragmentation occurred in recent decades, two rodent species that are competent hantavirus reservoirs, the fulvous pygmy rice rat (Oligoryzomys fulvescens) and the common cane mouse (Zygodontomys brevicauda), were more frequently found at edge habitats and in disturbed habitats than in forests. The first recognized hantavirus infection occurred in 1978 near the Hantan River in South Korea, and this potentially deadly infectious disease, first reported in the United States in 1993, increasingly emerges as a global concern.
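Landscape adjacency indices of the kind used in the Maryland analysis are typically computed from classified land-cover rasters. The Python sketch below is a deliberately simplified, hypothetical version of such an index (the original study used a standard landscape-ecology metric): it scores the fraction of neighboring grid-cell pairs that join forest with herbaceous cover, so a higher value means more forest-herbaceous interspersion.

def forest_herb_adjacency(grid):
    """Fraction of 4-neighbor cell pairs pairing forest 'F' with
    herbaceous 'H' in a classified land-cover grid (list of strings)."""
    rows, cols = len(grid), len(grid[0])
    pairs = mixed = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    pairs += 1
                    if {grid[r][c], grid[rr][cc]} == {"F", "H"}:
                        mixed += 1
    return mixed / pairs

# Illustrative toy landscapes: one sharp edge versus fine interspersion.
blocky = ["FFFF", "FFFF", "HHHH", "HHHH"]
finely_mixed = ["FHFH", "HFHF", "FHFH", "HFHF"]
print(forest_herb_adjacency(blocky))        # ~0.17, low interspersion
print(forest_herb_adjacency(finely_mixed))  # 1.0, high interspersion

Under the reported association, the finely interspersed landscape would be the higher-risk design, even though both toy grids contain the same amount of forest.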


Acorn and Lyme Disease

A common and remarkable reproductive strategy in plants is the synchronized variation of large seed crops between years, a process known as masting. Oak-dominated forests produce variable acorn crops from year to year: for oak trees, masting occurs every 2–5 years, in so-called mast years, and fewer acorns are produced in the intervening years. The impact of masting on the availability of food resources for tick hosts is an important factor shaping Lyme disease risk. Oak masting, and the resulting expansion of mouse populations, initiates two parallel chain reactions.

The first chain reaction involves acorns, deer, ticks, and mice. Deer are attracted to oak-dominated forests in the years when acorns are available, but they tend to occupy non-oak-dominated forests in years with poor acorn production. Larger numbers of deer carry female Ixodes ticks, leading to an increase in the larval population and an expansion of the mouse population during the following year. The larvae feed on the mice, and this process is predicted to increase the number of infected nymphs 2 years after masting. In a study that examined a forested region of southeastern New York State, Ostfeld and colleagues found that acorn production in the fall strongly influenced the abundance of white-footed mice and eastern chipmunks, another host for Ixodes ticks, in the following summer. The increased density of these hosts provided more opportunities for larval ticks to feed and become infected, increasing the abundance and the infection prevalence of nymphal ticks during the following year. The risk of human Lyme disease thus appears to be affected by the density of white-footed mice during the prior year, and by acorn production 2 years earlier.

A second chain reaction relevant for Lyme disease connects mice and acorns via gypsy moths. Gypsy moths were introduced into the United States in 1868 to produce silk, and subsequently invaded large forest areas in the northeastern and midwestern United States and Canada. They feed on the foliage of many plant types, including oak trees, causing defoliation, and their populations undergo periodic outbreaks and declines. White-footed mice are natural predators of gypsy moth pupae and suppress their populations, maintaining them at low densities. When mouse populations collapse, gypsy moth populations expand; the moth larvae then defoliate oak trees, leading to decreased acorn production and a further collapse of mouse populations, eventually allowing moth populations to increase. In a study conducted in upstate New York in the summer of 1995, 1 year after masting, the removal of mice, which were abundant that year, from three forest patches led to an increased survival of moth pupae. When masting was simulated by spreading acorns on experimental plots, the mouse populations expanded up to 9 months later, leading to an increased larval tick burden on the mice the summer after masting.
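The two-year lag described above (acorn crop, mouse abundance the next summer, infected nymphs the year after) lends itself to a toy time-series model. The sketch below uses invented acorn-crop values and arbitrary proportionality constants purely to illustrate the lag structure; it is not a calibrated model from the studies cited.

# Hypothetical acorn crop index by year (mast years have high values).
acorns = {2000: 9.0, 2001: 1.5, 2002: 2.0, 2003: 8.5, 2004: 1.0}

def mouse_index(year, acorns, k=0.6):
    """Mouse abundance tracks the previous autumn's acorn crop."""
    return k * acorns.get(year - 1, 0.0)

def infected_nymph_index(year, acorns, q=0.4):
    """Infected-nymph density tracks mouse abundance in the previous
    year, i.e., the acorn crop two years earlier."""
    return q * mouse_index(year - 1, acorns)

for year in (2002, 2003, 2004):
    print(year, round(infected_nymph_index(year, acorns), 2))
# The risk peak (2002) follows the 2000 mast year with a two-year delay.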

Recent Changes in the Geographical Distribution of Ixodes Ticks

Lyme disease is just one example illustrating how infectious diseases are shaped not only by the biology of the pathogen, vectors, hosts, and vegetation, collectively known as biotic factors, but also by variables such as climate, precipitation, and humidity, known as abiotic factors. Although many authors agree that climatic factors alone are not sufficient to explain the recent surge in Lyme disease in many locations worldwide, several studies found that environmental temperature and relative humidity are among the variables that shape infection risk. Tick molting and reproduction are influenced by temperature. Even though ticks withstand very low temperatures, they require ambient temperatures over 4–5°C to be active. For example, I. ricinus larvae are able to survive at −7°C and adults at −20°C. These values vary with other factors, such as the length of exposure, and also depend on whether the tick came into contact with ice. Optimal activity of I. ricinus occurs between 15°C and 27°C for larvae and between 10°C and 22°C for nymphs. I. scapularis molting is most successful at 28°C for larvae and 24°C for nymphs, and temperatures above 30°C were consistently shown to be detrimental for development.

The distribution of Ixodes ticks has recently experienced geographical changes that have been linked to climate perturbations. Although a few decades ago I. ricinus was limited to below 61°N in Sweden, ticks have now become established up to 66°N. Lindgren and colleagues found that this northward expansion, which happened between the 1980s and the 1990s, and the increasing abundance of ticks are correlated with mild winters and extended spring and autumn seasons. The authors found that the northward expansion of the geographical range of ticks is linked to a decreased number of days with winter temperatures below −12°C. Additionally, although years ago I. ricinus ticks were found only up to an elevation of 700 m above sea level, the border of their habitat recently shifted toward higher altitudes, and the risk of tick-borne diseases currently exists at elevations up to 1200 m above sea level in the Czech Republic, and up to 1300 m in the Italian Alps. These are altitudes at which, decades earlier, ticks could not have completed their life cycles. Tick population modeling studies predicted that, as a result of climate change, the northern limit of I. scapularis in Canada could shift 200 km northward by the 2020s, and 1000 km by the 2080s. Simon and colleagues combined climate niche modeling with landscape modeling to estimate the contribution of climate and habitat changes to the distribution of the white-footed mouse and the black-legged tick, and predicted that by 2050, B. burgdorferi will experience a northern expansion of about 250–500 km in North America, or an annual expansion rate of 3.5–11 km. Warmer climates, besides facilitating the ability of ticks to complete their life cycles, could also affect the dynamics of Lyme disease by changing the migration pattern of birds, which has shifted northward in recent decades. Approximately 2% of an estimated 3 billion birds that migrate northward through eastern and central Canada carry I. scapularis ticks, and these birds are thought to contribute to the expansion of the northern geographic range of the tick. Another critical aspect in the context of ambient temperature changes is urbanization.
Although approximately 50 years ago only 30% of the world population lived in cities, over half of the population currently resides in urban areas, a proportion predicted to increase further to approximately 60% by 2030. Urbanization impacts the ecosystem in several ways, one of which is the creation of urban heat islands, thought to be one of the major challenges confronting humanity in the 21st century. Asphalt, concrete, and brick in urban heat islands absorb solar radiation during the day, store it as heat, and radiate it back during the night. This phenomenon, compounded by the thermal mass of buildings and additional anthropogenic factors associated with urbanization, explains the higher surface temperatures recorded in urban settlements as compared to the surrounding suburban and rural areas. Differences as high as 2–10°C were recorded between urban areas and the surrounding, vegetation-rich rural regions. Although climate modifications provide an intriguing explanation for recent changes in vector-borne diseases observed worldwide, relatively few studies have examined this link, and there is an acute need for more in-depth analyses.

Another abiotic factor with implications for tick biology is precipitation. Ticks require humid climates, and several studies reported that the saturation deficit, which is the difference between the actual and the maximum vapor content at a given temperature, and thus reflects the drying power of the air, is inversely related to tick density. High rainfall is favorable for tick survival, and even though ticks can survive short periods of low humidity, their survival decreases with longer exposures. During questing, ticks wait on vegetation, in anticipation of a blood meal, until a suitable host appears. During this time, they lose water and are susceptible to dehydration. When desiccating conditions develop, ticks move to the base of the vegetation to rehydrate, and if they exhaust their energy reserves before finding a host, they die. Decreasing relative humidity from 82% to 75% led to an approximately 10-fold initial decrease in the survival of I. scapularis, followed, at lower humidity levels, by a further linear decrease in survival rates. Between 1992 and 2002, increased late spring/early summer precipitation in the northeastern United States was associated with an elevated incidence of Lyme disease; enhanced success during tick questing, as a result of higher humidity, is thought to have played a role. Some studies did not find significant links between temperature or precipitation fluctuations and human Lyme disease risk, and understanding these connections remains an exciting and critical research topic.
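Saturation deficit combines temperature and relative humidity into a single measure of drying power, which is why tick studies often prefer it to relative humidity alone. A minimal Python sketch is shown below; it uses the well-known Magnus approximation for saturation vapor pressure, which may differ slightly from the exact formulation used in any particular tick study.

import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def saturation_deficit(t_celsius, rh_percent):
    """Difference between maximum and actual vapor pressure (hPa)."""
    return (1.0 - rh_percent / 100.0) * saturation_vapor_pressure(t_celsius)

# Warm air at 80% relative humidity dries a questing tick faster than
# cool air at the same relative humidity: the deficit grows with temperature.
print(round(saturation_deficit(25.0, 80.0), 2))  # ~6.3 hPa
print(round(saturation_deficit(10.0, 80.0), 2))  # ~2.5 hPa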

Preventive Measures

Lyme disease is the most common tick-borne disease in the United States. Prevention is preferable to treatment and represents an important facet of disease management, particularly for those at high risk. The major strategies for prevention include personal protective measures to avoid tick bites, prophylactic antibiotics in the case of a tick bite, environmental interventions, and vaccination. Although a human vaccine was approved by the FDA and was available for a few years, it was voluntarily discontinued in 2002 due to low demand, and no human vaccine for Lyme disease is currently available in the United States. Avoiding tick bites can be accomplished in several ways. Personal protective measures include wearing light-colored clothing, long-sleeved shirts, and long pants, tucking the pants into the socks, walking in the center of trails and avoiding tall shrubs to minimize contact with vegetation, wearing rubber boots, shoes, or closed-toed sandals, using tick repellents, and performing frequent body checks after outdoor activities. Prompt removal of attached ticks is critical: the risk of developing Lyme disease is low if a tick has been attached for less than 24 h and increases if attachment exceeds 36 h, although other pathogens may be transmitted more quickly. A tick is removed by grasping it with tweezers as close as possible to its mouthparts and pulling it gently. Environmental interventions include landscape modifications such as deer exclusion fencing and grass mowing, removal of leaf litter and woodpiles, and the application of acaricides.

Conclusions

Lyme disease is a complex multi-organ infectious disease with worldwide distribution. It is the most frequent tick-borne infection in North America and Europe, where the number of cases has been increasing, a trend that is predicted to continue. In addition, the infection has been reported from countries where it was not documented previously, and detection is challenging in both established and emerging areas. A critical need for the management of Lyme disease and other vector-borne infectious diseases is an in-depth understanding of the complex interplay between the pathogen, the vector, the reservoir, and the non-reservoir host. The development of a cross- and interdisciplinary framework that incorporates biomedical, medical, social, agricultural, and veterinary sciences is an indispensable component of these endeavors, one that is ideally positioned to characterize and help mitigate the factors that shape disease risk. Deforestation and reforestation, habitat fragmentation, and biodiversity changes have attracted attention as some of the main anthropogenic perturbations linked to the emergence of Lyme disease. The contribution of multiple biotic and abiotic factors, and their complex interplay, provides an important paradigm and illustrates the need to reexamine the framework of the host-pathogen interface for many infectious diseases.

Implications

Many emerging and reemerging human pathogens have been linked to specific anthropogenic factors that cause ecosystem perturbations, and the consequences of these perturbations are often impossible to predict. For example, a higher risk of alveolar echinococcosis, caused by Echinococcus multilocularis, was described in inhabitants of certain Tibetan villages with large areas of fenced pastures. This was explained by the fact that partial fencing in pastoral settlements exacerbates overgrazing on unfenced, common pastures, leads to pastureland degradation, and facilitates the proliferation of small mammals that serve as susceptible E. multilocularis hosts. Several European cities are experiencing the "urban fox phenomenon," which refers to the increasing overlap between the habitat of red foxes and that of humans. This is thought to result, at least in part, from successful rabies eradication programs and the increased availability of food sources, and it has led to a subsequent increase in E. multilocularis infections in foxes and humans. The construction of irrigation canals for agriculture in the Thar Desert in India, which may have facilitated the penetration of Anopheles culicifacies into the desert, is thought to be responsible for the local resurgence of P. falciparum malaria. Construction of the Diama dam in the Senegal River Basin, built to prevent salt water from entering the river and make it more suitable for irrigation, caused a massive outbreak of Schistosoma mansoni in northern Senegal. This is thought to have been made possible, at least in part, by the creation of a habitat suitable for the growth of freshwater snails, the intermediate hosts of the pathogen. The worldwide trade in secondhand tires has facilitated the dissemination of disease-transmitting mosquitoes to remote locations, because discarded vehicle tires collect water and provide an environment for the development of the larvae of mosquito species that serve as vectors for many infectious diseases. As a result, Aedes albopictus, considered the most invasive mosquito species in the world and an efficient vector for many infectious diseases, has been transported to at least 28 countries worldwide from its native habitat in Asia. These examples illustrate the profound and far-reaching consequences of ecosystem perturbations, and underscore the need to incorporate a multidisciplinary, interdisciplinary, and cross-disciplinary approach into the analysis of their impact on host-pathogen biology.

Further Reading

Allan, B.F., Keesing, F., Ostfeld, R.S., 2003. Effect of forest fragmentation on Lyme disease risk. Conservation Biology 17, 267–272.
Benedict, M.Q., Levine, R.S., Hawley, W.A., Lounibos, L.P., 2007. Spread of the tiger: Global risk of invasion by the mosquito Aedes albopictus. Vector Borne and Zoonotic Diseases 7, 76–85. https://doi.org/10.1089/vbz.2006.0562.
Benhin, J.K., 2006. Agriculture and deforestation in the tropics: A critical theoretical and empirical review. Ambio 35, 9–16.
Bradley, C.A., Altizer, S., 2007. Urbanization and the ecology of wildlife diseases. Trends in Ecology & Evolution 22, 95–102. https://doi.org/10.1016/j.tree.2006.11.001.
Brownstein, J.S., Skelly, D.K., Holford, T.R., Fish, D., 2005. Forest fragmentation predicts local scale heterogeneity of Lyme disease risk. Oecologia 146, 469–475. https://doi.org/10.1007/s00442-005-0251-9.
Burgdorfer, W., Barbour, A.G., Hayes, S.F., Benach, J.L., Grunwaldt, E., Davis, J.P., 1982. Lyme disease: A tick-borne spirochetosis? Science 216, 1317–1319.
Chua, K.B., Chua, B.H., Wang, C.W., 2002. Anthropogenic deforestation, El Niño and the emergence of Nipah virus in Malaysia. The Malaysian Journal of Pathology 24, 15–21.
Clark, K.L., Leydet, B.F., Threlkeld, C., 2014. Geographical and genospecies distribution of Borrelia burgdorferi sensu lato DNA detected in humans in the USA. Journal of Medical Microbiology 63, 674–684. https://doi.org/10.1099/jmm.0.073122-0.
Clay, C.A., Lehmer, E.M., St Jeor, S., Dearing, M.D., 2009. Testing mechanisms of the dilution effect: Deer mice encounter rates, Sin Nombre virus prevalence and species diversity. EcoHealth 6, 250–259. https://doi.org/10.1007/s10393-009-0240-2.
Cook, M.J., 2015. Lyme borreliosis: A review of data on transmission time after tick attachment. International Journal of General Medicine 8, 1–8. https://doi.org/10.2147/ijgm.S73791.
Craig, P.S., et al., 2000. An epidemiological and ecological study of human alveolar echinococcosis transmission in south Gansu, China. Acta Tropica 77, 167–177.
Eisen, R.J., Eisen, L., Beard, C.B., 2016. County-scale distribution of Ixodes scapularis and Ixodes pacificus (Acari: Ixodidae) in the continental United States. Journal of Medical Entomology 53, 349–386. https://doi.org/10.1093/jme/tjv237.
Embers, M.E., Ramamoorthy, R., Philipp, M.T., 2004. Survival strategies of Borrelia burgdorferi, the etiologic agent of Lyme disease. Microbes and Infection 6, 312–318.
Ferraz, G., Nichols, J.D., Hines, J.E., Stouffer, P.C., Bierregaard Jr., R.O., Lovejoy, T.E., 2007. A large-scale deforestation experiment: Effects of patch area and isolation on Amazon birds. Science 315, 238–241. https://doi.org/10.1126/science.1133097.
Frank, D.H., Fish, D., Moy, F.H., 1998. Landscape features associated with Lyme disease risk in a suburban residential environment. Landscape Ecology 13, 27–36.
Granter, S.R., Bernstein, A., Ostfeld, R.S., 2014. Of mice and men: Lyme disease and biodiversity. Perspectives in Biology and Medicine 57, 198–207. https://doi.org/10.1353/pbm.2014.0015.
Gray, J.S., Dautel, H., Estrada-Peña, A., Kahl, O., Lindgren, E., 2009. Effects of climate change on ticks and tick-borne diseases in Europe. Interdisciplinary Perspectives on Infectious Diseases 2009, 593232. https://doi.org/10.1155/2009/593232.
Halos, L., et al., 2010. Ecological factors characterizing the prevalence of bacterial tick-borne pathogens in Ixodes ricinus ticks in pastures and woodlands. Applied and Environmental Microbiology 76, 4413–4420. https://doi.org/10.1128/aem.00610-10.
Jackson, L.E., Hilborn, E.D., Thomas, J.C., 2006. Toward landscape design guidelines for reducing Lyme disease risk. International Journal of Epidemiology 35, 315–322. https://doi.org/10.1093/ije/dyi284.
Jaenson, T.G., Talleklint, L., Lundqvist, L., Olsen, B., Chirico, J., Mejlon, H., 1994. Geographical distribution, host associations, and vector roles of ticks (Acari: Ixodidae, Argasidae) in Sweden. Journal of Medical Entomology 31, 240–256.
Jones, C.G., Ostfeld, R.S., Richard, M.P., Schauber, E.M., Wolff, J.O., 1998. Chain reactions linking acorns to gypsy moth outbreaks and Lyme disease risk. Science 279, 1023–1026.
Jones, K.E., Patel, N.G., Levy, M.A., Storeygard, A., Balk, D., Gittleman, J.L., Daszak, P., 2008. Global trends in emerging infectious diseases. Nature 451, 990–993. https://doi.org/10.1038/nature06536.
Kilpatrick, A.M., et al., 2017. Lyme disease ecology in a changing world: Consensus, uncertainty and critical gaps for improving control. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences 372. https://doi.org/10.1098/rstb.2016.0117.
Larsen, A.E., MacDonald, A.J., Plantinga, A.J., 2014. Lyme disease risk influences human settlement in the wildland-urban interface: Evidence from a longitudinal analysis of counties in the northeastern United States. The American Journal of Tropical Medicine and Hygiene 91, 747–755. https://doi.org/10.4269/ajtmh.14-0181.
Levi, T., Keesing, F., Holt, R.D., Barfield, M., Ostfeld, R.S., 2016. Quantifying dilution and amplification in a community of hosts for tick-borne pathogens. Ecological Applications 26, 484–498.
Levi, T., Kilpatrick, A.M., Mangel, M., Wilmers, C.C., 2012. Deer, predators, and the emergence of Lyme disease. Proceedings of the National Academy of Sciences of the United States of America 109, 10942–10947. https://doi.org/10.1073/pnas.1204536109.
Li, S., Hartemink, N., Speybroeck, N., Vanwambeke, S.O., 2012. Consequences of landscape fragmentation on Lyme disease risk: A cellular automata approach. PLoS One 7, e39612. https://doi.org/10.1371/journal.pone.0039612.
Lindgren, E., Talleklint, L., Polfeldt, T., 2000. Impact of climatic change on the northern latitude limit and population density of the disease-transmitting European tick Ixodes ricinus. Environmental Health Perspectives 108, 119–123. https://doi.org/10.1289/ehp.00108119.
LoGiudice, K., Duerr, S.T., Newhouse, M.J., Schmidt, K.A., Killilea, M.E., Ostfeld, R.S., 2008. Impact of host community composition on Lyme disease risk. Ecology 89, 2841–2849.
McCabe, G.J., Bunnell, J.E., 2004. Precipitation and the occurrence of Lyme disease in the northeastern United States. Vector Borne and Zoonotic Diseases 4, 143–148. https://doi.org/10.1089/1530366041210765.
Mead, P.S., 2015. Epidemiology of Lyme disease. Infectious Disease Clinics of North America 29, 187–210. https://doi.org/10.1016/j.idc.2015.02.010.
Mitchell, C.A., Tilman, D., Groth, J.V., 2002. Effects of grassland plant species diversity, abundance, and composition on foliar fungal disease. Ecology 83, 1713–1726.
Morens, D.M., Folkers, G.K., Fauci, A.S., 2004. The challenge of emerging and re-emerging infectious diseases. Nature 430, 242–249. https://doi.org/10.1038/nature02759.
Murcia, C., 1995. Edge effects in fragmented forests: Implications for conservation. Trends in Ecology & Evolution 10, 58–62. https://doi.org/10.1016/s0169-5347(00)88977-6.
Newsome, T.M., Ripple, W.J., 2015. A continental scale trophic cascade from wolves through coyotes to foxes. The Journal of Animal Ecology 84, 49–59. https://doi.org/10.1111/1365-2656.12258.
Ogden, N.H., et al., 2004. Investigation of relationships between temperature and developmental rates of tick Ixodes scapularis (Acari: Ixodidae) in the laboratory and field. Journal of Medical Entomology 41, 622–633.
Ogden, N.H., et al., 2006. Climate change and the potential for range expansion of the Lyme disease vector Ixodes scapularis in Canada. International Journal for Parasitology 36, 63–70. https://doi.org/10.1016/j.ijpara.2005.08.016.
Ogrzewalska, M., Uezu, A., Jenkins, C.N., Labruna, M.B., 2011. Effect of forest fragmentation on tick infestations of birds and tick infection rates by Rickettsia in the Atlantic forest of Brazil. EcoHealth 8, 320–331. https://doi.org/10.1007/s10393-011-0726-6.
Ostfeld, R.S., Keesing, F., Eviner, V. (Eds.), 2010. Infectious Disease Ecology: Effects of Ecosystems on Disease and of Disease on Ecosystems. Princeton University Press, Princeton.
Ostfeld, R.S., Canham, C.D., Oggenfuss, K., Winchcombe, R.J., Keesing, F., 2006. Climate, deer, rodents, and acorns as determinants of variation in Lyme-disease risk. PLoS Biology 4, e145. https://doi.org/10.1371/journal.pbio.0040145.
Ostfeld, R.S., Hazler, K.R., Cepeda, O.M., 1996a. Temporal and spatial dynamics of Ixodes scapularis (Acari: Ixodidae) in a rural landscape. Journal of Medical Entomology 33, 90–95.
Ostfeld, R.S., Jones, C.J., Wolff, J., 1996b. Of mice and mast: Ecological connections in eastern deciduous forests. BioScience 46, 323–330.
Ostfeld, R.S., Keesing, F., 2000a. Biodiversity and disease risk: The case of Lyme disease. Conservation Biology 14, 722–728.
Ostfeld, R.S., Keesing, F., 2000b. The function of biodiversity in the ecology of vector-borne zoonotic disease. Canadian Journal of Zoology 78, 2061–2078.
Ostfeld, R.S., Schauber, E.M., Canham, C.D., Keesing, F., Jones, C.G., Wolff, J.O., 2001. Effects of acorn production and mouse abundance on abundance and Borrelia burgdorferi infection prevalence of nymphal Ixodes scapularis ticks. Vector Borne and Zoonotic Diseases 1, 55–63. https://doi.org/10.1089/153036601750137688.
Perret, J.L., Guigoz, E., Rais, O., Gern, L., 2000. Influence of saturation deficit and temperature on Ixodes ricinus tick questing activity in a Lyme borreliosis-endemic area (Switzerland). Parasitology Research 86, 554–557.
Perret, J.L., Rais, O., Gern, L., 2004. Influence of climate on the proportion of Ixodes ricinus nymphs and adults questing in a tick population. Journal of Medical Entomology 41, 361–365.
Centers for Disease Control and Prevention, 2007. Lyme disease - United States, 2003–2005. MMWR Morbidity and Mortality Weekly Report 56, 573–576.
Raveche, E.S., Schutzer, S.E., Fernandes, H., Bateman, H., McCarthy, B.A., Nickell, S.P., Cunningham, M.W., 2005. Evidence of Borrelia autoimmunity-induced component of Lyme carditis and arthritis. Journal of Clinical Microbiology 43, 850–856. https://doi.org/10.1128/jcm.43.2.850-856.2005.
Rodgers, S.E., Zolnik, C.P., Mather, T.N., 2007. Duration of exposure to suboptimal atmospheric moisture affects nymphal blacklegged tick survival. Journal of Medical Entomology 44, 372–375.
Rulli, M.C., Santini, M., Hayman, D.T., D'Odorico, P., 2017. The nexus between forest fragmentation in Africa and Ebola virus disease outbreaks. Scientific Reports 7, 41613. https://doi.org/10.1038/srep41613.
Ruyts, S.C., et al., 2016. Diversifying forest communities may change Lyme disease risk: Extra dimension to the dilution effect in Europe. Parasitology 143, 1310–1319. https://doi.org/10.1017/s0031182016000688.
Schotthoefer, A.M., Frost, H.M., 2015. Ecology and epidemiology of Lyme borreliosis. Clinics in Laboratory Medicine 35, 723–743. https://doi.org/10.1016/j.cll.2015.08.003.
Scott, J.D., Foley, J.E., Anderson, J.F., Clark, K.L., Durden, L.A., 2017. Detection of Lyme disease bacterium, Borrelia burgdorferi sensu lato, in blacklegged ticks collected in the Grand River valley, Ontario, Canada. International Journal of Medical Sciences 14, 150–158. https://doi.org/10.7150/ijms.17763.
Sehgal, R.N., 2010. Deforestation and avian infectious diseases. The Journal of Experimental Biology 213, 955–960. https://doi.org/10.1242/jeb.037663.
Simon, J.A., et al., 2014. Climate change and habitat fragmentation drive the occurrence of Borrelia burgdorferi, the agent of Lyme disease, at the northeastern limit of its distribution. Evolutionary Applications 7, 750–764. https://doi.org/10.1111/eva.12165.
Southgate, V., et al., 2001. Studies on the biology of schistosomiasis with emphasis on the Senegal river basin. Memorias do Instituto Oswaldo Cruz 96 (Supplement), 75–78.
Steere, A.C., et al., 1983. The spirochetal etiology of Lyme disease. The New England Journal of Medicine 308, 733–740. https://doi.org/10.1056/nejm198303313081301.
Steere, A.C., Malawista, S.E., Snydman, D.R., Shope, R.E., Andiman, W.A., Ross, M.R., Steele, F.M., 1977. Lyme arthritis: An epidemic of oligoarticular arthritis in children and adults in three Connecticut communities. Arthritis and Rheumatism 20, 7–17.
Suzán, G., et al., 2008. Epidemiological considerations of rodent community composition in fragmented landscapes in Panama. Journal of Mammalogy 89, 684–690.
Swaddle, J.P., Calos, S.E., 2008. Increased avian diversity is associated with lower incidence of human West Nile infection: Observation of the dilution effect. PLoS One 3, e2488. https://doi.org/10.1371/journal.pone.0002488.
Tack, W., Madder, M., Baeten, L., Vanhellemont, M., Gruwez, R., Verheyen, K., 2012. Local habitat and landscape affect Ixodes ricinus tick abundances in forests on poor, sandy soils. Forest Ecology and Management 265, 30–36.
Lindgren, E., Jaenson, T.G.T., 2006. Lyme borreliosis in Europe: Influences of climate and climate change, epidemiology, ecology and adaptation measures. In: Menne, B., Ebi, K.L. (Eds.), Climate Change and Adaptation Strategies for Human Health. Springer, Geneva, pp. 157–188.
Tran, P.M., Waller, L., 2013. Effects of landscape fragmentation and climate on Lyme disease incidence in the northeastern United States. EcoHealth 10, 394–404. https://doi.org/10.1007/s10393-013-0890-y.
Tyagi, B.K., 2004. A review of the emergence of Plasmodium falciparum-dominated malaria in irrigated areas of the Thar Desert, India. Acta Tropica 89, 227–239.
Vittor, A.Y., et al., 2009. Linking deforestation to malaria in the Amazon: Characterization of the breeding habitat of the principal malaria vector, Anopheles darlingi. The American Journal of Tropical Medicine and Hygiene 81, 5–12.
Vourc'h, G., et al., 2016. Mapping human risk of infection with Borrelia burgdorferi sensu lato, the agent of Lyme borreliosis, in a periurban forest in France. Ticks and Tick-Borne Diseases 7, 644–652. https://doi.org/10.1016/j.ttbdis.2016.02.008.
Wang, Q., et al., 2004. Fenced pasture: A possible risk factor for human alveolar echinococcosis in Tibetan pastoralist communities of Sichuan, China. Acta Tropica 90, 285–293. https://doi.org/10.1016/j.actatropica.2004.02.004.
Yee, D.A., 2008. Tires as habitats for mosquitoes: A review of studies within the eastern United States. Journal of Medical Entomology 45, 581–593.

Lymphocystis Disease Virus (LCDV) in Aquatic Environment Eleni Golomazou and Panagiota Panagiotaki, University of Thessaly, Volos, Greece © 2019 Elsevier B.V. All rights reserved.

Abbreviations

APC  Antigen-presenting cell
GC content  Guanine-cytosine content
Csf1r  Colony stimulating factor 1 receptor
DNA  Deoxyribonucleic acid
dsDNA  Double-stranded DNA
Hamp  Hepcidin antimicrobial peptide
Ifn  Interferon
Ighm  Immunoglobulin heavy constant mu
Il1b  Interleukin 1 beta
Irf3  Interferon regulatory factor 3
LAMP  Loop-mediated isothermal amplification
LCD  Lymphocystis disease
LCDV  Lymphocystis disease virus
mcp  Major capsid protein (gene)
Mhc2a  Major histocompatibility complex class II integral membrane alpha chain
MMC  Melanomacrophage center
Nccrp1  Nonspecific cytotoxic cell receptor protein 1
pDNA  Plasmid DNA
PCR  Polymerase chain reaction
qPCR  Quantitative PCR
Tcra  T cell receptor alpha chain

Introduction

Viruses are considered to be among the most abundant organisms in natural waters, playing a crucial role in aquatic food webs. The dynamics of viral infection within aquatic ecosystems are complex, and the environmental control of viral abundance and the impact of viral infection on host community structure are of major importance for elucidating the interactions between virus, host and environment. The global increase in aquaculture has provided new opportunities for the transmission of aquatic viruses. Viral diseases remain a significant limiting factor for aquaculture production and for the sustainability of biodiversity in the natural environment. Many viral pathogens cause severe mortality in farmed organisms, as on-farm stressors may compromise their ability to combat infection and farming practices facilitate rapid transmission of diseases. Lymphocystis disease (LCD) is one of the most common viral diseases of fish. The etiological agent of LCD is the lymphocystis disease virus (LCDV), belonging to the iridovirids. Iridovirids have been isolated only from ectothermic vertebrates and invertebrates, usually associated with damp or aquatic environments, including marine and freshwater habitats, and iridovirid species vary widely in their natural host range and virulence. LCD was one of the first fish viral diseases to be reported, first detected in 1938 in the orange filefish (Alutera schoepfii). It was, until 1954, the only fish virus showing positive experimental evidence for fish-to-fish transmission. In 1962, the fine structure of LCDV was first described, and the virus was isolated under laboratory conditions, allowing information to be collected about its infectivity and viability. Since then, it has attracted the interest of researchers, as it affects a wide variety of fish species and is the main viral infection reported to affect cultured gilthead seabream in Europe.

Taxonomic Aspects–Genotypes

LCDV is a member of the family Iridoviridae, subfamily Alphairidovirinae, and genus Lymphocystivirus. Iridoviridae viruses are large icosahedral, double-stranded DNA (dsDNA) viruses and are divided into two subfamilies (Alphairidovirinae and Betairidovirinae) and five genera, which are distinguished by their primary hosts: the Alphairidovirinae members (Ranavirus, Megalocytivirus, Lymphocystivirus) infect mainly ectothermic vertebrates (bony fish, amphibians, reptiles), whereas the Betairidovirinae members (Iridovirus, Chloriridovirus) infect mainly invertebrates (insects, crustaceans); among the latter, genome sizes tend to be larger and GC content lower. Between the five genera there are distinct differences concerning nucleotide/amino acid sequence identity/similarity, host range, GC content, phylogenetic relatedness, genome colinearity and disease manifestations, and the genera within the family are serologically distinct from one another. Species within the same genus generally show greater than 50% sequence identity/similarity within a common set of core genes. LCDV is a globally distributed iridovirus. The mcp gene of LCDV encodes the major capsid protein, the main structural component of the viral particles in the Iridoviridae family, and is the marker most often selected for phylogenetic studies of iridoviruses. The major capsid protein comprises 40%–45% of the total viral polypeptides and has a molecular weight of approximately 50 kDa, while the gene presents a high degree of genetic variation as the virus proliferates in different host cells. Nine genotypes of Lymphocystivirus have been identified to date on the basis of the mcp gene sequence (Table 1). Many LCDV isolates have been studied in Europe, Asia and America. Although the genetic diversity of the virus has been related to the host fish species in some cases, the evolutionary relationship of LCDV and its hosts reveals a low correlation within an isolation area, showing that the LCDV strains and their host fish evolve independently.

Table 1  Nine different genotypes isolated from different fish species

Genotype  LCDV isolate  Fish species
G-I       LCDV-1        European flounder (Platichthys flesus, Linnaeus 1758)
G-II      LCDV-C        Olive flounder (Paralichthys olivaceus, Temminck & Schlegel 1846)
G-III     LCDV-KRF      Korean rockfish (Sebastes schlegelii, Hilgendorf 1880)
G-IV      LCDV-RC       Cobia (Rachycentron canadum, Linnaeus 1766)
          LCDV-SB       Japanese sea bass (Lateolabrax japonicus, Cuvier 1828)
G-V       LCDV-PGF      Painted glass fish (Parambassis baculis, Hamilton 1822)
G-VI      LCDV-PG       Pearl gourami (Trichogaster leeri, Bleeker 1852)
G-VII     LCDV-SA       Gilthead sea bream (Sparus aurata, Linnaeus 1758)
          LCDV-SSE      Senegalese sole (Solea senegalensis, Kaup 1858)
G-VIII    NFH           Largemouth bass (Micropterus salmoides, Lacépède 1802)
G-IX      YP            Yellow perch (Perca flavescens, Mitchill 1814)
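Since genotype assignment rests on mcp sequence comparison, a pairwise identity calculation is the elementary operation behind Table 1. The following minimal Python sketch, written for illustration rather than taken from any cited genotyping study, computes percent identity between two aligned sequences; real analyses would first build a multiple alignment and then infer a phylogeny.

def percent_identity(seq_a, seq_b):
    """Percent identity between two aligned sequences of equal length,
    ignoring positions where either sequence carries a gap ('-')."""
    paired = [(a, b) for a, b in zip(seq_a, seq_b)
              if a != "-" and b != "-"]
    if not paired:
        return 0.0
    matches = sum(a == b for a, b in paired)
    return 100.0 * matches / len(paired)

# Toy aligned fragments (hypothetical, not real mcp sequences):
print(round(percent_identity("ATGGCTAAGTT-CGA", "ATGGCGAAGTTACGT"), 1))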

Epidemiology

Lymphocystis disease has been described in more than 125 marine, brackish and freshwater fish species (wild, cultured and aquarium fish) belonging to more than 34 families (Cichlidae, Osphronemidae, Centrarchidae, Gobiidae, Chaetodontidae, Pomacentridae, Sciaenidae, Serranidae, Pleuronectidae, Sparidae, etc.). It is a disease of evolutionarily advanced teleosts and does not affect salmonids, catfish or cyprinids. It is common in Europe (the North Sea and the Mediterranean Sea) and has also been detected in Asia and America. Lymphocystis disease is usually self-limiting. Mortalities are limited, but the rate is usually higher in aquaculture species. The appearance and the low growth rate of infected fish delay the expected rearing time, causing significant economic losses. LCD outbreaks have been reported as heavy infections related to environmental and management stress factors. Environmental factors such as water salinity, temperature, pollution, oxygen levels, and fish density, as well as common aquaculture practices, may affect the prevalence of the disease, causing clinical or subclinical infection. In some cases, prevalence reaches up to 70% in aquaculture species. LCD outbreaks may favor secondary bacterial infections, cannibalism and/or parasitic infestations, factors that may increase the mortality rate.

Transmission

The mechanism of transmission is known. Waterborne contamination and direct contact are responsible for the horizontal transmission of the virus, while vertical transmission has been confirmed in cultured gilthead sea bream. In horizontal transmission, skin trauma and the gills are the main portals of entry for the virus. During aquaculture practice, infected rotifers or artemia nauplii may carry the virus to the larvae during the live-feeding stage in hatchery facilities. Asymptomatic fish, or fish that have recovered, may also be LCDV carriers, transmitting the virus to healthy fish; they have been considered responsible for LCD outbreaks in aquaculture facilities where stressful rearing conditions prevail. LCDV is able to persist in the fish epidermis over a long period of time, producing a subclinical infection depending on water temperature. Lesions disappear after several weeks in warm-water species, while they may remain evident for up to 1 year in cold-water fish. The viability of the virus outside the host (in water or sediments) is estimated at 1 week, and the incubation period may be longer, varying from weeks to months.


Clinical Signs–Pathogenesis–Host Immunity

The pathognomonic characteristic of LCD is nodular lesions consisting of hypertrophied fibroblastic cells (lymphocysts) in the dermal connective tissue. They are small, up to 1 mm in diameter, but become much larger when grouped in clusters, creating papillomatous tumors (Fig. 1). Nodules are usually white-creamy in color, but they may appear gray to black when the epithelial tissue is rich in chromatophores. Skin and fins are the primary target organs where lymphocystis cells are observed, while in heavily affected fish they may cover the entire body, including the buccal cavity, gills and eyes. Despite the fact that LCDV is considered to be dermotropic, it establishes a systemic infection, as viral genomes and antigens have been detected in the mesenteries, peritoneum and several internal organs (intestine, liver, spleen, kidney and brain). Fibroblasts, hepatocytes and cells of the mononuclear phagocyte system are the permissive cells for LCDV replication, and the spread to different host tissues is achieved via the bloodstream. In these cases, no lymphocysts are detected, but different types of histopathological changes directly related to viral replication are observed: necrotic changes in the liver and kidney, inflammatory responses in the intestinal submucosa, and intraventricular hemorrhage are usually reported. Also, the number of MMCs is increased in the liver, spleen, and kidney, which is probably associated with a cellular response to viral infection. The histological changes are reversible after recovery, as normal architecture of infected organs and tissues is observed, while lymphocysts are no longer evident in the primarily infected tissues. The infection is systemic and persistent even in asymptomatic and recovered fish, which are subclinically infected, and the brain is the main organ that supports viral expression in recovered fish. Fish recovery from LCDV and acquired immunity have been confirmed, and recovered fish develop high antibody titers in their sera. There is a correlation between the host immune response and disease resistance, and the single genetic locus related to susceptibility to LCDV infection in Japanese flounder is a promising selective breeding tool for developing fish populations with high resistance to LCD. Consistent with the dermotropic character of the disease, proliferation of macrophages and epithelioid cells around lymphocysts in the dermis has been detected in the immune response of several fish species. LCDV promotes the infiltration of acidophilic granulocytes to the detriment of macrophages in the skin. The head kidney, as the main hematopoietic organ, plays an important role in the immune response during natural LCDV outbreaks, and leucocytes are activated to encounter the virus at the sites of replication. The innate cellular immune response and the peroxidase and respiratory burst activities of head kidney leucocytes are increased in infected fish. The expression of some immune-related genes in the head kidney (the main hematopoietic organ) and in the skin (the target tissue of LCDV) has revealed a gap between the innate and adaptive immune responses. LCDV-infected fish show prolonged expression of the nccrp1 gene in the head kidney, indicating an increase in leucocyte killing of virus-infected cells. However, this is the only upregulated gene after infection.
Transcription of antiviral genes (ifn and irf3) in the skin, and of cellular immunity genes (csf1r, mhc2a, tcra, ighm) in both the skin and the head kidney, is downregulated. Inefficacy of the local adaptive immune response is suggested, as mhc2a (a major player in pathogen recognition on the surface of antigen-presenting cells) and tcra and ighm (the main receptors of T and B cells, respectively) are strongly inhibited, while hamp and il1b transcription remains unaffected after infection. More information on the immune response of infected fish to LCDV, based on the major immune system parameters, would be useful to better design preventive or curative treatments in the future.
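Up- and downregulation statements of this kind are usually derived from qPCR data with the standard 2^(-ddCq) method. The short Python sketch below illustrates that calculation; the gene names mirror those discussed above, but the Cq values are invented for illustration only.

def relative_expression(cq_target_infected, cq_ref_infected,
                        cq_target_control, cq_ref_control):
    """Fold change of a target gene by the 2**(-ddCq) method, using a
    reference (housekeeping) gene to normalize each condition."""
    d_cq_infected = cq_target_infected - cq_ref_infected
    d_cq_control = cq_target_control - cq_ref_control
    return 2.0 ** -(d_cq_infected - d_cq_control)

# Invented Cq values: a gene like mhc2a appearing ~8-fold downregulated,
# and one like nccrp1 ~4-fold upregulated, versus a hypothetical reference.
print(relative_expression(27.0, 18.0, 24.0, 18.0))  # -> 0.125
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0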

Fig. 1  Lymphocysts covering the body and the fins in gilthead seabream.

Diagnostic Methods

Standard diagnostic techniques for viral diseases of fish include pathological and histopathological examination, virus isolation in cell culture, molecular techniques including conventional and real-time PCR, in situ hybridization, and various immunodiagnostic techniques. The diagnosis of LCDV has been based mainly on the observation of external symptoms. The typical pathology of skin lesions provides strong presumptive evidence of lymphocystis infection, and cytologic examination of lesions showing hypertrophy of the epidermal cells is usually sufficient for clinical diagnosis. The study of the microanatomy of specific tissues has also been successfully employed as a diagnostic tool: infected tissue shows hypertrophied fibroblasts with basophilic, intracytoplasmic inclusions.

Viral disease diagnosis has traditionally relied on the isolation of viral pathogens in cell cultures. This "gold standard" for the laboratory diagnosis of viral disease is often slow and requires considerable technical expertise. In the case of LCDV, the ability of the virus to replicate in vitro is restricted to specific cell cultures. This limits the application of cell culture, as isolation and cultivation of the virus are difficult and time-consuming. Immunofluorescence, immunoblotting and flow cytometry are serological techniques that have been applied for LCDV detection in cell culture; however, the necessary step of viral multiplication makes these techniques impractical, as they are not easily applied under field conditions.

Molecular methods are preferable to serological methods. Several conventional PCR assays have been shown to detect LCDV in different species, even in apparently healthy fish. Although conventional PCR is a rapid, sensitive and highly specific tool, it is not commonly used for LCD diagnosis because it is nonquantitative. Furthermore, these assays have been tested against a small number of genotypes, limiting their application to different fish species. PCR combined with slot-blot hybridization has proven to be an appropriately sensitive technique, allowing virus detection at very low concentrations (asymptomatic carriers, live food for larval stages). Nevertheless, this combined assay does not provide quantitative results, it is not easily applied in epidemiological and pathological studies of LCDV, and it is relatively time-consuming, as an additional blot-hybridization step on the PCR products is required to identify LCDV-positive samples. A rapid, specific and sensitive detection method for LCDV based on loop-mediated isothermal amplification (LAMP) has been developed, detecting the virus in less than 12 min. The LCDV LAMP assay has proven to be a promising diagnostic method to detect the presence and spread of this iridovirus; however, it has so far been applied only to genotype VII, limiting its wider use in other genotypes due to the genetic variability among strains.

Real-time PCR (qPCR) has been used for the detection and quantification of numerous viral fish pathogens, including LCDV. It has proven to be the most useful diagnostic tool to overcome the disadvantages of conventional PCR, showing greater sensitivity. It is specific and reproducible for the detection and quantification of LCDV, and it is suitable for various applications of considerable epidemiological and pathogenic interest. The method is appropriate for the detection of subclinical LCDV infections in carrier fish, identifying LCDV reservoirs and viral replication in fish. The qPCR assay allows for the detection of the virus at levels as low as one copy of viral DNA per mg of fish tissue. This high sensitivity, combined with its wide dynamic range, makes the qPCR assay suitable for detecting the low viral loads correlated with the clinical stage. However, molecular methods can be limited by the genetic variability among strains, requiring the use of multiple primer pairs to detect LCDV in different fish species.
In the case of LCDV, primers have been designed to minimize primer-template mismatches against all known LCDV genotypes. The primers F1 (AATGAAATAAGATTAACGTTTCAT) and R3 (TACCCATCAATCGACGTTC) were designed on a multiple alignment of LCDV mcp gene sequences available in GenBank, covering the nine known genotypes. The F1/R3 qPCR primer pair is able to detect and quantify viral DNA directly from infected cell culture lysates and from tissues, with a detection limit of 2.6 DNA copies/ml.
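Quantification figures such as these come from a standard curve of serial DNA dilutions, fitted as Cq = slope x log10(copies) + intercept. The Python sketch below shows this conversion; the slope and intercept values are hypothetical examples, not parameters from the published LCDV assay.

def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    """Template copy number implied by a Cq value on a standard curve
    Cq = slope * log10(copies) + intercept (values here are examples)."""
    return 10.0 ** ((cq - intercept) / slope)

def efficiency(slope=-3.32):
    """Amplification efficiency implied by the curve slope; a slope of
    about -3.32 corresponds to ~100% (doubling every cycle)."""
    return 10.0 ** (-1.0 / slope) - 1.0

print(round(copies_from_cq(28.0)))  # ~1000 copies at Cq 28
print(round(efficiency(), 2))       # ~1.0, i.e., ~100% efficiency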

Control and Prevention

General control strategies relying on the basic principles of biosecurity are key to both the prevention and the control of the viral disease. Lymphocystis disease control is based on preventive husbandry practices, as no effective treatments or commercially available vaccines currently exist. Various stressors can predispose fish to compromised growth and health, consequently promoting disease. Improved husbandry, good water quality and lower stocking densities are considered significant for limiting stress conditions during intensive farming, thereby enhancing welfare. Also, better nutrition, supplemented with immunostimulants such as herbal extracts and probiotics, may reduce the incidence of LCD. The selection of noninfected juvenile fish is necessary, as they may become symptomatic under stress during the growing period. In the case of infection, the removal of affected animals and other potential reservoirs is crucial in order to limit the spread of the viral disease. Furthermore, the selection of LCDV-free broodstock is of major importance, as vertical transmission of the virus has been confirmed. Multiple reservoirs of LCDV have been detected in farm facilities, including the rotifer and artemia cultures used for larval rearing; therefore, the supply of virus-free live food is of major importance. Quarantine and disinfection of fish before introduction to aquaculture facilities are considered very important. During hatching, decontamination of egg surfaces using effective chemical disinfection treatments (iodine, formalin, hydrogen peroxide) is recommended as an adequate preventive measure.

Early detection of the virus, combined with an effective method of detecting LCDV in carriers, is crucial in the control of lymphocystis disease. A real-time PCR assay has been proposed as a quantitative, simple, rapid, easy-to-apply and sensitive diagnostic method. It is a validated technique to detect LCDV in symptomatic, asymptomatically infected or recovered fish, and in other potential reservoirs. It is a nonlethal assay, and the viral load can be estimated even by analyzing caudal fin samples. Rigorous screening of fish before they are introduced into an LCD-free rearing system, or before reproduction, is an important biosecurity measure for productive aquaculture systems.

Specific prophylaxis by vaccination is not applied, as there is no commercial vaccine available against LCDV infection. However, in the case of Japanese flounder, formalin- or heat-inactivated and genetically engineered experimental vaccines targeting LCDV have been tested and have proven to have a protective effect. Also, a pDNA vaccine designed against LCDV-C has been shown to induce effective protection. Oral DNA-based immunization, with the vaccine encapsulated in microspheres of alginate, chitosan and poly(DL-lactide-co-glycolide), has been tested in the case of LCDV. This new strategy for fish immunization in intensive culture reduces the rate of degradation of DNA vaccines, thereby increasing the efficiency of the vaccine.

Future Perspectives

Lymphocystis disease has been described in numerous fish species, but mortalities are limited to aquaculture. LCD outbreaks have been reported as heavy infections related to environmental and management stress factors. Horizontal and vertical transmission of the disease, in combination with asymptomatic viral carriers, promotes the spread of the virus. Although it is a self-limiting disease, its control is important because it causes significant economic losses. A successful strategy must take a cross-disciplinary approach combining several interrelated scientific fields. Cooperation between scientists with knowledge of LCDV, such as ichthyopathologists, immunologists, pharmacologists, nutritionists, marine biologists, aquaculture researchers and managers, is necessary. The most important elements of a successful approach are: (i) understanding the transmission mode, (ii) the infectious ability, (iii) the potential hosts and their immune response, (iv) the pathogenesis, (v) the appropriate timing for an early diagnosis, (vi) a robust, validated, nonlethal molecular diagnostic technique for early detection and quantification, (vii) precautionary measures to minimize the spread of the disease, and (viii) effective treatment and immunity in order to control the infection and reduce the mortality rate. Although some of these issues have already been studied, more critical scientific data must be gathered for a better comprehension of health management practices. Lymphocystis is usually detected when dissemination of the disease has already occurred. Early diagnosis and immunity remain critical and could be useful tools helping fish farmers take all necessary precautionary measures before an outbreak occurs. Considering the lack of any commercial vaccine, natural immunity in combination with immunostimulation seems to be an efficient formula. The recovery mechanisms of infected fish are of utmost importance, because recovery may be influenced by different parameters, while vaccination, as a specific prophylactic measure, would be the key factor in the elimination of LCD. The selection and adoption of this approach is based on the intention to support and raise standards in the prevention and control of this viral disease. A combination of different interrelated scientific fields of basic and applied science will provide a more inclusive vision toward environmental protection, ensuring the implementation of fish welfare practices.

Further Reading

Borrego, J.J., Valverde, E.J., Labella, A.M., Castro, D., 2017. Lymphocystis disease virus: Its importance in aquaculture. Reviews in Aquaculture 9, 179–193.
Cano, I., Ferro, P., Alonso, M.C., et al., 2007. Development of molecular techniques for detection of lymphocystis disease virus in different marine fish species. Journal of Applied Microbiology 102, 32–40.
Cano, I., Valverde, E.J., Lopez-Jimena, B., et al., 2010. A new genotype of Lymphocystivirus isolated from cultured gilthead seabream, Sparus aurata L., and Senegalese sole, Solea senegalensis (Kaup). Journal of Fish Diseases 33, 695–700.
Cano, I., Valverde, E.J., Garcia-Rosado, E., et al., 2013. Transmission of lymphocystis disease virus to cultured gilthead seabream, Sparus aurata L., larvae. Journal of Fish Diseases 36, 569–576.
Chinchar, V.G., Hick, P., Ince, I.A., et al., 2017. ICTV virus taxonomy profile: Iridoviridae. Journal of General Virology 98, 890–891.
Ciulli, S., Pinheiro, A.C., Volpe, E., et al., 2015. Development and application of a real-time PCR assay for the detection and quantitation of lymphocystis disease virus. Journal of Virological Methods 213, 164–173.
Cordero, H., Cuesta, A., Meseguer, J., Esteban, M.A., 2016. Characterization of the gilthead seabream (Sparus aurata L.) immune response under a natural lymphocystis disease virus outbreak. Journal of Fish Diseases 39, 1467–1476.
Crane, M., Hyatt, A., 2011. Viruses of fish: An overview of significant pathogens. Viruses 3, 2025–2046.
Middelboe, M., Brussaard, C.P.D., 2017. Marine viruses: Key players in marine ecosystems. Viruses 9, 302.
Noga, E.G., 2010. Lymphocystis fish diseases. In: Noga, E.G. (Ed.), Fish disease: Diagnosis and treatment, 2nd edn. Blackwell Publishing, Iowa, pp. 128–130.
Palmer, L.J., Hogan, N.S., van den Heuvel, M.R., 2012. Phylogenetic analysis and molecular methods for the detection of lymphocystis disease virus from yellow perch, Perca flavescens (Mitchell). Journal of Fish Diseases 35, 661–670.
Valverde, E.J., Cano, I., Labella, A., et al., 2016. Application of a new real-time polymerase chain reaction assay for surveillance studies of lymphocystis disease virus in farmed gilthead seabream. BMC Veterinary Research 12, 71.
Valverde, E.J., Borrego, J.J., Sarasquete, M., et al., 2017. Target organs for lymphocystis disease virus replication in gilthead seabream (Sparus aurata). Veterinary Research 48, 21.
Yan, X.-Y., Wu, Z.-H., Jian, J.-C., Lu, Y.-S., Sun, X.-Q., 2011. Analysis of the genetic diversity of the lymphocystis virus and its evolutionary relationship with its hosts. Virus Genes 43, 358–366.

Magnesium and Calcium in Drinking Water and Heart Diseases
A Kousa, Geological Survey of Finland, Kuopio, Finland
© 2015 Elsevier Inc. All rights reserved.
Change History: February 2015. A Kousa updated text, further readings, and Table 1.
Encyclopedia of Environmental Health, 2nd edition, Volume 4. https://doi.org/10.1016/B978-0-12-409548-9.09489-6

Nomenclature
Al Aluminium
AMI Acute myocardial infarction
Ca Calcium
CHD Coronary heart disease
Cl Chloride
CVD Cardiovascular disease
dH Degree of hardness
Fe Iron
Mg Magnesium
Mn Manganese
Na Sodium
RDA Recommended dietary allowance
SO4 Sulphate
Sr Strontium
Zn Zinc

Introduction

The protective role of drinking water hardness against cardiovascular diseases (CVDs) has been discussed since it was first reported more than 50 years ago: in 1957, the Japanese agricultural chemist Jun Kobayashi found an association between the geochemistry of river water and death from apoplexy. Since then, several, but not all, epidemiological studies have reported a lower coronary heart disease (CHD) risk in areas with hard water containing high levels of magnesium or calcium. Both calcium and magnesium, the main contributors to water hardness, are essential nutrients for human health. CHD is a multifactorial disease, and it is clear that there is no single explanation for the geographical variation of disease risk. The major risk factors for heart disease are hypertension, smoking, high serum cholesterol level, diabetes mellitus, and behavioral factors such as physical inactivity, poor eating habits, and obesity. Certain nutrients, such as fiber, potassium, phosphorus, and trace minerals, have also emerged lately as risk factors. The link between geological materials, such as rocks, minerals, and water, and human health has been known for centuries. Hippocrates (460–377 BC), a Greek physician of the Classical period, noted the following in his treatise On Airs, Waters, and Places in Part 1: 'Whoever wishes to investigate medicine properly ... We must also consider the qualities of the waters, for as they differ from one another in taste and weight, so also do they differ much in their qualities ... These things one ought to consider most attentively, and concerning the waters which the inhabitants use, whether they be marshy and soft, or hard, and running from elevated and rocky situations, and then if saltish and unfit for cooking.' Calcium and magnesium in drinking water as risk factors for heart disease have recently received increasing interest in connection with the geographical variation in the occurrence of heart diseases. Extensive reviews of epidemiological studies on drinking water hardness and CVD have been written by Monarca et al. (2006) and Catling et al. (2008).

Regional CVD Mortality and Morbidity

CVDs are a major public health concern in Western countries and, increasingly, in developing countries. CVDs are the main cause of death in Europe: almost half (48%) of all deaths are from CVD. In the United States, about one in three adult women and men has some form of CVD. CHD mortality has decreased dramatically since the 1960s in the United States, but the disease remains common, especially among the elderly. Geographical variation in the occurrence of CHD between countries, and also within countries, is well established, but the reasons for this variation are still poorly understood. CHD is more common in northern and eastern European countries than in the southern countries. The classical CHD risk factors, such as hypertension, serum cholesterol, and smoking, are basically higher in eastern Europe than in the western parts of Europe. Despite the decreasing trend of CHD mortality, it is still the major cause of death in industrialized countries. Socioeconomic status, obesity, alcoholism, psychosocial factors, and poor dietary habits, among others, are associated with high CHD mortality trends in Europe. In Finland, a country known for high CHD mortality, CHD mortality has also decreased substantially during past decades. However, geographical variation within the country has remained (Figure 1): CHD risk is still higher in the eastern part than in the western and southern parts of the country. The stability of the geographical variation in the incidence of acute myocardial infarction (AMI) therefore supports the hypothesis that some factors in the physical environment might play a role in the development of heart disease in a susceptible population. Although the overall incidence of AMI in women was one-third of that in men, the spatial pattern of incidence seems to be quite similar for men and women, suggesting that the spatial risk factors are the same for both sexes.

Water Intake

Water is essential for human life and cellular homeostasis. The daily consumption of drinking water is approximately 1–3 l in adults, and the requirement is slightly higher in men than in women. The requirement for hydration in pregnancy averages 4.8 l per day and during lactation 3.3 l per day. In infants, the requirement is approximately 1.0 l per day; infants and small babies have the highest water requirement per unit of body weight. Working in high temperatures or physical activity increases the requirement. Humans ingest water as plain drinking water, as part of meals, and in other beverages; some of the ingested water also comes from the metabolism of food. The major part of the daily water intake is derived from consumed fluids, and about one-third is derived from food. Drinking water can thus contribute variable fractions of the total intake of essential minerals.

Figure 1 Posterior mean age-standardized AMI incidence among men and women (data pooled) in 1991–2003 in rural Finland. Reproduced from Kousa A (2008) The regional association of the hardness in well waters and the incidence of acute myocardial infarction in rural Finland. Doctoral dissertation, 2008. Kuopio University Publications D. Medical Sciences, vol. 442, 92 pp.

Magnesium and Calcium Intake

Minerals constitute approximately 4% of human body mass. The group of major minerals includes magnesium, calcium, sodium, potassium, phosphorus, and chlorine; the necessary daily requirement of each is over 100 mg. All other elements are present in smaller concentrations in the body and are therefore called trace elements. The daily requirements of trace minerals are less than 100 mg. Food is the principal source of calcium and magnesium. Nonetheless, people consuming refined food may suffer deficiencies of essential nutrients, and thus even a relatively low intake of essential elements through drinking water may play an essential role in human health. It has been reported that elements present as free ions in water are more readily absorbed than elements in food, where they are bound to other substances. Most foods contain some magnesium, but the amount varies widely between foodstuffs. Foods rich in magnesium are green vegetables, unpolished cereal grains, nuts, soy beans, and chocolate. Among gardening products, green leafy vegetables with chlorophyll and legumes are the richest dietary sources of magnesium, and they are also usually locally produced in rural areas. Bread made from whole seeds provides more magnesium than that made with white refined flour. Magnesium is almost completely lost from processed food and refined sugar. Fish, meat, milk, and several fruits, with the exception of bananas, are relatively poor sources of magnesium. Drinking water can be a complementary source of magnesium. However, the amount of magnesium in water varies according to the tap water treatment or the geological environment of the water supply. Consuming 1 l of water with 100 mg l-1 of magnesium could provide approximately 25% of the daily magnesium requirement of 300–400 mg. The same amount of water with a low magnesium level, <10 mg l-1, provides less than 3% of the daily requirement. Waterborne magnesium has been reported to account for approximately 10% of the total daily magnesium intake. Most dietary calcium, approximately 70%, is derived from milk and dairy products such as yogurt and cheese. It has been suggested that the bioavailability of calcium from water is at least as high as that from milk and other dairy products. Small fishes, when eaten with their bones, are also quite a good source of calcium. Only 16% of calcium intake comes from green vegetables and dried fruits; in the plant world, nettle and rosehip are the best sources of calcium. Approximately 6–7% of calcium intake comes from drinking water and mineral water. Among women, the contributions of the daily uptake of Mg and Ca from drinking water are 1.0–7.2% and 2.2–12.8%, respectively, assuming a daily consumption of 2 l of water and a gastrointestinal uptake of 50% from food. Loss of magnesium and calcium from food has been found when soft water is used for cooking; losses of these essential elements can be as high as 60%. On the other hand, when the hardness of the water used for cooking is high, the loss of elements is much lower. The modern diet with refined food is not always an adequate source of essential elements for humans. In borderline cases of deficiency of certain essential elements, even a relatively low intake of a certain element through drinking water may play a significant protective role. Free ions, the form in which these elements usually occur in water, are more easily absorbed than elements in food, where they are often bound to other substances. The magnesium and calcium content of beverages is presented in Table 1. Oral magnesium supplementation can sometimes cause mild side effects such as abdominal cramps and diarrhea; severe side effects are remarkably rare. Excessively high calcium intake can be associated with hypercalcemia, impaired kidney function, and decreased absorption of other minerals. Hypercalcemia can also ensue from excessively high intakes of vitamin D. However, excess intake of magnesium or calcium from diet or supplements is uncommon.
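The percentages quoted above follow from simple proportion arithmetic; a minimal sketch using only the concentrations and requirement range given in the text:

DAILY_MG_REQUIREMENT = (300.0, 400.0)  # mg per day, adult requirement quoted above

def water_contribution(conc_mg_per_l, litres_per_day=1.0):
    # Fraction of the daily Mg requirement supplied by drinking water,
    # returned as (low, high) estimates over the requirement range.
    intake = conc_mg_per_l * litres_per_day
    lo, hi = DAILY_MG_REQUIREMENT
    return intake / hi, intake / lo

# 1 l of water at 100 mg/l covers roughly 25-33% of the requirement:
print([f"{f:.0%}" for f in water_contribution(100)])  # ['25%', '33%']
# Soft water at 10 mg/l covers only about 3%:
print([f"{f:.0%}" for f in water_contribution(10)])   # ['2%', '3%']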

Physiology of Magnesium and Calcium

Magnesium is the fourth most abundant cation in the body and the second most abundant intracellular cation after potassium. The normal adult body contains approximately 21–28 g of magnesium. Almost half is present in muscle and soft tissues, and the other half in bone.

Table 1 An average magnesium and calcium content in beverages (Mg and Ca, mg per 100 g), covering mineral water, filtered coffee drink, boiled coffee drink, fruit juice (concentrated, average), unsweetened cocoa, sweetened cocoa, and milk (3.8% fat).

Environmental Elements Affecting Mg and Ca Levels in Drinking Water

Groundwater is an integral part of the hydrologic cycle. Water in the atmosphere condenses into clouds and falls to the ground in the form of snow, rain, or fog. Part of this water evaporates directly back to the atmosphere, and another part, taken up by plants, may return to the atmosphere as vapor by transpiration. Some of the water flowing through the ground flows to rivers, streams, and lakes, and finally back into the oceans, becoming surface water. Precipitation is the principal factor in the process forming groundwater. About half of the rain infiltrates into the ground, and gravity pulls a small part of that water down through pores until it reaches the water table. The water below the water table is generally called groundwater, although sometimes all water in the ground is called groundwater. Elements are unevenly distributed in the Earth's crust, and thus the geochemistry of local groundwater can vary according to geological conditions. In the Earth's crust, the average concentration of magnesium is 2.1% and that of calcium 3.6%. The mineral composition of the bedrock is reflected in the chemical quality of groundwater. The calcium level is higher than the magnesium level in all rock types of the bedrock, and so calcium concentrations are much higher in groundwater than magnesium concentrations. Most of the calcium and magnesium ions found in groundwater originate from the dissolution of carbonate rocks. Thus, groundwater partly contributes to the intake of magnesium and calcium via drinking water. Owing to geological circumstances, the levels of calcium and magnesium, as well as the calcium/magnesium ratio, in drinking water vary in different parts of the world. Compared to northern Europe, groundwater is much harder in the Mediterranean area, containing higher calcium and magnesium levels. Magnesium, calcium, and the Ca/Mg ratio in drinking water in certain areas of the world are presented in Table 3. A deficiency or excess in the content or availability of trace elements in rocks and soils, or in water in contact with them, may be a cause of certain chronic ailments, including CVD. The bedrock and soils in the northern European countries with high CHD mortality are poor sources of many essential trace elements. These northern countries are generally covered by geologically old crystalline rocks, which are typically characterized by a low availability of trace elements and low water hardness. Countries in the Mediterranean region with low death rates from CVD are covered with geological formations rich in calcareous rocks containing higher Ca and Mg levels, the dominant contributors to water hardness. There are detectable variations in magnesium and calcium levels and the Ca/Mg ratio in drinking water in different parts of the world; basically, higher calcium and magnesium levels are found in central and southern Europe compared to northern Europe and other areas. In Turkey, calcium, magnesium, and the Ca/Mg ratio were 427 mg l-1, 108 mg l-1, and 3.95, respectively. The corresponding levels in well water in rural Finland were 12 mg l-1, 2.6 mg l-1, and 5.1.

Table 3 Ca, Mg, and Ca/Mg ratio in drinking water in certain areas of the world (measured minerals, mg l-1)

Continent      Country         Area                                 Ca      Mg      Ca/Mg
Europe         Turkey          Pamukkale                            427     108     3.95
Europe         Greece          Athens city                          54.9    9.9     5.55
Europe         Greece          Kos Island                           131.1   18.5    7.09
Europe         Greece          ITI Mountain                         64.4    24.2    2.70
Europe         Belgium         Mountain natural                     5.3     1.90    2.79
Europe         France          Lourdes Fountain                     77.9    5.20    14.98
Europe         France          Contrexeville                        486     84      6.14
Europe         Norway          North Pole Sea                       309.5   1200    0.26
The Americas   United States   Tennessee River                      28.9    6.71    4.31
The Americas   United States   Nashville City                       30.1    6.70    4.49
The Americas   Chile           Santiago city                        53.4    8.7     6.14
Oceania        Australia       Sydney city                          11.6    4.9     2.37
Oceania        Australia       Sydney suburb                        nd      nd      –
Asia           Korea           Kwang-ju village                     38.6    6.0     6.43
Asia           Korea           Sang Sa village                      53.5    7.0     7.64
Asia           China           Zhoukoudian well                     97.1    19.8    4.90
Asia           China           Town near Beijing                    62.1    17.4    3.57
Asia           China           Beijing city                         60.5    20.4    2.97
Asia           China           Young Ding River near Beijing        28.9    14.5    1.99
Asia           Indonesia       Mountain near Jakarta                7.0     2.2     3.18
Asia           Indonesia       Wonogiri Mountain near Solo River    27.3    7.1     3.85
Asia           Mongolia        Well in the plain                    40.9    7.20    5.68
Asia           Mongolia        Ulaanbaatar City                     15.1    2.25    6.71
Asia           Mongolia        Tur River                            nd      0.01    –
Asia           Japan           Kyoto City (Biwa Lake)               13.7    2.64    5.19
Asia           Japan           Niigata Mountain area                2.1     0.97    2.16
Asia           Japan           River in Osaka                       10.0    1.64    6.10
Asia           Japan           Kirishima Mountain                   42.0    9.70    4.33

Abbreviation: nd, not determined. Reproduced from Morii H, Matsumoto K, Endo G, Kimura M, and Takaishi Y (2007). In: Nishizava Y, Morii H, and Durlach J (eds.) New Perspectives in Magnesium Research. Nutrition and Health, pp. 11–18. London: Springer.
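The Ca/Mg ratios in Table 3 are simple quotients of the two concentrations, and the two concentrations can also be combined into total hardness expressed as CaCO3 equivalents using the standard molar-mass conversion factors (100.09/40.08, approximately 2.497, for Ca and 100.09/24.31, approximately 4.118, for Mg). A short sketch using values taken from Table 3:

def total_hardness(ca, mg):
    # Total hardness as mg/l CaCO3 equivalents from Ca and Mg in mg/l,
    # using the standard molar-mass conversion factors.
    return 2.497 * ca + 4.118 * mg

samples = {
    "Pamukkale, Turkey": (427, 108),
    "Kyoto City, Japan": (13.7, 2.64),
    "Sydney city, Australia": (11.6, 4.9),
}

for area, (ca, mg) in samples.items():
    print(f"{area}: Ca/Mg = {ca / mg:.2f}, "
          f"hardness = {total_hardness(ca, mg):.0f} mg/l CaCO3")
# Pamukkale: Ca/Mg 3.95, ~1511 mg/l; Kyoto: 5.19, ~45 mg/l; Sydney: 2.37, ~49 mg/l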

The time factor also has a significant impact on groundwater quality. The abundance of dissolved substances in groundwater depends on residence time; thus, element concentrations in wells drilled into bedrock are higher than in shallow groundwater in dug wells. Well water consumed in Europe is usually derived from bedrock, whereas in Finland and Scandinavia, for example, private household wells in rural areas mostly draw water from the overburden. In Finland, over 80% of households are within the public water supply; however, over 1 million people (i.e., 20% of the total population) living in rural areas use well water. Besides geological factors, atmospheric, marine, and anthropogenic factors contribute to the composition of groundwater. Several of these factors act together; thus, their individual contributions to the chemical composition of groundwater may be difficult to distinguish. Atmospheric factors contribute to the quality of groundwater through the composition and abundance of substances in rainwater and meltwater from snow. Anthropogenic factors reflect the agricultural and industrial influence of human activities on the quality of groundwater. Relict seawater trapped in fissures and fractures in marine deposits may contribute increased concentrations of certain ions, particularly Na, Cl, and SO4. The geological environment has a considerable impact on drinking water in sparsely populated areas, where people mostly use water from private wells. The quality of the raw water in public water supplies also depends on the geological environment of the area. The quality of groundwater is generally good, and no treatment other than occasional disinfection is required. Water treatment systems influence the chemical quality of tap water: softening filters decrease calcium levels in drinking water, while Mg levels seem to remain almost unchanged compared to raw water, and sodium concentrations increase markedly after treatment by softening filters. The element concentrations in soil, groundwater, and plants are derived, under geological circumstances, from the bedrock. Thus, the element concentrations of the whole food chain of animals and human beings are originally dependent on the chemical composition of the bedrock (Figure 2).

Magnesium, Calcium, and Water Hardness in Drinking Water in Relation to Heart Diseases

Several epidemiological studies have shown an association of CHD mortality or morbidity with water hardness or with magnesium and calcium. In groundwater and drinking water, calcium and magnesium are mainly present in free, physiologically active form.

Figure 2 Circulation of elements. Reproduced from Geological Survey of Finland GTK. © Geological Survey of Finland GTK, T. Tervo.

Magnesium in food represents the major portion of magnesium intake in the general population. However, drinking water may be a major source of magnesium, especially for those who have a low dietary intake and use water with a high Mg level. Waterborne intake of magnesium varies because of geographical variation in magnesium levels in drinking water, which in turn reflects the geological environment of an area. Examples from Sweden in northern Europe showed that the mortality risk of AMI was approximately 30% lower in men and women who used drinking water containing over 9.9 mg l-1 of magnesium. A few years later, it was found that magnesium in drinking water protected against death from AMI, but not against the total incidence, in women. A Finnish ecological study examined the incidence of AMI in men and women in rural areas in 1991–2003. A one milligram per liter increment in the Mg level of local groundwater was associated with an average 2% decrease in the incidence of AMI, while the Ca concentration did not have a clear association with the incidence of AMI. A special feature of the Finnish studies mentioned is that 10 × 10 km grid cells were used to define the study areas instead of administrative areas. Maps of magnesium and the Ca/Mg ratio in groundwater in rural Finland are presented in Figures 3 and 4. A protective association of magnesium and calcium in drinking water with cardiovascular mortality has been demonstrated among all deaths occurring in 69 parishes of southwest France. CVD mortality was lower among subjects who consumed water containing 4–11 mg l-1 of magnesium compared to those who consumed water with less than 4 mg l-1. A potential protective dose–effect relation between calcium in drinking water and cardiovascular causes of death was found when the calcium level in drinking water was over 94 mg l-1. The phenomenon called the French paradox has been proposed for the low CHD incidence in France compared with some other European countries. One interpretation is that the high consumption of red wine containing polyphenols has contributed to the low incidence of CHD. However, it has also been suggested that drinking water hardness could be one potential protective factor; water hardness in particular seems to be higher in France compared with some other countries (see Table 3). For example, in rural Finland, with its high CHD incidence, groundwater is soft. In Asia, the drinking water consumed is, in general, soft. Several studies carried out in Asia have suggested an inverse association between water hardness or magnesium and CVD mortality. A low dietary magnesium intake was associated with a higher risk of coronary artery disease in north India. A protective role of drinking water hardness on CHD mortality has been reported in Taiwan, where mortality from CHD was 9.6% higher in municipalities with soft water compared with hard water areas. A protective effect of magnesium intake from drinking water on the risk of cerebrovascular disease has also been reported.
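A back-of-envelope reading of the Finnish estimate quoted above: if each 1 mg l-1 increment in groundwater Mg corresponds to roughly 2% lower AMI incidence, and the effect is assumed to compound multiplicatively per milligram (an assumption made here for illustration, not stated by the study), the implied incidence ratio for a difference of d mg l-1 is 0.98 raised to the power d:

def incidence_ratio(delta_mg_per_l, per_unit_ratio=0.98):
    # Relative AMI incidence implied by a delta_mg_per_l higher Mg level,
    # assuming the ~2% decrease per mg/l compounds multiplicatively.
    return per_unit_ratio ** delta_mg_per_l

for d in (1, 5, 10):
    print(f"+{d} mg/l Mg -> incidence ratio {incidence_ratio(d):.2f}")
# +1 -> 0.98, +5 -> 0.90, +10 -> 0.82 (about 18% lower incidence)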

Figure 3 Regional distribution of magnesium in well water in rural Finland. Reproduced from GTK and KTL, with permission from Elsevier. Basemaps: © National Land Survey of Finland, licence no MML/VIR/TIPA/217/10. Source: Kousa A, Havulinna AS, Moltchanova E, et al. (2008) Magnesium in well water and the spatial variation of acute myocardial infarction incidence in rural Finland. Applied Geochemistry 23: 632–640.

Higher CHD mortality rates have been reported in the South African White population than in the populations of several industrialized countries. An inverse association between magnesium levels in drinking water and CHD mortality in the 15–64 years age group has also been reported in 12 South African districts, but not in the Black population. The role of calcium in the etiology of CHD is controversial. More attention has been paid to the possibility that calcium is the protective 'water factor' against CHD, because calcium is the primary constituent contributing to water hardness and occurs in greater amounts than magnesium in hard water. There is some biological evidence that calcium may play a role in treating or preventing essential hypertension, but the protective association between drinking water calcium and hypertension is weak. The results of a Swedish case–control study in 17 municipalities showed lower AMI mortality in females in areas supplied with water containing >70 mg l-1 of calcium compared to those with calcium levels <31 mg l-1. In Finland, a country with high CHD mortality, the intake of calcium, derived mainly from dairy products, is higher than in most other countries. A high Ca/Mg ratio in local groundwater was associated with an average 3% higher AMI incidence. An indirect association of drinking water calcium with CHD risk has also been proposed: water with low hardness may dissolve lead and other heavy metals from water supply systems, and low levels of lead in blood have been associated with hypertension and also with stroke. On the contrary, hard water with high calcium levels has an anticorrosive effect that can hinder the dissolution of toxic metals from water pipes. An inverse association between the risk for heart diseases and water hardness has been suggested, but what exactly is harmful in soft water, or protective in hard water, has not been fully clarified. Several hypotheses for the link between CVDs and water hardness have been proposed. It has been suggested that some metals that are more soluble in soft water may contribute to CVD. Klevay and Combs concluded that hard water is beneficial because it contains essential nutrients, and these nutrients can decrease the impact of toxic elements in the environment. They also stressed that, to decrease the risk for heart disease, the ideal drinking water should be moderately hard, containing sufficient calcium and magnesium. Some researchers have recommended that the minimum level of drinking water magnesium should be 10 mg l-1 and the optimum level approximately 20–30 mg l-1. The corresponding levels for calcium should be 20 mg l-1 minimum and approximately 50 mg l-1 optimum.

Figure 4 Ca/Mg ratio in local groundwater in rural Finland. Basemaps: © National Land Survey of Finland, licence no MML/VIR/TIPA/217/10. Modified from Kousa A (2008) The regional association of the hardness in well waters and the incidence of acute myocardial infarction in rural Finland. Doctoral dissertation, 2008. Kuopio University Publications D. Medical Sciences, vol. 442, 92 pp.

Perspective on Public Health

There are no international regulations – either minimum levels or maximum limits – for magnesium, calcium, or water hardness. International health-based regulations are usually based on the toxic or harmful effects of certain elements, not on their beneficial effects for humans. However, some European countries have included magnesium and calcium in their national regulations. For example, in the Czech Republic, the guideline level for calcium is 40–80 mg l-1, for magnesium 20–30 mg l-1, and for water hardness (Ca + Mg) 2.0–3.5 mmol l-1. The association of magnesium or calcium with CVD has been presented in several epidemiological studies in different parts of the world. Owing to the ecological nature of most studies, causality has not been proved. Monarca et al. concluded that it should be accepted that effective public health actions may be taken even with incomplete knowledge or certainty about causality. A prospective, new multicountry study of major ions and health outcomes, following a single study protocol coordinated by WHO, is warranted to eventually prove – or disprove – whether soft water, poor in Mg or Ca, is the risk underlying the spatial variation of CHD. The results could then be exploited for the possible derivation of health-based guideline values for Mg and Ca in drinking water.
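As a worked illustration of the Czech guideline figures just quoted, the following sketch converts Ca and Mg concentrations (mg l-1) to combined hardness in mmol l-1 using standard molar masses (40.08 g mol-1 for Ca, 24.31 g mol-1 for Mg) and checks a sample against the three ranges; the sample values are invented for illustration:

CA_RANGE = (40.0, 80.0)       # mg/l, Czech guideline for calcium
MG_RANGE = (20.0, 30.0)       # mg/l, Czech guideline for magnesium
HARDNESS_RANGE = (2.0, 3.5)   # mmol/l, Ca + Mg

def hardness_mmol(ca, mg):
    # Combined Ca + Mg in mmol/l from concentrations in mg/l.
    return ca / 40.08 + mg / 24.31

def meets_guideline(ca, mg):
    h = hardness_mmol(ca, mg)
    return {
        "Ca ok": CA_RANGE[0] <= ca <= CA_RANGE[1],
        "Mg ok": MG_RANGE[0] <= mg <= MG_RANGE[1],
        "hardness ok": HARDNESS_RANGE[0] <= h <= HARDNESS_RANGE[1],
        "hardness mmol/l": round(h, 2),
    }

# A hypothetical sample with 60 mg/l Ca and 24 mg/l Mg meets all three ranges:
print(meets_guideline(60, 24))
# {'Ca ok': True, 'Mg ok': True, 'hardness ok': True, 'hardness mmol/l': 2.48}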

Further Reading

Catling, L.A., Abubakar, I., Lake, I.R., Swift, L., Hunter, P.R., 2008. A systematic review of analytical observational studies investigating the association between cardiovascular disease and drinking water hardness. Journal of Water and Health 6 (4), 433–442.


Cotruvo, J., Bartram, J. (Eds.), 2009. Calcium and magnesium in drinking-water: Public health significance. World Health Organization, Geneva.
Driscoll, F.G., 1989. Groundwater and wells. Johnson Filtration Systems Inc., St. Paul, MN.
Elin, R.J., 1987. Assessment of magnesium status. Clinical Chemistry 33 (11), 1965–1970.
Fawcett, W.J., Haxby, E.J., Male, D.A., 1999. Magnesium: Physiology and pharmacology. British Journal of Anaesthesia 83, 302–320.
Guéguen, L., Pointillart, A., 2000. The bioavailability of dietary calcium. Journal of the American College of Nutrition 19 (2), 119S–136S.
Havulinna, A.S., Pääkkönen, R., Karvonen, M., Salomaa, V., 2008. Geographic patterns of incidence of ischemic stroke and acute myocardial infarction in Finland during 1991–2003. Annals of Epidemiology 18, 206–213.
Karvonen, M., Moltchanova, E., Viik-Kajander, M., et al., 2002. Regional inequality in the risk of acute myocardial infarction in Finland: A case study of 35- to 74-year-old men. Heart Drug 2, 51–60.
Kobayashi, J., 1957. On geographical relations between the chemical nature of river water and death rate from apoplexy. Berichte des Ohara Instituts für Landwirtschaftliche Biologie, Okayama University 11, 12–21.
Kousa, A., Havulinna, A., Moltchanova, E., et al., 2006. Calcium to magnesium ratio in local ground water and incidence of acute myocardial infarction among males in rural Finland. Environmental Health Perspectives 114, 730–734. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1459927/pdf/ehp0114-000730.pdf.
Kousa, A., Havulinna, A.S., Moltchanova, E., et al., 2008. Magnesium in well water and the spatial variation of acute myocardial infarction incidence in rural Finland. Applied Geochemistry 23, 632–640.
Marx, A., Neutra, R.R., 1997. Magnesium in drinking water and ischemic heart disease. Epidemiologic Reviews 19, 258–272.
Masironi, R., 1987. Geochemistry, soils and cardiovascular diseases. Experientia 43, 68–74.
Monarca, S., Donato, F., Zerbini, I., Calderon, R.L., Creau, G.F., 2006. Review of epidemiological studies on drinking water hardness and cardiovascular diseases. European Journal of Cardiovascular Prevention and Rehabilitation 13, 495–506.
Nishizava, Y., Morii, H., Durlach, J. (Eds.), 2007. New perspectives in magnesium research. Nutrition and health. Springer, London, 411 p.
Razowska-Jaworek, L. (Ed.), 2014. Calcium and magnesium in groundwater: Occurrence and significance for human health. Selected Papers on Hydrogeology, 21, 222 p.
Rosborg, I., Nihlgård, B., Gerhardsson, L., Sverdrup, H., 2006. Concentrations of inorganic elements in 20 municipal waters in Sweden before and after treatment – Links to human health. Environmental Geochemistry and Health 28, 215–229.
Rubenowitz-Lundin, E., Hiscock, K., 2005. Water hardness and health effects. In: Selinus, O., Alloway, B., Centeno, J.A., et al. (Eds.), Essentials of Medical Geology: Impacts of the Natural Environment on Public Health. Elsevier Academic Press, Burlington, MA.
Rude, R.K., 1998. Magnesium deficiency: A cause of heterogeneous disease in humans. Journal of Bone and Mineral Research 13 (4), 749–758. Clinical Review.
Saris, N.E., Mervaala, E., Karppanen, H., Khawaja, J.A., Lewenstam, A., 2000. Magnesium. An update on physiological, clinical and analytical aspects. Clinica Chimica Acta 294, 1–26.
WHO MONICA Project, 1994. Ecological analysis of the association between mortality and major risk factors of cardiovascular disease. International Journal of Epidemiology 23, 505–516.
WHO, 2005. Nutrients in drinking water. Protection of the Human Environment, Water, Sanitation and Health. WHO, Geneva, 186 p. http://www.who.int/water_sanitation_health/dwq/nutrientsbegin.pdf.

Relevant Websites

http://www.americanheart.org/presenter.jhtml?identifier=2011 – American Heart Association (AHA), Statistical Fact Sheets.
http://ebooks.adelaide.edu.au/h/hippocrates/airs/index.html – Hippocrates: On Airs, Waters, and Places. Translated by Francis Adams. eBooks@Adelaide, The University of Adelaide Library, South Australia, 2005. Last updated 3 April 2007.
http://www.fineli.fi – © National Institute for Health and Welfare, Nutrition Unit. Fineli. Finnish food composition database. Release 16. Helsinki, 2013.
https://doi.org/10.6027/Nord2014-002 – Nordic Nutrition Recommendations 2012: Integrating nutrition and physical activity.
http://ods.od.nih.gov – Office of Dietary Supplements, National Institutes of Health, Dietary Supplement Fact Sheets: Calcium and Magnesium.

Malaria as an Environmental Disease
Lauretta Ovadje and Jerome Nriagu, University of Michigan, Ann Arbor, MI, United States
© 2019 Elsevier B.V. All rights reserved.
Change History: August 2017. Lauretta Ovadje updated the text and references. This is an update of L. Ovadje, J. Nriagu, Malaria as an Environmental Disease. In: J.O. Nriagu (Ed.), Encyclopedia of Environmental Health, Elsevier, 2011, pp. 558–567.
Encyclopedia of Environmental Health, 2nd edition, Volume 4. https://doi.org/10.1016/B978-0-12-409548-9.11053-X

Abbreviations
ACT Artemisinin combination therapy
CI Confidence interval
DALY Disability-adjusted life year
DDT Dichlorodiphenyltrichloroethane
GDP Gross domestic product
IMC Integrated mosquito control
IRS Indoor residual spraying
ITN Insecticide-treated net
IVM Integrated vector management
MDG Millennium Development Goals
SDG Sustainable Development Goals
WHO World Health Organization

Introduction

In 2015, there were about 212 million malaria cases and 429,000 malaria deaths; 92% of these deaths occurred on the African continent. Malaria is also one of the leading causes of death for children below the age of 5, killing one child every 2 minutes. Even when it does not lead to death, it can impair learning or cause brain damage. Malaria not only impacts human health but also affects the social and economic aspects of life. When people are ill, they miss school and work, tourism suffers, and foreign investment is suppressed. Economic loss due to malaria is estimated to be about $12 billion yearly (approximately 1.3% of gross domestic product (GDP)) in malaria-endemic countries. In addition, this disease inflicts a huge burden in terms of disability-adjusted life years (DALYs) lost: almost 3% of DALYs are attributed to malaria mortality and morbidity globally, and in Africa the figure is 10%. Malaria disproportionately affects those who cannot afford treatment or have limited access to health care. However, it is preventable and curable. Malaria can be considered an environmental disease for many reasons. The transmissibility of the disease and the survival of the vector are intricately linked to environmental conditions in the air and water. Malaria vector mosquitoes utilize naturally occurring water bodies for breeding; as a result, malaria does not exist in regions where the water quality and hydrodynamics are not appropriate or where environmental conditions prevent the formation and persistence of water bodies. The adult mosquito must find a human host in a quest that can be strongly influenced by vagaries in weather conditions and the atmospheric environment. The multimedia life stages of the malaria parasite (water, air, and human host) make the disease vulnerable to environmental interruption as the pathogen is transferred from one medium to the other, more so than diseases that are strictly waterborne or water-based. Malaria has historically been reduced significantly, or completely eliminated, where environmental management was used alone or in conjunction with other control measures, a hallmark of environmental diseases. Additionally, the impact of environmental effects on malaria outcomes can be moderated by other contextual factors such as personal protective measures, access to effective treatment, and acquired personal immunity. Environmental controls on malaria outbreaks are thus multidimensional and multifactorial. The environmental risk factors addressed in this article pertain only to physical factors that influence mosquito abundance, longevity, and activity, and hence the transmissibility of the malarial pathogen.

Mosquito Ecology and Limiting Factors on the Vector Life Cycle

Malaria is a vector-borne disease caused by a protozoan, Plasmodium, that completes its complex cycle of development in both human hosts and Anopheles mosquitoes. There are four Plasmodium species associated with human malaria: Plasmodium falciparum, P. malariae, P. ovale, and P. vivax. They all develop by asexual reproduction in human red blood cells. They are transmitted by the female Anopheles mosquito, of which Anopheles gambiae is the principal vector in most malaria-endemic countries. P. falciparum is the most efficient, dominant, and lethal species in Africa. The large majority of malaria infections worldwide are caused by P. falciparum and P. vivax.
Transmission takes place year-round in tropical, lowland endemic areas, whereas seasonal transmission takes place in more temperate zones or at higher altitudes. The development of both Plasmodium and Anopheles is dependent on physical characteristics of the environment such as climate and vegetation. Anophelines can be divided into two groups: those that need sunlight for breeding and those that require shade. Those that require sunlight for breeding tend to be closely associated with humans, because they are often abundant in cleared areas close to human habitation. Those that require shade to breed are typically forest species. These two groups of mosquitoes require different strategies to control their ecological success and regulate their ability to transmit the malaria parasites. The life of the female Anopheles mosquito is divided into two parts: the immature stages (egg, larva, and pupa) and the mature stage, where the onset of maturity is defined by the time of first flight, soon followed by the first bite. The immature stages do not participate in the infection of humans and are therefore in a waiting period; this period basically limits rapid growth of the mosquito population. High temperatures in breeding sites and evaporation (which eliminates sites) are generally lethal. Mosquitoes must find water to reproduce. In some circumstances, eggs can survive for weeks without water, but lack of moisture will reduce the abundance of mosquitoes. Natural predators in well-established pools (not temporary water bodies) also affect population development and size. The development stages of mosquitoes include eggs, larvae (which go through several instar stages and molting), and pupae (Fig. 1). These stages stay close to the surface of water bodies and therefore depend on the availability of freestanding habitats for their development. The habitats include natural and artificial containers and ephemeral bodies of water. For breeding to occur, the pH, sunlight or shade, surrounding vegetation, turbidity, etc., of the site have to be compatible with the larval habitat of the local vector. For example, A. gambiae prefers breeding in open areas, and exposure to sunlight increases the water temperature, which rapidly increases the development rate of the aquatic stages of the mosquito. The mosquito spends between 7 and 20 days as egg and larva in its aquatic habitat. Mature mosquitoes emerge from the pupae (eclosion) in the late evening and can fly within a few minutes. Soon after eclosion, the male and female mosquitoes mate. Females then require blood meals for their fertilized eggs to mature, and can feed within half a day of eclosion. After taking a blood meal from the human, the mosquito rests on nearby walls inside the host's residence so that digestion can take place. After digestion, the ovaries develop and eggs are laid, a process taking approximately 2 days. The eggs are laid singly at the water surface, anywhere from 50 to 200 eggs at a time. This egg production cycle is known as the gonotrophic cycle. Once the mosquito attains adulthood, it becomes a potential malaria vector. The lifetime of adult A. gambiae is approximately 2–4 weeks. The sporogonic development of Plasmodium in the mosquito lasts between 12 and 23 days. Anopheles mosquitoes cannot transmit malaria until approximately 2 weeks after they have been infected, when the parasite has metamorphosed into sporozoites (the sporogonic cycle) and subsequently invaded their salivary glands. The mosquito then bites and infects the host, and transmission of malaria continues. The life stages of A. gambiae in niche environments provide an opportunity for environmental interventions to reduce the mosquito population. The female Anopheles mosquito locates human hosts using a variety of sensory receptors geared toward the detection of movement, carbon dioxide gradients, and the sweat of the hosts. Two odorant-binding proteins have been isolated in A. gambiae, which are hypothesized to aid in locating human hosts. The use of pheromone-like compounds to moderate the ability of Plasmodium to find its host has not been explored to any significant extent in the fight against malaria. Anopheles mosquitoes typically prefer to feed at night while hosts are sleeping, when hosts are less active and therefore less sensitive.

Fig. 1 The mosquito life cycle. Reproduced from http://extension.entm.purdue.edu/publichealth/insects/mosquito.html, with permission from Purdue University (Authors: Catherine, H. and Macdonald, J.; illustration by Charlesworth, S.).

Environmental Factors That Increase Risk of Malaria Transmission

Factors That Affect Adult Mosquito Abundance

Mosquito density is dependent on the abundance and diversity of vector habitats, especially for the immature stages: the greater the number of local habitats, the greater the vector density. A habitat can be either temporary or permanent. A. gambiae prefers temporary breeding sites, whereas A. funestus shows a strong preference for permanent bodies of water. The rate of oviposition is related to recent rainfall, because oviposition depends on the presence of water bodies and A. gambiae likes to oviposit in temporary water bodies such as puddles. It is, in principle, also related to temperature and humidity through evapotranspiration, and to soil type through water absorption characteristics. Generally, within an accepted temperature range, the aquatic stages of Anopheline mosquitoes develop faster as temperature increases. There is a lower temperature threshold below which development does not take place, and a higher, lethal temperature threshold above which development of the aquatic stages does not occur. The relationship between temperature and development is nonlinear and varies between Anopheline species. For example, A. gambiae has a lower temperature limit of 16°C and an upper limit of 34°C; its rate of development increases proportionally between 22°C and 28°C. Shortening the aquatic stage of the mosquitoes by increasing temperature leads to increased adult production, which may result in higher biting rates and disease transmission. Temperature also affects the development of the Plasmodium parasite within the mosquito. P. falciparum fails to develop below 16–19°C. At an ambient temperature of 23°C, the parasite takes 16 days to mature and become infectious, whereas at 27°C it takes only 10 days. A study conducted to better characterize the relationship between temperature and the development of the adult mosquito showed that emergence times for adult mosquitoes differed between temperature regimes. The mean duration from egg to adult and the number of adults produced are shown in Fig. 2. Less than 50% of pupae developed into adults at 30°C and 32°C, and the proportion of larvae becoming adults was similar between 20°C and 28°C. Unfavorable environmental conditions such as cooler temperatures may extend the time to emergence, thereby preventing large numbers of mosquitoes from surviving to emergence; favorable conditions reduce the time to emergence and allow large numbers of mosquitoes to survive to emergence. For malaria transmission to occur, favorable temperature and rainfall conditions have to coincide temporally. Rainfall indirectly affects mosquito abundance. A. gambiae preferentially breeds in temporary and turbid water bodies such as those formed by rain; whether temporary or permanent, these water bodies are dependent on rain. Rain is also connected to humidity, which affects mosquito survival. When flooding occurs, it may destroy the breeding sites for the mosquito and reduce vector abundance temporarily; however, it never completely eliminates the vector, and so high rainfall is still considered ideal for malaria transmission. The duration of the rainy season is important for mosquito abundance. Regions with high temperatures but limited rain tend to have mosquito populations that develop rapidly at the onset of rain. This causes increased mosquito populations and means that short periods of rainfall (e.g., 3 months) may be sufficient to constitute one transmission season. In places with lower temperatures, by contrast, mosquito populations increase slowly with increasing temperature as the rains arrive; this leads to a long development cycle for both the parasite and the vector, and the favorable conditions need to last longer for malaria transmission to occur. Mosquito populations normally lag behind rainfall, so that increases in rainfall are followed by peaks in mosquito numbers. Climatic variables are also directly related to elevation: as elevation increases, the temperature decreases. As a consequence, the abundance and species composition of malaria vectors may change with elevation. Low elevation areas typically have higher vector densities and are malaria endemic compared to highland areas.
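The maturation times quoted above are consistent with the classical degree-day description of sporogony, in which the parasite needs a fixed thermal sum above a developmental minimum. A minimal sketch, assuming the commonly used P. falciparum constants of 111 degree-days above a 16°C threshold (standard textbook values, not taken from this article):

DD_FALCIPARUM = 111.0  # degree-days required for sporogony (assumed constant)
T_MIN = 16.0           # developmental minimum temperature, deg C (assumed)

def sporogonic_days(temp_c):
    # Days for P. falciparum sporogony at a constant temperature:
    # n = DD / (T - T_min); development does not proceed at or below T_min.
    if temp_c <= T_MIN:
        return float("inf")
    return DD_FALCIPARUM / (temp_c - T_MIN)

for t in (18, 22, 23, 27):
    print(f"{t} C -> {sporogonic_days(t):.1f} days")
# 18 C -> 55.5 days (the ~56 days cited in the next section), 22 C -> 18.5
# days (under 3 weeks), 23 C -> 15.9 days (~16), 27 C -> 10.1 days (~10).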

Fig. 2 Rate of development of Anopheles gambiae (solid line) and the percentage that develop to adults (connected dots) at different temperatures. With permission from Bayoh, M. N. and Lindsay, S. W. (2003). Effect of temperature on the development of the aquatic stages of Anopheles gambiae sensu stricto (Diptera: Culicidae). Bulletin of Entomological Research 93, 375–381.


Factors That Affect Adult Mosquito Longevity

Local climate affects not only the development of parasite and vector but also mosquito biting rates and longevity. Temperature directly affects mosquito survival. As temperature decreases, the sporogonic cycle slows down; this means that fewer adult mosquitoes will survive this period, which affects mosquito abundance. Mosquito abundance is likewise limited by a long larval duration. For example, at 18°C the sporogonic cycle takes 56 days, whereas at 22°C it is completed in less than 3 weeks. This has implications for the transmission of malaria, because if few adult mosquitoes survive, the transmission cycle may not be completed. At 22°C, however, mosquito survival is high enough (approximately 15%) for the malaria transmission cycle to be completed. Temperatures above 32°C are reported to cause not only high turnover of the vector population but also weak individuals and high mortality. At 40–42°C thermal death occurs, and at 40°C mosquito survival is essentially zero. In addition to temperature, breeding sites that persist for a longer duration also intensify the development of both the malaria parasite and its vector. Humidity affects mosquito survival and so may prevent completion of the cycle: increased humidity favors an increased life span, whereas decreased humidity reduces the lifetime of the mosquito. The length of the gonotrophic cycle is governed by humidity. Relative humidity and the saturation deficit (the amount by which the water vapor in air must increase to achieve saturation without changing environmental temperature and pressure) also affect mosquito longevity and thus potentially increase the infective lifetime of a mosquito. There clearly has to be a delicate balance between the two variables for malaria transmission to occur: a rise in temperature that favors malaria transmission may cause a fall in relative humidity, which favors reduced transmission, while a rise in relative humidity that favors malaria transmission is associated with decreased temperature, which might have the same effect. Mosquito size has a direct association with survival. The size of adult mosquitoes is inversely associated with the larval density in their aquatic habitats: if the larval density is high, the size of the adults decreases, and vice versa for lower larval densities. Larval density is affected by competition for food and nutrients, which is tied to how suitable the habitat is for development. If more larvae are present in the habitat, there will be more competition and therefore smaller adult size. Larger mosquitoes have longer survival periods than smaller mosquitoes; however, larger mosquitoes carry larger numbers of oocysts, which in turn results in higher mortality. Natural selection favors parasites that increase the biting rate of their vectors. There are conflicting results as to whether infection affects the survival of the vector: some studies show that infected mosquitoes have reduced longevity compared with noninfected mosquitoes, whereas others show the opposite result. Not only are infected mosquitoes more persistent in biting than noninfected mosquitoes, but they also have poorer flight ability, a trait that potentially weakens their ability to avoid host defensive behavior. Longevity has a large impact on the reproductive rate of Anopheline vectors. It seems that Plasmodium can induce behavioral modifications in Anopheline mosquitoes, so that infected mosquitoes are attracted to higher temperatures than those not infected. As previously mentioned, temperature has an inverse association with the duration of sporogonic development: as temperature increases within the optimum range, the period of sporogonic development decreases. The earlier the sporozoites reach the salivary glands, the greater the opportunity for them to be transmitted.
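The ~15% survival figure quoted above can be reproduced with the standard Macdonald-style calculation: if a mosquito survives each day with probability p, the chance of outliving an n-day sporogonic period is p raised to the power n. A sketch assuming a daily survival probability of 0.9, a commonly used illustrative value rather than one given in this article:

def survive_sporogony(daily_survival, sporogonic_days):
    # Probability that a mosquito outlives the sporogonic period.
    return daily_survival ** sporogonic_days

# Sporogonic durations quoted in the text: 56 days at 18 C, ~18.5 days at 22 C.
for temp, n in ((18, 56.0), (22, 18.5)):
    p_live = survive_sporogony(0.9, n)
    print(f"{temp} C: {n:g}-day sporogony -> {p_live:.1%} survive to infectivity")
# 18 C: ~0.3% survive; 22 C: ~14%, close to the ~15% figure quoted above.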

Factors That Affect Adult Mosquito Activity There have been numerous studies that demonstrate the spatial distribution of the malaria vector as a function of distance from known or suspected breeding sources. Larval stages usually aggregate in pools of water with specific characteristics. The distribution of adults is mostly dependent on the distribution of the larval habitats and the flight range of the adults. The flight range of Anopheline mosquitoes varies; for example, A. gambiae is said to have a flight range of approximately 2 miles. Anopheles mosquitoes and malaria cases typically cluster in low-lying valley bottoms, often in proximity to swamps or rivers. Malaria prevalence and incidence is higher for people who live closer to these major larval habitats because the mosquitoes do not tend to fly too far from their breeding sites. Both the blood meal and aquatic habitats are in proximity increasing the convenience of feeding and reproduction for the mosquito. Even water bodies that are not suitable for breeding may be used as oviposition sites, thereby increasing the proportion of emerging infectious mosquitoes. This is one method by which Anopheles mosquitoes have adapted to environments they would typically not be found in. Other water-related risk factors for increased Anopheline mosquito abundance and therefore malaria transmission include nearby presence of water sources such as wells, storage containers, water pumps, dams, canals, drains, and irrigation systems. Small pools of water are often present around wells and are therefore potential breeding grounds for A. gambiae. Since water is more likely to accumulate on flat ground, this is also a risk factor for increased incidence of malaria. The threshold of the relationship between the nearby presence of breeding sites and malaria transmission has been suggested to be approximately 500 m. This means that Anopheline breeding sites less than 500 m from households are associated with an increased risk of malaria transmission and vice versa. Generally, as altitude increases, there is a decreased risk of malaria infection. This is because temperature decreases with increase in altitude and temperature plays a major role in determining malaria transmission. These areas are also typically characterized by hilly topography. This means that stagnant bodies of water cannot accumulate long enough in hills for the mosquito larvae to complete their life cycle. Places with high altitudes (above 2000 m) have historically been prone to low and unstable transmission, but now these areas are showing increased incidence of malaria transmission. Examples include the highland areas of Kenya and Ethiopia. This recent phenomenon has been attributed to the development of the agroforestry industry, migration, and scarce health resources. Climatic changes with an increase in temperature of several degrees may also favor the growth and development of the

Climatic changes with an increase in temperature of several degrees may also favor the growth and development of the malaria vector, with consequences for malaria transmission levels. Climate change may shift the distribution of malaria vectors or expand the geographical areas susceptible to malaria transmission, and therefore cause a widespread increase in the transmission potential of the malaria mosquito population. Temperature affects vital functions such as feeding behavior, with a minimum temperature threshold estimated at approximately 10°C. Above the accepted upper temperature limit, further increases in temperature decrease the biting activity of mosquitoes. High temperatures also limit feeding, especially in the presence of high humidity, and, independent of humidity, mosquito flight is inhibited by increasing temperature. This is the major reason that Anophelines are more active in the cooler temperatures at night. One recommendation for controlling malaria transmission is for people to modify their behavior to reduce personal contact with mosquitoes; conducting activities during the periods when mosquitoes are inactive because of temperature is one possible control measure.

Living near bushes, forests, and agricultural fields also increases the risk of malaria infection. Those who live near agricultural fields may be targeted to use mosquito repellents and to wear light-colored clothing, as mosquitoes are attracted to dark clothing; this is another way of reducing contact with the malaria vector and therefore of affecting malaria transmission. Maize pollen is a good source of nutrition for the larvae of the Anopheles mosquito and has been associated with a higher incidence of malaria. Farming is not restricted to rural areas, as is seen in various urban areas in sub-Saharan Africa.

In urban areas, malaria transmission may be tied to human activity. Human activities that create shallow water bodies and artificial water reservoirs or collectors produce ample aquatic habitats for mosquitoes. Physical deterioration such as blocked drains and potholes, along with discarded tires, produces potential mosquito breeding sites, and construction activities such as excavation sites, building construction, and irrigation schemes also contribute. Land-use changes associated with human activity, such as clearing of forests and dam construction, can alter mosquito habitats and therefore the distribution and abundance of mosquito vectors. Land-use change can also mediate the interactions between humans and the malaria vector: it can allow the colonization of new habitats, extend or reduce the habitat of the vector, or modify the composition of the mosquito vector community, because different vector species have varying habitat preferences for their immature stages. Although land-use change influences the mosquito population and therefore malaria transmission, its exact effect can only be assessed within the local context. Large-scale land-use changes such as irrigation developments typically result in increased human malaria incidence, attributable partly to demographic factors related to the influx of nonimmune individuals and partly to vector-related factors (increased abundance, survival, and human contact).

Habitation-Related Factors
House construction has been shown to influence malaria transmission. The materials used for construction and the condition of the house may determine the entry, abundance, and activity of the mosquito and therefore affect the risk of malaria infection. Roof type has been associated with the risk of exposure to the malaria vector: people who live in houses with earth or grass-thatched roofs are more likely to have malaria than those who live in houses with concrete roofs, presumably because such roofs provide a favorable environment for the vector to rest close to its hosts and may increase the survival chances of mosquitoes. Wall type also influences malaria transmission, with increased risk associated with mud walls; other wall materials that may increase the risk of malaria infection include wood, banana leaves, straw, and maize leaves.

Openings in roofs or walls may allow mosquitoes to enter houses, and living in houses without roofs also allows mosquito entry, increasing the risk of malaria transmission among the occupants. Open eaves (spaces between walls and roofs) likewise increase mosquito entry, and sleeping in rooms with open eaves has been associated with increased exposure to mosquitoes. Whether the house has door or window screening affects the abundance of mosquitoes indoors. Lower mosquito densities and malaria incidence rates have been recorded among inhabitants of houses with brick and plaster walls; houses with brick walls usually have screened windows and iron sheet roofs and therefore tend to reduce human–malaria vector contact.

The age of a house may also facilitate the entry of Anopheles mosquitoes: the older the house, the more likely mosquitoes are to enter through its deteriorating structure. The presence of latrines near the house is also said to increase the risk of malaria transmission, and occupants of houses with separate kitchens are reported to have a higher risk of acquiring malaria. The absence of wood smoke indoors has been associated with an increased risk of malaria infection, probably because smoke acts as a mosquito repellent.

Apart from proximity to a breeding site, other housing parameters play important roles in mosquito abundance and hence malaria transmission. Factors such as latitude and longitude, elevation and slope type, and tree canopy coverage over roofs influence adult mosquito abundance. Houses at lower elevations have higher mosquito abundance than houses at higher elevations, and houses in areas of deforestation (low roof canopy coverage) have higher mosquito abundance than houses in areas with trees. Land cover (e.g., tree canopies) tends to reduce the water temperature of breeding sites surrounding the house by reducing the solar radiation reaching the larval habitats. Reduced temperature, as noted previously, slows the Anopheline development cycle, affecting the survival of the larval stages and therefore adult mosquito abundance. Removal of land cover by deforestation thus changes the ecology, producing cleared, more sunlit land that is prone, where flat, to the formation of puddles. These puddles tend to be of neutral pH, which favors larval development.
Tree canopies also lower the indoor temperature of the house, which lengthens the gonotrophic cycle and thereby reduces biting frequency.
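The temperature dependence of the gonotrophic cycle can likewise be approximated with a degree-day model. The constants below (roughly 36.5 degree-days above a 9.9°C threshold) are a classical Detinova-style parameterization that varies with species and humidity; treat this sketch as illustrative only, with all values assumed rather than drawn from this article.

```python
def gonotrophic_cycle_days(temp_c: float,
                           degree_days: float = 36.5,
                           t_min_c: float = 9.9) -> float:
    """Length (days) of one feeding-to-oviposition cycle under a degree-day
    model; Detinova-style constants, assumed here for illustration."""
    if temp_c <= t_min_c:
        return float("inf")
    return degree_days / (temp_c - t_min_c)

def bites_per_day(temp_c: float) -> float:
    """Biting frequency, taking one blood meal per gonotrophic cycle."""
    n = gonotrophic_cycle_days(temp_c)
    return 0.0 if n == float("inf") else 1.0 / n

# A shaded house ~2 C cooler than an unshaded one implies less frequent biting:
print(round(bites_per_day(27.0), 2))  # ~0.47 bites per day
print(round(bites_per_day(25.0), 2))  # ~0.41 bites per day
```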

Deforestation can change the local climate, reduce the moisture held by vegetation, and increase ground temperatures. Increased temperatures accelerate mosquito development, feeding frequency, and incubation of the parasite within the mosquito. Deforestation is typically the start of land-use changes such as road construction, dams, and irrigation systems and, as mentioned in the previous section, is accompanied by migration, especially of nonimmune people into an area, which may enhance the spread of malaria. Deforestation may also affect regional weather patterns: trees play a major role in the carbon cycle, and large-scale ecosystem change such as the removal of huge numbers of trees may significantly alter temperature and moisture and therefore affect vector habitats.

Other Associated Risk Factors
Generally, malaria endemicity is lower in urban areas than in rural areas. There are several explanations for this phenomenon. One is that higher levels of pollution exist in urban areas, and pollution disrupts larval habitats, affecting the mosquito life cycle and hence the mosquito's capacity as a vector for transmitting malaria. Another is that people who live in urban areas practice better mosquito avoidance in the form of house screening, doors, insecticides, and insecticide-treated nets (ITNs). In addition, increased population density may reduce biting rates because there are more humans per mosquito. Low transmission rates may, however, delay the acquisition of immunity to malaria.

Migration of people from rural, malaria-endemic areas to receptive urban areas also affects the malaria transmission dynamic. When nonimmune individuals travel from urban to rural areas, they may become infected and then act as malaria transmitters when they return home; travel outside the area of residence has therefore been associated with increased risk of malaria. The complexity of urban malaria epidemiology is associated with low acquired immunity, the behavior of city dwellers, access to health care and preventive measures, and the heterogeneous suitability of urban ecosystems for malaria transmission.

The presence of infrastructure, especially roads, may be linked to an increase in malaria risk, because the construction of roads and railways has promoted the spread of malaria. Humans generally settle near roads, and travel occurs along them, creating many puddles, footprints, and tracks that provide suitable breeding sites for mosquitoes. Beyond human behavior, factors such as the presence of domestic animals may influence mosquito abundance and therefore malaria transmission. Some studies show that keeping domestic animals such as pigs, dogs, goats, and cattle around the house is associated with increased malaria parasite prevalence and incidence, whereas a few did not find keeping cattle in the home surroundings to be a risk factor for malaria acquisition after adjusting for confounders; the discrepancy may reflect the different settings of these studies (Fig. 3).

Malaria Prevention and Control Strategies Through Environmental Management
To prevent and control the transmission of malaria, behavioral, clinical (mass drug administration), and environmental interventions are needed. Behavioral interventions include methods individuals can use to reduce their contact with the malaria vector, such as mosquito repellents, ITNs, insecticide sprays, and screens on windows and doors to prevent the entry of mosquitoes. Environmental interventions include, but are not limited to, indoor residual spraying (IRS), clearing grassland and bushes around the residence, ensuring water is not left standing in containers, and covering gutters to reduce the breeding places of the Anopheles mosquito. Lessons from the past show that environmental management strategies have little or no toxicity; are low-cost, low-tech, and sustainable; and can contribute to local self-reliance by making use of local resources and knowledge. They can be applied in various eco-epidemiological settings, and studies have shown that these strategies all contribute to reducing malaria incidence and prevalence.

Early diagnosis of malaria and treatment with appropriate antimalarial drugs is a key intervention strategy. Drugs such as quinine and chloroquine were routinely used to treat malaria in the past, but owing to their incomplete and indiscriminate use, the Plasmodium parasite has developed increasing resistance to them. Because of this resistance, artemisinin combination therapy (ACT) drugs are recommended by the World Health Organization (WHO) and are now increasingly used against malaria infection in most malaria-endemic countries. Given the propensity of Plasmodium to develop drug resistance, mass drug administration must be regarded as a supplementary tool in malaria eradication programs.

In contrast to the heavy reliance on, and strong promotion of, ITNs, malaria chemotherapy, and IRS, the use of environmental management and control strategies to reduce malaria transmission currently receives less attention. Yet there is a large body of historical evidence showing that environmental management programs are very effective in reducing the human burden of malaria. Around the turn of the 20th century, it was common practice for engineering personnel to work alongside malaria control officers to implement environmental management that dealt simultaneously with mosquito habitat reduction and malaria suppression. The advent of dichlorodiphenyltrichloroethane (DDT) after World War II, which offered a standardized single mode of attack, led to the so-called Global Malaria Eradication Campaign spearheaded by the WHO and put a stop to environmental management strategies for controlling malaria transmission.

Since then, vector control, an essential component of malaria control, has become less effective, partly because of poor use of alternative control tools, inappropriate use of insecticides, lack of an epidemiological basis for interventions, inadequate resources and infrastructure, and weak management. Changing environmental conditions, the behavioral characteristics of certain vectors, and resistance to insecticides have added to the difficulties. Ecological changes driven by deforestation, human migration, and unmanaged urbanization have increased the densities of human hosts and vector breeding sites in some malarious regions.

Fig. 3 A summary causal web of the relationships between malaria transmission and environmental risk factors. Dashed lines indicate a positive impact on malaria transmission, whereas solid lines indicate a negative impact. With permission from Keiser, J., De Castro, M.C., Maltese, M.F., et al. (2005). Effect of irrigation and large dams on the burden of malaria on a global and regional scale. American Journal of Tropical Medicine and Hygiene 72, 392–406.

Rekindling the forgotten knowledge of the breeding habits of the local mosquito species, which is essential in designing optimal vector control strategies, now seems necessary if the battle against malaria is to be tilted in favor of humans. Since vector density is a key variable in malaria transmission, vector reduction should be an important tool for malaria control. The WHO defines environmental management as “The planning, organization, carrying out, and monitoring of activities for the modification and/or manipulation of environmental factors or their interaction with man with a view to preventing or minimizing vector propagation and reducing man-vector-pathogen contact.” Before the discovery and widespread use of DDT, environmental management was used effectively. At the macroscale, it involved environmental modification: measures creating a permanent or long-lasting effect on land, water, or vegetation to reduce vector habitats, including the installation and maintenance of drains, modification of river boundaries, and draining of flooded areas and swamps. At the microscale, it involved environmental manipulation (measures creating temporary conditions unfavorable to the vector), including clearing bushes, picking up empty bottles, tins, and cans, and efficient waste collection, among others. Modifying or manipulating human habitation and behavior to reduce man–vector contact was another component of environmental management. The specific environmental intervention depends on the locale, so environmental management should not be a one-size-fits-all approach.

Jennifer Keiser and her colleagues conducted a meta-analysis of 40 studies of environmental management interventions that reported clinical malaria variables as outcome measures. The malaria control programs were implemented in 18 different countries, in different eco-epidemiological settings, and involved different Plasmodium vectors and different levels of endemicity. Twenty-seven of the studies focused on environmental modification (permanent measures such as installation and maintenance of drains; filling of swamps, borrow pits, pools, and ponds; modification of river boundaries; or other engineering approaches), four dealt with environmental manipulation (methods creating temporary conditions unfavorable to the vector, such as water or vegetation management), and nine involved modifications of human habitation. Interestingly, most (85%) of the environmental management studies were implemented before the DDT-based Global Malaria Eradication Campaign (1955–69); between 1969 and 1995, only six such studies were launched. Keiser and her colleagues reported that environmental modification and modification of human habitation reduced the risk ratio of malaria by 88% (95% CI 82–92) and 80% (95% CI 67–87), respectively. The four programs that relied on environmental manipulation, implemented in China, India, North Borneo, and the United States, were also very successful in reducing malarial risk.

For instance, one such intervention, intermittently irrigating rice fields combined with filling or draining borrow pits in Indian villages, reduced the parasite rate from 42% to 0%. This intervention strategy was rediscovered more than 50 years later in Sichuan Province, China, where most rice fields in Xindu District and Qionglai County are now irrigated intermittently; no malaria cases have been reported in these two areas since the program was implemented. A well-known example of malaria control through environmental intervention is the construction of the Panama Canal, which led to a large reduction in malaria incidence in addition to the eradication of yellow fever. There, a comprehensive strategy was used: mosquito habitats were altered by drainage and landfill, and insecticidal oils were applied to ponds and swamps. Malaria incidence fell from 821 per 1000 in 1906 to 14 per 1000 in 1917. In copper mining communities in Zambia, environmental management strategies were likewise used effectively to reduce the malaria burden. The control measures included vegetation clearance, modification of river boundaries, draining of swamps, application of oil to open water bodies, and house screening; some of the population also used quinine and mosquito nets. These measures reduced malaria-related morbidity, mortality, and incidence by 70%–95%.

History has a way of repeating itself. A Global Malaria Action Plan, announced with much fanfare at the UN Millennium Development Goals Malaria Summit in New York in September 2008, set the ambitious goals of reducing the malaria burden and eventually eradicating the disease. This announcement was eerily similar to the confidence exuded by the WHO in the 1950s when it vowed to eradicate malaria with the wonder chemical DDT. Today, at least 35 countries, with substantial outside financial support, have rapidly scaled up their interventions with the stated goal of achieving malaria elimination by 2030. The lessons from the past failure seem largely to have been ignored in the recent eradication effort. Although several countries have reported promising results, with steep reductions in morbidity after deploying long-lasting ITNs and ACTs, a key question remains: what happens after aggressive short-term targets are achieved? How can a malaria-free situation be sustained in the long run without a collateral reduction in the Anopheline population? Recent models of malaria transmission provide supporting evidence that integrated mosquito control (IMC) programs combining environmental management tools and larvicide application are most effective in reducing the Plasmodium inoculation rate. One would expect broadscale adoption of IMC to provide opportunities for the methodological innovation that has been stymied for nearly 50 years.
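To make the headline numbers above concrete, the short sketch below converts the Panama incidence figures into a relative reduction and translates the 88% risk-ratio reduction reported by Keiser and colleagues into cases averted; the village size and baseline risk in the second example are invented purely for illustration.

```python
def percent_reduction(before: float, after: float) -> float:
    """Relative reduction, as a percentage of the starting value."""
    return 100.0 * (before - after) / before

def cases_averted(baseline_risk: float, risk_ratio: float, population: int) -> float:
    """Annual cases averted when an intervention multiplies risk by risk_ratio
    (an '88% reduction' in risk corresponds to a risk ratio of 0.12)."""
    return baseline_risk * (1.0 - risk_ratio) * population

# Panama Canal zone, cases per 1,000 population: 821 (1906) -> 14 (1917)
print(round(percent_reduction(821, 14), 1))   # 98.3 (% reduction)

# Hypothetical village of 10,000 at a baseline risk of 0.30 cases/person-year,
# under an intervention matching the 88% reduction reported by Keiser et al.:
print(cases_averted(0.30, 0.12, 10_000))      # 2640.0 cases averted per year
```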

Conclusions
African countries carry a disproportionate burden of malaria. Several control methods exist, but with the rapid development of drug and insecticide resistance, it is essential to find alternative ways to control the disease. The WHO has concluded that approximately 42% of the global malaria burden is due to modifiable environmental factors. There is clearly a need to redesign the paradigm for malaria control programs and build environmental management into intervention strategies. The Sustainable Development Goals (SDGs) build on the momentum of the Millennium Development Goals (MDGs) post-2015. Holding to the same timeline as the SDGs are the WHO Global Technical Strategy for Malaria 2016–30 and the Roll Back Malaria Action and Investment to defeat Malaria (AIM); both documents identify environmental management as one of the interventions for malaria vector control and elimination, since reducing the burden of malaria is central to achieving the SDGs. Despite this recommendation, implementation is lacking and funding for environmental management activities is scarce. The focus of malaria control is now on ITNs and, in some places, IRS, while environmental management receives very little attention as an intervention strategy. Effective use of environmental management for malaria control would also reduce the pressure on other control methods such as ITNs and antimalarial drugs. Since the aquatic habitats used by mosquitoes are highly species specific, it stands to reason that only those sites that support breeding of key vectors need to be controlled. The risk of malaria can also be lowered by reducing human–malaria vector contact through the design and construction of improved housing (house screens and closed eaves) in addition to personal protection. As previously mentioned, environmental management has many advantages and few disadvantages. It is not meant to be a replacement for current interventions but one of several options in an integrated vector management (IVM) approach; combining environmental management with current interventions in an IVM approach has the potential to lead to a substantial reduction in malaria incidence. Environmental management therefore needs to play a more central role in malaria control and eradication programs.

Further Reading
Al-Taiar, A., Assabri, A., Al-Habori, M., et al., 2009. Socioeconomic and environmental factors important for acquiring non-severe malaria in children in Yemen: A case-control study. Transactions of the Royal Society of Tropical Medicine and Hygiene 103, 72–78.
Bayoh, M.N., Lindsay, S.W., 2003. Effect of temperature on the development of the aquatic stages of Anopheles gambiae sensu stricto (Diptera: Culicidae). Bulletin of Entomological Research 93, 375–381.
Castro, M.C., Tsuruta, A., Kanamori, S., Kannady, K., Mkude, S., 2009. Community-based environmental management for malaria control: Evidence from a small-scale intervention in Dar es Salaam, Tanzania. Malaria Journal 8, 11.
Craig, M.H., Snow, R.W., le Sueur, D., 1999. A climate-based distribution model of malaria transmission in sub-Saharan Africa. Parasitology Today 15, 105–111.
della Torre, A., Costantini, C., Besansky, N.J., et al., 2002. Speciation within Anopheles gambiae: The glass is half full. Science 298, 115–117.
Ernst, K.C., Lindblade, K.A., Koech, D., et al., 2009. Environmental, socio-demographic and behavioural determinants of malaria risk in the western Kenyan highlands: A case-control study. Tropical Medicine & International Health 14, 1258–1265.
Geissbuhler, Y., Chaki, P., Emidi, B., et al., 2007. Interdependence of domestic malaria prevention measures and mosquito-human interactions in urban Dar es Salaam, Tanzania. Malaria Journal 6, 17.
Gill, C.A., 1920. The influence of humidity on the life history of mosquitoes and on their power to transmit infection. Transactions of the Royal Society of Tropical Medicine and Hygiene 14 (5), 77–87.
Keiser, J., De Castro, M.C., Maltese, M.F., et al., 2005. Effect of irrigation and large dams on the burden of malaria on a global and regional scale. American Journal of Tropical Medicine and Hygiene 72, 392–406.
Keiser, J., Singer, B.H., Utzinger, J., 2005. Reducing the burden of malaria in different eco-epidemiological settings with environmental management: A systematic review. Lancet Infectious Diseases 5, 695–708.
Kibret, S., Lautze, J., McCartney, M., Nhamo, L., Wilson, G.G., 2016. Malaria and large dams in sub-Saharan Africa: Future impacts in a changing climate. Malaria Journal 15, 448.
Lambin, E., Eichhorn, M., Flasse, S., Harbach, R., Oskam, L., Vanwambeke, S., 2007. Impact of land-use change on dengue and malaria in northern Thailand. EcoHealth 4, 37–51.
Lindsay, S.W., Martens, W.J., 1998. Malaria in the African highlands: Past, present and future. Bulletin of the World Health Organization 76 (1), 33–45.
Lindsay, S.W., Emerson, P.M., Charlwood, J.D., 2002. Reducing malaria by mosquito-proofing houses. Trends in Parasitology 18, 510–514.
MacDonald, G., 1957. The epidemiology and control of malaria. Oxford University Press, London.
Martens, W.J.M., Niessen, L.W., Rotmans, J., Jetten, T.H., McMichael, A.J., 1995. Potential impact of global climate-change on malaria risk. Environmental Health Perspectives 103, 458–464.
Onyango, E.A., Sahin, O., Awiti, A., Chu, C., Mackey, B., 2016. An integrated risk and vulnerability assessment framework for climate change and malaria transmission in East Africa. Malaria Journal 15, 551.
Palsson, K., Jaenson, T.G.T., Dias, F., Laugen, A.T., Bjorkman, A., 2004. Endophilic Anopheles mosquitoes in Guinea Bissau, West Africa, in relation to human housing conditions. Journal of Medical Entomology 41, 746–752.
Parham, P.E., Waldock, J., Christophides, G.K., et al., 2015. Climate, environmental and socio-economic change: Weighing up the balance in vector-borne disease transmission. Philosophical Transactions of the Royal Society B: Biological Sciences 370, 20130551.
Prior, J., 2003. Anopheles gambiae (online). Animal Diversity Web. http://animaldiversity.ummz.umich.edu/site/accounts/information/Ano-pheles_gambiae.html (accessed June 2010).
Pruss-Ustun, A., Corvalan, C., 2007. How much disease burden can be prevented by environmental interventions? Epidemiology 18, 167–178.
Robert, V., MacIntyre, K., Keating, J., et al., 2003. Malaria transmission in urban sub-Saharan Africa. American Journal of Tropical Medicine and Hygiene 68, 169–176.
Rowley, W.A., Graham, C.L., 1968. The effect of temperature and relative humidity on the flight performance of female Aedes aegypti. Journal of Insect Physiology 14 (9), 1251–1257.
Russell, P.F., 1959. Insects and the epidemiology of malaria. Annual Review of Entomology 4, 415–434.
Takken, W., Martens, P., Bogers, R.J. (Eds.), 2005. Environmental change and malaria risk: Global and local implications. Springer, Dordrecht. http://library.wur.nl/ojs/index.php/frontis/issue/view/202.
Utzinger, J., Tozan, Y., Singer, B.H., 2001. Efficacy and cost-effectiveness of environmental management for malaria control. Tropical Medicine & International Health 6, 677–687.
WHO, 1982. Manual on environmental management for mosquito control, with special emphasis on malaria vectors. WHO Offset Publication No. 66. WHO, Geneva.
WHO, 1995. Vector control of malaria and other mosquito-borne diseases: Report of a WHO study group. Technical Report Series 857. WHO, Geneva.
WHO, 2006. Malaria vector control and personal protection: Report of a WHO study group. Technical Report Series 936. WHO, Geneva.

Relevant Websites
http://animaldiversity.ummz.umich.edu/site/accounts/information/Ano-pheles_gambiae.html – Animal Diversity Web.
http://www.cdc.gov/malaria/ – Centers for Disease Control and Prevention.
http://www.ft.com/reports/combating-malaria-2009 – Financial Times Reports on Malaria.
http://www.gbcimpact.org/ – Global Business Coalition on HIV/Aids, Tuberculosis and Malaria.
http://www.malariaconsortium.org/ – Malaria Consortium.
http://www.malarianomore.org/ – Malaria No More.
http://www.malariasite.com/index.htm – Malaria Site.
http://www.metapathogen.com/mosquito/anopheles/ – Anopheles gambiae.
http://www.mmv.org – Medicines for Malaria Venture.
http://www.rollbackmalaria.org/about-rbm/aim-2016-2030 – Roll Back Malaria (AIM 2016–2030).
http://www.who.int/malaria – WHO (Malaria).
http://www.who.int/malaria/areas/global_technical_strategy/en/ – WHO Global Technical Strategy (GTS) for malaria.

Malaria, Bilharzia and Geo-Helminth Transmission in Kenya: Environmental Determinants BA Okech, College of Public Health and Health Professions, and the Emerging Pathogens Institute, University of Florida, Gainesville, FL, United States CS Mwandawiro, Eastern and Southern Africa Centre for International Parasite Control, Kenya Medical Research Institute, Nairobi, Kenya © 2011 Elsevier B.V. All rights reserved.

Abbreviations
ACT artemisinin-based combination therapy
Bti Bacillus thuringiensis israelensis
DDT dichlorodiphenyltrichloroethane
DoMC Division of Malaria Control
DoPH Division of Public Health
DoVD Division of Vector-Borne Diseases
EM environmental management
IPT intermittent presumptive treatment
IRS indoor residual spraying
ITNs insecticide-impregnated nets
IVM Integrated Vector Management
NEMA National Environmental Management Authority
SP sulfadoxine-pyrimethamine
STH soil-transmitted helminths

Introduction
Kenya (Figure 1) is located on the east coast of Africa, lying on the equator. It is bordered to the north by Sudan and Ethiopia, to the east by Somalia, to the west by Uganda and Lake Victoria, and to the south by Tanzania, with a short coastline on the Indian Ocean to the southeast. The climate in Kenya is tropical and equatorial, with temperatures averaging approximately 22°C throughout the year. The coastal areas are hot and humid, whereas most of the inland areas are dry, especially the north and northeast of the country. There are two rainy seasons: the short rains (October to December) and the long rains (March to June). The topography of Kenya is quite diverse, from the low-lying arid and semiarid areas to the coastal belt, the highlands, and the lake basin around Lake Victoria. The Great Rift Valley runs from north to south, cutting the country into two parts. Kenya has a population of approximately 34 million, 80.5% of whom live in rural areas. Nairobi, the capital, is the largest city in East and Central Africa; other large cities and towns include Mombasa, Kisumu, Nakuru, Eldoret, and Kisii. The bulk of the population lives on only the 25% of the land that is arable.

Population growth in Kenya, as in other sub-Saharan countries, has led to an increase in food demand. To boost food production, large-scale agricultural development schemes based on irrigation have been initiated, and in other cases forested areas have been cleared to make way for agricultural development. However, irrigation has brought with it many problems, particularly water-associated diseases including malaria and bilharzia, because the abundant water provides suitable breeding grounds for mosquitoes and snails. In Kenya, such large-scale agricultural developments are located in a division called Mwea in central Kenya, in Ahero town in Nyanza Province in western Kenya, and in Perkera in the central Rift Valley. Mwea division (Figure 1) is home to the Mwea Irrigation Scheme, the largest of these irrigation schemes, and also bears a heavy toll of disease as a result of irrigation. The water for this irrigation comes from swamps that were present in those areas and have been transformed into agriculturally productive zones by redirecting the water draining into the swamps. Examples other than the Mwea Irrigation Scheme include the Yala River Swamp and the Budalangi Swamp in western Kenya. Such development activities that modify swamps may actually help curb the propagation of waterborne diseases, provided the irrigation works are well maintained and the transmission cycle of the pathogens broken. Besides swamps, rivers flowing through arid lands have also been used for irrigation, as in the Perkera Irrigation Scheme in the Rift Valley, but this too has resulted in an increase in waterborne diseases.

In a bid to provide farming areas for growing populations, forested areas have been lost to subsistence agriculture, and roads carved out of the forests to transport people and goods worsen the problem further. Human settlements in previously unoccupied forests are common in Kenya.

Figure 1 Map of Kenya showing the location of the Mwea Irrigation Scheme. The darker area is Mwea division, where the Mwea–Tebere Irrigation Scheme lies.

Clearing of forests for human settlement increases the risk of malaria transmission. In Kenya, for instance, the Kakamega Forest in western Kenya and the Mau Summit forests in the Rift Valley are steadily being lost to settle landless people. Such settlements proceed without proper planning and often result in negative health impacts on the people living in these areas.

This article reviews some of the environmental factors in Kenya and how they affect health and the spread of diseases. It also considers how environmental changes brought about by agricultural practices create ecological conditions conducive to malaria transmission, and it reviews some of the control measures in place, paying special attention to malaria.

Policy on Environmental Health in Kenya
The Kenyan government has spelt out a policy on environmental health. The objective of this policy is to develop and formulate strategies that will assist in the sustainable reduction of the disease burden among the population through improvement of living standards. Through the current health policy framework, the government places more emphasis on preventive than on curative health care, and one key pillar of preventive health is environmental sanitation. Several agencies have been charged with translating this policy into action, including the National Environmental Management Authority (NEMA), the Division of Public Health (DoPH), the Division of Vector-Borne Diseases (DoVD), and the Division of Malaria Control (DoMC), all of which fall under the Department of Preventive and Promotive Health within the Ministry of Health. The functions of these agencies sometimes overlap, but the overall goal is to provide a healthier environment for Kenyans.

Environment and Human Health
Parasitic diseases are a major cause of morbidity and mortality in the developing countries of the tropics and subtropics, where the warm climate is conducive to the growth of parasitic pathogens. The most important parasitic disease is malaria, followed by helminthiasis, which is caused by several different genera and species of helminths. Malaria parasites infect 300–500 million people each year, with an annual death toll of 1.5–2.7 million; in Kenya, there are close to 26 000 deaths due to malaria every year. The disease burden imposed by endemic malaria slows economic growth and substantially inhibits economic development in afflicted countries. In Africa alone, where the greatest burden of the disease rests, close to 450 million people live in areas of malaria transmission, and a further 50 million experience occasional episodes of malaria. Clearly, malaria poses a big threat and requires urgent measures to bring it under control.

However, intestinal parasitic infections are among the most common infections in humans worldwide. It is estimated that approximately 2 billion people worldwide are infected with geohelminths, including Ascaris lumbricoides, Trichuris trichiura, and hookworms, and that approximately 300 million persons suffer various illnesses associated with these infections. Many of these parasitic infections are associated with low socioeconomic status and poor hygiene and sanitation, particularly in areas with unsafe water.

In Kenya, endemic malaria is found in the western part of the country, particularly around the Lake Victoria basin; in central Kenya within the Mwea Irrigation Scheme and surrounding areas; and in coastal Kenya. Research done in Mwea division, where malaria transmission is sustained by Anopheles arabiensis, indicates that unplanned rice cultivation in parts of the division has led to an increased mosquito vector population. In the Mwea area, malaria is often linked to irrigation practices related to rice production. Continual soil submergence through irrigation helps maintain healthy rice plants and high grain yields, mainly by controlling weeds. Unfortunately, the use of irrigation to flood agricultural land during rice cultivation has increased the number of malaria-carrying mosquitoes, with a corresponding increase in malaria cases and other vector-borne and waterborne diseases. In Mwea, the local mosquito population also feeds on farm animals used in plowing rice fields, making the presence of livestock an important agroecological determinant of malaria risk. Livestock, especially cattle, can play a significant part in malaria transmission because certain mosquito vectors, such as A. arabiensis, readily feed on them rather than on human hosts (Figure 2). Blood meals from animals enable the mosquito to lay eggs and continue its cycle while remaining capable of cyclical transmission of malaria to humans. This mosquito is also widely distributed in Africa, largely because of its ability to adapt to diverse ecological settings, including semiarid areas.

Mosquitoes and malaria generally thrive among underprivileged communities, which are also affected by myriad other environmental hazards and socioeconomic problems; poor nutrition, inadequate living standards, and a lack of medical care are among the more general characteristics of such communities. The complexity of the factors influencing malaria transmission requires that any attempt at malaria control be comprehensive and integrated, taking into account the role of various risk factors. The interactions between environment and lifestyle factors that determine the health of communities must also be seriously considered in such an approach. Malaria control activities in Mwea division rely mainly on treatment of cases (mainly with artemisinin-based combination therapy (ACT) and sulfadoxine–pyrimethamine (SP)) and on personal protection using insecticide-impregnated nets (ITNs); this approach is in line with the Kenya National Malaria Control Program. The drugs are given in government dispensaries, whereas the Ministry of Health and nongovernmental organizations distribute the ITNs, although the number of nets donated is still insufficient.
Even with these interventions, the number of malaria cases reported in dispensaries and hospitals is rising, clearly pointing to the need for a more integrated malaria control effort, including environmental management (EM) and larviciding, which have been seriously neglected. Other parasitic diseases such as schistosomiasis and soil-transmitted helminths (STHs) are also quite prevalent in Mwea division; these helminth infections constitute the second most prevalent parasitic infection in Mwea after malaria. The poor condition of toilets in the area and the lack of safe drinking water (Figure 3) have led to an increase in these infections. Many communities in Mwea, as in many parts of the country, rely on boreholes, well water, river water, lake water, and sometimes runoff water from rainfall. The lack of safe drinking water, or of water to clean hands and food, promotes these infections. Keeping stagnant water, as in wells or harvested rainwater, may also increase breeding places for mosquitoes and therefore add to the risk of mosquito-borne diseases.

Figure 2 A farmer using an ox-plow to prepare land for rice planting. The use of livestock in farming activities is widespread; however, mosquito vectors also thrive by feeding on the blood of farm animals as an alternative to human blood.

Figure 3 The toilet conditions in Mwea division schools. The toilets are very dilapidated and are a risk factor for geohelminth transmission in this area.


Vector-Borne Diseases with Special Attention to Malaria
Sub-Saharan Africa bears the heaviest burden of global malaria. It has been estimated that close to 2.5 million people die every year from malaria-related complications, approximately 90% of them children below the age of 5 years and pregnant mothers. Various efforts have been put in place to control the disease and its vectors, with very limited success. It is estimated that more than half of the population of Kenya (approximately 20 million Kenyans) are regularly infected with malaria. Affected households suffer both socially and economically: when a family member suffers a malaria episode, household resources are stretched to cover transportation of the patient to the hospital, medical consultation fees, and the cost of drugs. It is estimated that each household may spend US$20 each year on the clinical management of malaria attacks, and with almost 53% of Kenya's rural population living below the poverty line, equivalent to less than US$1.00 a day, the cost of managing malaria is an overwhelming financial burden for the rural population. Malaria-associated morbidity is responsible for a significant decrease in productivity, with estimates showing that 170 million working days are lost each year. The negative economic impact is most severe on agricultural productivity and the livelihoods of rural populations, especially in epidemic-prone districts that are constantly under threat. School attendance and learning are also disrupted by malaria.

Many malaria control and prevention projects have addressed specific medical and even mosquito-related questions, some of which have proved scientifically sound. However, the integration of strategies, such as an Integrated Vector Management (IVM) strategy, is often absent from malaria disease management plans. Environmental considerations such as changed land use within communities are becoming increasingly relevant to malaria management plans. Over a number of years in the Kisii and Gucha highlands in western Kenya, it was shown that farmers developed a brick-making trade that consequently led to the formation of breeding grounds for the mosquito vectors of malaria. The study further documented that these habitats were invaded and highly populated by the main malaria vectors, Anopheles gambiae and Anopheles funestus, even in the dry season, when most of the natural habitats, such as swamps, were without water. Emphasis therefore ought to be put on the relevance of EM plans for malaria control and prevention in the wake of such startling new information from malaria-endemic regions. These environmental interventions can be very effective in reducing malaria transmission when carried out by the communities themselves with proper instruction from environmentalists, landscape engineers, field entomologists, and public health officers, among others.

Environmental Factors Leading to Malaria Risk
Irrigation: As an agricultural activity, irrigation is often associated with an increased potential for malaria transmission through vector propagation. Wetland rice cultivation requires greater quantities of water than any other crop, providing ideal conditions for mosquito breeding.

In a field site in the Mwea Rice Irrigation Scheme (Figure 1), the ecological and climatic conditions are suitable for the breeding of mosquitoes that are competent vectors of malaria and filariasis. The increase in vector production due to rice cultivation results in higher malaria prevalence rates and an extension of the malaria transmission season. Rice is the most important crop in the developing world in terms of production and contribution to diet; it is also the major source of employment and income for the rural population in those regions. Ninety-five percent of the world's 146 million hectares of harvested rice is in developing countries. To increase production of rice and other crops in arid areas, it is usually necessary to develop irrigation capabilities. Unfortunately, large-scale irrigation projects are often planned and implemented without any assessment of their negative impact on human health. Riceland agroecosystems, in which water is present on the land throughout much of the crop-growing season, may provide ideal habitats for mosquito vectors of malaria, lymphatic filariasis, and arboviruses. Flooded paddies in Kenya, Uganda, and Tanzania produce significant numbers of A. arabiensis in areas hyperendemic for malaria. The Ahero Irrigation Scheme remains flooded most of the year because of its proximity to Lake Victoria; the resulting year-round breeding grounds produce large numbers of A. gambiae mosquitoes and hence year-round malaria transmission. Paddies in Mwea, by contrast, are flooded twice a year following a tightly regulated flooding plan of the National Irrigation Board; mosquito numbers there fluctuate periodically, leading to seasonal transmission of malaria.

Brick-making industry: The making of bricks for the construction of homes is another factor that has led to the creation of mosquito-breeding habitats. In a study conducted in Kisii district during the dry season, all the man-made pools from brick-making activity were found to harbor larval stages of malaria-transmitting mosquitoes. Brick-making pits were further investigated for associations between larval densities and the numbers of mosquitoes present indoors, and a positive correlation was found between the number of mosquitoes inside houses and the presence of a brick-making pit nearby. The study found that brick-making pits were the most abundant habitat type containing Anopheles larvae, and that houses close to brick-making sites harbored malaria vectors whereas those next to swamps did not. Brick making therefore generates dry-season habitats for malaria vectors in western Kenya. The brick makers do not fill up the holes dug when making the bricks, so water persists in these pits.

Deforestation: The clearing of forested areas for settlement or for agriculture has been associated with increased malaria transmission. In western Kenya, one study examined indoor microhabitat temperatures and their influence on the gonotrophic rate of Anopheles mosquitoes. Houses in deforested areas recorded higher temperatures than those in forested areas, which in turn affected the rate of blood meal digestion and the gonotrophic rate. Deforested areas also had a higher chance of forming the open sunlit pools that are preferred by malaria mosquitoes.
Such sunlit pools allow shorter larval development times because of their increased temperature, making them ideal for the production of malaria vectors. The encroachment of people into forested areas also exposes them to natural disease transmission cycles in the wild; these vector-borne diseases include leishmaniasis, yellow fever, and dengue hemorrhagic fever.

Pesticides for malaria vector control: Vector control is taking center stage as an effective way to control malaria transmission in Kenya and in many malaria-endemic areas of sub-Saharan Africa. The major method used to kill the vectors is indoor residual spraying (IRS) with one of the synthetic pyrethroids. Currently in Kenya, IRS is conducted in epidemic-prone districts, usually in the highland areas of the country. Within those areas, District and Provincial Outbreak Management Teams are responsible for containing malaria epidemics, while the divisions of Malaria Control, Environmental Health, and Vector-Borne Diseases train teams of spray men and mobilize resources for the operations. During such operations, insecticides recommended by the Ministry of Health and registered for household use by the Pest Control Products Board of Kenya are used, usually synthetic pyrethroids such as lambda-cyhalothrin. The impregnation of bed nets with permethrin is also encouraged as a means of malaria control.

The reintroduction of DDT has not been received wholeheartedly by the Kenyan government, although Uganda and Tanzania use DDT. The Kenyan government would like to see the results in Uganda and Tanzania before implementing any form of DDT for malaria vector control; in those countries, however, information is not available on how environmental contamination is monitored after indoor application of DDT. The Kenyan authorities have received suggestions that DDT should be applied only to forestall serious, large epidemics, and even then perhaps only every 7 years. The rationale is that if malaria vectors develop resistance to DDT after continuous use, malaria transmission will actually increase and there will be no effective tools left to combat it. Consequently, DDT should be kept in the chemical stores and used only in a 'fire brigade' style to quell serious malaria epidemics.

Other measures that could have a major impact but have not been attempted include controlling the aquatic stages of the malaria vector with pesticides. The Pyrethrum Board of Kenya produces a permethrin-based insecticide that is used against the larval stages of mosquitoes. Other methods that do not involve insecticides include source reduction, draining of swamps, and cleaning of the environment. The clearing of brush around the home is also encouraged, to remove hiding places for adult mosquitoes, and should go hand in hand with other methods.

Environmental Factors Spreading Bilharzia and Geohelminths in Kenya
The global burden of schistosomiasis and STH infections is enormous. The transmission of schistosomiasis is associated with a lack of safe water for domestic use and with the poor sanitary facilities often found in poor communities with low socioeconomic conditions; such conditions are rampant in many developing countries. Many environmental factors are associated with the transmission of these diseases. Irrigation projects put in place to boost food production often bring with them health-related problems, including the spread of bilharzia, as seen in the Mwea Irrigation Scheme.

Current estimates show that more than 3 million Kenyans are infected with one or both species of the parasites that cause schistosomiasis (Schistosoma mansoni causes intestinal schistosomiasis and Schistosoma haematobium causes urinary schistosomiasis), and more than 10 million people living in rural areas are at risk of acquiring the disease. About a decade ago, morbidity due to schistosomiasis was the fourth most frequently noted infection in the Coast Province, the fifth in Eastern and Nyanza provinces, sixth in the North Eastern Province, and 10th in the Central Province; the picture has not changed much today. Approximately 17 000 cases of chronic mild morbidity due to either intestinal or urinary schistosomiasis were reported in hospitals. Severe morbidity, often leading to mortality, is focal and less common, but sufficient to be a public health concern. Epidemiological patterns of schistosomiasis vary considerably between geographic areas and between communities; more often than not, the areas that experience higher morbidity are communities that lack safe drinking water.

Schistosomiasis is found in 41 of the 70 districts in Kenya. The southern half of the country is the most affected area, where the two forms of the disease overlap in Machakos, Kitui, and Taita Taveta districts. In the Lake Victoria basin, in the western part of the country, both intestinal and urinary schistosomiasis are widespread. District-wide studies conducted to map the distribution and prevalence of these infections indicate that prevalence varies considerably between districts. Based on these surveys, three separate endemic areas have been identified:

1. Coastal region: Urinary schistosomiasis is widespread, with a prevalence greater than 50% in school-age children in Kilifi, Kwale, Malindi, Tana River, and Taita Taveta districts, and also in Garissa district in the North Eastern Province. A survey around the Hola Irrigation Scheme found an infection rate greater than 90% in five of the nine schools surveyed in the area. In other districts, prevalence rates were below 25%.

2. Central region: Intestinal schistosomiasis is widespread, but the prevalence of infection is low, ranging between 25% and 30%. However, higher prevalence rates occur in parts of Machakos district, where baboons have been implicated as a major reservoir of infection, and in the Mwea irrigation area of Kirinyaga district in Central Province. Urinary schistosomiasis is also found at lower prevalence in Machakos and Kitui districts.

3. Lake region in western Kenya: Intestinal schistosomiasis is widespread, although the infection prevalence is less than 45% in many places. Infections have been recorded in parts of Busia district and in South Nyanza, Kisumu, and Siaya districts of Nyanza Province. Urinary schistosomiasis prevalence rates of less than 50% are found in South Nyanza, with slightly lower rates in Kisumu district. Only intestinal schistosomiasis is found in Mwea division, central Kenya.
Heavy infection with S. mansoni can be seriously debilitating; heavy egg loads, sometimes more than 500 eggs per gram of stool, may lead to hepatosplenomegaly. Schistosomiasis infection follows an age-dependent trend, with the heaviest infections occurring in adolescents. Infections, especially with S. mansoni, are more prevalent in men than in women. In a case study in Mwea involving 2244 persons whose stool samples were examined for S. mansoni, 21% were positive, with significantly more men than women infected. The infection prevalence also differed between agroecological zones: it was higher in the rice irrigation zone (32.1%) than in the nonirrigation zone (14.4%). A 1998 study of 1737 school-age children (aged 8–20 years) in 25 schools in Budalangi, a flood-prone area in Funyula division, Busia district, revealed an overall STH infection prevalence of 89%, ranging from 70% to 100% between schools. Hookworm prevalence was 77% (8.6% heavy infection), Ascaris 41.9% (4.5% heavy infection), and Trichuris (6.9% heavy infection). In total, 19.8% of the children were infected with both STH and S. mansoni (ranging from 1.4% to 77.7%), with the highest prevalence found in lakeside schools.

Microgeographical characteristics influence both the prevalence and the intensity of S. mansoni infection. This brings into focus the importance of geography and of varying environmental parameters when targeting control efforts to specific sites of focal transmission, which is necessary to minimize costs. Such geographical variation in infection depends on activities that expose people to frequent or prolonged contact with contaminated water. Rice irrigation is one activity that increases the risk of infection because of the long hours farmers spend in contact with contaminated water. Wet rice fields are ideally suited to the growth of snails because the growing rice modifies the water temperature and provides a suitable microhabitat for parasite transmission. Rice farming therefore increases the chances of development of the schistosome parasite and causes an increase in the prevalence and intensity of schistosomiasis.

Some of the major areas in Kenya that experience seasonal flooding or that were swampy have seen the development of irrigation schemes. Major irrigation areas are found in Central Province, where rivers and streams from Mount Kenya flood the low-lying lands; in Western Province around the Lake Victoria basin; and in Coast Province near the Athi River basin. In these areas, there are serious problems with the transmission of schistosomiasis. One example, in central Kenya, is the Mwea–Tebere Irrigation Scheme. Mwea (Figure 1) is located approximately 100 km northeast of Nairobi; it has an area of 513 km² and a population of 126 000 persons according to the 1999 national census. The division comprises three agroecological zones: a rice irrigation zone, which until a few years ago was supported by the National Irrigation Board; an unplanned rice/horticulture zone outside the board's support; and a nonirrigation zone. The rice irrigation scheme covers approximately 13 640 ha of the division, produces 90% of the country's rice output, and is served by a well-designed canal water network. With the recent liberalization of rice farming from the irrigation board, there has been steady development of new irrigation fields, both for unplanned rice farming and for horticultural produce such as beans and tomatoes; this unplanned rice/horticulture zone is found on the fringes surrounding the rice irrigation zone.
The nonirrigation zone is drier than the other two zones and is therefore more conducive to coffee growing and other subsistence agricultural activities.
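Zone-stratified prevalence comparisons like the Mwea figures above (32.1% vs. 14.4%) are commonly checked with a simple contingency-table test. The following minimal sketch illustrates that style of analysis only; the per-zone sample sizes are hypothetical assumptions (the source reports just the overall sample of 2244 and the zone percentages), and this is not the method used in the original study.

# Illustrative sketch: testing whether S. mansoni prevalence differs
# between agroecological zones. Zone sample sizes are HYPOTHETICAL;
# the source reports only the overall N (2244) and zone percentages.
from scipy.stats import chi2_contingency

zones = {
    "rice_irrigation": {"n": 1200, "prevalence": 0.321},  # assumed n
    "nonirrigation":   {"n": 1044, "prevalence": 0.144},  # assumed n
}

# Build a 2x2 table of (infected, uninfected) counts per zone.
table = []
for z in zones.values():
    infected = round(z["n"] * z["prevalence"])
    table.append([infected, z["n"] - infected])

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")  # small p => zones differ

With counts of this magnitude the test yields a very small p-value, consistent with the reported finding that prevalence differs significantly between the irrigation and nonirrigation zones.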


Household Environmental Health

Many households in Kenya, as in other developing countries, have poor sanitary conditions. The indoor and outdoor environments of these homes are not well maintained to prevent the formation of pools of water or the growth of brush, two important risk factors for malaria mosquito abundance and malaria transmission. Structures erected within the homestead may also influence disease transmission. For instance, studies have shown that a large proportion of field-collected mosquitoes originating from grass-thatched houses or from outdoor sheds carry malaria parasites. The reasons for this are many, but one critical factor is that such structures have cooler temperatures, which may result in better survival of malaria parasites in mosquito midguts. The household environment may therefore play a substantial role in determining malaria transmission.

Ecofriendly Malaria Control Interventions

Kenya has been at the forefront of ensuring that the environment is protected even while attempting to control these parasitic diseases. One of the means advocated, especially for mosquito vector control, is the use of biodegradable pesticides whose active ingredients are based on pyrethroids. Efficient control of these diseases will require that control programs are proactive in detecting changes in the environment that can lead to an increase in disease incidence. Many useful indicators are easily observed in the environment. For instance, an increase in vector-breeding sites is an indicator that mosquito population densities will rise; it also points to changes in rainfall patterns or may reflect poor maintenance of agricultural and irrigation facilities. Currently, several methods are being applied to control malaria in Kenya. These include the following:

Indoor Residual Spraying

Indoor residual spraying (IRS) of an insecticide constitutes a classical mode of intervention against malaria transmission. When a malaria-transmitting Anopheles mosquito feeds on a human host, it dramatically increases its body weight, which interferes with its normal flight activity. Immediately after feeding, the mosquito is forced to remove some of the excess fluid in the ingested blood meal by a process called diuresis. The engorged female then flies to a nearby vertical surface, where she rests before moving to another site. Because many vector mosquitoes feed mainly indoors, they become vulnerable to contact with lethal concentrations of insecticides placed on the walls of houses. IRS is therefore a highly effective way to reduce the force of malaria transmission, more so than many other modes of intervention, because a sprayed vector mosquito will most likely not survive long enough for the pathogen to mature within it.
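The argument that IRS works by shortening vector life relative to parasite maturation can be made quantitative with MacDonald's classical vectorial-capacity formula, C = m a^2 p^n / (-ln p), where m is mosquito density per human, a the daily human-biting rate, p the daily survival probability, and n the extrinsic incubation period in days. The sketch below is a minimal illustration using assumed parameter values, not field-calibrated Kenyan estimates; it shows how sharply transmission potential falls as IRS reduces daily survival.

import math

def vectorial_capacity(m, a, p, n):
    """MacDonald's vectorial capacity: C = m * a**2 * p**n / -ln(p).
    m: mosquitoes per human; a: daily human-biting rate;
    p: daily survival probability; n: extrinsic incubation period (days)."""
    return m * a**2 * p**n / -math.log(p)

# Assumed illustrative values; n = 11 days is a typical incubation
# period for P. falciparum under warm conditions.
m, a, n = 10.0, 0.3, 11
for p in (0.90, 0.80, 0.60):  # IRS pushes daily survival downward
    print(f"p = {p:.2f} -> C = {vectorial_capacity(m, a, p, n):.3f}")

Because survival enters as p raised to the power n, even a modest drop in daily survival (here from 0.90 to 0.60) collapses transmission potential by several orders of magnitude, which is the quantitative basis for the effectiveness of IRS described above.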

Insecticide-Treated Bed Nets

Insecticide-treated bed nets (ITNs) are a direct mode of personal protection against potentially infective mosquito bites in malaria-endemic sites. Although conventionally treated fabrics require retreatment after a short period of use, there is now a trend to replace them with long-lasting insecticidal nets, which seldom need retreatment and have recently become widely available. There is some evidence that when a large proportion of homes is covered by ITNs, malaria transmission may be reduced in the community through a mass effect, which may extend even to people who do not sleep under the nets. However, the mass effect of ITNs in intervention studies remains controversial, as some studies have found no such effect in protected villages. At a minimum, ITNs reduce transmission by protecting users from the bites of infected mosquitoes; a wider reduction of transmission in a community would require a greater level of ITN coverage.

Environmental Management

Environmental management (EM) comprises a diverse package of methods fitted to the community and area where it is applied. Its features include source reduction (removing suitable mosquito-breeding habitats) by draining and filling open pools of water. Large pools that cannot be filled with soil or gravel can be treated with larvicides such as Bacillus thuringiensis var. israelensis (Bti) or methoprene. These methods have been used in many countries with considerable success, because interventions based on source reduction attack the most fundamental causes of malaria and are among the most sustainable. Improving homes by building walls that do not crack, installing screens on eaves, and fitting spring-loaded doors that shut by themselves are also helpful measures that prevent the entry of mosquitoes into houses and could form a strong component of EM campaigns. Indeed, building better houses can also help prevent the development of Plasmodium parasites in mosquitoes. This has been demonstrated in a study in western Kenya, which showed that the hotter the microhabitats of mosquitoes, the lower the chances of the mosquitoes developing malaria parasite infections in their guts. The same study found differences in the survival of mosquitoes held in houses constructed of different materials, attributable to the heterogeneous microclimates inside the houses. House design could therefore play a role in EM.
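One standard way to quantify the temperature dependence of parasite development mentioned above is the Detinova degree-day model, in which the P. falciparum sporogonic period is approximately n = 111/(T - 16) days at mean temperature T (in degrees Celsius) above a 16 C development threshold. Note that this model captures the moderate temperature range in which warming accelerates development, whereas the very hot microhabitats in the cited Kenyan study impaired infection outright. The snippet below is a hedged illustration of the standard model, not the analysis performed in that study.

def sporogony_days(mean_temp_c, degree_days=111.0, t_min=16.0):
    """Detinova degree-day model for P. falciparum sporogony:
    n = DD / (T - Tmin). Returns None below the development threshold."""
    if mean_temp_c <= t_min:
        return None  # parasite development effectively halts
    return degree_days / (mean_temp_c - t_min)

# Cooler indoor microclimates (e.g., thick-walled or shaded houses)
# lengthen the incubation period, so fewer mosquitoes survive long
# enough to become infective.
for temp in (20, 24, 28):
    print(f"{temp} C -> {sporogony_days(temp):.1f} days")

At 20 C the model gives roughly 28 days of sporogony versus about 9 days at 28 C, which illustrates why indoor microclimate, and hence house construction, can alter the fraction of mosquitoes that live long enough to transmit.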


Antimalarial Medicines

The use of medicines to treat malaria episodes is the first line of defense. Artemisinin-based combination therapy (ACT) has become the preferred method of case management in Kenya. In addition, intermittent preventive treatment (IPT), a presumptive course of antimalarial therapy delivered to any child who appears at a health clinic during a postnatal visit, is another method being advocated. Even if IPT does not have a direct effect on the force of malaria transmission, it may delay morbidity and mortality, which will help reduce childhood mortality due to malaria while permitting immunity to develop.

Acknowledgments

Funding support was received from the Japan International Cooperation Agency and the Kenya Medical Research Institute.

Further Reading

Afrane, Y.A., Lawson, B.W., Githeko, A.K., Yan, G., 2005. Effects of microclimatic changes caused by land use and land cover on duration of gonotrophic cycles of Anopheles gambiae (Diptera: Culicidae) in western Kenya highlands. Journal of Medical Entomology 42 (6), 974–980.
Afrane, Y.A., Zhou, G., Lawson, B.W., Githeko, A.K., Yan, G., 2006. Effects of microclimatic changes caused by deforestation on the survivorship and reproductive fitness of Anopheles gambiae in western Kenya highlands. The American Journal of Tropical Medicine and Hygiene 74 (5), 772–778.
Alemayehu, T., Ye-ebiyo, Y., Ghebreyesus, T.A., Witten, K.H., Bosman, A., Teklehaimanot, A., 1998. Malaria, schistosomiasis, and intestinal helminths in relation to microdams in Tigray, northern Ethiopia. Parassitologia 40 (3), 259–267.
Booth, M., Vennervald, B.J., Kenty, L., et al., 2004. Micro-geographical variation in exposure to Schistosoma mansoni and malaria, and exacerbation of splenomegaly in Kenyan school-aged children. BMC Infectious Diseases 4, 13.
Bukenya, G.B., Nsungwa, J.L., Makanga, B., Salvator, A., 1994. Schistosomiasis mansoni and paddy-rice growing in Uganda: An emerging new problem. Annals of Tropical Medicine and Parasitology 88, 379–384.
Bundy, D.A., Medley, G.F., 1992. Immuno-epidemiology of human geohelminthiasis: Ecological and immunological determinants of worm burden. Parasitology 104 (supplement), S105–S119.
Carlson, J.C., Byrd, B.D., Omlin, F.X., 2004. Field assessments in western Kenya link malaria vectors to environmentally disturbed habitats during the dry season. BMC Public Health 4, 33.
el-Hawy, A.M., Negm, I.A., el-Alamy, M.A., Agina, A.A., 1993. Effect of rice cultivation on the prevalence and infection rates of Schistosoma intermediate host. Journal of the Egyptian Society of Parasitology 23 (3), 759–767.
Guerra, C.A., Snow, R.W., Hay, S.I., 2006. A global assessment of closed forests, deforestation and malaria risk. Annals of Tropical Medicine and Parasitology 100 (3), 189–204.
Hotez, P.J., Brindley, P.J., Bethony, J.M., King, C.H., Pearce, E.J., Jacobson, J., 2008. Helminth infections: The great neglected tropical diseases. The Journal of Clinical Investigation 118 (4), 1311–1321.
Kabatereine, N.B., Kemijumbi, J., Ouma, J.H., et al., 2004. Epidemiology and morbidity of Schistosoma mansoni infection in a fishing community along Lake Albert in Uganda. Transactions of the Royal Society of Tropical Medicine and Hygiene 98 (12), 711–718.
Lindsay, S.W., Emerson, P.M., Charlwood, J.D., 2002. Reducing malaria by mosquito-proofing houses. Trends in Parasitology 18 (11), 510–514.
Lindsay, S.W., Jawara, M., Paine, K., Pinder, M., Walraven, G.E., Emerson, P.M., 2003. Changes in house design reduce exposure to malaria mosquitoes. Tropical Medicine and International Health 8 (6), 512–517.
Mukiama, T.K., Mwangi, R.W., 1989. Seasonal population changes and malaria transmission potential of Anopheles pharoensis and the minor anophelines in Mwea Irrigation Scheme, Kenya. Acta Tropica 46 (3), 181–189.
Okech, B.A., Gouagna, L.C., Walczak, E., et al., 2004. The development of Plasmodium falciparum in experimentally infected Anopheles gambiae (Diptera: Culicidae) under ambient microhabitat temperature in western Kenya. Acta Tropica 92 (2), 99–108.
Omer, S.M., Cloudsley-Thompson, J.L., 1970. Survival of female Anopheles gambiae Giles through a 9-month dry season in Sudan. Bulletin of the World Health Organization 42 (2), 319–330.
Pullan, R., Brooker, S., 2008. The health impact of polyparasitism in humans: Are we under-estimating the burden of parasitic diseases? Parasitology 1–12.
Roberts, D.R., Vanzie, E., Bangs, M.J., et al., 2002. Role of residual spraying for malaria control in Belize. Journal of Vector Ecology 27, 63–69.
Schellenberg, D., Menendez, C., Aponte, J.J., et al., 2005. Intermittent preventive antimalarial treatment for Tanzanian infants: Follow-up to age 2 years of a randomised, placebo-controlled trial. Lancet 365 (9469), 1481–1483.
Thiong’o, F.W., Luoba, A., Ouma, J.H., et al., 2001. Intestinal helminths and schistosomiasis among school children in a rural district in Kenya. East African Medical Journal 78 (6), 279–282.
Utzinger, J., Tozan, Y., Doumani, F., Singer, B.H., 2002. The economic payoffs of integrated malaria control in the Zambian copperbelt between 1930 and 1950. Tropical Medicine and International Health 7 (8), 657–677.
Yapi, Y.G., Briet, O.J., Diabate, S., et al., 2005. Rice irrigation and schistosomiasis in savannah and forest areas of Cote d’Ivoire. Acta Tropica 93, 201–211.
Yasuoka, J., Levins, R., 2007. Impact of deforestation and agricultural development on anopheline ecology and malaria epidemiology. The American Journal of Tropical Medicine and Hygiene 76 (3), 450–460.

Malathion: An Insecticide

Consolato M Sergi, University of Alberta, Edmonton, AB, Canada
© 2019 Elsevier B.V. All rights reserved.

Chemistry, Production, Use, and Exposure

The IUPAC name of malathion (MLT) is diethyl 2-[(dimethoxyphosphorothioyl)sulfanyl]butanedioate, and its chemical formula is C10H19O6PS2. The chemical structure is shown in Fig. 1. MLT is an insecticide of the organophosphate class whose mechanism of action is inhibition of the enzyme acetylcholinesterase. In the Russian Federation (and the former Soviet Union), MLT was known as Carbophos; the term Maldison was used in Australia and New Zealand, while South Africa used the term Mercaptothion. Production remains substantial, with 49 producers in 10 countries, including China, India, United States, United Kingdom, Singapore, Egypt, Mexico, Denmark, Japan, and Switzerland (IARC, 2017). In the United States, MLT is the most frequently utilized organophosphate insecticide. MLT is extensively used in agriculture, residentia